Everything posted by James Ritson

  1. Hi again Gary, sorry for the confusion: I'm referring to Affinity Photo's RGB/32 format, which is 32 bits per channel. It composites in a linear colour space and supports unbounded pixel values (e.g. greater or less than the 0-1 range). The Curves, Levels and Channel Mixer adjustments have the feature you've discovered, whereby you can operate in a different colour model whilst the document remains in its source format (with a relative cost in performance due to the realtime colour space conversion). The RGB entry in that dropdown only applies to the bounded 8-bit and 16-bit RGB formats, not 32-bit unbounded linear. That's why I said it's a UI oversight: at the moment, if you add one of these adjustments in 32-bit, the colour model dropdown doesn't have an entry for this format—and it wouldn't make sense to expose it for the typical bounded formats most people will be working in.
     RGB/32 enables HDR/EDR compositing, certainly, but using HDR is not a requirement. You can composite in 32-bit per channel precision to take advantage of the other benefits: increased precision/resolution, linear compositing, lossless colour profile conversions etc. You've been in 32-bit every time you've opened a FIT file or performed an astrophotography stack, and nearly all filters and adjustments will work in this format, so unless your machine is struggling performance-wise there is no need to flatten and convert to 16-bit.
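To make the bounded vs unbounded distinction concrete, here's a minimal numpy sketch (purely illustrative; Photo's internals are not exposed like this):

```python
import numpy as np

values = np.array([0.5, 1.2, -0.1])

# Bounded 16-bit integer data: anything outside 0..65535 is clipped away.
bounded = np.clip(values * 65535, 0, 65535).astype(np.uint16)

# Unbounded 32-bit float data: values above 1.0 or below 0.0 survive intact.
unbounded = values.astype(np.float32)
pushed = unbounded * 4.0       # highlight values sail well past 1.0
restored = pushed / 4.0        # the original values come back exactly
print(bounded, restored)
```

The integer copy throws away the out-of-range values at the moment of clipping, whereas the float copy can be pushed around and recovered losslessly.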
  2. @Gary.JONES hope this helps: FIT files and documents created via the Astrophotography Stack Persona are in RGB/32, which is a linear colour space operating with unbounded float pixel values. It's a UI issue that there's no colour model entry on the Curves, Levels and Channel Mixer dialogs for this format, which is why they appear blank. If you purposefully switch to one of the provided colour models, this will reset the adjustment parameters and will also operate in a bounded format, which you will want to avoid. Just leave the adjustments as-is.
     They are not the same for every image: the gamma value will likely be some median based on an arbitrary scale to perform the non-linear stretching. On a monochrome stack of blue data, I've had a value of 0.582; another stack where I've added LRGB data simultaneously has resulted in 0.651. The initial Levels and Curves adjustments are just a convenient initial tone stretch; you don't have to use them if you prefer to perform your own tone stretching, perhaps using a different method (e.g. a colour-preserving tone stretch).
     As for why the parameters reset when you switch colour model: the most common use case is manipulating tones using LAB/CMYK whilst still in an RGB document, and vice versa. There is no feasible way to translate parameter values from one model to another, so they are reset instead. As mentioned above, though, there is no need to do this for your workflow.
     The colour differences could be down to a number of factors. Photo will debayer using an inferred pattern from the FIT metadata (the file you provided is RGGB). There is no colour profile information in the metadata according to ExifTool, so Photo will then have to assign your working space, which presumably you have changed to Adobe RGB in Preferences>Colour? Regardless of the FIT file's bit depth (yours appears to be 16-bit), it will translate the values to unbounded floats in 32-bit linear. The values in the FIT file should already be linear anyway. In terms of white balance, there is nothing mandated in the FIT data. When performing the colour space conversion and white point mapping, Photo will likely be using a D65 white point; I'll check this. The initial colour cast that is revealed when tone stretching is fortunately straightforward to neutralise or remove, however.
     As far as a muddy or red/green cast goes, I tried your FIT file in Graphic Converter and got this: Photo will not match this initially, since this result has been significantly tone stretched, but the red bias is present with both applications if I do some additional stretching in Photo: So I'm not really sure what to say here. I've never noticed Photo displaying any particular tendency to bias colour data more than other applications... how did you produce the two previews in your second post? Just to clarify, you mention that your FIT images should be tagged with the Adobe RGB colour space. I've not used this model of ZWO astro camera; is that an in-camera option?
     Noted—however, it's hardly unusable as-is. I've been using it since it was introduced in the 1.9 beta builds and have thrown all manner of data at it. We've just worked through any roadblocks as they've come up. There are a variety of methods for neutralising colour biases, balancing monochrome data and removing background colour casts. Have you tried watching any of the video tutorials on the official Photo YouTube channel or website? Thanks, James
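On the gamma values mentioned above: assuming the Levels gamma acts as a simple power function (output = input^gamma; an assumption for illustration, not a confirmed description of Photo's formula), a value like 0.582 lifts faint linear data dramatically:

```python
import numpy as np

linear = np.array([0.001, 0.01, 0.05], dtype=np.float32)  # faint linear astro signal
gamma = 0.582                                             # value from the blue stack above
print(linear ** gamma)   # ~[0.018, 0.069, 0.175]: faint detail lifted into visibility
```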
  3. Morning @Gerry D, no worries, although as mentioned above it would be a huge help to try and understand what difference has occurred between your old and new flat frames that is causing issues for Photo's stacking—e.g. is it a different camera, different lens, different method of diffusing/providing the light? Thanks again!
  4. Hi again @Gerry D, I wouldn't worry too much about the sensor defect (if it is such a thing), many cameras will likely exhibit symptoms when their base sensor profile is pushed to such limits. All part of why we shoot calibration frames to begin with! Is that a screen grab of the flat when viewed in the Astrophotography Stack Persona? If so, that's very interesting: notice that it's not fully saturated like the results you would be seeing with your newer flat frames. There is a more gradual transition to the edge vignetting as well. This is how a flat frame should look. Was there anything different about your setup in this instance, e.g. lens, camera, how you were capturing the flats etc? It might be quite useful to get hold of one of your earlier flat frames in its original CR2 format if that's possible? Not sure if the original Dropbox file request link would still work but I could grab it from there? Thanks again.
  5. Hi @Gerry D, upon further inspection the flat frames look OK as single exposures opened in the main Develop Persona. I was initially looking at the representation in the Astrophotography Stack Persona, which tone stretches the exposures to an extreme degree, and thought that they looked incorrect; this assumption was based on seeing many flat frames from different sources. It doesn't sound like there's anything wrong with the way you've shot them. In A/Av mode, do you increase exposure compensation at all? I usually bring mine up to +1 or +1.3 to maximise the light gathered without overexposing.
     Regarding the dark frames, if it's not light leakage then perhaps it's just bias/defect on the sensor. Again, you'll only see this in the Astrophotography Stack Persona because the images are extremely tone stretched, beyond anything the user would do manually.
     My concern at the moment is how your flat frames are being treated in the stacking persona: they seem to be completely saturated with very harsh vignette transitions at the edges. For example, here's one of my flats which stacks successfully: Ignore the green tint, as I was shooting on a full spectrum camera with a clip-in astro filter. Notice that no pixel values are saturated—there are some bright pixels in the middle but nothing like how your flat frames are being rendered. The light frames are divided by the master flat during the calibration process, so if your master flat looks wrong it will render the light frames unusable (probably clipping the majority of the pixel colour values) and prevent alignment.
     You say you've used the stacking option for months—do you have any older flat frames to compare against? I would open them in the stacking persona, not individually, to get an idea of how they will be used. Do your older flat frames contain any more nuances in tone compared to the solid red of the newer ones? I wonder if it's the way we're handling RAW files from the 6D Mk2. If you were able to compare with some older flat frames that would be really appreciated, so we can try and work out why these new ones aren't being handled correctly. Thanks!
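For context, this is the textbook calibration arithmetic (a sketch of the standard approach, not necessarily Photo's exact implementation), which shows why a clipped master flat corrupts every light frame:

```python
import numpy as np

def calibrate(light, master_dark, master_flat):
    """Subtract dark current, then divide by the normalised flat so that
    vignetting and dust shadows cancel out."""
    flat = master_flat - master_dark
    flat = flat / np.mean(flat)           # normalise around 1.0
    return (light - master_dark) / flat
```

If the master flat is fully saturated, `flat` collapses to a near-constant array: the division corrects nothing, and any clipped regions distort the light frames enough to defeat star alignment.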
  6. @AVsupport you haven't mentioned anything about 32-bit TIFFs—are you stacking multiple exposures to create an HDR panorama? If so, then yes, you will require tone mapping. If not, however, I would stay away from tone mapping and use adjustment layers/live filters instead to adjust your tones. You've only mentioned using 16-bit TIFFs, so it sounds like you are just using bounded SDR imagery. Affinity Photo's Tone Mapping persona is not seam-aware. I have, however, released some non-destructive HDR tone mapping macros that are seam-aware and work well with 360 HDR imagery. The results are typically more natural than using the Tone Mapping persona and are applied as a group of layers, so you can tweak the parameters at any time. There's a preview on Instagram here: https://www.instagram.com/p/CNxsBVpj5hr/ Hope the above helps!
  7. Hi @charr, I've tried out your two sample TIFF files and the HDR merge seems to work fine for me. You don't need to change anything about your shooting process: Photo will equalise all exposures you give it, then align by feature matching. If you only need two images to capture the whole dynamic range of the scene, that's fine. On the HDR Merge dialog, I unchecked Denoise and Tone map HDR image. I noticed you said you left Denoise on, so I tried with this option but still couldn't reproduce the colour cast.
     The result from the HDR merge, bizarrely, looks like the red channel has had a 50/50 mix from R+B applied. You can recreate this by adding a Channel Mixer adjustment, then on the Red channel setting Red to 50% and Blue to 50%. How did you get the TIFF files: did you use the Olympus Workspace application? They are using a generic sRGB profile, which shouldn't cause any issues here. If it were a colour management issue, it typically wouldn't be visible through screen grabs. Just to check, however, please could you try this:
       • Go to File>New..., choose any preset you wish, but change the Colour format to RGB/32 and create your new document.
       • Go to Layer>Add Fill Layer.
       • Does the document look pure white, or is there some form of colour cast? With your fill layer, you could also try setting it to pure blue through the Colour panel and see if the colour appears correct.
     One final thing to try, as I'm not sure which GPU you have: go to Edit>Preferences>Performance and disable OpenCL hardware acceleration (you will need to restart the app). I believe parts of the HDR merge procedure are hardware accelerated, so this may have an effect. Hope the above helps; let us know how you get on.
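To make the observed cast concrete, here's a numpy equivalent of that Channel Mixer recipe (illustrative only):

```python
import numpy as np

def mix_red_channel(img):
    """Output red = 50% source red + 50% source blue; green and blue untouched.
    img is an (H, W, 3) float RGB array."""
    out = img.copy()
    out[..., 0] = 0.5 * img[..., 0] + 0.5 * img[..., 2]
    return out
```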
  8. Hi @Atikguy, there are a couple of things to mention with the example file you've provided. The issue isn't with stars or hot pixels, which is reassuring as the actual stacking appears to be fine. The primary issue appears to be with your luminosity layer (Lum). I'm not sure if this is an isolated issue as I currently can't reproduce it on any other documents, but there seems to be a rendering problem because the layer isn't fully opaque. Alpha values, if mapped to 0-255, are in the region of 240-252, so no pixels appear to be fully opaque. How did you create the Lum layer? Was it just via Photo's astrophotography stacking? Did you do anything to the layer, like physically modify it with tools/filters, that may have altered alpha values?
     As a workaround while we try and understand the issue better, you can clip any kind of layer into your Lum layer; this will change the rendering path and make it display correctly. As an example, you could add a new Pixel layer or adjustment layer (e.g. Levels), then click-drag it over the Lum layer's text/label and release the mouse button. You should instantly see the difference. Don't forget to hide/delete your merged pixel layer at the top, otherwise you won't see any difference.
     The other thing is that your document colour profile is set to "OMEN 25 by HP", which I presume is your display? Ideally you should convert to sRGB (or another device-independent profile like ROMM RGB/Adobe RGB) to avoid colour management issues when you export further into the process. Always avoid using a display profile as your document's colour profile. Thanks and hope the above helps.
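If you'd like to verify the alpha values yourself on an exported copy of the layer, a quick check with Pillow and numpy looks like this (the filename is hypothetical):

```python
import numpy as np
from PIL import Image

layer = np.asarray(Image.open("lum_layer.png").convert("RGBA"))
alpha = layer[..., 3]
print(alpha.min(), alpha.max())
# A fully opaque layer reports 255 for both; values of ~240-252 everywhere
# mean no pixel is fully opaque, which is what triggers the rendering issue.
```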
  9. Hi @TonyG55, you can do this in Affinity Photo using a very similar approach—better still, you can use a live Minimum Blur filter so the effect is non-destructive. Try this:
       • Make sure your pixel layer is selected (don't forget you can use Layer>Merge Visible to create a temporary layer if you're working with several greyscale layers and tone stretching them).
       • Go to Select>Select Sampled Colour, then single click on a bright star to sample it.
       • For the colour model, try Intensity; alternatively, if you're dealing with rich colour stars, you may want to try RGB or CIELAB. Adjust tolerance as required, then click Apply.
       • Go to Select>Grow/Shrink and grow the selection as required.
       • Optionally, use Select>Feather to feather the selection.
       • Now go to Layer>New Live Filter Layer>Blur>Minimum Blur and adjust Radius to suit.
     At any point, you can change the Minimum Blur layer's Radius parameter, or even change its Opacity if the effect is too strong. Just one caveat: by default, live filters are added as child layers of whichever layer you have selected, which might not be ideal if you're working with several composite layers. Just drag the live filter layer out onto the parent stack if it's in the wrong place, or alternatively click the suit icon on the top toolbar (Assistant Options), where you can change 'Adding filter layer to selection' to 'Add filter as new layer'. Hope that helps!
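For anyone curious about what a minimum filter actually does to stars, here's a rough numpy/SciPy model of the principle (not Photo's implementation): each selected pixel is replaced by the darkest value in its neighbourhood, which shrinks bright points.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def reduce_stars(img, star_mask, radius=2):
    """img: 2D float array (greyscale); star_mask: feathered selection in 0..1."""
    eroded = minimum_filter(img, size=2 * radius + 1)
    # Blend through the mask, mimicking a live filter applied to a selection.
    return img * (1.0 - star_mask) + eroded * star_mask
```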
  10. Hello @Florida, a couple of things to mention that may help identify what's happening:
       • Are you opening up PNG files, then working directly on them without saving to the .afphoto document format first? If so, could you try immediately saving into the native document format, then carrying on with your usual workflow? We identified an issue with TIFFs and autosaving of the compressed raster data; it may apply to PNG as well.
       • Do you have Metal Compute enabled under Preferences>Performance? If so, you are also at the mercy of the available VRAM on your GPU (8GB in your case), since it is being used for compositing. Bear in mind that there will be OS overheads, other applications may require GPU memory, and Photo of course will have a certain overhead for mipmaps and other tasks. That said, it is very unlikely you would be filling the VRAM with just a single 9000x9000px layer (see the rough figures below). Are you working with multiple layers of the same resolution? If you exceeded the available VRAM, it would start using swap memory, which would incur a performance penalty—it shouldn't crash, however, just cause an inconvenience whilst you wait for data to be transferred in and out. It may be worth temporarily disabling Metal Compute, thereby making compositing entirely reliant on your 16GB RAM, to see if that helps the issue.
     Hope the above helps!
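For those rough figures, a back-of-the-envelope calculation of a single layer's raster footprint (assuming four channels per pixel; Photo's true memory layout will differ):

```python
width = height = 9000
channels = 4  # RGBA
for name, bytes_per_channel in [("8-bit", 1), ("16-bit", 2), ("32-bit float", 4)]:
    mb = width * height * channels * bytes_per_channel / 1024**2
    print(f"{name}: {mb:,.0f} MB")
# 8-bit: ~309 MB, 16-bit: ~618 MB, 32-bit float: ~1,236 MB -- even the 32-bit
# case is nowhere near filling 8GB of VRAM on its own.
```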
  11. Hi @bbx, I imagine you are using the default sigma clipping method when stacking? That rejects outlier pixels based on a given threshold (the sigma clipping value), so is useful for removing hot pixels, light trails etc—anything that is inconsistent between the set of images you provide. It won't reject the frames, but rather will reject specific pixel information. If you switched over to Mean or Median you would perhaps see more of these trails but in a dimmer/semi-transparent form. Try experimenting with the sigma clipping value and re-stacking (it doesn't take as long when simply re-stacking with a different threshold) to see if you can remove the trails entirely without compromising the SNR of your image, e.g. try 2 instead of 3. Bear in mind that with 1.9.3/1.9.2 there's currently an issue with background calibration when stacking multiple times. If your next stack result looks very noisy, just try stacking again and it will look correct. This has been fixed in 1.9.10.0 which is currently in beta. Hope that helps!
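To illustrate how sigma clipping rejects pixel information rather than whole frames, here's a simple single-pass version (real stackers typically iterate, and this isn't a description of Photo's exact method):

```python
import numpy as np

def sigma_clip_stack(frames, kappa=3.0):
    """Per-pixel clipped mean across aligned frames.
    Pixels further than kappa standard deviations from the per-pixel mean
    (satellite trails, hot pixels) are excluded from the average."""
    stack = np.stack(frames)                      # shape: (N, H, W)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    keep = np.abs(stack - mean) <= kappa * std    # lower kappa rejects more
    return np.where(keep, stack, 0.0).sum(axis=0) / np.maximum(keep.sum(axis=0), 1)
```

Dropping kappa from 3 to 2 tightens the threshold, which is the "try 2 instead of 3" suggestion above.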
  12. Hi @Mark Oehlschlager, this is a known issue that has been fixed in the 1.9.4 betas—it's probably because your document is in 16-bit and the assets are 8-bit. During the colour format conversion, all pixels in the adjustment mask data are also converted, despite being opaque, and stored as new layer information. This unfortunately uses a large amount of memory as well (in case you're noticing any performance issues) and causes the mask bordering issue you're seeing. MEB's solution of filling the mask alpha should suffice, but you generally shouldn't have any functional issues from this happening. Out of interest, what pixel resolution is your document before cropping? Is it just over 6000px, e.g. 6024px, by any chance?
  13. Hi @BeauRX, to expand on what Gabe said, if you've modified the adjustment's mask in any way you should be able to 'fill' it—either before recording the macro or during—which should then help with this issue. Select your adjustment layer, go to the Channels panel, right click the respective Alpha channel and choose Fill. Hope that helps!
  14. Hi @Givnik, it looks like the initial result from the pre-processed TIFF files is much brighter than the stack that uses the RAW files. Look at the (Pixel) layer thumbnail for both screen grabs: you'll see the TIFF stack one is brighter, whereas the RAW stack looks pure black. Did you stack with 16-bit TIFF files? If so, they would have been gamma corrected with a non-linear colour space, which would explain the difference—Photo will still convert them to 32-bit linear for the stacking process, but the brightness will have been retained. When you stack the RAW data it's processed directly in linear colour space, so there's no gamma transform applied.
     Bear in mind that the Curves dialog you have open is representing tones from just the (Pixel) layer—not what you're seeing on screen, which is the result of two further Levels and Curves adjustment layers being composited. Also, although the result you see on screen is being gamma corrected and is non-linear, all compositing in 32-bit is performed in linear space—and the Curves graph is also representing those linear tones. Therefore it's quite plausible that without any tone stretching, the linear pixel values could be so close to 0 that they would barely register on the histogram graph.
     What might be interesting to investigate is whether, if you added another Curves adjustment at the top of the layer stack so you're working with the tone stretched data, you then see the appropriate tones on the graph. Bear in mind that they may still appear darker than expected compared to what you see on screen, since they're the linear values and not gamma corrected. If you still don't see anything at all then it could be a bug and we can have a further look.
     Hope the above helps—if your result on screen looks as expected I wouldn't worry too much about the histogram graphs. Trust your eyes at the end of the day (and hopefully your calibrated and profiled display too 😁).
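A quick illustration of why near-black linear values barely register on the graph: a pixel that looks comfortably visible on screen can sit almost at zero in linear terms (using a simple 1/2.2 power as a stand-in for the actual display transform):

```python
import numpy as np

linear = np.float32(0.002)          # faint background level in a linear stack
displayed = linear ** (1.0 / 2.2)   # approximate gamma-corrected screen value
print(displayed)                    # ~0.06: visible on screen, yet the linear
                                    # value sits in the bottom 0.2% of the histogram
```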
  15. @Bill Alpert I stacked your three ORF files using Photo and they merged successfully, producing the result I would expect—therefore I suspect your main issue is with the softness? That's just part of how Photo develops raw files, as it doesn't add any additional sharpening post-demosaicing. This produces softer results compared to other raw developers, but it's not symptomatic of any deficiency—the amount of sharpening is left up to the user. I do prefer this, especially with images where you don't want background detail sharpened, as it brings out additional noise (especially with blue sky detail). If you wanted sharp source images, you could always run them through the Olympus Workspace app, which will apply the level of sharpening that is set in-camera (among all the other various adjustments).
     Just to show that it shouldn't pose any serious issues, I've done an edit of your three images below with some sharpening and retouching applied. Have you tried the focus bracketing feature yet? It's really useful and works especially well for handheld photography, which Photo will align and focus merge. I shot some close-ups of dragonflies using the 300mm f4 a couple of years ago and used focus bracketing to capture front-to-back sharpness; it worked quite well. It might be worth experimenting with, as I notice that the set could have used a fourth exposure for the very back of the bowl: it's not quite tack sharp.
  16. Hey @jruzic, it depends on what you're referring to with the creation process really. Affinity Photo cannot stitch a series of images into a 360 equirectangular image (you would want Hugin or PTGui for that), but it can HDR merge and retouch in 32-bit linear unbounded colour format. For example, have you taken a series of bracketed exposures with a 360 camera? If so:
       • Use File>New HDR Merge and add your images. Uncheck "Tone map HDR image", and possibly choose not to align images either (since alignment may crop the image slightly, which you need to avoid for a full 2:1 ratio 360 image).
       • Once it has completed, you will be presented with your unmapped 360 image with no tone mapping.
       • At this point, you can retouch the image if required using various tools like the Inpainting Brush Tool. You can enter live projection via Layer>Live Projection>Equirectangular Projection, which makes it easier to edit in perspective. Don't forget to remove the projection once you've finished editing.
       • You can also change the base exposure of the HDRI by adding an Exposure adjustment. This won't clip any bright tones above the 0-1 range, so don't worry about this.
       • Now use File>Export and choose OpenEXR or Radiance HDR for your export format. You can reduce file size with EXR by choosing the half-float preset, which will encode to 16-bit half float rather than 32-bit. For most HDRIs the loss in precision is negligible.
       • Then load that HDRI into Blender with the usual methods (World surface texture etc; see the sketch below).
     If you're not working with 360 imagery, you can also apply the above method to standard bracketed photography (don't tone map the image) and export it as EXR/HDR. Then in Blender you can load this HDRI as an emission texture onto a plane or something similar, which will use it to illuminate your scene.
     PS: if you need to stitch the 360 image initially and decide to use Hugin/PTGui or another app, don't forget to save it out as a linear unbounded 32-bit or 16-bit format. They will probably provide a TIFF output for this, or OpenEXR. You can load this into Photo and do any retouching work, then export back out to EXR/HDR. Hope that helps!
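For the Blender side, one way to wire the exported EXR into the world shader via Blender's Python API (a sketch assuming a default scene; the file path is hypothetical):

```python
import bpy

world = bpy.context.scene.world
world.use_nodes = True
nodes = world.node_tree.nodes
links = world.node_tree.links

env = nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("/path/to/pano_half.exr")   # hypothetical path

# A default world node tree already contains a "Background" shader node
links.new(env.outputs["Color"], nodes["Background"].inputs["Color"])
```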
  17. Hi @Xzenor, I was interested in what the RAW file might look like, so I had a quick go at processing it. The out of camera JPEG has some very harsh highlight rendering which is completely fixable in the DNG file. I actually ended up doing everything with layers in the main Photo Persona to keep it all non-destructive: I would recommend trying this approach rather than trying to do too much in the Develop Persona; you can have a lot of fun experimenting once you change your mindset slightly. I've attached the JPEG of my attempt, which I tried to keep fairly natural in appearance.
     All I did for the RAW development was bring the Highlights slider down to -81. The rest was all layer work, which involved masked negative clarity, masked sharpening, various adjustment layers, some brush work to selectively darken areas of the image, and a little bit of live perspective to straighten up the background slightly. I could probably sit and type out a more detailed breakdown at some point, but if you wanted to check out the .afphoto document I've put it on Dropbox here: https://www.dropbox.com/s/5gy11lu4ptu6qi1/IMG_20210414_165109.afphoto?dl=0 (it's around 100MB, too big for a forum attachment). All the layers are labelled and everything is non-destructive, so you can tweak everything and have a look at the parameters and blending options used.
     Hope that helps! A more hyper-real edit is also achievable, but I tried to keep it neutral. It's a slightly tricky edit, since the bright background and water detail both fight for attention in the image, so everything has to be done in moderation.
  18. Hi @amathys, have you updated to version 1.9.2? 1.9.1 and 1.9.0 had an unfortunate bug with asset importing that caused huge memory usage—chances are your physical RAM has been maxed out and the application is eating into swap memory. The assets may eventually import after a long wait, but it gives the impression of having frozen. The easiest way to fix this is to simply update to version 1.9.2, hope that helps. ^^ The above is based on what you've written about not being able to actually install the assets into the Assets panel—please let us know if you're referring to actually dragging the assets onto your document instead.
  19. Having read your previous reply, I'm not entirely sure what response you might have wanted—long term, RAW development functionality may improve, but it seemed from your post that you had already decided the Affinity Photo workflow wasn't for you, which is fair enough. I did already explain and demonstrate that you can achieve comparable results just by tweaking a few sliders. Unfortunately, analysing and adapting RAW development settings for individual cameras (especially camera phone models) isn't realistic given the team size. If you're willing to adopt an open mindset, you can get some very good results with Photo, but it won't give you an instant starting point like other software that matches the in-camera processing. Therefore, perhaps the software simply isn't for you as it stands currently, and your needs are better served by other software? I gather that you do not really want to be doing much manual editing or processing?
     I'm not sure that's the take-away from the article. It simply highlights how two RAW development applications will differ in their initial treatment of RAW files, then points out some small strengths and weaknesses between them. Affinity Photo treats the majority of RAW files very well—camera phones like the Pixel, however, need more initial work to look acceptable, which Photo can absolutely do; it just doesn't present any of those suggestions by default. Again, this goes back to my point above about whether the software is suitable for your requirements.
     For example, regardless of what camera I'm using, I want minimal interference with my RAW development—I want the starting point tonally flat so that I can develop the image, then build it up using layers in the main Photo Persona. That's my way of working, and I've become so used to it that I now find it very difficult when software offers me suggestive starting points for my images (especially AI/machine learning-powered functionality). So in a way, it doesn't bother me if an image comes in looking underexposed, lacking in colour, having too much contrast etc. I will always adapt an image to be flat so that I can work on it further with non-destructive layers. That approach works really well because it also plays to the strengths of Affinity Photo.
     It can be adapted for many other workflows and approaches, of course, but how the software currently functions may simply not work for you, and that is something that can't be addressed overnight. In the future, however, we may be able to expand the RAW development functionality and also its 'friendliness' with regard to how it handles various outlier RAW files (especially from camera phones).
  20. Hi @Arceom, I've been using a Mac Mini M1 for almost a month now, so I've given it a fair test (but not an exhaustive one). I would definitely go for the 16GB model—the swap memory is very efficient with these new Apple chips, but even so, it's better to have the headroom, especially as it's now shared memory. Depending on the complexity of your editing, the GPU can easily vacuum up 8GB of that memory alone (from profiling, there seems to be a cap that prevents GPU memory usage from increasing beyond this; I'm not sure if this is OS-mandated or something we do within the apps). The good news is that 16GB is sufficient for most of my more esoteric editing, including most 32-bit work, astrophotography, HDR workflows, compositions etc. I rarely miss having more RAM day to day.
     Now that's out of the way, I can move on to performance: it's pretty exceptional! I was very sceptical about M1, but I was fed up with the fan noise from my MBP (especially for tutorials) and just wanted a quiet home machine that was reasonably powerful but not overkill, so I decided to give it a try. It's snappy and fast, especially in the Affinity apps. RAW files load almost instantly, selections and other raster tool tasks feel instant (as opposed to slight hitching even with a fully specced i9 MacBook), and it just feels and performs faster in day to day use. The upcoming 1.9.2 release, currently in beta, has some further optimisations that increase performance as well.
     Basically, I'm very impressed with it, and it's now replaced my top end MBP for work as well, where I'm using the apps intensively all day. Not bad for a machine a quarter of the price! I appreciate portability is attractive (regarding the Thinkpad you mentioned) but I would definitely recommend trying M1, even if it's in MacBook form, which I gather gives you basically the same performance as the Mac Mini. It's not easy for me to recommend, as I've always thought of Apple's Intel offerings as overpriced, but the M1 models have gone some way to changing my opinion on that... (you could always wait to see what they do for the higher-end ARM models too).
  21. Hey @3d illusions, if your naming convention always stays the same for your multichannel data (e.g. View Layer.Render Pass.RGB, View Layer.DiffCol.RGB, View Layer.DiffInd.RGB etc), you could record a macro which you would either run upon opening the EXR document, or as part of a batch job if you want to pre-process a set of EXR files and have them ready to edit. Unfortunately, there's currently no way of accounting for different naming conventions.
  22. Hi @zxspectrun, from examining your attached RAW file I would say this is to be expected—Photo is actually reading your RAW files as intended, it's just not applying any kind of adaptive starting point. I'm unfamiliar with Pixel RAW files, but I would assume they either have some EXIF tags that can be read so that RAW development software can emulate the in-camera processing, or LRM is simply applying the same adjustments as a starting point that you would get with in-camera processing. There's no need to use 32-bit—this just opens you up to issues if you're not familiar with unbounded pixel values and linear vs non-linear colour space editing. You don't really have to remove the tone curve either—you can get close to how the image looks by making a few adjustments in the Develop Persona.
     To clarify: the original RAW data from the Pixel is underexposed so as to avoid completely clipping the bright detail from the sun. It needs some reasonably aggressive shadow tone enhancement and also some highlight compression/remapping to squeeze out what little dynamic range the phone's sensor actually has. Noise isn't actually too bad considering, but the background tree detail has unfortunately suffered from the underexposure. If you try adjusting the settings like this, you can get close to the in-camera image: It won't look the same, but I would argue that it's nicer in some respects. The processing that's applied in LRM (and presumably to the in-camera image) has ugly haloing around the background tree line and mountain regions from local contrast enhancement. The processed version does have some more intense reflections in the water, however, and the highlight reconstruction is slightly more successful. That said, if you develop the image and then simply do a small amount of work with HSL (targeting Reds/Yellows) and Curves adjustments, you can make the colours in the image richer and enhance that warmth from the sun:
     As for Windows Photos looking even better, that is probably just reading an embedded JPEG rather than decoding the RAW data. I hope the above helps! I know it's not ideal, but this is pretty much the Photo workflow—you start with a flat RAW file and use all the editing tools available to craft something to your taste, rather than having a series of suggested adjustments automatically applied.
  23. Hi @Colin Red, are you doing any HDR authoring (as in, genuine high dynamic range, not tone mapped)? If not, you can safely ignore the panel, as it doesn't apply to regular photo/image editing—it was introduced primarily for VFX workflows, then expanded upon when we brought in support for HDR display mapping. It's designed to allow people to visualise different areas of the large dynamic range that a RAW file (or other file formats) may contain—for example, bringing the exposure slider down will reveal bright highlight information that could not be shown on a traditional SDR (standard dynamic range) display, allowing you to examine it. Authors may also want to see how their images look under different gamma transform values.
     Although it may look as though these sliders are altering your image, that's because the 32-bit preview panel options carry over to the Photo Persona once you develop your RAW file. The Exposure and Gamma sliders are still non-destructive and only have an effect on the final screen presentation. Basically, unless you specifically need this panel for the workflow mentioned above, I'd recommend keeping the sliders at their default values and using the Basic panel options to modify exposure, black point, brightness etc of your RAW files. Hope that helps!
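As a rough mental model of what those two sliders do to the on-screen presentation (an assumed form for illustration, not Photo's exact maths):

```python
import numpy as np

def preview_transform(pixels, exposure=0.0, gamma=2.2):
    """Exposure scales the linear values in stops; gamma shapes the display
    encoding. The 32-bit document data itself is never modified."""
    view = pixels * (2.0 ** exposure)
    return np.clip(view, 0.0, None) ** (1.0 / gamma)
```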
  24. Hi @ashf, this isn't a bug but rather the Windows behaviour is now consistent with Mac. Adding an enclosure layer should, by default, expand the parent layer so you can see the layer you have just added. If you prefer not to have this behaviour you can go to Edit>Preferences>User Interface and disable the option "Auto-scroll to show selection in Layers panel". Hope that helps!
  25. @the_tux I would certainly lean towards RAM being the issue. From profiling the GPU, it can use roughly 5.5GB of allocated memory during the merging process with all of your files selected, and will actually shoot up temporarily to over 8GB during the initial alignment process whilst the images are being decoded and held in memory. If I select just three files (picking a dark, middle and bright exposure), that usage reduces to around 2GB. Because of the unified memory architecture (as opposed to the GPU having its own pool of dedicated memory), you would have to factor in other apps, general overheads and other requirements that will easily push the memory over 8GB and therefore into swap. I've generally found the swap is very efficient with M1—in most cases I haven't even realised it's being used—but it appears there is likely a bottleneck when using the GPU for this task.
     I'm sorry your experience with M1 has been frustrating—I got one a couple of weeks ago and I was very sceptical, but was happy to be proven wrong. It's quiet, fast and my hugely expensive MBP 16" is now gathering dust! Apart from a few scenarios with CLI apps that are going through the translation layer, everything is basically quicker and snappier, especially in the Affinity apps. Again, I do wonder whether this is either an optimisation issue or a memory issue. Have you tried FCP at all? For me, it's just as fast (if not faster) than on a high-end MBP, and without obnoxiously loud fans too, which is a welcome change.
