
James Ritson

About James Ritson

Product Expert (Affinity Photo) & Product Expert Team Leader

  1. Hi @Reflex, hope I can help. This right here:

     Is where you might be going wrong: applying the tone curve is a bounded operation, and it will clamp values to the 0-1 range, so you won't see any extended brightness values. This is why it defaults to "Take no action" when you change from RGB/16 to RGB/32; it's not intended to be applied if you wish to retain HDR values.

     Secondly, this will also be wrong: you want ICC Display Transform, otherwise your document will not be colour managed. Unmanaged means you are seeing the linear scene-referred values, which you don't want in this case, as things will look dark and "crushed".

     Have you tried using the Apple Core Image RAW engine with no tone curve and ICC display management applied? You mention that turning the tone curve off makes the image look too dark, but at the same time you're using an unmanaged linear view, which will indeed look dark because it lacks the gamma transform correction. Try:

       • Apple Core Image RAW engine / SerifLabs engine
       • Tone curve set to Take no action
       • 32-bit Preview set to ICC Display Transform

     See how you go from there. It's possible that we don't apply some exposure corrections or other parameters stored in the ProRAW format, so you may have to experiment; the Apple Core Image engine will possibly handle this better than the SerifLabs engine. It's no bad thing that you have to push the exposure slider up: the intention is that users shoot to capture highlight detail (underexposing if need be), then simply push the pixel values up linearly and still see those bright highlights, rather than having to tone map them using highlight recovery and other techniques.
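     To illustrate the clamping behaviour described above, here's a minimal Python sketch. This is not Affinity's actual pipeline: the curve shape and function names are invented purely for illustration.

     ```python
     def bounded_tone_curve(value):
         """A bounded tone-curve step clamps its output to 0-1,
         discarding any HDR headroom above 1.0."""
         curved = value ** 0.8  # stand-in for an arbitrary tone curve
         return max(0.0, min(1.0, curved))

     def push_exposure(value, stops):
         """A linear exposure push just scales values, so >1 highlights survive."""
         return value * (2.0 ** stops)

     highlight = 2.5  # a scene-referred highlight above SDR white
     print(bounded_tone_curve(highlight))  # clamped to 1.0: highlight detail gone
     print(push_exposure(highlight, -2))   # 0.625: detail preserved, just darker
     ```

     The point is that once a bounded curve has clamped a value to 1.0, no amount of later exposure adjustment can recover the lost highlight detail, whereas linear values above 1.0 remain fully recoverable.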
  2. Are you using a bespoke display profile, perhaps one you created with a calibration device using DisplayCAL or i1Profiler? Affinity Photo itself isn't doing anything outrageous to corrupt the display like that. I previously had an issue with a Monterey beta that corrupted the screen if a custom display profile was in use, so it looks like they are still ironing out issues with the M1 GPU. In System Preferences>Displays, do you have any other settings that are not at their defaults, e.g. have you changed from the default P3 1600-nit profile? I believe the only thing we do differently from other apps is querying the Metal view for the panel's maximum brightness so that 32-bit HDR documents can be mapped to the display accordingly. If we were able to reproduce the issue here, that would be a good step towards identifying and solving it.
  3. Hey all, just wanted to give you a heads-up that tomorrow (Thursday May 26th) I'll be doing an Affinity Photo live stream, and it's going to be an astrophotography special. Even if you're not interested in astrophotography, it may be worth a watch anyway, as I'll be covering techniques that could be applicable to other workflows. It's also a chance just to have a good geek-out! I'm really looking forward to it. Please do feel free to come along and interact in the chat if you're interested; it would be lovely to spot some familiar usernames 😉 Here's the link; you can also set a reminder ahead of 4pm tomorrow: Thanks, James
  4. Hi @talktogreg, thanks for posting and hope you're finding the macros useful. There are two things Affinity Photo does that account for the on-screen result looking stretched:

       • Internally, within the Astrophotography Stack persona, it adds Curves and Levels nodes to the invisible layer 'stack' for the FIT file preview. This is just to help boost the tones so you can inspect the data.
       • A non-linear gamma transform is performed during the view presentation. Everything is composited internally in 32-bit float linear, but by default the view is colour managed (based on your display profile) with a gamma transform, so you're seeing the result you would get when exporting to a gamma-encoded format such as 8-bit JPEG, 16-bit TIFF etc. The actual data itself is always treated linearly.

     To test this, you could try the following:

       • Load your f_EXTREME.fit file individually, and you will see the Levels and Curves layers added. Turn these off.
       • Go to View>Studio>32-bit Preview to bring out the 32-bit Preview panel if it isn't already active.
       • Switch from ICC Display Transform to Unmanaged: you are now seeing the pixel values in linear light with no colour management. The result should hopefully be in line with what you're seeing in other software.

     Be aware, however, that you should keep ICC Display Transform on. This ensures that your view when compositing in 32-bit linear will match the exported result when you finally save out a JPEG or another interchange format. It's also worth noting that the histogram always displays the linear values, even though the view presentation is being managed with the non-linear transform. We should perhaps have a toggle for this...

     Apologies for the slight technical ramble, but it's difficult to clarify without venturing into linear/non-linear and colour management discussions. I think the bottom line is that your data files are being treated as linear internally, so there shouldn't be anything to worry about. Hopefully your master flat will just calibrate as expected with the light frames and everything will just work! Hope that helps, James
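     The linear-versus-gamma point above can be made concrete with the standard sRGB transfer function (this is the published sRGB formula, not Affinity's internal code), which shows why an unmanaged linear view looks so much darker than the colour-managed one:

     ```python
     def linear_to_srgb(v):
         """Standard sRGB encoding: linear light -> gamma-encoded value."""
         if v <= 0.0031308:
             return 12.92 * v
         return 1.055 * (v ** (1.0 / 2.4)) - 0.055

     mid_grey = 0.18  # scene-linear 18% grey
     # Shown unmanaged, 0.18 looks dark; the view transform lifts it to ~0.46
     print(round(linear_to_srgb(mid_grey), 3))
     ```

     Scene-linear 18% grey ends up at roughly 46% of the encoded range after the gamma transform, which is exactly the "boost" you lose when switching the 32-bit preview to Unmanaged.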
  5. Hi @Marcel Korth, both of these features are present in the Affinity apps. Alt-clicking (Windows) or Option-clicking (macOS) a layer isolates it: https://affinity.help/photo/English.lproj/pages/LayerOperations/isolating.html (the only difference here is that you must click onto a different layer to exit isolation mode). As regards alt-clicking layers to clip them, Affinity has a concept called child layers. You can click-drag a layer and offer it to another layer, with two distinct behaviours:

       • Offering your source layer over the target layer's thumbnail makes the source layer act as a mask.
       • Offering your source layer over the target layer's text/label clips the source layer to the bounds of the target layer.

     Once a layer is inside another layer, it is referred to as a child layer, and the top layer is referred to as the parent layer. To replicate the example in your video, you would click-drag that image layer onto the text/label of the text layer and release the mouse button. An alternative way to do this is the menu option Arrange>Move Inside. You can also create a shortcut for this (I use Shift+Alt+I) to speed up compositing workflows. You will also find a shortcut option for Insert Inside (I like to map this to Shift+I). Once it is toggled, the next layer created will automatically be placed inside the currently selected layer, which is really useful for quickly clipping adjustments/live filters inside the current layer. Hope the above helps!
  6. Hi @Laurens, if you show the 32-bit Preview panel (View>Studio>32-bit Preview), is Enable EDR checked? If so, this will use your display's extended brightness range to show HDR colour values (i.e. outside the 0-1 range). Whilst this is a really nice feature, you currently can't export to any kind of HDR format other than OpenEXR/Radiance HDR, so you would instead need to work in SDR and tone map the HDR values down to that range. The reason your images look different, I suspect, is that you're able to see values >1 on your display, but when you export to a bounded format such as JPEG, those values are simply clipped at 1. To solve this, start by unchecking Enable EDR, then use the Tone Mapping persona or a set of tone mapping macros to bring those brightness values within SDR range. Once you have done the tone mapping, make any further edits as usual, then try exporting: you should find that the exported image now looks the same as the working copy in Photo. In Preferences>Colour, you will find a checkbox called "Enable EDR by default in 32bit RGB views"; unchecking this will prevent the issue from recurring. Hope that helps!
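     To make the clipping-versus-tone-mapping distinction concrete, a small sketch. The Reinhard operator here is just an illustrative global tone map, not what Photo's Tone Mapping persona actually uses:

     ```python
     def export_clipped(v):
         """Bounded export (e.g. JPEG): values above 1.0 are simply cut off."""
         return min(v, 1.0)

     def reinhard(v):
         """A simple global tone map that compresses HDR values into 0-1."""
         return v / (1.0 + v)

     for v in (0.5, 1.5, 4.0):
         print(v, export_clipped(v), round(reinhard(v), 3))
     # 1.5 and 4.0 both clip to exactly 1.0 on export (detail lost),
     # but tone map to distinct values below 1.0 (detail retained)
     ```

     Clipping makes every HDR value identical white, whereas tone mapping keeps the relative difference between highlights, which is why the tone-mapped export matches what you saw while editing.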
  7. For anyone in the thread experiencing beach-balling, it's worth noting that there is currently an issue with variable brightness on M1 MacBooks, and I presume M1 iMacs as well: anything that uses a sensor to gradually change the brightness based on lighting conditions. Photo, with its Metal view, queries maximum brightness for HDR purposes, and at the moment this causes a bottleneck when macOS gradually changes the panel brightness. The solution is to go to System Preferences > Displays and disable "Automatically adjust brightness". Disabling Metal Compute will severely hamper performance; the reason it works for most people is that it also switches the view back to OpenGL, which does not query the brightness, as it does not support HDR/EDR views. The issue will be fixed in the future, but the workaround for now is to disable the automatic brightness adjustment within macOS. Hope that helps!
  8. Hi @eobet, if you're just taking SDR-encoded images and using them as a lighting model, it almost seems pointless encoding them in an HDR format: I think the majority of 3D authoring software lets you use SDR-encoded formats as well (with the understanding that lighting will be substandard). However, if you want to artificially expand the range into HDR, you could convert your document to 32-bit HDR using Document>Convert Format / ICC Profile. This converts the values to floating point, with 0-1 being the range for SDR values. What you could then do:

       • Add an Exposure adjustment and push the exposure up linearly for all pixel values; try +1 or +2 stops to start.
       • Use a Brightness / Contrast adjustment to bring the brightness down, and also experiment with the contrast. This adjustment only works on the 0-1 range of pixels, so it leaves the HDR >1 pixels as they are, allowing you to artificially expand the dynamic range of the scene.
       • The Curves adjustment, by default, also works between 0-1, so you could use it to manipulate the contrast of just the 0-1 SDR pixel values. However, on the Curves dialog you can set the min/max values, so with a max greater than 1 it will also affect HDR values.

     Then of course you would export to OpenEXR or Radiance HDR and bring that into your 3D software. The 32-bit Preview panel options are purely for the final display presentation, so as you've discovered they will not modify the pixel values in your document at all. If you have an HDR/EDR display you can enable HDR/EDR preview to see pixel values >1; otherwise, you can move the Exposure slider down to see those bright values. Hope the above helps!
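     A sketch of the maths behind the steps above, using the simplest possible model of an adjustment restricted to the 0-1 range. This is illustrative only; it is not Affinity's actual adjustment maths:

     ```python
     def exposure_push(v, stops):
         """Exposure acts linearly on all pixel values, HDR included."""
         return v * (2.0 ** stops)

     def brightness_sdr_only(v, offset):
         """Toy model of an adjustment that only touches the 0-1 (SDR)
         range: HDR values above 1.0 pass through untouched."""
         if v > 1.0:
             return v
         return max(0.0, min(1.0, v + offset))

     pushed = exposure_push(0.8, 2)            # 0.8 -> 3.2: now an HDR value
     print(pushed)
     print(brightness_sdr_only(pushed, -0.3))  # untouched: it's above 1.0
     print(brightness_sdr_only(0.5, -0.3))     # SDR value darkened to ~0.2
     ```

     Pushing exposure lifts some pixels past 1.0 into HDR territory, then pulling brightness down only affects what remains in the SDR range, which is what stretches the overall dynamic range.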
  9. Hi @lysynski, is this when running the macro from the Library panel? Try taking your mouse off the desk/surface and single-clicking on a macro entry. There's a really fine tolerance on macOS for distinguishing between a single click and a drag, and it's probably this that is causing the issue. The developers are aware of it, so hopefully it can be addressed.
  10. Hi @MxHeppa, the selection tools do have anti-aliasing options available on the context toolbar—see "Soft Edges" for the Selection Brush Tool, and "Antialias" for the marquee and freehand/polygonal/magnetic selection tools. Hope that helps!
  11. Hi, yes, it's intentional behaviour, as we don't know explicitly which colour space to convert from. At the time of implementing EXR support, we were aware that the format could specify chromaticity values and a white point, but we didn't have any documents that contained them; it was always designed to be used in conjunction with OCIO for explicit colour management. Essentially, Photo doesn't touch the pixel values and just sets the document profile to whatever is specified in Preferences>Colour, which by default is sRGB. This only bounds the pixel values at the final presentation stage, so internally everything is composited with linear unbounded floats. Doing a specific profile conversion via Document>Convert Format / ICC Profile, however, will change the pixel values (which remain unbounded). Ideally, you would have an OCIO configuration set up within Photo, then append the colour space of your EXR to the end of its file name. Since Photo always works internally in scene linear, it will then convert from that colour space to scene linear. Again, though, this is mainly intended to be used with the OCIO device and view transforms to ensure accurate colour rendering. tl;dr: Photo doesn't touch pixel values unless there's a valid OCIO configuration and the EXR filename contains a colour space name (e.g. "filename acescg.exr"); if this is the case, the pixel values will be converted from that colour space to scene linear. Changing the 32-bit colour profile in Preferences>Colour is essentially the same as assigning a colour profile. Hope that helps!
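     The filename-tagging convention mentioned above can be illustrated in a few lines. This is just a sketch of the idea, not Affinity's actual parsing logic; the function name and colour-space list are invented:

     ```python
     def colour_space_from_filename(path, known_spaces):
         """Extract a trailing colour-space token from an EXR filename,
         mimicking the 'filename acescg.exr' tagging convention."""
         stem = path.rsplit('.', 1)[0]        # drop the .exr extension
         token = stem.split(' ')[-1].lower()  # last space-separated word
         return token if token in known_spaces else None

     spaces = {'acescg', 'aces2065-1', 'lin_srgb'}
     print(colour_space_from_filename('render acescg.exr', spaces))  # acescg
     print(colour_space_from_filename('render.exr', spaces))         # None
     ```

     When no recognised token is found (the second case), the pixel values are left alone and the document profile is merely assigned, which matches the behaviour described above.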
  12. Hi @anim, I'll do my best to break down the reasoning for this approach; hope it makes sense. I think this is the key part to explore: EXR files should usually contain scene-linear encoded values. The colour space is arbitrary here, and Photo doesn't touch the colour values. Those linear values are then view-transformed based on your document profile (when using ICC Display Transform) or based on a mandated device and view transform combination (when using OpenColorIO transforms). We don't do anything with this and always bring EXR files in as scene linear.

     OpenColorIO integration is designed to help with colour management. For example, you would set up the configuration you wish to work with, then 'tag' your EXR files by appending a colour space to their filename. If I had an EXR whose primaries were in the ACEScg colour space, I would append "acescg" to the filename; Affinity Photo would then show a toast to let me know it had converted from 'acescg' to scene linear. OCIO isn't really applicable in your case, so that leaves us with the mismatch you are seeing when using the traditional document-to-display profile colour management.

     Sorry if I'm just being slow (it is the morning here 😉), but you've said here: Then later on have said: I'm not sure where ROMM comes into the equation. Why do you want to use ROMM as a default working colour space? It sounds like within iRay you were just working with scene-linear values and previewing with a standard sRGB non-linear device transform. The EXR should not contain values encoded in any particular colour space; I believe this should be evident when you switch to Unmanaged and non-destructively transform the gamma and exposure values. If the EXR contained values encoded in a wider colour space, the result should look very different to the tone-mapped PNG in sRGB, but it doesn't (there are some slight differences, probably due to small variations in the transform curve etc. for the gamma-encoded sRGB values in the PNG).

     I'm checking with development whether it's just an oversight that we don't explicitly 'convert' the linear colour values to the working profile set in Preferences>Colour (and seemingly assign the profile instead); I'll hopefully be able to update you on this. You likely know this already, but it's worth mentioning that the EXR document is imported and converted to a working format of 32-bit unbounded float. All compositing is done in linear space (as it should be), then those linear values are view-transformed based on the document colour profile, which defaults to sRGB. This will 'bound' the linear values during the transform, so you will only see colours inside the range of that profile. You can bring in an EXR document using the default sRGB profile, then convert to a wider profile such as ROMM if you wish. Internally, the linear values are stored as unbounded floats, so you don't lose any information by importing with an sRGB profile, and you can simply choose to display a wider range of values by converting to a wider profile.

     Hope the above kind of helps. I think the takeaway is that you should just use the default sRGB working profile and convert to a wider profile if you want to use it for compositing. Just be aware, however, that if you choose to export back to EXR, those linear values will now be encoded specifically for that linear profile, and other software may not recognise this. If you are simply exporting from Photo to a final gamma-encoded delivery format, this will be fine. For more advanced colour management, such as using Photo in a VFX pipeline, you should definitely look into the OpenColorIO integration...
  13. Hi @udoo, a couple of things to note:

       • For sigma clipping to be effective, you need to provide at least three images, so that the pixel ranges can be evaluated and outlier pixels can be rejected.
       • The default sigma clipping threshold of 3 is usually too conservative for most mono image data; try reducing it to 2 before stacking.
       • Pre-calibrated data does not require additional calibration frames, but you will often find that the calibrated frames (e.g. from iTelescope, Telescope Live) still contain a myriad of hot and inconsistent pixels.

     That's where sigma clipping comes in: as noted above, the default value of 3 will probably leave you with unwanted hot pixels, so try reducing the value and stacking again until you get a good result. When you repeat the stacking process, it will be significantly quicker, since Photo has already registered and aligned the data for stacking. You will really need more than three images for effective outlier pixel rejection; have you got a sufficient amount of captured data for this? Hope that helps, James
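     To show why the threshold matters, here's a minimal single-iteration sigma clip in Python. Real stackers iterate and work per-pixel across registered frames; this just sketches the core idea:

     ```python
     import statistics

     def sigma_clip_mean(values, threshold):
         """Reject values further than `threshold` standard deviations
         from the mean, then average the survivors."""
         mean = statistics.mean(values)
         sd = statistics.pstdev(values)
         kept = [v for v in values if sd == 0 or abs(v - mean) <= threshold * sd]
         return statistics.mean(kept)

     # The same pixel sampled across eight frames; one frame has a hot pixel.
     stack = [0.21, 0.20, 0.22, 0.21, 0.20, 0.22, 0.21, 0.95]
     print(sigma_clip_mean(stack, 2))  # hot pixel rejected: mean ~0.21
     print(sigma_clip_mean(stack, 3))  # default 3 keeps it: mean skewed ~0.30
     ```

     With this data, the hot pixel sits about 2.6 standard deviations from the mean, so a threshold of 2 rejects it while the default of 3 lets it through, which is exactly the "too conservative" behaviour described above. Note too that with fewer than three frames the statistics are meaningless, hence the minimum image count.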
  14. Hey again. I wouldn't go anywhere near OpenColorIO, to be honest; that's more intended for a VFX/rendering pipeline. The OpenColorIO view transform in Affinity Photo is non-destructive anyway, and will not match the view you get from EXR previews in Finder or other macOS software that can interpret the format. I'm afraid there is no elegant solution at the moment!
  15. Hi @julianv, unfortunately I believe your saturation issue does come down to the issue I raised in a previous post: the lack of a decent delivery method for HDR/EDR images with appropriate metadata. The formats are kind of half-there, but OS-level support needs to mature significantly. EXR documents are a way of interchanging lossless information, allowing unbounded floating-point precision for pixel values, which makes them ideal for storing HDR content but not particularly for visualising it. EXR files have no real concept of colour space and are not tagged with colour profiles; the pixel values are usually encoded as linear scene-referred values, as opposed to gamma-corrected values in a particular colour space. This is why colour management solutions like OpenColorIO are typically employed when dealing with EXR files, to ensure accurate conversions between the linear values and the intended output space. I'm not sure what macOS does with its EXR integration; I assume it just converts from the linear values to sRGB in order to preview on-screen, which may explain the loss in colour intensity, because you are working with the Display P3 profile within Affinity Photo. As a quick experiment, try Document>Flatten, then Document>Convert Format / ICC Profile, choosing sRGB (stay in RGB/32 for the actual format). Now export your EXR and see if the result in Finder/Preview looks more consistent with what you see in Affinity Photo. If so, it is unfortunately a limitation that you cannot mandate a colour profile for your exported EXR document; for now, the only real workaround is converting to sRGB before exporting. Hope that helps!