
James Ritson (Moderator)
Everything posted by James Ritson

  1. This is inadvertently having the wrong effect: I believe the way Affinity colour manages on Windows means that the profile is applied during startup, but cannot be changed or refreshed during app use. What's happening is that you're disabling colour management entirely, launching Affinity with it disabled, and then when you activate the ICC profile that change doesn't refresh within Affinity (until you restart). So the reason Affinity now looks the same as your other software is that nothing is being colour managed. In your first post, the screenshot comparison with the slight difference in saturation might be because Affinity is colour managing your image correctly.

     I had a quick search for ImageGlass and colour management and came up with this: https://github.com/d2phap/ImageGlass/issues/43 There seems to be some confusion there between image colour profiles and display colour profiles. The software author is talking about embedded/referenced image profiles and being able to change them, but the key issue here is managing between the image profile and the display profile. This is what the Affinity apps do: they take colour values from the document or image profile (e.g. sRGB, Adobe RGB) and translate them based on the active display profile so that they display correctly when viewed on the monitor. From reading the GitHub issue above, it looks like the author implemented the ability to change the image profile, but hasn't implemented the actual translation from image to display profile. I therefore wouldn't expect ImageGlass to be fully colour managed (only based on that observation though, so please don't just take my word for it).

     As far as I'm aware, the Windows desktop composition has nothing to do with the document view in the Affinity apps, so you wouldn't have a situation where colour management is inadvertently applied twice. It may be worth doing a quick test with an image that uses a wide colour space.
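     To picture the document-to-display translation described above, here is a toy Python sketch. The transforms are illustrative stand-ins, not real ICC maths or Affinity's actual code; the "2x" scaling is just a made-up proxy for a gamut difference.

```python
# Toy sketch of document-to-display colour management. The numbers are
# illustrative stand-ins, not real ICC transforms.
def doc_to_connection(rgb):
    # stand-in for "document profile -> device-independent values"
    return [c * 2.0 for c in rgb]

def connection_to_display(vals):
    # stand-in for "device-independent values -> this monitor's values",
    # clipping anything the display can't reproduce
    return [min(c, 1.0) for c in vals]

def managed(rgb):
    # what a colour-managed app does: translate document values for the display
    return connection_to_display(doc_to_connection(rgb))

def unmanaged(rgb):
    # what a non-colour-managed viewer does: raw values straight to the screen
    return rgb

print(managed([0.3, 0.6, 0.2]))    # [0.6, 1.0, 0.4] - translated (and clipped)
print(unmanaged([0.3, 0.6, 0.2]))  # [0.3, 0.6, 0.2] - untranslated, looks wrong
```

     The point of the sketch is the composition: an unmanaged viewer skips both steps, so the same stored values produce different colours on different displays.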
     I've attached a TIFF of one of my images that was edited with a ROMM RGB document profile (compressed into a ZIP to prevent the forum software from mangling it!). If you open this TIFF in Affinity and ImageGlass (plus any other software you use), do you notice a big difference in rendering? Affinity should take that ROMM RGB profile and translate the colour values based on the custom display profile you have created. Software that isn't colour managed will simply send those colour values to the screen with no translation, so you should notice a big difference between the colour-managed and non-colour-managed results with this example.

     I'm not really sure about this one: I use DisplayCAL on both macOS and Windows and wouldn't consider anything else. It sounds like you should simply calibrate and profile with DisplayCAL, use its own calibration loader, and then assume that what you see in Affinity is correct. You can experiment with other apps as well, but do check that they perform document-to-display (or image-to-display) colour management, and don't just offer an option to override the image profile being used.

     A useful diagnostic "hack" within Affinity Photo is to go to Document > Assign ICC Profile (not Convert) and assign your display profile to the document; for example, mine might be UP3216Q #1 2022-09-19 11-11 D6500 2.2 F-S 1xCurve+MTX. This effectively bypasses colour management and shows you what the image would look like if its colour values weren't being translated. If your document is only in sRGB, you might notice a very minimal change, if anything at all: perhaps a small shift in saturation, as with your first landscape image example. If it's in a wider space such as Adobe RGB or ROMM RGB, however, you should see a more noticeable difference.

     On that note, a general rule to observe: always use device or standardised profiles for your document (sRGB, Adobe RGB etc.), never display profiles. Display profiles should only be used by the OS and software to colour manage between the image/document and the display. Hope the above helps in some way!

     JR

     ROMM RGB_6030007 8-bit.tiff.zip
  2. Hi @Dan_Valentin, I know which ones you're referring to. They still exist but are unlisted on YouTube, and can be found on the V1 tutorials playlist: https://youtube.com/playlist?list=PLjZ7Y0kROWitoJtnw0pdvjPmS8mYGvrBR They're not organised, however, so it could take some time to sift through. I do still have them linked on my tutorials page: https://jamesritson.co.uk/tutorials.html Please be aware, however, that they're a little outdated: the stacking process with bad pixel map generation is needlessly complex, and the default stacking settings have changed for V2, which can make this part unnecessary for the majority of data. You are better off stacking with sigma clipping and lowering the default clipping value of 2 before trying to manually alter the bad pixel map. Hope that helps!
  3. Hi Kulh, as DM1 mentioned above, this is not a loss of resolution. You will find that most RAW files contain a handful of extra pixels around the edges, but these are cropped away as per the resolution mandated in the metadata. Photo ignores this, however, and always processes the full width and height of the image, which is why you end up with a marginally higher resolution than you would expect. The aspect ratio is still 3:2. A good example of this is Olympus cameras: e.g. the E-M1 Mk III's actual RAW resolution is 5240x3912, but the metadata instructs software to crop it to 5184x3888. It doesn't sound like much, but the extra resolution is actually quite welcome when dealing with wider-angle lenses. If Photo applies automatic lens corrections, you can go into the Lens tab and bring Scale down: you'll often find there is pixel data being pushed outside of the crop bounds that you can bring back, and this is made partially possible by Photo not cropping those edge pixels. Hope that helps!
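     Quick arithmetic on those Olympus figures (Python, just to quantify the difference; the pixel counts come from the post above):

```python
raw_w, raw_h   = 5240, 3912   # E-M1 Mk III full RAW readout
crop_w, crop_h = 5184, 3888   # resolution mandated by the metadata

# Pixels that Photo keeps rather than discards
extra_pixels = raw_w * raw_h - crop_w * crop_h
print(extra_pixels)           # 343488

# The cropped resolution is exactly the 4:3 Micro Four Thirds ratio
print(crop_w / crop_h)        # 1.333...
```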
  4. And one more, again for architects: Archviz Composition Workflow.
  5. I've added Elevation Rendering Workflow to the list—this one focuses on a particular architecture workflow but may be useful for anyone wanting to see more examples of the flood select tool, masking and cloning layers...
  6. Another one for iPad (from Katy, our iPad product expert): Pencil Tool
  7. Happy New Year all! I've just made a quick update to the Command Controller iPad video—it now reflects the change made in 2.0.3 where the toggle has been moved to the Document menu.
  8. Hope everyone had a lovely Christmas! I wanted to sneak in another video before the new year, so here you go: High Dynamic Range Workflows HDR/EDR technology has actually been present in Affinity Photo for a few years now (since around 1.7 I believe), but with V2 introducing JPEG-XL export I thought it was about time to do a proper video on high dynamic range editing. This video was shot in HDR and mastered to 1600 nits of peak brightness, so if you have a display capable of that brightness (for example, the M1 Pro/Max MacBook displays), you're in for a visual treat! For those viewing on SDR displays, the content will be tone mapped and will of course lose a bit of its impact...
  9. The picture going black will be because:

     • Your original EXR document contains pixel values greater than 1
     • Unless you merge down the Exposure adjustment before converting to RGB/16 or RGB/8, you will then lose (clip) those >1 values

     This is because the Exposure adjustment is then being applied to the bounded range of pixel values, so it is taking the maximum value (255 in 8-bit, 65535 in 16-bit) and reducing it down. The solution is either to merge the Exposure adjustment before converting, or to simply stay in RGB/32 and use a tone mapping method after the Exposure adjustment (e.g. a Procedural Texture, or a Levels adjustment followed by other adjustments). Hope that helps!
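     The order of operations above can be sketched numerically (Python; the pixel values are made up, but the exposure maths is the standard 2^stops scaling for linear data):

```python
def exposure(v, stops):
    # an exposure change scales linear values by 2^stops
    return v * (2.0 ** stops)

hdr = [0.5, 2.0, 8.0]  # linear 32-bit values; some exceed 1.0

# Merge the Exposure adjustment while still unbounded, then convert (clamp):
merged_first = [min(exposure(v, -3), 1.0) for v in hdr]

# Convert (clamp) first, then apply the Exposure adjustment to bounded data:
converted_first = [exposure(min(v, 1.0), -3) for v in hdr]

print(merged_first)     # [0.0625, 0.25, 1.0]   - highlight detail survives
print(converted_first)  # [0.0625, 0.125, 0.125] - everything >1 crushed dark
```

     In the second case every value above 1.0 was clamped to the same number before the exposure reduction, so the whole image is pushed dark with no highlight separation left.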
  10. Hi @Gary.JONES, the procedure for selecting the best light frames is based on the quality of the 40 best stars found in each sub, where a good star is bright relative to the noise level, round, and of sufficient size (but not too big), with its peak in the middle. The help documentation may need updating, as I don't believe that information is completely accurate...
  11. Hey Claude, this could actually be by design. I haven't used DAZ, but I can relate from using Blender: there's more of a focus on physically correct lighting nowadays, and many of Blender's environment/sky lighting plugins (such as True Sky, Physical Starlight and Atmosphere, etc.) will use incredibly bright values that then require the Exposure to be reduced significantly, usually by 3 to 6 stops. This is fine when you are writing out gamma-corrected bitmap files such as TIFF and JPEG, but when saving to EXR this tone mapping isn't applied to the pixel values; instead, you're getting the unmodified pixel values in linear gamma. This is by design, as software shouldn't really be writing any kind of gamma-encoded values to EXR. If you colour pick one of the white values when you first import your EXR, I imagine the value would be greater than 1. What are your tone mapping/view settings in DAZ? It may be that you're using some kind of lighting that requires the Exposure value to be reduced, and you would then have to match that in Affinity Photo with an Exposure adjustment layer. Your solution is sound in practice, but I would be wary of clipped or 'harsh' highlight details, because you may also need to apply some kind of tone mapping to give highlights a roll-off effect. This should be done in 32-bit before merging/flattening and converting to 16-bit or 8-bit, which are bounded, gamma-corrected formats. This video might be of help: Hope the above helps!
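     To illustrate the highlight roll-off idea, here is a generic Reinhard-style operator in Python. This is a textbook global tone map used purely as an illustration, not the specific method Affinity Photo or the video uses:

```python
def hard_clip(v):
    # naive conversion: everything above 1.0 is lost to flat white
    return min(v, 1.0)

def reinhard(v):
    # simple global tone map: approaches 1.0 asymptotically, so bright
    # values keep some separation instead of flattening out
    return v / (1.0 + v)

for v in (0.5, 2.0, 8.0, 32.0):
    print(v, hard_clip(v), round(reinhard(v), 3))
```

     With the hard clip, 2.0, 8.0 and 32.0 all become identical white; with the roll-off they remain distinct, which is what preserves 'harsh' highlight detail.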
  12. ... and hot on its heels, a general tutorial for Pattern Layers as well: Pattern Layers
  13. One for the architects out there! Pattern Layers for Plans & Diagrams
  14. Update (07/12/22):
      • Artistic & Frame Text Tools
      • iPad: Knife & Scissor Tools
  15. Some more iPad tutorials for you all:
      • Symmetry and Mirroring (New: 01/12/22)
      • Pen & Node Tools (New: 05/12/22)
  16. I think it helps to simplify things as much as possible. In this case, clipped tones are essentially colour values that do not fit within the available output colour space (such as the default sRGB profile), usually because they are too intense. There are a couple of solutions to this: you could change the output profile to something wider such as ROMM RGB (notice how the histogram shifts when doing so), or you can use a combination of exposure, brightness, black point, saturation/vibrance, shadows and highlights to "compress" the tones, making them fit within the available range of colour values.

      The first solution, using a wider colour profile, does introduce potential pitfalls with colour management (such as the infamous colour shift when viewing images with a wider colour space in an unmanaged view). Most web browsers are colour managed nowadays, but it's still often recommended to convert to sRGB during export to ensure maximum compatibility.

      The second solution usually means you end up compressing the tones more, typically when using the highlights slider. This is fine, but just keep an eye on the actual image rather than the histogram, which leads me onto the next point...

      Outside of some edge-case workflows (such as high-end VFX), the whole concept of clipped tones lends itself to more of a creative decision. You might have some intense red and blue colour values, for example, that cannot be represented in sRGB, and so will simply be clipped to the maximum channel value. This isn't necessarily the end of the world! You can most likely craft a perfectly good image within the constraints of sRGB, as it does tend to represent the majority of colour intensity values seen in everyday life with reflected light. I wouldn't spend too much time worrying about it, to be honest. Does the image look good on screen in sRGB? If not, and you're happy to involve wider colour spaces in your workflow, try a wider profile such as ROMM RGB.

      But don't get bogged down with it too much! The time spent worrying about whether some clipped tones are compromising image quality is arguably better spent learning useful post-processing techniques for the kind of imagery most likely to have intense colours (e.g. low light, urban scenes with artificial lighting). HSL, Selective Colour, and brush work on pixel layers with blend modes are just a few ideas, but there are many more. Hope the above is helpful!
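      One way to picture "compressing the tones to fit" is a soft-knee curve: linear below a threshold, then squeezing everything above it into the remaining headroom. This is a toy Python illustration with an arbitrary knee value, not what any particular Affinity slider actually does:

```python
import math

def compress_highlights(v, knee=0.8):
    # linear below the knee; above it, smoothly squeeze values into 0..1
    if v <= knee:
        return v
    return knee + (1.0 - knee) * (1.0 - math.exp(-(v - knee) / (1.0 - knee)))

for v in (0.5, 0.9, 1.5, 3.0):
    print(v, round(compress_highlights(v), 4))
```

     Midtones pass through untouched, while out-of-range values (1.5, 3.0) are brought back under 1.0 with their ordering preserved, rather than all clipping to the same maximum.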