James Ritson

Everything posted by James Ritson

  1. Hopefully this can clarify the issue and also clear up the concern that Photo is discarding or throwing away anything outside of sRGB.

RAW files opened in Develop go through an image processing pipeline, during which the colour information is extracted and processed. The colour space used for these operations is ROMM RGB, because it provides a suitably large gamut and allows colour values to be saturated and manipulated without clipping to white. This choice of colour space was introduced in version 1.5, where it brought a marked improvement to RAW files with intense colour values (e.g. artificial lighting). However, the actual document profile is sRGB, which means the final colour values sent to the screen are restricted to sRGB. Is this deficient? Yes, and there have been discussions about how to tackle it without risking further complication for people who don't use wide colour profiles.

There is a silver lining, though. RAW files are developed in an unbounded (float) colour space, which means the values that fall outside of sRGB are not clipped or discarded. If you then set your output profile to a larger colour space like ROMM RGB, these out-of-range values can be accommodated by that space's wider gamut. Essentially, you can avoid clipping values outside of sRGB when clicking Develop, and you can get them back once you're in the Photo Persona: the issue is simply that you can't see these values within the Develop Persona.

I've experimented with one of my photographs of some intense lighting to back this up, and have attached it to this post for people to experiment with. I've also compared the results against Photoshop CC 2019 (where you can set the colour space and it will actually affect the view transform) and, minor processing differences such as sharpness and lens distortion aside, have been able to match the intensity of colours. For Photoshop I also used ROMM RGB and increased saturation directly in the Camera Raw module.
Here's the RAW file for you to try out: _1190462.RW2

Steps for this experiment:

  • Enable Shadows/Highlights and drag the Highlights slider to -100%. Avoid any colour or saturation adjustments; add other adjustments to taste (e.g. noise reduction).
  • Enable the Profiles option and set the output profile to ROMM RGB.
  • Click Develop. Once in the Photo Persona, add an HSL adjustment and increase Saturation all the way. You'll be able to dramatically saturate the image without clipping the colour values.

If you close and re-open the RAW file and try to increase the saturation within Develop, you'll notice that the colour values are restricted to sRGB. However, even with values at the limit of sRGB, you can still set the output profile to ROMM RGB and then increase them further once in the Photo Persona.

And below are two images, one still in ROMM RGB, the other converted to sRGB. I'm not sure how they will display on the forum (and whether the forum software will process and convert the colour profile, hopefully not!), but feel free to download both and view them in a colour-managed application or image viewer. If your display is capable of reproducing wide colour gamuts, you should see a noticeable difference between the two.

[Edit] OK, that didn't work: the forum software converts to sRGB and ruins the comparison. Here's a Dropbox link to the JPEGs and RAW file where you can download the original files: https://www.dropbox.com/sh/aof74w94f6lm3d2/AABXE2OJMfk__kjA_jb6vwmia?dl=0

Hope that helps! James
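To illustrate why the float pipeline matters, here's a minimal numeric sketch (hypothetical values, not Photo's actual code): components outside the sRGB range survive as long as the data stays in float, and are only lost at the point where values are clamped to a bounded range.

```python
import numpy as np

# Hypothetical linear RGB values relative to sRGB primaries; anything
# outside [0, 1] is out of the sRGB gamut.
pixels = np.array([[0.2, 1.4, -0.1],   # intensely saturated, outside sRGB
                   [0.9, 0.5, 0.3]])   # ordinary in-gamut pixel

# In an unbounded float pipeline nothing is discarded: the out-of-range
# components are carried through every operation untouched.
float_pipeline = pixels * 1.0
assert float_pipeline.max() > 1.0 and float_pipeline.min() < 0.0

# Clipping only happens when the values are forced into a bounded range,
# e.g. on conversion to an 8/16-bit document in a small colour space.
clipped = np.clip(float_pipeline, 0.0, 1.0)
print(clipped[0])  # the saturated pixel is now pinned to the gamut edge
```

Developing to a wider output profile like ROMM RGB simply means the bounded range covers more of those float values before any clamping occurs.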
  2. Live filters, particularly those related to sharpening and noise reduction, will generally need to be previewed at 100% to see their actual effect on the "final" image. This is because Photo uses mipmaps (lower resolution versions of the rendered document view) at lower zoom levels to improve performance. The result of convolution filters like unsharp mask, clarity and the current implementation of noise reduction will render differently when applied to a mipmapped version of the document view, as it has fewer pixels.

I've not looked at recent 2018 versions of the Adobe apps, but I'm aware that Camera Raw/Lightroom may still have a caveat on the Details panel advising users to preview sharpening and noise reduction results at 100% or larger, so it is likely those apps use a similar mipmap implementation to improve performance when zoomed out.

There's been some discussion about how to tackle this, but with 1.7 the majority of the filters have been rewritten to support hardware acceleration, and noise reduction in particular now has a new algorithm which is hugely more effective. It's subject to testing, but we believe the rewriting of the filters may negate, or at least reduce, the difference between rendering at different zoom levels.

Also, as you're into astrophotography, I would personally recommend experimenting with the 1.7 public beta which is available now:

Mac: https://forum.affinity.serif.com/index.php?/forum/19-photo-beta-on-mac/
Windows: https://forum.affinity.serif.com/index.php?/forum/34-photo-beta-on-windows/

See the stickied thread at the top. I wouldn't normally advocate using a public beta in anger, but the RAW development and noise reduction are vastly improved and you will probably notice a huge difference with astrophotography. I've been editing wide field astro shots at ISO 6400/12800, and the new demosaicing, noise reduction and rewritten clarity filter are a huge benefit to editing my images.
With 1.6, I would previously pre-process the RAW files in other software such as Sony Imaging Edge, but 1.7 is so vastly improved that I'll happily do all the editing exclusively in Photo now. Hope that helps!
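The mipmap effect described above can be demonstrated with a generic unsharp mask (a sketch, not Photo's implementation): sharpening a half-resolution copy of an image does not produce the same pixels as sharpening at full resolution and then scaling down.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

# Generic unsharp mask: original + amount * (original - blurred).
def unsharp(im, radius=2.0, amount=1.0):
    return im + amount * (im - gaussian_filter(im, radius))

rng = np.random.default_rng(0)
img = gaussian_filter(rng.random((64, 64)), 1.0)  # smooth synthetic image

full_then_down = zoom(unsharp(img), 0.5)   # render at 100%, then zoom out
mipmap_preview = unsharp(zoom(img, 0.5))   # filter the half-res "mipmap"

# The two renders disagree, which is why a zoomed-out preview of a
# convolution filter can't be trusted for judging the final result.
diff = np.abs(full_then_down - mipmap_preview).max()
print(f"max difference between renders: {diff:.4f}")
```

Convolution kernels see a fixed pixel radius, so halving the resolution effectively doubles the filter's spatial reach relative to image content; the discrepancy is inherent, not a bug in any one app.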
  3. Hi RJH, it looks like you've got Metal compute acceleration enabled. In 1.6 this is in its infancy and may reduce performance (especially on your integrated Iris graphics), so it's worth disabling it and then restarting Photo to see if performance improves. If you're willing, you could also download and try out the 1.7 public beta, which has much wider Metal compute support; even with the Software renderer, many operations are quicker. Let us know how you get on!
  4. 1.6's Metal compute support is only for a limited set of integrated GPUs, typically found on the MacBook range (2016 and newer). There is currently no support for eGPUs. There is no need to disable the discrete GPU: Photo can still use it for presentation to screen whilst using the integrated GPU for compute.

1.7, however, which is currently in beta, does have extensive GPU compute support and is certainly at the point where you would see improvements in several areas. You're welcome to try the public beta here: https://forum.affinity.serif.com/index.php?/forum/19-photo-beta-on-mac/ (download link is in the latest stickied thread). It will make good use of multiple GPUs in many cases and will scale to however many devices are available to the system. Hope that helps!
  5. To clarify, the general Develop process is done in sRGB float; this includes the initial conversion from the camera's colour matrix to a designated colour space. However, the colour data is processed in ROMM RGB, which produces enormous benefits for some types of imagery, particularly low light images with lots of artificial lighting sources. Despite this, the document is in sRGB, and so this is the colour profile used when converting to the display profile. As the pipeline is in float, this does mean that you can convert to a wider colour profile on output and avoid clipping to sRGB, which is the recommended course of action for now.

Do you mean the final developed image? This isn't my experience: choosing an output profile of ROMM RGB and then using adjustments and tools within the Photo Persona allows me to push colours right up to the limit of P3.
  6. Hi Beau, by the time you start editing your image in the Develop persona, it's already been demosaiced. Anything done in Develop offers two technical advantages:

  • The development is done in 32-bit float, meaning pixel values aren't clipped until you click Develop and move to the Photo persona.
  • It uses a wide colour space for development called ROMM RGB (essentially ProPhoto), meaning you can intensify colours without clipping them; very useful for imagery with lots of artificial lighting like night time/low light shots.

Both of these can be carried over to the Photo persona, however. 32-bit processing can be specified in the Develop assistant (the little suit icon on the top toolbar), and you can also click the Profile option and change the final colour space from sRGB to ROMM RGB. Also, in Preferences>Colour, you can set the default colour profile to ROMM RGB so that processed images will always default to that colour space. Be careful with this if you bring in images that aren't tagged with a colour space (e.g. screenshots), as they will automatically be assigned the default profile.

Honestly, I wouldn't recommend touching 32-bit unless you're doing something that demands precision above what 16-bit offers (e.g. close-ups of clouds or skies, HDR, subjects with very fine tonal gradations), as it brings other issues into your typical image editing workflow. For 99% of imagery, 16-bit is more than sufficient. Editing in a wider colour profile I would recommend though, as you get benefits even if your final output is going to be sRGB. Check out the video I did for more info on this.

Just as an insight, my typical workflow involves opening a RAW file, removing the tone curve (you can do this through the assistant menu), then using basic adjustments to create a flat image that makes the most of the available dynamic range. I'll then develop that so it becomes a 16-bit image with a ROMM RGB colour profile and do most of the editing in the main Photo persona.
To that effect, I only use Develop to create a flat image, and sometimes add initial noise reduction depending on the image. Hope that helps!
  7. Hi, Photo doesn't use the preview JPEG in any capacity, so that shouldn't be the issue. When you say 'a minute later', do you mean when you click Develop? If so, have you somehow changed the default colour profile (through Preferences>Colour) to use your soft proofing profile instead? This is inadvisable; I'd recommend just sticking to sRGB or a wider profile like Adobe RGB/ROMM RGB. You soft proof via an adjustment layer which you then disable before sending to print; it's a different method, but the results are the same.

By all means, you should be able to contact Apple for a refund. If you still have the app installed, however, it would be really useful to see a couple of screenshots and get an idea of what's happening. Would you be able to take a screenshot of your main workspace with the document's colour profile listed in the top right? And additionally maybe a shot of the Preferences>Colour dialog? Thanks, James
  8. Hi, that's odd, have you got an example? What renderer do you use? I've rendered out plenty of EXRs and Photo correctly imports the channel data. Have you got alpha association enabled at all? (Instead of importing the alpha channel as a separate layer, it will merge it into the RGB layer's alpha channel) Hope to hear back from you!
  9. Hi, sorry, I was referring to white balancing in-camera, not using the white balance dropper in post. This way, not only do you get a more balanced exposure across the three colour channels, but you also don't have to use workarounds to correct the white balance later on. However, the good news is that I've just tried this: you can indeed use the white balance picker in Photo's Develop Persona to completely correct the colour cast. Here's a quick example that I deliberately shot with auto white balance using a 720nm filter. The image is dull as I don't apply the default tone curve when developing RAW files, but hopefully the above screenshots will make sense.
  10. Currently you can't do dramatic white balance adjustments using the slider in the Develop Persona—I've actually been using exiftool to help someone doing an infrared project as Photo will read the initial white balance even if it's extreme. However, have you tried custom white balancing off foliage or a white card? I'd recommend doing this because otherwise you'll expose for the red channel at the expense of the other channels (meaning they'll be noisier when you finally balance the image in post). 720nm and higher you should definitely white balance as it will neutralise the red cast and prevent the blue/green channels from being underexposed. Lower than 720nm—well, you're introducing red false colour, but you should still white balance to ensure foliage and other sources that emit IR light are correctly balanced. [Edit] Forgot to clarify—the whole point of me mentioning white balancing is that you wouldn't have to perform the drastic WB shift in post.
  11. Hello all, the 1.6.9 update made quite a few interface and gesture changes that rendered the tutorial videos out of date—thus, you will now find a new set of structured tutorials here: https://affinity.serif.com/tutorials/photo/ipad The first post of this thread has been updated, but the above link will take you straight to the new tutorials which feature localised subtitles for all 9 languages the app is supported in. Hope you find them useful!
  12. Hi Tapatio, are Bilateral and Median blur disabled as well? If so, you may be editing in 32-bit; these three filters won't function correctly and so were disabled for 32-bit work. You would need to convert your document to 16-bit or 8-bit for them to be accessible. If you're developing RAW files, you may have switched over to 32-bit development using the assistant. This video shows how to access it and set it back to 16-bit if required. Alternatively, if that's not your issue, is there any way you could attach a screen grab of what you're seeing and how the filter is disabled? Thanks in advance!
  13. Hi symisz, unfortunately (for now at least) the navigator view isn't colour managed, hence the yellow cast. This would make sense, as it looks like you have a screen grab open with an embedded display colour profile (Q2270; a BenQ monitor, I think?). You could try converting the document's colour profile to sRGB (Document>Convert ICC Profile) to see if the colour cast goes away. Generally, however, we are aware that the navigator view isn't colour managed like the main document view, and it's hopefully something that will be fixed in a future version. Hope that helps!
  14. In this case, the smaller file (2MB) contains the compressed JPEG; I'm assuming you just opened a JPEG and saved it as an .afphoto file? After tone mapping, the pixel layer has to be converted to uncompressed pixel data, which accounts for the new file size of 41MB. Live filters, however, shouldn't have much file size overhead; do you know for certain that they are increasing the file size, or are you using them in conjunction with other filters/pixel editing work? Are you able to explain in more detail? I use an external drive with Dropbox for all my .afphoto documents and haven't had any issues, and before Dropbox I was using Google Drive without any issues either...
  15. @owenr there is absolutely no need to be snarky. It's counterproductive to any discussion.

@PhotonFlux live filters are indeed very taxing. In your screen grab you're using several stacked on top of one another, including convolutions and distortions (plus something nested into your main pixel layer, not sure what that is?). This is why live filter layers are set to child layer by default; otherwise the temptation is to stack them as top level layers. This doesn't help much in your case, but if you were working on a multiple layer document you would typically try to child-layer live filters into just the pixel layers you were trying to affect.

The Affinity apps use render caching in idle time, but this is invalidated any time the zoom level is changed. What you should be able to do in theory is set a particular zoom level, wait a couple of seconds (or longer depending on how many live filters you have active) and the render cache will kick in. If you then start to use tools or adjustments on a new top level layer, performance should be improved. In honesty, this works better for multiple pixel layers where you're using masking, less so for live filters that have to redraw on top of one another.

The live filters are also calculated entirely in software. However, as you're running a MacBook with a recent Intel iGPU, you may want to dabble with enabling Metal Compute in Preferences>Performance. This enables hardware acceleration for many of the filter effects. There are still some kinks to be worked out (I believe you might see some improvements to hardware acceleration in 1.7) as it's a port from the iPad implementation, but it may speed things up if you're consistently using multiple stacked live filters. It also significantly decreases export times. There are some caveats (Unsharp Mask behaves differently and you may need to tweak values on existing documents), but I would say it's worth a try at least. Hope that helps!
  16. If you're not playing games or watching video, then I definitely wouldn't recommend a TV just for production work. As far as colour accuracy goes, bear in mind that TVs are generally designed to enhance content, not portray it accurately. If you were determined on using a TV, you would absolutely need to profile it using a colourimeter like the i1Display Pro and also disable features like contrast enhancement, "extended dynamic range", local dimming, etc. The downside is that these features are often used to reach claimed specifications like the high contrast ratio and peak brightness. By the time you have disabled all the TV's picture-based features, you may as well pay the same amount for a decent monitor. There are plenty of 4K 27" options (and some 32") that will cover 99/100% of sRGB and Adobe RGB, plus 60-80% of P3 and Rec.2020; more than enough for image editing.

There's also the size: 50" at a desk might look impressive at first, but after the first day will seem a bit ridiculous, as you'll basically have to move your head just to pan across areas of the screen. I work on a Dell 32" and that's honestly the maximum I would recommend; I do find myself sometimes moving my head just to look at the right hand side of the screen.

Gabriel also mentioned the pixel density, which is important too. With a typical 21-32" monitor, you'll get denser pixels (especially at 4K), which will give you a better representation of how your images look and allow you to evaluate their sharpness. A 50" isn't going to give you a typical view of how your images actually look. When you think about it, what size do you typically see images at, unless you print them at huge sizes? Definitely not 50"! A large TV may well skew your perception unless you sit far away from it, but that doesn't sound like a great way to do detailed image and design work.
Put it this way: speaking from personal experience, if you only want to do production work in apps like Photo and Designer, I would honestly recommend a decent 24-32" monitor; there are plenty of options available. Profiling them is easy if you have a decent colourimeter, and you would be able to have a conventional desk setup where everything is within easy reach. A big TV just isn't the answer, and if you're looking at a smaller TV (e.g. the 24-32" range), then an actual computer monitor at an equivalent price would likely be just as good, if not better.
  17. What Gabriel said, plus there are other factors such as input lag. Even in Game mode, where extra pre-processing is disabled for quicker signal processing, most TVs have a response time of anywhere between 30-50ms. Monitor response times tend to be much quicker (mine is 8ms, for example), especially if you're using a G-Sync or FreeSync monitor, as they typically boast 1-3ms response times. Using a TV for workflow and productivity tasks could soon become quite tiresome due to the lack of instant feedback and response, especially if you're fast with a mouse and need to zip around the screen.

I understand the temptation, since even mid-range TVs can boast wide colour spaces like P3, Rec.2020 etc; the accuracy and gamut coverage is another matter, however. When I did a lot of video work, I had a TV hooked up to a Blackmagic Intensity card to "monitor" the video output, but that was acceptable since my main editing interface was still on a monitor. Moving your entire workspace to a TV screen, though... you're gonna have a bad time (bonus points if you recognise the meme).

Disclaimer: some people use their TVs as their main computer display without any issues. Your mileage may vary, but there are both technical and productivity-based reasons to prefer an actual monitor.
  18. Hi dho, you shouldn't be setting Affinity Photo's colour profile to your new display profile; that's not how colour management works. The software will automatically colour manage based on the current monitor profile, which should be set to the custom profile created by the Spyder5Pro software (is it i1Profiler?).

The colour profile options in Affinity Photo's Preferences are for the document profile, not your display. Your document profile wants to be a standardised profile like sRGB (the default), Adobe RGB, ProPhoto etc. Photo will then colour manage by converting the colour values using your display profile, so that what you're seeing is accurate. What you're currently doing is converting from the image's original colour profile to your display profile, which may explain why the colours look washed out and incorrect. Try setting Affinity Photo's colour dropdown options back to their defaults (e.g. sRGB) and you should see an improvement.

In other words, you're overcomplicating things! All you need to do is make sure the OS (Windows or macOS) is using your display profile. In Windows, you'd right click the desktop and choose Display Settings, then make sure the Spyder profile is being used in the dropdown. On macOS, you'd go to System Preferences>Displays>Colour and set the display profile there. You shouldn't need to change anything in Affinity Photo; just leave the colour profile options on their default settings (e.g. sRGB). Hope that helps!
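This division of labour (the document keeps a standard profile, and the display profile is applied only on the way to the screen) can be sketched with Pillow's ImageCms module. This is an illustrative sketch, not Affinity code, and "display.icc" is a hypothetical path standing in for whatever profile your calibrator wrote.

```python
from PIL import Image, ImageCms

# The document keeps a standard profile (sRGB here)...
doc = Image.new("RGB", (8, 8), (200, 80, 40))
doc_profile = ImageCms.createProfile("sRGB")

# ...and the *display* profile is applied only for presentation.
# On a real system you would load your calibrated monitor profile:
#   display_profile = ImageCms.getOpenProfile("display.icc")  # hypothetical path
display_profile = ImageCms.createProfile("sRGB")  # stand-in so this runs anywhere

# The colour-managed view: document values converted through the display
# profile. The document's own pixels are never rewritten by this step.
on_screen = ImageCms.profileToProfile(doc, doc_profile, display_profile)
print(on_screen.size, on_screen.mode)
```

Setting the app's document profile to your display profile, by contrast, bakes the monitor correction into the image data itself, which is exactly the washed-out result described above.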
  19. Hi Chris, did you see my reply in your previous thread? (https://forum.affinity.serif.com/index.php?/topic/60677-colour-profile-recognition-and-handling/&tab=comments#comment-314285) It doesn't address your first point (.icm vs .icc)—I can't vouch for this as I haven't tried profiles with an .icm extension. It does however highlight that the profiles available are dependent on the document's current colour model (e.g. RGB or CMYK). I've also pointed out a specific directory unique to Affinity where you can put your custom profiles. Because you haven't replied to the thread I'm not sure if the suggestions have worked for you, or whether you're posting this thread because they haven't? Hope it helps!
  20. Hi Nick, Windows should automatically suggest a good interface scaling based on the screen size (you can get to the option by right clicking on the desktop and choosing Display Settings), but you can change it yourself if you're not happy with the default. It may be set to 175% for example, whereas you may want to change it to 200% which would increase the icon/text size but leave you with less "real estate" for your images. The Affinity apps scale dynamically with the global Windows display scale, so you can change it and see the results straight away. Hope that helps!
  21. Unfortunately, this statement is correct within the context of the Windows builds. @dkrez, ordinarily you could double-click the active colour to bring up the Colour Chooser dialog; this has additional 32-bit options where you can set out-of-range values using input boxes and an Intensity slider. Currently, however, these don't work as expected on Windows, as the values will be clamped back to 1. The same issue applies when colour picking out-of-range values (as Gabriel has mentioned above). Apologies for this; it's a UI issue and the developers have been made aware. Additionally, saving colour values as swatches will also clamp (they're stored in LAB); the developers are aware of this as well, and there's a desire to improve it.
  22. Hi Chris, ICC profiles can be custom-installed to Photo if you wish; a typical directory structure for this is C:/Program Files/Affinity/Affinity Photo/Resources/icc. It will also, however, pick up any system-installed profiles. It may be worth copying your profiles to this directory and seeing if they are detected.

You've said you have over 200 profiles, but about 120 of them are RGB, so are the others CMYK or a different colour model? Possibly the reason you're not seeing them all through the Assign/Convert dialogs is because Photo only lists profiles for the document's colour model. So you should see all of your scanner device profiles that are RGB in an RGB/8 or RGB/16 document. If you want to use profiles that are CMYK, you'll need to either convert your document to CMYK and assign the relevant CMYK profile, or use a Soft Proof adjustment to stay in RGB but assign a profile that is CMYK. I would, however, ask whether the image files from the scanner (I'm assuming TIFF) are already tagged as CMYK if the scanner profile uses a CMYK model.

If you add a Soft Proof adjustment layer to an open image, do you see all 200-odd profiles listed there? Additionally, if you were able to attach a sample scanner profile that doesn't appear in the profiles list, we may be able to examine it and understand what the issue is (this would really help).
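As an aside, you can check a profile's colour model yourself: the ICC header stores the data colour space signature at byte offset 16, which is the field an application filtering by document colour model would look at. Here's a minimal Python sketch (the file names are made up; with real .icc/.icm files you'd read the first 128 bytes of each):

```python
# Sketch of filtering ICC profiles by colour model, the way an RGB document
# would only list RGB profiles. Bytes 16-19 of the 128-byte ICC header hold
# the data colour space signature: b"RGB ", b"CMYK", b"GRAY", etc.
def icc_colour_space(header: bytes) -> str:
    return header[16:20].decode("ascii").strip()

# Fake 128-byte headers standing in for real profile files on disk.
rgb_header = bytearray(128); rgb_header[16:20] = b"RGB "
cmyk_header = bytearray(128); cmyk_header[16:20] = b"CMYK"
profiles = {"scanner-rgb.icc": bytes(rgb_header), "press.icm": bytes(cmyk_header)}

rgb_only = [name for name, h in profiles.items() if icc_colour_space(h) == "RGB"]
print(rgb_only)  # only the RGB profile would appear in an RGB document
```

If a scanner profile doesn't show up in an RGB document, its header signature is the first thing worth inspecting.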
  23. Hi, it's more of a preference based on how I prefer to work, but it also plays to Photo's strengths: the real flexibility comes from working with a flat "base" image using a wide colour profile and making use of all the non-destructive features like adjustment layers, live filter layers, pixel layers with blend modes/ranges for painting and retouching, etc. It's a case of experimenting and finding out what works best for you. Tweaking sliders in the Develop persona is a more straightforward approach, but I would only ever use it as a starting point. Others, however, will do 90% of their work in this persona and then maybe do a couple of edits in the Photo persona.

As far as functionality goes, the Shadows/Highlights sliders in Develop are similar to the filter version of Shadows/Highlights in the Photo persona; the adjustment version simply compresses the tonal ranges and is used more for tonal effect than for recovery. Clarity also behaves differently (I prefer its Develop version), and the noise reduction is also slightly different: in Develop, it works off a base scale that is calculated individually for each image when you load it. Saturation is also more conservative in Develop, so if you want to seriously saturate colours then you'll want to do that in the Photo persona. Think that's about it though! Hope that helps.
  24. Hmm, that's interesting: both Photo and exiftool can read the metadata...

@icetype, you can use any software that will modify EXIF data. I've used exiftool (https://www.sno.phy.queensu.ca/~phil/exiftool/install.html), but you could apply these instructions to other software. You'll want to change the tag called "PhotometricInterpretation" to "BlackIsZero" or "1". To do this in exiftool, for example, you would type:

exiftool -PhotometricInterpretation="BlackIsZero" "path/to/file.tif"

Replace "path/to/file.tif" with your TIFF file (in both Windows and macOS, I believe you can just click-drag the file onto the command prompt/terminal window). Now load the file into Affinity Photo. It will assign a greyscale D50 profile and the bit depth should still be 16bpc. Now just go to Layer>Invert (since we reversed the photometric model) to see your image as intended.

Note that if you're using other software to modify the EXIF tags, it might use friendly names, so you'd just look for Photometric Interpretation with a space rather than as one word. Hope that helps!
  25. There is a workaround by changing an EXIF tag, I posted an explanation above—it maintains the 16-bit format and correctly assigns a greyscale colour profile, so you don't lose any precision. If you can verify that my attached TIFF file opens as a 16-bit greyscale document (so it actually works for you) I can detail the instructions. This would enable you to use Affinity Photo in the interim before the issue is fixed.