James Ritson
  1. Strange Metal Problem

    1.6's Metal compute support is only for a limited set of integrated GPUs, typically found on the MacBook range (2016 and newer). There is currently no support for eGPUs. There is no need to disable the discrete GPU—Photo can still use this for presentation to screen whilst using the integrated GPU for compute. 1.7, however, which is currently in beta, does have extensive GPU compute support and is certainly at the point where you would see improvements in several areas. You're welcome to try the public beta here: https://forum.affinity.serif.com/index.php?/forum/19-photo-beta-on-mac/ (download link is in the latest stickied thread). It will make good use of multiple GPUs in many cases and will scale to however many devices are available to the system. Hope that helps!
  2. If you want to achieve the same effect as the old Clarity filter, just use a live Unsharp Mask, drag its radius to 100px and set the blend mode to Lighten. Alternatively, create a luminosity mask (CMD-Shift-click on a pixel layer) then add the Unsharp Mask and drag its radius to 100px. That's basically what the old Clarity was: local contrast enhancement using unsharp mask with luminosity blending. The new version is more complex and is far more effective, but if you preferred the old look you should be able to follow the above instructions. Hope that helps!
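For anyone curious what that recipe does under the hood, here's a minimal sketch of large-radius unsharp masking with a Lighten blend in Python/NumPy. The sigma and amount values are illustrative assumptions, not Photo's internals:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def clarity(img, sigma=33.0, amount=0.5):
    """Approximate 'Clarity' as local contrast enhancement:
    a large-radius unsharp mask blended back with Lighten.
    img: 2D float array in [0, 1]."""
    blurred = gaussian_filter(img, sigma=sigma)            # large-radius blur
    sharpened = np.clip(img + amount * (img - blurred), 0.0, 1.0)
    return np.maximum(img, sharpened)                      # Lighten = per-pixel max
```

Because Lighten keeps the brighter of the two pixels, the result can only lift tones relative to the original, which is what gives the old Clarity look its characteristic midtone lift.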
  3. Levels opacity dropdown crash

    Hi, this occurs with most adjustments at the moment; it's related to an HSL dialog fix and should be addressed in the next build. In the meantime, if you wish to keep editing with the current build, just add an HSL adjustment before you add any other adjustments. You don't have to do anything with the HSL adjustment (in fact, you can just delete it)—just adding it is enough to mitigate the crash. Thanks!
  4. To clarify, the general Develop process is done in sRGB float—this includes the initial conversion from the camera's colour matrix to a designated colour space. However, the colour data is processed in ROMM RGB, which produces enormous benefits for some types of imagery, particularly low-light images with lots of artificial lighting sources. Despite this, the document is in sRGB and so this is the colour profile used when converting to the display profile. As the pipeline is in float, this does mean that you can convert to a wider colour profile on output and avoid clipping to sRGB, which is the recommended course of action for now. Do you mean the final developed image? This isn't my experience—choosing an output profile of ROMM RGB and then using adjustments and tools within the Photo Persona allows me to push colours right up to the limit of P3.
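The benefit of a float pipeline can be shown with a toy example (not Photo's actual code): in an integer pipeline, values clip and quantise at every step, so pushing exposure up and then pulling it back down destroys highlight data, whereas float keeps out-of-range values intact until final output:

```python
import numpy as np

def apply_gain_8bit(v, gain):
    # integer pipeline: values clip and quantise at every step
    return np.clip(np.round(v * gain), 0, 255).astype(np.uint8)

x = np.array([200.0])                        # a bright highlight
clipped = apply_gain_8bit(apply_gain_8bit(x, 1.5), 1 / 1.5)
print(clipped)                               # [170] -- highlight detail lost

y = np.clip((x * 1.5) / 1.5, 0, 255)         # float pipeline: clip only at the end
print(y)                                     # [200.] -- value survives intact
```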
  5. Hi Beau, by the time you start editing your image in the Develop persona, it's already been demosaiced. Anything done in Develop offers two technical advantages:

      • The development is done in 32-bit float, meaning pixel values aren't clipped until you click Develop and move to the Photo persona.
      • It uses a wide colour space for development called ROMM RGB (essentially ProPhoto), meaning you can intensify colours without clipping them—very useful for imagery with lots of artificial lighting like night time/low light shots.

     Both of these can be carried over to the Photo persona, however. 32-bit processing can be specified in the Develop assistant (the little suit icon on the top toolbar), and you can also click the Profile option and change the final colour space from sRGB to ROMM RGB. Also, in Preferences>Colour, you can set the default colour profile to ROMM RGB so that processed images will always default to that colour space. Be careful with this if you bring in images that aren't tagged with a colour space (e.g. screenshots), as they will automatically be assigned the default profile.

     Honestly, I wouldn't recommend touching 32-bit unless you're doing something that demands precision above what 16-bit offers (e.g. close-ups of clouds or skies, HDR, subjects with very fine tonal gradations), as it brings other issues into your typical image editing workflow. For 99% of imagery, 16-bit is more than sufficient. Editing in a wider colour profile I would recommend, though, as you get benefits even if your final output is going to be sRGB. Check out the video I did for more info on this:

     Just as an insight, my typical workflow involves opening a RAW file, removing the tone curve (you can do this through the assistant menu), then using basic adjustments to create a flat image that makes the most of the available dynamic range. I'll then develop that so it becomes a 16-bit image with a ROMM RGB colour profile and do most of the editing in the main Photo persona.
To that effect, I only use Develop to create a flat image, and sometimes add initial noise reduction depending on the image. Hope that helps!
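As a rough illustration of why fine tonal gradations (skies, clouds) benefit from higher bit depth, count the distinct quantisation levels available inside a narrow tonal band. The band chosen here is arbitrary:

```python
# distinct quantisation levels inside a narrow tonal band (e.g. a sky gradient)
lo, hi = 0.70, 0.72  # a 2% slice of the tonal range (illustrative)

steps_8bit = int(hi * 255) - int(lo * 255)        # only a handful of levels -> visible banding
steps_16bit = int(hi * 65535) - int(lo * 65535)   # over a thousand levels -> smooth
print(steps_8bit, steps_16bit)                    # 5 1311
```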
  6. Hi, Photo doesn't use the preview JPEG in any capacity, so that shouldn't be the issue. When you say 'a minute later', do you mean when you click Develop? If so, have you somehow changed the default colour profile (through Preferences>Colour) to use your soft proofing profile instead? This is inadvisable; I'd recommend just sticking to sRGB or a wider profile like Adobe RGB/ROMM RGB. You soft proof via an adjustment layer, which you then disable before sending to print—it's a different method but the results are the same. By all means, you should be able to contact Apple for a refund—if you still have the app installed, however, it would be really useful to see a couple of screenshots and get an idea of what's happening. Would you be able to take a screenshot of your main workspace with the document's colour profile listed in the top right? And additionally maybe a shot of the Preferences>Colour dialog? Thanks, James
  7. EXR support

    Hi, that's odd, have you got an example? What renderer do you use? I've rendered out plenty of EXRs and Photo correctly imports the channel data. Have you got alpha association enabled at all? (Instead of importing the alpha channel as a separate layer, it will merge it into the RGB layer's alpha channel) Hope to hear back from you!
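For context, 'alpha association' means the RGB values are stored premultiplied by alpha rather than alongside a separate straight alpha channel. A minimal sketch of the difference, with made-up values:

```python
import numpy as np

rgb = np.array([1.0, 0.5, 0.25])    # straight (unassociated) colour
alpha = 0.5

associated = rgb * alpha             # premultiplied storage: [0.5, 0.25, 0.125]
straight = associated / alpha        # un-premultiply recovers the original colour
print(associated, straight)
```

This is also why semi-transparent pixels look darkened if an application reads associated data as if it were straight alpha.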
  8. Hi, sorry, I was referring to white balancing in-camera, not using the white balance dropper in post—this way, not only do you get a more balanced exposure across the three colour channels, but you also don't have to use workarounds to correct the white balance later on. However, the good news is that I've just tried this—you can indeed use the white balance picker in Photo's Develop Persona to completely correct the colour cast. Here's a quick example that I deliberately shot with auto white balance using a 720nm filter: The image is dull as I don't apply the default tone curve when developing RAW files, but hopefully the above screenshots will make sense.
  9. Currently you can't do dramatic white balance adjustments using the slider in the Develop Persona—I've actually been using exiftool to help someone doing an infrared project as Photo will read the initial white balance even if it's extreme. However, have you tried custom white balancing off foliage or a white card? I'd recommend doing this because otherwise you'll expose for the red channel at the expense of the other channels (meaning they'll be noisier when you finally balance the image in post). 720nm and higher you should definitely white balance as it will neutralise the red cast and prevent the blue/green channels from being underexposed. Lower than 720nm—well, you're introducing red false colour, but you should still white balance to ensure foliage and other sources that emit IR light are correctly balanced. [Edit] Forgot to clarify—the whole point of me mentioning white balancing is that you wouldn't have to perform the drastic WB shift in post.
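Conceptually, custom white balancing just applies per-channel gains so that a known-neutral reference (foliage under IR, or a white card) comes out grey. A simplified sketch, not Photo's implementation, with made-up patch values:

```python
import numpy as np

def white_balance(img, neutral_rgb):
    """Scale each channel so the sampled neutral reference becomes grey.
    img: (..., 3) float array; neutral_rgb: RGB sampled off the reference."""
    neutral_rgb = np.asarray(neutral_rgb, dtype=float)
    gains = neutral_rgb.mean() / neutral_rgb
    return img * gains

# a heavy red cast, as from an unbalanced 720nm infrared shot (illustrative values)
patch = np.array([0.80, 0.30, 0.20])
print(white_balance(patch, patch))   # all three channels now equal
```

Note that the green and blue gains here come out well above 1.0—amplifying whatever noise those underexposed channels contain, which is exactly what balancing in-camera avoids.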
  10. Hello all, the 1.6.9 update made quite a few interface and gesture changes that rendered the tutorial videos out of date—thus, you will now find a new set of structured tutorials here: https://affinity.serif.com/tutorials/photo/ipad The first post of this thread has been updated, but the above link will take you straight to the new tutorials which feature localised subtitles for all 9 languages the app is supported in. Hope you find them useful!
  11. Hi Tapatio, are Bilateral and Median blur disabled as well? If so, you may be editing in 32-bit—these three filters won't function correctly and so were disabled for 32-bit work. You would need to convert your document to 16-bit or 8-bit for them to be accessible. If you're developing RAW files, you may have switched over to 32-bit development using the assistant. This video shows how to access it and set it back to 16-bit if required: Alternatively, if that's not your issue, is there any way you could attach a screen grab of what you're seeing and how the filter is disabled? Thanks in advance!
  12. Hi symisz, unfortunately (for now at least) the navigator view isn't colour managed, hence the yellow cast—this would make sense as it looks like you have a screen grab open with an embedded display colour profile (Q2270), a BenQ monitor I think? You could try converting the document's colour profile to sRGB (Document>Convert ICC Profile) to see if the colour cast goes away. Generally, however, we are aware that the navigator view isn't colour managed like the main document view, and it's hopefully something that will be fixed in a future version. Hope that helps!
  13. In this case, the smaller file (2MB) contains the compressed JPEG—I'm assuming you just opened a JPEG and saved it as an .afphoto file? After tone mapping, the pixel layer has to be converted to uncompressed pixel data, which accounts for the new file size of 41MB. Live filters, however, shouldn't have much file size overhead. Do you know for certain that they are increasing the file size, or are you using them in conjunction with other filters/pixel editing work? Are you able to explain in more detail? I use an external drive with Dropbox for all my .afphoto documents and haven't had any issues, and before Dropbox I was using Google Drive without any issues...
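The jump from 2 MB to ~41 MB is roughly what you'd expect once compressed JPEG data becomes raw pixels. Back-of-envelope arithmetic, with hypothetical image dimensions:

```python
def uncompressed_mb(width, height, channels=4, bytes_per_channel=1):
    # raw pixel data: one value per channel per pixel, no compression
    return width * height * channels * bytes_per_channel / (1024 * 1024)

# a hypothetical 10 MP image stored as 8-bit RGBA
print(round(uncompressed_mb(3872, 2592), 1))   # ~38 MB before any extra layers
```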
  14. @owenr there is absolutely no need to be snarky. It's counterproductive to any discussion.

     @PhotonFlux live filters are indeed very taxing—in your screen grab you're using several stacked on top of one another, including convolutions and distortions (plus something nested into your main pixel layer, not sure what that is?). This is why live filter layers are set to child layer by default; otherwise the temptation is to stack them as top-level layers. This doesn't help much in your case, but if you were working on a multiple-layer document you would typically try to child-layer live filters into just the pixel layers you were trying to affect.

     The Affinity apps use render caching in idle time, but this is invalidated any time the zoom level is changed. What you should be able to do in theory is set a particular zoom level, wait a couple of seconds (or longer depending on how many live filters you have active) and the render cache will kick in. Now if you start to use tools or adjustments on a new top-level layer, the performance should be improved. In honesty, this works better for multiple pixel layers where you're using masking—less so for live filters that have to redraw on top of one another.

     The live filters are also calculated entirely in software—however, as you're running a MacBook with a recent Intel iGPU, you may want to dabble with enabling Metal Compute in Preferences>Performance. This enables hardware acceleration for many of the filter effects. There are still some kinks to be worked out (I believe you might see some improvements to hardware acceleration in 1.7) as it's a port from the iPad implementation, but it may speed things up if you're consistently using multiple stacked live filters. It also significantly decreases export times. There are some caveats (Unsharp Mask behaves differently and you may need to tweak values on existing documents) but I would say it's worth a try at least. Hope that helps!
  15. TVs vs monitors

    If you're not playing games or watching video then I definitely wouldn't recommend a TV just for production work. As far as colour accuracy goes, bear in mind that TVs are generally designed to enhance content, not portray it accurately. If you were determined to use a TV, you would absolutely need to profile it using a colourimeter like the i1Display Pro and also disable the various features like contrast enhancement, "extended dynamic range", local dimming, etc—the downside is that these features are often used to reach claimed specifications like the high contrast ratio and peak brightness. By the time you have disabled all the TV's picture-based features, you may as well pay the same amount for a decent monitor. There are plenty of 4K 27" options (and some 32") that will cover 99-100% of sRGB and Adobe RGB, plus 60-80% of P3 and Rec.2020—more than enough for image editing.

     There's also the size—50" at a desk might look impressive at first, but after the first day will seem a bit ridiculous, as you'll basically have to move your head just to pan across areas of the screen. I work on a Dell 32" and that's honestly the maximum I would recommend—I do find myself sometimes moving my head just to look at the right-hand side of the screen.

     Gabriel also mentioned the pixel density, which is important too. With a typical 21-32" monitor you'll get denser pixels (especially at 4K), which will give you a better representation of how your images look and allow you to evaluate their sharpness. A 50" isn't going to give you a typical view of how your images actually look—when you think about it, what size do you typically see images at, unless you print them at huge sizes? Definitely not 50"! A large TV may well skew your perception unless you sit far away from it, but that doesn't sound like a great way to do detailed image and design work.
Put it this way—speaking from personal experience, if you only want to do production work in apps like Photo and Designer, I would honestly recommend a decent 24-32" monitor—there are plenty of options available. Profiling them is easy if you have a decent colourimeter, and you would be able to have a conventional desk setup where everything is within easy reach. A big TV just isn't the answer—and if you're looking at a smaller TV (e.g. the 24-32" range), then an actual computer monitor at an equivalent price would likely be just as good, if not better.
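The pixel density point is easy to quantify: a 4K panel at 27 inches versus 50 inches differs by nearly a factor of two in pixels per inch:

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    # pixels per inch along the panel diagonal
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(3840, 2160, 27)))   # ~163 PPI -- dense enough to judge sharpness
print(round(ppi(3840, 2160, 50)))   # ~88 PPI -- visibly coarse at desk distance
```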