Everything posted by James Ritson

  1. The TIFF file actually contains ample EXIF information—it's the photometric interpretation that is stopping Affinity Photo from recognising its format correctly. @icetype, the software (or device) that produces the TIFFs you're working with sets the photometric interpretation to "white is zero". Most TIFFs are either "RGB" (full colour) or "black is zero" (bilevel/greyscale). I've changed the tag, imported the TIFF and inverted it—it's attached to this post. Could you open it and make sure it's correct? If so, I can give you a workaround for now, but I'll also log it with the developers. Hope that helps! 16bitGray copy.tif
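[Edit] For anyone who wants to fix TIFFs like this themselves, here's a minimal sketch in Python, assuming the tifffile library is installed; "input.tif" and "fixed.tif" are placeholder filenames:
```python
# Minimal sketch, assuming tifffile and an integer greyscale TIFF
# tagged "white is zero"; the filenames are placeholders.
import numpy as np
import tifffile

data = tifffile.imread("input.tif")
inverted = np.iinfo(data.dtype).max - data  # invert so that black is zero

# Rewrite with the conventional "black is zero" photometric interpretation
tifffile.imwrite("fixed.tif", inverted, photometric="minisblack")
```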
  2. Hi Rosomak, the reason you're seeing a difference when you export with "Use document profile" is that your document's colour profile is Display P3—I'm guessing you shot this image with an iPhone or something similar that has a "wide colour gamut" feature? When you look at such an image in software that isn't colour managed, it will look incorrect. The reason your second image looks fine (where you've selected sRGB as the output profile) is that Photo has converted from Display P3 to sRGB upon export—sRGB is at least a safe assumption, even if the software has no concept of colour management. The solution here is simply to convert to sRGB on export, as you did with the second image. The colour settings in the Preferences menu you've shown won't apply if the image you're importing into Photo already has a colour profile (either tagged or embedded), which is why it's still Display P3. Out of interest, what are you viewing the images in? If you bring the Display P3 export back into Photo, does it display correctly? (I hope so!)
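[Edit] If you ever need to do the same Display P3 to sRGB conversion outside Photo, here's a rough sketch using Pillow's ImageCms module; the filenames are placeholders, and it assumes the source image carries an embedded profile:
```python
# Rough sketch with Pillow's ImageCms (LittleCMS): honour the embedded
# profile, convert the pixels to sRGB, then save.
import io
from PIL import Image, ImageCms

im = Image.open("photo.jpg")
embedded = im.info.get("icc_profile")  # e.g. Display P3 bytes, if tagged
if embedded:
    src = ImageCms.ImageCmsProfile(io.BytesIO(embedded))
    dst = ImageCms.createProfile("sRGB")
    im = ImageCms.profileToProfile(im, src, dst, outputMode="RGB")
im.save("photo_srgb.jpg")
```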
  3. Outside of HDR and 3D, working in 32-bit has some benefits for edge cases, e.g. astrophotography, heavy tonal manipulation and colour space conversions. For day-to-day editing, though, it can introduce extra issues into your workflow, especially if you use blend modes and filters like sharpening. The main difference is the pixel format. The Develop Persona works in unbounded 32-bit with a ROMM RGB colour profile. Unless the defaults are changed, clicking Develop converts the image to 16-bit integer with an sRGB colour profile. That's the key difference. The Develop Persona doesn't work with the RAW data directly, since that would be meaningless—it has to be demosaiced, converted from the camera's native colour space, tone mapped, gamma corrected, have lens corrections/pixel remapping applied and so on. This is true of any RAW developer. I think the main confusion arises from how most RAW developers work "non-destructively", in the sense that they always re-develop the original RAW file using settings stored in a sidecar file (e.g. XMP). So you have this perception of working on the "RAW data", when really it's the same as when Photo creates something meaningful from the original RAW file. The difference is that Photo doesn't store develop settings—the Develop Persona is literally there to get from A to B so you can have a starting point to work on your image further in the Photo Persona. Once you open and develop the image, it's in a raster format; reloading the RAW file would be like starting from scratch. There is a DAM in development (no further news yet) that would provide this kind of "revisiting your original RAW development settings" functionality, however.
  4. Hi again Iggy, Hmm, I'm not really advocating using 32-bit—it's useful for some scenarios, but I would actually recommend the opposite. 16-bit precision is often more than enough to contain all the meaningful information from RAW files. Unless you're using certain medium format cameras, most RAW data is recorded at 14 or 12-bit precision (depending on the sensor and ADC). 32-bit is a different beast entirely: you'll find adjustments and filters behave differently, and some blend modes will look wrong or clamp values in an unbounded format. Bottom line, please believe me when I say 16-bit is enough for your images! I hope I haven't misrepresented myself too much—I did say that 32-bit isn't recommended for most users. Honestly, if you make sure you're not clipping tones in the image, you will find 16-bit more than enough. I guess the whole Develop concept is a little strange to grasp, but here's an equivalent scenario using Lightroom and Photoshop as an example:
1. Open an image in Lightroom
2. Perform basic edits to the image (this is your Develop Persona)
3. Send the file to Photoshop (this is when you click Develop and move to the Photo Persona)
4. Perform edits in Photoshop and save as a PSD (this is when you save your document as an .afphoto file)
5. Export for sharing/delivery
Hopefully that might clear it up? Cheers!
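[Edit] To put some quick numbers behind the bit depth point:
```python
# Tonal levels per channel at the bit depths mentioned above.
for bits in (12, 14, 16):
    print(f"{bits}-bit: {2 ** bits:,} levels")
# 12-bit: 4,096 / 14-bit: 16,384 / 16-bit: 65,536, so a 16-bit container
# holds four times as many levels as a 14-bit sensor readout needs.
```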
  5. Hi Iggy, sorry for the wall of text—I got into it and kept writing!
A RAW file is just digitised information captured from the camera's sensor. When you load a RAW file into Photo, it performs all the usual operations—demosaicing, converting from the camera's colour space to a standardised colour space, tone mapping, etc.—in an unbounded 32-bit pixel format, which gives you a lossless editing workspace. This process is equivalent to developing a RAW file in any other software, and it all takes place in the Develop Persona.
You mention the Canon RAW developer—I'm not sure what you mean here: are you referring to how the RAW file is captured in camera, or to Canon's own developing software? As long as you pass Affinity Photo the .CR2 file, you're working with what was captured in camera: no development has been done prior to that. The reality is that the RAW data has to be interpolated into something meaningful in order for us to edit it. I'd recommend checking out my article "RAW, actually" on the Affinity Spotlight site if you're interested, as I believe it'll help explain why the RAW information has to be "developed": https://affinityspotlight.com/article/raw-actually/
If you use Canon's RAW converter, it should allow you to export in several formats such as JPEG and TIFF. A 16-bit TIFF in a wide colour space would be an appropriate high quality option here, which you could then open in Affinity Photo and edit. However, as I've said above, Affinity Photo's Develop stage is the equivalent of any other software's RAW development stage—so no, you won't be losing any more information if you simply choose to develop your RAW files in Affinity Photo.
When you actually click the blue Develop button, that's when things change. By default, you go from the unbounded 32-bit environment with a wide colour space to an integer 16-bit format with whatever your working colour space is set to—usually sRGB, which is a much more limited gamut. This is basically the same process as exporting from your RAW development software to a raster format like TIFF or JPEG. You can of course change both of these defaults if you wish to continue working in 32-bit (not recommended for the majority of users) and in a wider colour space (you can set the output profile to something like ROMM RGB, a version of ProPhoto that ships with the app).
As regards quality loss—technically, there will always be a loss, but it is exactly the same loss you would experience when exporting from any other RAW development software. As soon as you export to JPEG or TIFF, you're converting the bit depth and also the colour profile, both of which are typically lossy operations. In most use cases, however, this loss is negligible. The exception might be if you don't recover extreme highlight detail in the Develop Persona before developing—that detail is then clipped and lost. Similarly, if you have a scene with very intense colours and you convert to sRGB, you're potentially throwing away the most intense colour values because they can't be represented in sRGB's smaller gamut.
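As a loose illustration, this is roughly what any RAW developer has to do before you can edit: demosaic, convert colour space, and hand back integer pixels. The sketch uses the third-party rawpy library (not Photo's engine), and "photo.CR2" is a placeholder filename:
```python
# Sketch of the develop-to-raster step using rawpy (a LibRaw wrapper).
import rawpy

with rawpy.imread("photo.CR2") as raw:
    rgb16 = raw.postprocess(
        output_bps=16,                           # 16-bit integer output
        output_color=rawpy.ColorSpace.ProPhoto,  # wide gamut, like ROMM RGB
        no_auto_bright=True,                     # keep tonal decisions manual
    )
# rgb16 is now an ordinary raster array; reloading the RAW file and
# re-developing is the only way back to the original data.
```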
Here's a workflow I use for the majority of my images, and if you follow it I can pretty much promise you needn't worry about loss of quality:
1) Open the RAW file in the Develop Persona.
2) Access the Develop Assistant and remove the default tone curve (see https://youtu.be/s3nCN4BZkzQ).
3) If required, pull the highlights slider back to recover extreme highlight detail (use the clipping indicators at the top right of the interface to determine this).
4) Check the Profiles option, and set the output profile to ROMM RGB (i.e. ProPhoto).
5) Add any other changes, e.g. noise reduction or a custom tone curve (I usually add a non-linear boost to the shadows and mid-tones).
6) Develop. This will convert the image to 16-bit with a wide colour space.
7) Edit further in the main Photo Persona. This is where I'll typically do the tonal work as well (instead of doing it in the Develop Persona) using Curves, Brightness/Contrast etc.
8) Export! For sharing (e.g. if exporting to JPEG), don't forget to click the More button and set ICC Profile to sRGB—this will convert the output to sRGB and ensure it displays correctly under conditions that are not colour-managed.
Hope that helps.
  6. Hi Stuhrer, are you by any chance editing in 32-bit? You can check by selecting the Hand Tool and looking at the context toolbar entry (it should list the document pixel format and resolution). In fact, if you move between snapshots, just keep an eye on that entry and see if it changes. I can reproduce this if I convert between 32-bit and either 16-bit or 8-bit and create snapshots, then toggle between the snapshots. Have you inadvertently changed the colour format between snapshots?
  7. Hi, infrared processing is fairly straightforward—hopefully you shot in RAW? Did you custom white balance (which should produce a fairly neutral image with a purple or yellow cast) or leave it on auto (which will look intensely red)? If it looks red, you'll need to white balance in-app, so hopefully you're working with RAW files! The white balance shift required for infrared is extreme, however, and Photo will only go down to 2000K.
If your images are red, I'd really recommend white balancing at the scene—the issue with leaving it to post is that most software won't let you shift white balance as much as is required (though it will read and apply it from the image's metadata fine), and your camera will also expose for the red channel, which underexposes the blue and green channels. It's much better to take a white balance reading either from a neutral white card or from some foliage (even just the overall scene would be much better than leaving white balance on auto).
If you have white balanced your images, all the better. From here, people usually mix and blend the channel information depending on the look they want. As a starting point, add a Channel Mixer adjustment. Target Reds, and set Red to 0 and Blue to 100. Now target Blues, and set Red to 100 and Blue to 0—so you're effectively swapping the red and blue channels (see the sketch at the end of this post). For another look, switch across to Greens and set Green to 0 and Blue to 100—so now you've eliminated the Green channel entirely and replaced it with Blue.
You can also use a variety of other adjustments to increase the contrast, boost certain tones etc. Something you might also want to try is converting to black and white (using a Black & White adjustment), as infrared monochrome imagery has a distinct look compared to traditional black and white. The rest is more or less experimentation, just as if you were editing a traditional photograph. Hope that helps—there are several Photo for iPad video tutorials that you could look at for more ideas, and even more desktop videos whose techniques are also applicable on iPad.
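[Edit] Here are those Channel Mixer recipes sketched as plain array operations, assuming img is an H x W x 3 RGB NumPy array (e.g. a white-balanced infrared frame loaded as floats):
```python
import numpy as np

def swap_red_blue(img):
    # Reds: Red 0, Blue 100 / Blues: Red 100, Blue 0
    return img[..., [2, 1, 0]]

def replace_green_with_blue(img):
    # Greens: Green 0, Blue 100 (green replaced entirely by blue)
    out = img.copy()
    out[..., 1] = img[..., 2]
    return out
```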
  8. Hi John, you shouldn't need to change any settings in Affinity Photo—it is colour managed based on the active monitor profile (which should be the profile created by the Spyder 4 software). If you're still seeing a difference between Affinity Photo and Preview, it may be that Preview is interpreting the profile incorrectly. Have you tried previewing the exported image in another app? Preview is famously picky about ICC profiles—for example, displayCal now changes its default settings to compensate: see https://hub.displaycal.net/forums/topic/dark-images-in-mac-photos-preview-with-displaycal-generated-profile/ and https://apple.stackexchange.com/questions/271825/bug-macos-sierra-preview-quick-look-issues-with-rendering-colors-of-images-when/272115 (the second link offers a solution). Let us know how you get on!
  9. Hi Mareck, I've replied in your other thread. Photo's OCIO implementation wasn't meant to be a final step for exporting 8 or 16-bit deliverables, it was so you could have a non-destructive view transform and edit your work whilst previewing in the correct colour space without resorting to fudgery. You will find that if you set the correct display device (e.g. sRGB) and view transform (e.g. Filmic) then the result you see in Photo will match blender's output. As mentioned, the complication here is if you want to export your result to an 8 or 16-bit format with a non-linear transform. In that case, the steps I posted in the other thread should allow you to use the ICC Display Transform whilst maintaining the look you expect. Hope that helps!
  10. Hi Mareck, all I can say here is that if you're looking to export from Photo to TIFF/JPEG or any other 8/16-bit format, don't work in Unmanaged. For the export to match what you're seeing on the canvas view, you must be using ICC Display Transform. If you're simply editing in Photo and passing the EXR to another app (e.g. Nuke, Fusion, Natron, Flame, Resolve), then you can work with an OCIO Transform. Don't forget what I mentioned above about colour space conversion—Photo works in scene linear. If you don't specify a colour space on import, it will assume the document you're importing is already in scene linear. With blender exports, this is the case, but if you have an EXR in another format (e.g. aces) and you're using the appropriate configuration, you can append "aces" to the filename and Photo will convert from that to scene linear. When you're exporting, don't forget to append "aces" to the filename if you need to convert back from scene linear to aces.
  11. Hi Mareck, I'm attempting to replicate your workflow, but I noticed in your previous thread you were using the blender Filmic config—now it looks like you're using a Nuke config. Could I just clarify what you're actually trying to achieve? Do you want Affinity Photo to be the final destination before you export for web, i.e. are you just looking to beautify your renders? Or is the goal to perform some editing in a colour-managed environment and pass an exported EXR to another app for compositing?
Initially, I thought you were perhaps missing a step out of the workflow, which is to ensure that Photo converts from whatever colour space you were using in blender to scene linear. You would do this by appending the colour space to the filename—for example, "render Filmic Log.exr". However, it would seem blender simply exports in scene linear, except if you're compositing in the VSE, where you can define the colour space (which I believe is irrelevant for your workflow). Apologies, as I don't use blender very often, so I've been playing catch-up here.
From what I can gather, you want your starting point in Photo to look exactly like the preview/render in blender when using the Filmic view transform, is that correct? Normally, this is easily achievable by either:
a) Setting your view transform to OCIO managed on the 32-bit Preview panel, then choosing the correct display output (sRGB) and view transform (Filmic), or
b) Setting your view transform to unmanaged (linear light), then using an OCIO Adjustment layer to go from Linear to Filmic sRGB.
Both of these options work fine (option b effectively "bakes in" a transform)—I've tested both and they match the preview in blender. They both bypass the typical display transform that's applied (you can see this by using the "ICC Display Transform" option on the 32-bit Preview panel). This is how you would work on a 32-bit document in Photo before exporting it back to EXR for use in other software.
However, the complication arises when you want to export out of Photo to a 16 or 8-bit format—in this case, the export has to include a non-linear transform using the document profile. It's not pretty, but I believe I've found a solution that will allow you to match the preview in blender whilst using the ICC Display Transform—and from then on, your exports will look correct. I've set up a similar workflow with one of my photogrammetry models and am using the default OCIO config that ships with blender (which is what I believe you were using originally). I've copied this config over to a separate folder in order to use it in Affinity Photo as well. Please note that I've only verified this with blender's standard OCIO configuration, which seems to include Filmic—I've pieced the workflow together by reading the .ocio configuration file. Try following these steps and see if they help:
1. In blender's colour management options, set the display device to sRGB, the view to Default and the look to None.
2. Import your EXR document into Photo.
3. Go to Document>Assign ICC Profile and choose your monitor profile. For example, I profile my monitor and use a custom profile called Dell-D50, so I would choose that. If you're using a standard profile, it's probably named after your display—for example, my Dell monitor's profile is named DELL UP3216Q. You might see a noticeable shift in colour. If you now A/B this against the blender preview set to sRGB with a Default view (not Filmic yet), you should find they match closely. Photo is colour managed and will correctly convert based on the document profile, but by assigning a display profile we're effectively bypassing that and presenting the numbers straight to the monitor.
4. Add an OCIO Adjustment layer and go from Linear to Filmic sRGB.
5. Now, either add a LUT adjustment—you'll want to find srgb.spi1d in the blender OCIO config's LUTs folder and apply that—or add another OCIO adjustment and go from sRGB (not Filmic sRGB) to Linear. Normally a Linear to Filmic sRGB then Filmic sRGB to Linear transform would be an identity, but we're instead saying that on the transform back, the source colour space is sRGB, causing the numbers to change.
If you change blender's view to Filmic and the look to None or Base Contrast, you should find the result in Photo now matches blender. When you're finished editing, don't forget to flatten and convert the ICC profile to sRGB (alternatively, make sure you assign sRGB as the document profile whilst exporting). Not pretty, but it appears to work.
Additionally, you may want to apply one of the looks—looks aren't exposed in Photo, but you can still apply them through the LUT adjustment. It requires altering the workflow slightly: go back to the step where you assigned the monitor profile, add an OCIO Adjustment layer, and this time go from Linear to Filmic Log (not Filmic sRGB). Now add a LUT adjustment and browse to the "filmic" folder inside the OCIO configuration folder. Choose the corresponding LUT depending on which look you want (note that you can always find out which looks point to which LUTs by checking the .ocio configuration file):
filmic_to_0-35_1-30.spi1d - Very Low Contrast
filmic_to_0-48_1-09.spi1d - Low Contrast
filmic_to_0-60_1-04.spi1d - Medium Low Contrast
filmic_to_0-70_1-03.spi1d - Base Contrast (the same result as choosing "None" for the look)
filmic_to_0-85_1-011.spi1d - Medium High Contrast
filmic_to_0.99_1-0075.spi1d - High Contrast
filmic_to_1.20_1-00.spi1d - Very High Contrast
Finally, either add the LUT with srgb.spi1d or add another OCIO Adjustment layer and once again go from sRGB to Linear.
And that should be it. Sorry for the wall of text, but hopefully you'll be able to follow the instructions. I've attached a quick and dirty side-by-side of my photogrammetry render—I used the second method of applying a look (the Very Low Contrast one). It's almost a 1:1 match on my screen. Hope that helps!
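[Edit] For reference, the Linear to Filmic sRGB conversion that the OCIO Adjustment layer performs can be reproduced outside Photo. Here's a sketch assuming the OCIO v2 Python bindings (PyOpenColorIO) and blender's config copied locally as "config.ocio" (a placeholder path):
```python
import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromFile("config.ocio")
processor = config.getProcessor("Linear", "Filmic sRGB")
cpu = processor.getDefaultCPUProcessor()

# Transform scene-linear middle grey to its display-referred value
print(cpu.applyRGB([0.18, 0.18, 0.18]))
```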
  12. Hi Mareck, could you check what your 32-bit preview options are set to? Go to View>Studio and choose 32-bit Preview Panel. Because you're using an OCIO configuration, the display transform will be set to "OCIO Display Transform". If you could provide a couple of screenshots of this panel and the available options, that would help. It's possible you need to be using a different view transform: most OCIO configurations will include linear options like "Non-Colour Data", "Linear Raw" etc., but these aren't suitable if your workflow is simply to edit in Photo and export to JPEG. Unless you're passing the render between different software and need explicit colour management, you may be better off simply using the ICC Display Transform option, as that will provide the best match when you export the result as a JPEG with a non-linear sRGB profile. Hope that helps—any info you can give on your workflow would be useful, as we can then help you further!
  13. Hi Maarten, I'll tackle your last question first! This is exactly what happens already. On the Export dialog, click the More option. ICC profile is usually set to "Use document profile" but you can change this to sRGB or another option. Upon export, the document is flattened then converted automatically to whichever profile you specify. This process is exactly the same as going to Document>Flatten, then Document>Convert ICC Profile. I'm a little confused here—I don't think the Windows GUI should be appearing oversaturated at all. I profile my wide gamut monitors (both for Mac + Windows) using an i1DisplayPro and displayCal and I don't have an issue with oversaturated colours. Affinity Photo is colour managed in that it will use your active monitor profile, but it relies on that profile being accurate. Can I ask how you calibrate and profile your monitor? Hope that helps for now, look forward to hearing from you.
  14. Hey all, another video for you, this time covering the lens corrections in the Develop Persona to correct skew (and demonstrating a non-destructive approach using live filters): Perspective Skew Correction - YouTube / Vimeo (New: 20th April) Hope you find it useful!
  15. Hmm, unless I'm missing something here, Photo's Panorama Persona has an equivalent of control points. In fact, it has two options—source image masking and source image transforming. I had a go with the JPEGs—it took a couple of minutes, but I was able to sort the alignment (as far as I can tell). I've attached a video to demonstrate using the two tools. Unfortunately, although you can stitch EXR files, it seems to be hit and miss depending on the subject material. I might suggest HDR merging and tone mapping each panorama segment, then exporting as 16-bit TIFF—not the greatest solution, but it would allow you to re-align successfully. Out of interest, which drone did you use? (The EXIF data lists a camera model but not the drone itself!) Here's the video file: bridge.mp4
  16. Hi Peter, this shouldn't have changed just because of an update, but check your assistant settings (the tuxedo icon). Photo allows you to toggle this behaviour separately for adjustments, masks and live filters. Hope that helps. [Edit] Forgot to add, but if you specifically deselect (either by clicking in a blank space within the Layers panel or by going to Select>Deselect Layers) then the behaviour is to add as a new layer.
  17. Browser autofill. It's a wonderful thing.
  18. Hey all, just a small update. I've gone back and revised a couple of videos (one has had a name change!). They are:
32-bit Raw Development - YouTube / Vimeo
Wide Colour Profiles vs sRGB - YouTube / Vimeo
The wide colour profiles video was previously called "ProPhoto vs sRGB". It's been updated with better information and also to highlight that Affinity Photo ships with a profile called "ROMM RGB" (basically ProPhoto) that you can use—so there's no need to source and download a ProPhoto profile separately.
  19. Hi Gary, it looks like there could be more than one thing at play here, but here are a few things to check:
- Are you exporting with that Wide Gamut RGB profile embedded? If so, you're presumably reliant on the layout software (Acrobat in this case) being aware of the profile and performing colour management on the images. Have you tried converting to sRGB during export? You can set this through the ICC profile option on the Export Options panel if you're using the Export Persona.
- Do you absolutely have to use JPEG? You mentioned that TIFFs print as expected. JPEG is not lossless, even at 100% on the export slider—that value maps internally to something different for the JPEG library we use, and it's definitely quite lossy. There's some pretty terrible blotchiness and blockiness that I must confess I've not seen before, but I only ever use JPEG for web delivery or presentation, not for printing or further professional work. If you're exporting graphics and illustrations you also have the option of PNG, but TIFF has the advantage of supporting the CMYK colour model, so you could even convert to that during export if you're laying everything out in CMYK.
- You mention that importing JPEGs into your Acrobat document increases the file size more than importing TIFFs. While this is bizarre, it could be that Acrobat is converting the images, which adds another complication and leads me onto...
- You've also mentioned that printing the JPEG on its own exhibits the same issue—can I just ask what you're using to print with? Are you still using Acrobat? And if so, have you tried printing directly from Designer? Do you get the same issues when printing from other software?
In terms of NDAs and samples, is there any possible way you can provide a tiny sample, perhaps by creating a new document and copying/pasting some content in there? Just having an example that exhibits the issue goes a long way towards helping to solve it. For example, is there a patch of the turquoise wall you could provide on its own? Hope the above helps!
  20. Hi Darin, the Unsharp Mask implementation on iPad is quite different compared to desktop (it's hardware accelerated using Metal compute). I'm not entirely sure why the developers changed the factor from value to percentage scaling, but I can tell you that the equivalents are:
Factor 1: 20%
Factor 2: 40%
Factor 3: 60%
Factor 4: 80%
Be aware that whilst these figures will get you in the ballpark for similar sharpening, the Metal compute implementation appears to be slightly tamer, so be prepared to use a slightly higher Factor value (or indeed Radius). Hope that helps.
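[Edit] The mapping above looks linear, so for intermediate values something like this should hold (an assumption based on the four data points, not a confirmed formula):
```python
def factor_to_percent(factor):
    # Assumed linear mapping: desktop Factor 2.5 -> iPad 50%
    return factor * 20.0
```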
  21. Just to expand on this, there does appear to be a slight difference. Clarity has basically always been USM with a Luminosity blend mode (at least in the Photo Persona). The test imagery I was using, however, showed little to no change. I've tried some other images and there is a slight difference, but from memory of previous versions (1.4, 1.5) I was expecting a bigger one. Applying a live USM layer and setting its blend mode to Luminosity should yield the same result as Clarity. The eventual hope is that the Photo Persona version will behave like it does in Develop, where you get less edge haloing and the ability to apply it negatively.
  22. Hey Sebastian, object removal is Median stacking, which should already produce a long exposure effect with the sea—is this not the case for you? You should get both results you're after from the same stack. If not, one approach might be to do a Merge Visible with a Median stack, hide it, change the stack operator to Mean and do another Merge Visible. You'll then have two merged pixel layers—one Median, one Mean. You could then add a mask layer to one and blend them together. This might not work if you've got people in the sea, though...
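[Edit] For anyone curious what the two stack operators actually compute, here's a NumPy sketch (frames is assumed to be a list of aligned exposures, each an H x W x 3 float array):
```python
import numpy as np

def stack_results(frames):
    stack = np.stack(frames)                   # shape (n, H, W, 3)
    object_removal = np.median(stack, axis=0)  # Median: transients vanish
    long_exposure = np.mean(stack, axis=0)     # Mean: smoothed-water look
    return object_removal, long_exposure
```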
  23. Hi djk, here's a brief rundown of each filter:
Unsharp Mask uses the traditional approach of comparing and subtracting a blurred (or "unsharp") version of the image (or pixel layer). It's the most configurable of the sharpening filters, as you can control the factor (strength) and threshold (how much to subtract based on the comparison). For most use cases, this is your go-to filter for sharpening.
High Pass is a little more straightforward: it simply passes the parts of the image (or signal) above a certain frequency and attenuates anything below it. This filter is used as part of Photo's automated Frequency Separation filter for retouching, but you can also use it for detail enhancement. Applying it as a Live Filter layer and then altering its blend mode (e.g. to Overlay/Hard Light) allows you to be somewhat creative with your sharpening: use smaller radius values for fine detail sharpening and higher radius values for local contrast enhancement.
Clarity has had an overhaul in the Develop Persona, so it now behaves differently compared to its implementation in the Photo Persona. In Develop, it performs quite a strong local contrast enhancement (punchy mid-tones, an increase in "perceptual" sharpness). In Photo, as a filter/live filter, it's basically Unsharp Mask with a blend mode—or at least it should be. As of the latest release version, it no longer appears to be any different from Unsharp Mask, which is a bug and will be reported...
Detail Refinement in Develop is basically Unsharp Mask—the percentage suggests it might be adaptive, but I don't believe this is the case; just treat it as the radius. Hope that helps!
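[Edit] To make the Unsharp Mask description concrete, here's a small sketch with NumPy/SciPy, assuming a float greyscale image; the parameter names mirror the filter's controls, though the real implementation will differ:
```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, radius=2.0, factor=1.0, threshold=0.0):
    blurred = gaussian_filter(img, sigma=radius)  # the "unsharp" copy
    detail = img - blurred                        # high-frequency residual
    mask = np.abs(detail) >= threshold            # honour the threshold
    return img + factor * detail * mask           # scale by the factor
```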
  24. As of the most recent App Store version, Iris Pro integrated GPUs should be supported for Metal compute. No discrete GPUs are supported yet, but you can still run your R9 for canvas presentation whilst the Iris Pro is used for Metal compute. Go to Preferences>Performance and you should be able to check "Enable Metal compute acceleration". Hope that helps. Just be mindful that the brush-based tools seem to have some speed issues with Metal compute—it would be interesting to see if this is your experience too.
  25. Hey, on the context toolbar for the Dodge/Burn brush tools you should have a Tonal Range dropdown that allows you to target Shadows/Midtones/Highlights—I think this is what you're after? Hope that helps! [Edit] Attached a screen grab.