Reputation Activity
-
James Ritson got a reaction from Gabe in ACES Matte Painting Workflow
Unfortunately, this statement is correct in the context of the Windows builds. @dkrez, ordinarily you could double-click the active colour to bring up the Colour Chooser dialog, which has additional 32-bit options letting you set out-of-range values using input boxes and an Intensity slider. Currently, however, these don't work as expected on Windows: the values are clamped back to 1. The same issue applies when colour-picking out-of-range values (as Gabriel has mentioned above).
Apologies for this; it's a UI issue and the developers have been made aware. Saving colour values as swatches will also clamp them (they're stored in LAB); the developers are aware of this too and there's a desire to improve it.
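To illustrate what the clamp does, here's a minimal Python sketch (my own illustration, not Affinity's actual code): any out-of-range component entered in the 32-bit Colour Chooser currently behaves as if it were passed through something like this.

```python
def clamp_unit(value):
    # What the affected Windows builds currently do to out-of-range 32-bit
    # colour input: any component above 1.0 (or below 0.0) is pulled back
    # into [0, 1], discarding the extended-range intensity an ACES/EXR
    # workflow relies on.
    return min(1.0, max(0.0, value))
```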
-
James Ritson got a reaction from ekeen4 in How much editing in RAW development do you do in the Develop Persona compared to the Photo Persona?
Hi, it's more of a preference based on how I prefer to work, but it also plays to Photo's strengths—the real flexibility comes from working with a flat "base" image using a wide colour profile and making use of all the non-destructive features like adjustment layers, live filter layers, pixel layers with blend modes/ranges for painting and retouching, etc. It's a case of experimenting and finding out what works best for you—tweaking sliders in the Develop persona is a more straightforward approach but I would only ever use it as a starting point. Others, however, will do 90% of their work in this persona and then maybe do a couple of edits in the Photo persona.
As far as functionality goes, the Shadows/Highlights sliders in Develop are similar to the filter version of Shadows/Highlights in the Photo persona—the adjustment version simply compresses the tonal ranges and is used more for tonal effect than for recovery. Clarity also behaves differently (I prefer its Develop version), and the noise reduction is also slightly different—in Develop, it works off a base scale that is calculated individually for each image when you load it. Saturation is also more conservative in Develop, so if you want to seriously saturate colours you'll want to do that in the Photo persona. Think that's about it though!
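As a rough illustration of what "compresses the tonal ranges" means, here's a toy shadows/highlights curve in Python. This is my own sketch, not either implementation: positive shadows lift dark tones, negative highlights pull bright tones down, and the endpoints stay anchored.

```python
def shadows_highlights(x, shadows=0.3, highlights=-0.3):
    """Toy tonal compression on a single intensity x in [0, 1].

    Positive `shadows` lifts dark tones, negative `highlights` pulls
    bright tones down; black and white points stay anchored.
    """
    shadow_w = max(0.0, (0.5 - x) / 0.5)     # 1.0 at black, 0.0 at mid-grey
    highlight_w = max(0.0, (x - 0.5) / 0.5)  # 0.0 at mid-grey, 1.0 at white
    bump = x * (1.0 - x) * 4.0               # 0 at the endpoints, 1 at mid-grey
    out = x + shadows * shadow_w * bump + highlights * highlight_w * bump
    return min(1.0, max(0.0, out))
```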
Hope that helps.
-
James Ritson got a reaction from Przemysław in LEGACY: Official Affinity Photo (Desktop) Video Tutorials
Hello folks, three more tutorials this week for you, enjoy!
Curves Picker - https://vimeo.com/154293467
Vector Lighting - https://vimeo.com/152576339
Simple Gradients - https://vimeo.com/152558498
Thanks,
James
-
James Ritson got a reaction from Przemysław in Affinity Photo Windows example editing videos
Hey all,
The Photo for Windows beta has proven incredibly popular, and I appreciate that, as with any software, there's a learning curve and time to be invested in understanding its functionality and idiosyncrasies.
With no firm commitment to delivering them consistently (that bit is important to note ;) ) I'd like to share some example editing/workflow videos. My aim is to demonstrate Photo's feature set and, hopefully, help viewers understand how the tools and features they're accustomed to using in other software can translate to Photo's implementations.
If you haven't caught them already, it's also worth noting that there's a huge set of video tutorials for Photo available at http://affin.co/PhotoTuts - around 165 at last count.
Here they are:
Hadrian's Wall
Link: https://vimeo.com/191642138
This video focuses on:
Raw noise reduction and dithering
Marquee selections from tonal ranges
Layer masking
Blend modes
Blend ranges
Channel duplication/loading to alpha mask
Live filter layers
Configurable layer behaviour
Tiled Building
Link: https://vimeo.com/191981432
This video focuses on:
Accurate selections using the selection brush tool
Saving selections to spare channels
Loading channel information into a layer's alpha mask
Live brush previews
Creative painting with blend modes
Dynamic brush resizing on the fly
Live Lighting filter
Live filter layer mask painting
Fast history scrubbing
Portrait Retouching
Link: https://vimeo.com/194985128
This video focuses on:
Automated Frequency Separation
Healing Brush and Clone Brush
Merging visible layers
Live filter layers
Live brush previews and blend modes
Changing/adding colour tones
Changeable workflow behaviours via Assistant dialog
Live History panel scrubbing
Monochromatic Architecture
Link: https://vimeo.com/194986066
This video focuses on:
Adjustment Layers
Live brush work with live previews
Painting & Erasing
Live filter layers
Non-destructive noise/grain addition
Canary Wharf
Link: https://vimeo.com/203283705
This video focuses on:
RAW development with the Tone Curve option
Apply Image with channel equations
Black & White adjustment with Multiply blend mode to knock out sky colour
Pixel painting to enhance colours
Curves tonal adjustment
Live Unsharp Mask filter for final sharpening
Banded Demoiselle
Link: https://vimeo.com/203294769
This video focuses on:
Creating selections with the Selection Brush
Inverting a selection
Masking an adjustment using a selection
Tweaking layer opacity
Live Lighting filter with blend mode
Live High Pass filter with masking
White Balance for tinting
I'll keep you posted as further videos appear. Hope they prove helpful to you!
-
James Ritson got a reaction from Gabe in [AP] exporting images - colour profile problem
Hi Rosomak, the reason you're seeing a difference when you export with "Use document profile" is because your document's colour profile is Display P3—I'm guessing you shot this image with an iPhone or something similar that has a "wide colour gamut" feature?
When you look at an image in software that isn't colour managed, it will look incorrect. The reason your second image looks fine (where you've selected sRGB as the output profile) is that Photo has converted from Display P3 to sRGB upon export—sRGB can at least be assumed and is relatively "safe", even if the software has no concept of colour management.
The solution here is to simply convert to sRGB on export like you did with the second image. The colour settings in the Preferences menu you've shown won't apply if the image you're importing into Photo already has a colour profile (either tagged or embedded), hence why it's still Display P3.
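To see why a Display P3 document can hold colours that sRGB cannot, here's a rough linear-light conversion sketch in Python. The matrices are the commonly published rounded values and are an assumption of this sketch; real conversions also involve transfer functions and gamut-mapping intent.

```python
# Approximate linear-light matrices (D65 white point), rounded.
P3_TO_XYZ = [
    [0.4866, 0.2657, 0.1982],
    [0.2290, 0.6917, 0.0793],
    [0.0000, 0.0451, 1.0439],
]
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def mat_vec(m, v):
    # Multiply a 3x3 matrix by a 3-vector.
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def p3_to_srgb_linear(rgb):
    # Display P3 -> XYZ -> sRGB, all in linear light. Any component that
    # lands outside [0, 1] is a colour sRGB cannot represent, which is
    # exactly what gets clipped or remapped on conversion.
    return mat_vec(XYZ_TO_SRGB, mat_vec(P3_TO_XYZ, rgb))
```

For example, pure P3 red converts to an sRGB red component above 1.0 (with a negative green component), confirming it sits outside the sRGB gamut, while white maps to white.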
Out of interest, what are you viewing the images in? If you bring the Display P3 exported image back into Photo, does it display correctly? (I hope so!)
-
James Ritson got a reaction from ekeen4 in Confused with RAW/Affinity compatibility (split)
Hi Iggy, sorry for the wall of text; I got into it and kept writing.
A RAW file is just digitised information captured from the camera's sensor. When you load a RAW file into Photo, it performs all the usual operations—demosaicing, converting from the camera's colour space to a standardised colour space, tone mapping, etc—in an unbounded 32-bit pixel format, which means you achieve a lossless editing workspace.
This process is equivalent to developing a RAW file in any other software, and it all takes place in the Develop Persona. You mention the Canon RAW developer; I'm not sure what you mean here. Are you referring to how the RAW file is captured in camera, or to Canon's own developing software? As long as you pass Affinity Photo the .CR2 file, you're working with what was captured in camera: no development has been done prior to that.
The reality is that the RAW data has to be interpolated into something meaningful in order for us to edit it. I'd recommend checking out my article "RAW, actually" on the Affinity Spotlight site if you're interested, as I believe it'll help explain why the RAW information has to be "developed": https://affinityspotlight.com/article/raw-actually/
If you use Canon's RAW converter, it should allow you to export in several formats such as JPEG and TIFF. A 16-bit TIFF in a wide colour space would be an appropriate high quality option here, which you could then open in Affinity Photo and edit. However, as I've said above, Affinity Photo's Develop stage is the equivalent to any other software's RAW development stage—so no, you won't be losing any more information if you simply choose to develop your RAW files in Affinity Photo.
When you actually click the blue Develop button, that's when things change. By default, you go from the unbounded 32-bit environment with a wide colour space to an integer 16-bit format with whatever your working colour space is set to—usually sRGB, which is a much more limited gamut. This is basically the same process as when you export from your RAW development software to a raster format like TIFF or JPEG. You can of course change both these default options if you wish to continue working in 32-bit (not recommended for the majority of users) and in a wider colour space (you can set the output profile to something like ROMM RGB which is a version of ProPhoto that ships with the app).
As regards quality loss—technically, there will always be a loss, but it is exactly the same loss you would experience when you export from any other RAW development software. As soon as you export to JPEG or TIFF, you're converting the bit depth and also the colour profile, both of which are typically lossy operations. In most use cases, however, this loss is negligible. The exception to this might be if you don't recover extreme highlight detail in the Develop Persona before developing—this detail is then clipped and lost. Similarly, if you have a scene with very intense colours and you convert to sRGB, you're potentially throwing away the most intense colour values because they can't be represented in sRGB's smaller gamut.
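The bit-depth side of that loss can be sketched in Python: clip the unbounded float to [0, 1], then quantise to a 16-bit integer. This is a deliberate simplification of what happens on develop/export; the real pipeline also involves dithering and profile transforms.

```python
def develop_to_16bit(value):
    # Clip the unbounded 32-bit float value into [0, 1], then quantise to
    # a 16-bit integer. Any highlight detail above 1.0 is discarded here
    # unless it was recovered (pulled back below 1.0) beforehand.
    clipped = min(1.0, max(0.0, value))
    return round(clipped * 65535)
```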
Here's a workflow I use for the majority of my images, and if you follow it I can pretty much promise you needn't worry about loss of quality:
1) Open RAW file in Develop Persona
2) Access the Develop Assistant and remove the default tone curve (see https://youtu.be/s3nCN4BZkzQ)
3) If required, pull the highlights slider back to recover extreme highlight detail (use the clipping indicators top right of the interface to determine this)
4) Check the Profiles option, and set the output profile to ROMM RGB (i.e. ProPhoto)
5) Add any other changes e.g. noise reduction, custom tone curve (I usually add a non-linear boost to the shadows and mid-tones)
6) Develop. This will now convert the image to 16-bit with a wide colour space
7) Edit further in the main Photo Persona. This is where I'll typically do the tonal work as well (instead of doing it in the Develop Persona) using Curves, Brightness/Contrast etc
8) Export! For sharing (e.g. if exporting to JPEG), don't forget to click the More button and set ICC Profile to sRGB—this will convert the output to sRGB and ensure it displays correctly under conditions that are not colour-managed.
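The non-linear shadow/mid-tone boost from step 5 can be sketched as a simple gamma lift. This is my illustration only; the actual tone curve is drawn freely in the Develop Persona.

```python
def shadow_midtone_boost(x, gamma=0.8):
    # A gamma value below 1.0 brightens shadows and mid-tones while
    # keeping the black and white points anchored at 0.0 and 1.0.
    return min(1.0, max(0.0, x)) ** gamma
```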
Hope that helps.
-
James Ritson got a reaction from walt.farrell in Confused with RAW/Affinity compatibility (split)
Hi Iggy, sorry for the wall of text, I got into it and kept writing..
A RAW file is just digitised information captured from the camera's sensor. When you load a RAW file into Photo, it performs all the usual operations—demosaicing, converting from the camera's colour space to a standardised colour space, tone mapping, etc—in an unbounded 32-bit pixel format, which means you achieve a lossless editing workspace.
This process is equivalent to developing a RAW file in any other software, and it all takes place in the Develop Persona. You mention the Canon RAW developer—not sure what you mean here, are you referring to how the RAW file is captured in camera, or Canon's own developing software? As long as you pass Affinity Photo the .CR2 file, you're working with what was captured in camera: no development has been done prior to that.
The reality is that the RAW data has to be interpolated into something meaningful in order for us to edit it. I'd recommend checking out my article "RAW, actually" on the Affinity Spotlight site if you're interested, as I believe it'll help explain why the RAW information has to be "developed": https://affinityspotlight.com/article/raw-actually/
If you use Canon's RAW converter, it should allow you to export in several formats such as JPEG and TIFF. A 16-bit TIFF in a wide colour space would be an appropriate high quality option here, which you could then open in Affinity Photo and edit. However, as I've said above, Affinity Photo's Develop stage is the equivalent to any other software's RAW development stage—so no, you won't be losing any more information if you simply choose to develop your RAW files in Affinity Photo.
When you actually click the blue Develop button, that's when things change. By default, you go from the unbounded 32-bit environment with a wide colour space to an integer 16-bit format with whatever your working colour space is set to—usually sRGB, which is a much more limited gamut. This is basically the same process as when you export from your RAW development software to a raster format like TIFF or JPEG. You can of course change both these default options if you wish to continue working in 32-bit (not recommended for the majority of users) and in a wider colour space (you can set the output profile to something like ROMM RGB which is a version of ProPhoto that ships with the app).
As regards quality loss—technically, there will always be a loss, but it is exactly the same loss you would experience when you export from any other RAW development software. As soon as you export to JPEG or TIFF, you're converting the bit depth and also the colour profile, both of which are typically lossy operations. In most use cases, however, this loss is negligible. The exception to this might be if you don't recover extreme highlight detail in the Develop Persona before developing—this detail is then clipped and lost. Similarly, if you have a scene with very intense colours and you convert to sRGB, you're potentially throwing away the most intense colour values because they can't be represented in sRGB's smaller gamut.
Here's a workflow I use for the majority of my images, and if you follow it I can pretty much promise you needn't worry about loss of quality:
1) Open RAW file in Develop Persona
2) Access the Develop Assistant and remove the default tone curve (see https://youtu.be/s3nCN4BZkzQ)
3) If required, pull the highlights slider back to recover extreme highlight detail (use the clipping indicators top right of the interface to determine this)
4) Check the Profiles option, and set the output profile to ROMM RGB (i.e. ProPhoto)
5) Add any other changes e.g. noise reduction, custom tone curve (I usually add a non-linear boost to the shadows and mid-tones)
6) Develop. This will now convert the image to 16-bit with a wide colour space
7) Edit further in the main Photo Persona. This is where I'll typically do the tonal work as well (instead of doing it in the Develop Persona) using Curves, Brightness/Contrast etc
8) Export! For sharing (e.g. if exporting to JPEG), don't forget to click the More button and set ICC Profile to sRGB—this will convert the output to sRGB and ensure it displays correctly under conditions that are not colour-managed.
Hope that helps.
-
James Ritson got a reaction from R C-R in Confused with RAW/Affinity compatibility (split)
Hi Iggy, sorry for the wall of text, I got into it and kept writing..
A RAW file is just digitised information captured from the camera's sensor. When you load a RAW file into Photo, it performs all the usual operations—demosaicing, converting from the camera's colour space to a standardised colour space, tone mapping, etc—in an unbounded 32-bit pixel format, which means you achieve a lossless editing workspace.
This process is equivalent to developing a RAW file in any other software, and it all takes place in the Develop Persona. You mention the Canon RAW developer—not sure what you mean here, are you referring to how the RAW file is captured in camera, or Canon's own developing software? As long as you pass Affinity Photo the .CR2 file, you're working with what was captured in camera: no development has been done prior to that.
The reality is that the RAW data has to be interpolated into something meaningful in order for us to edit it. I'd recommend checking out my article "RAW, actually" on the Affinity Spotlight site if you're interested, as I believe it'll help explain why the RAW information has to be "developed": https://affinityspotlight.com/article/raw-actually/
If you use Canon's RAW converter, it should allow you to export in several formats such as JPEG and TIFF. A 16-bit TIFF in a wide colour space would be an appropriate high quality option here, which you could then open in Affinity Photo and edit. However, as I've said above, Affinity Photo's Develop stage is the equivalent to any other software's RAW development stage—so no, you won't be losing any more information if you simply choose to develop your RAW files in Affinity Photo.
When you actually click the blue Develop button, that's when things change. By default, you go from the unbounded 32-bit environment with a wide colour space to an integer 16-bit format with whatever your working colour space is set to—usually sRGB, which is a much more limited gamut. This is basically the same process as when you export from your RAW development software to a raster format like TIFF or JPEG. You can of course change both these default options if you wish to continue working in 32-bit (not recommended for the majority of users) and in a wider colour space (you can set the output profile to something like ROMM RGB which is a version of ProPhoto that ships with the app).
As regards quality loss—technically, there will always be a loss, but it is exactly the same loss you would experience when you export from any other RAW development software. As soon as you export to JPEG or TIFF, you're converting the bit depth and also the colour profile, both of which are typically lossy operations. In most use cases, however, this loss is negligible. The exception to this might be if you don't recover extreme highlight detail in the Develop Persona before developing—this detail is then clipped and lost. Similarly, if you have a scene with very intense colours and you convert to sRGB, you're potentially throwing away the most intense colour values because they can't be represented in sRGB's smaller gamut.
Here's a workflow I use for the majority of my images, and if you follow it I can pretty much promise you needn't worry about loss of quality:
1) Open RAW file in Develop Persona
2) Access the Develop Assistant and remove the default tone curve (see https://youtu.be/s3nCN4BZkzQ)
3) If required, pull the highlights slider back to recover extreme highlight detail (use the clipping indicators top right of the interface to determine this)
4) Check the Profiles option, and set the output profile to ROMM RGB (i.e. ProPhoto)
5) Add any other changes e.g. noise reduction, custom tone curve (I usually add a non-linear boost to the shadows and mid-tones)
6) Develop. This will now convert the image to 16-bit with a wide colour space
7) Edit further in the main Photo Persona. This is where I'll typically do the tonal work as well (instead of doing it in the Develop Persona) using Curves, Brightness/Contrast etc
8) Export! For sharing (e.g. if exporting to JPEG), don't forget to click the More button and set ICC Profile to sRGB—this will convert the output to sRGB and ensure it displays correctly under conditions that are not colour-managed.
Hope that helps.
-
James Ritson got a reaction from stokerg in Confused with RAW/Affinity compatibility (split)
Hi Iggy, sorry for the wall of text, I got into it and kept writing..
A RAW file is just digitised information captured from the camera's sensor. When you load a RAW file into Photo, it performs all the usual operations—demosaicing, converting from the camera's colour space to a standardised colour space, tone mapping, etc—in an unbounded 32-bit pixel format, which means you achieve a lossless editing workspace.
This process is equivalent to developing a RAW file in any other software, and it all takes place in the Develop Persona. You mention the Canon RAW developer—not sure what you mean here, are you referring to how the RAW file is captured in camera, or Canon's own developing software? As long as you pass Affinity Photo the .CR2 file, you're working with what was captured in camera: no development has been done prior to that.
The reality is that the RAW data has to be interpolated into something meaningful in order for us to edit it. I'd recommend checking out my article "RAW, actually" on the Affinity Spotlight site if you're interested, as I believe it'll help explain why the RAW information has to be "developed": https://affinityspotlight.com/article/raw-actually/
If you use Canon's RAW converter, it should allow you to export in several formats such as JPEG and TIFF. A 16-bit TIFF in a wide colour space would be an appropriate high quality option here, which you could then open in Affinity Photo and edit. However, as I've said above, Affinity Photo's Develop stage is the equivalent to any other software's RAW development stage—so no, you won't be losing any more information if you simply choose to develop your RAW files in Affinity Photo.
When you actually click the blue Develop button, that's when things change. By default, you go from the unbounded 32-bit environment with a wide colour space to an integer 16-bit format with whatever your working colour space is set to—usually sRGB, which is a much more limited gamut. This is basically the same process as when you export from your RAW development software to a raster format like TIFF or JPEG. You can of course change both these default options if you wish to continue working in 32-bit (not recommended for the majority of users) and in a wider colour space (you can set the output profile to something like ROMM RGB which is a version of ProPhoto that ships with the app).
As regards quality loss—technically, there will always be a loss, but it is exactly the same loss you would experience when you export from any other RAW development software. As soon as you export to JPEG or TIFF, you're converting the bit depth and also the colour profile, both of which are typically lossy operations. In most use cases, however, this loss is negligible. The exception to this might be if you don't recover extreme highlight detail in the Develop Persona before developing—this detail is then clipped and lost. Similarly, if you have a scene with very intense colours and you convert to sRGB, you're potentially throwing away the most intense colour values because they can't be represented in sRGB's smaller gamut.
Here's a workflow I use for the majority of my images, and if you follow it I can pretty much promise you needn't worry about loss of quality:
1) Open RAW file in Develop Persona
2) Access the Develop Assistant and remove the default tone curve (see https://youtu.be/s3nCN4BZkzQ)
3) If required, pull the highlights slider back to recover extreme highlight detail (use the clipping indicators top right of the interface to determine this)
4) Check the Profiles option, and set the output profile to ROMM RGB (i.e. ProPhoto)
5) Add any other changes e.g. noise reduction, custom tone curve (I usually add a non-linear boost to the shadows and mid-tones)
6) Develop. This will now convert the image to 16-bit with a wide colour space
7) Edit further in the main Photo Persona. This is where I'll typically do the tonal work as well (instead of doing it in the Develop Persona) using Curves, Brightness/Contrast etc
8) Export! For sharing (e.g. if exporting to JPEG), don't forget to click the More button and set ICC Profile to sRGB—this will convert the output to sRGB and ensure it displays correctly under conditions that are not colour-managed.
Hope that helps.
-
James Ritson got a reaction from Dan C in Confused with RAW/Affinity compatibility (split)
Hi Iggy, sorry for the wall of text, I got into it and kept writing..
A RAW file is just digitised information captured from the camera's sensor. When you load a RAW file into Photo, it performs all the usual operations—demosaicing, converting from the camera's colour space to a standardised colour space, tone mapping, etc—in an unbounded 32-bit pixel format, which means you achieve a lossless editing workspace.
This process is equivalent to developing a RAW file in any other software, and it all takes place in the Develop Persona. You mention the Canon RAW developer—not sure what you mean here, are you referring to how the RAW file is captured in camera, or Canon's own developing software? As long as you pass Affinity Photo the .CR2 file, you're working with what was captured in camera: no development has been done prior to that.
The reality is that the RAW data has to be interpolated into something meaningful in order for us to edit it. I'd recommend checking out my article "RAW, actually" on the Affinity Spotlight site if you're interested, as I believe it'll help explain why the RAW information has to be "developed": https://affinityspotlight.com/article/raw-actually/
If you use Canon's RAW converter, it should allow you to export in several formats such as JPEG and TIFF. A 16-bit TIFF in a wide colour space would be an appropriate high quality option here, which you could then open in Affinity Photo and edit. However, as I've said above, Affinity Photo's Develop stage is the equivalent to any other software's RAW development stage—so no, you won't be losing any more information if you simply choose to develop your RAW files in Affinity Photo.
When you actually click the blue Develop button, that's when things change. By default, you go from the unbounded 32-bit environment with a wide colour space to an integer 16-bit format with whatever your working colour space is set to—usually sRGB, which is a much more limited gamut. This is basically the same process as when you export from your RAW development software to a raster format like TIFF or JPEG. You can of course change both these default options if you wish to continue working in 32-bit (not recommended for the majority of users) and in a wider colour space (you can set the output profile to something like ROMM RGB which is a version of ProPhoto that ships with the app).
As regards quality loss—technically, there will always be a loss, but it is exactly the same loss you would experience when you export from any other RAW development software. As soon as you export to JPEG or TIFF, you're converting the bit depth and also the colour profile, both of which are typically lossy operations. In most use cases, however, this loss is negligible. The exception to this might be if you don't recover extreme highlight detail in the Develop Persona before developing—this detail is then clipped and lost. Similarly, if you have a scene with very intense colours and you convert to sRGB, you're potentially throwing away the most intense colour values because they can't be represented in sRGB's smaller gamut.
Here's a workflow I use for the majority of my images, and if you follow it I can pretty much promise you needn't worry about loss of quality:
1) Open RAW file in Develop Persona
2) Access the Develop Assistant and remove the default tone curve (see https://youtu.be/s3nCN4BZkzQ)
3) If required, pull the highlights slider back to recover extreme highlight detail (use the clipping indicators top right of the interface to determine this)
4) Check the Profiles option, and set the output profile to ROMM RGB (i.e. ProPhoto)
5) Add any other changes e.g. noise reduction, custom tone curve (I usually add a non-linear boost to the shadows and mid-tones)
6) Develop. This will now convert the image to 16-bit with a wide colour space
7) Edit further in the main Photo Persona. This is where I'll typically do the tonal work as well (instead of doing it in the Develop Persona) using Curves, Brightness/Contrast etc
8) Export! For sharing (e.g. if exporting to JPEG), don't forget to click the More button and set ICC Profile to sRGB—this will convert the output to sRGB and ensure it displays correctly under conditions that are not colour-managed.
Hope that helps.
-
James Ritson got a reaction from Patrick Connor in Confused with RAW/Affinity compatibility (split)
Hi Iggy, sorry for the wall of text, I got into it and kept writing..
A RAW file is just digitised information captured from the camera's sensor. When you load a RAW file into Photo, it performs all the usual operations—demosaicing, converting from the camera's colour space to a standardised colour space, tone mapping, etc—in an unbounded 32-bit pixel format, which means you achieve a lossless editing workspace.
This process is equivalent to developing a RAW file in any other software, and it all takes place in the Develop Persona. You mention the Canon RAW developer—not sure what you mean here, are you referring to how the RAW file is captured in camera, or Canon's own developing software? As long as you pass Affinity Photo the .CR2 file, you're working with what was captured in camera: no development has been done prior to that.
The reality is that the RAW data has to be interpolated into something meaningful in order for us to edit it. I'd recommend checking out my article "RAW, actually" on the Affinity Spotlight site if you're interested, as I believe it'll help explain why the RAW information has to be "developed": https://affinityspotlight.com/article/raw-actually/
If you use Canon's RAW converter, it should allow you to export in several formats such as JPEG and TIFF. A 16-bit TIFF in a wide colour space would be an appropriate high quality option here, which you could then open in Affinity Photo and edit. However, as I've said above, Affinity Photo's Develop stage is the equivalent to any other software's RAW development stage—so no, you won't be losing any more information if you simply choose to develop your RAW files in Affinity Photo.
When you actually click the blue Develop button, that's when things change. By default, you go from the unbounded 32-bit environment with a wide colour space to an integer 16-bit format with whatever your working colour space is set to—usually sRGB, which is a much more limited gamut. This is basically the same process as when you export from your RAW development software to a raster format like TIFF or JPEG. You can of course change both these default options if you wish to continue working in 32-bit (not recommended for the majority of users) and in a wider colour space (you can set the output profile to something like ROMM RGB which is a version of ProPhoto that ships with the app).
As regards quality loss—technically, there will always be a loss, but it is exactly the same loss you would experience when you export from any other RAW development software. As soon as you export to JPEG or TIFF, you're converting the bit depth and also the colour profile, both of which are typically lossy operations. In most use cases, however, this loss is negligible. The exception to this might be if you don't recover extreme highlight detail in the Develop Persona before developing—this detail is then clipped and lost. Similarly, if you have a scene with very intense colours and you convert to sRGB, you're potentially throwing away the most intense colour values because they can't be represented in sRGB's smaller gamut.
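To make the bit depth side of that concrete, here's a small NumPy sketch. It uses plain arrays rather than any real colour management, and out-of-gamut colours are crudely modelled as linear values above 1.0:

```python
import numpy as np

# Linear scene values from a 32-bit unbounded workspace: the last one is
# an intense value that the destination format can't represent.
scene = np.array([0.18, 0.25, 1.0, 1.7], dtype=np.float32)

# Converting to integer 16-bit clamps to [0, 1] first, then quantises.
clamped = np.clip(scene, 0.0, 1.0)
as_16bit = np.round(clamped * 65535).astype(np.uint16)

# The out-of-range value is now indistinguishable from pure white:
print(as_16bit)  # [11796 16384 65535 65535]
```

In practice the quantisation itself is negligible at 16-bit; it's the clamping of unrecovered highlights (or of intense colours outside a small gamut like sRGB) that actually discards information.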
Here's a workflow I use for the majority of my images, and if you follow it I can pretty much promise you needn't worry about loss of quality:
1) Open RAW file in Develop Persona
2) Access the Develop Assistant and remove the default tone curve (see https://youtu.be/s3nCN4BZkzQ)
3) If required, pull the highlights slider back to recover extreme highlight detail (use the clipping indicators top right of the interface to determine this)
4) Check the Profiles option, and set the output profile to ROMM RGB (i.e. ProPhoto)
5) Add any other changes e.g. noise reduction, custom tone curve (I usually add a non-linear boost to the shadows and mid-tones)
6) Develop. This will now convert the image to 16-bit with a wide colour space
7) Edit further in the main Photo Persona. This is where I'll typically do the tonal work as well (instead of doing it in the Develop Persona) using Curves, Brightness/Contrast etc
8) Export! For sharing (e.g. if exporting to JPEG), don't forget to click the More button and set ICC Profile to sRGB—this will convert the output to sRGB and ensure it displays correctly under conditions that are not colour-managed.
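For the curious, the sRGB conversion in step 8 ends with a non-linear transfer curve. This sketch shows the standard piecewise sRGB encode from IEC 61966-2-1, purely as an illustration; the app's ICC conversion handles all of this for you:

```python
import numpy as np

def srgb_encode(linear):
    """Piecewise sRGB transfer function (IEC 61966-2-1)."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,                      # linear toe segment
                    1.055 * linear ** (1 / 2.4) - 0.055)  # gamma segment

# Linear 18% grey encodes to roughly 0.46 (about 46% of the output range).
vals = np.array([0.0, 0.0031308, 0.18, 1.0])
print(srgb_encode(vals))
```

This is why a linear image looks dark in a non-colour-managed viewer: the viewer expects values that have already been through this curve.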
Hope that helps.
-
James Ritson got a reaction from Dasyscneme in Infrared processing on iPad
Hi, infrared processing is fairly straightforward—hopefully you shot in RAW? Did you custom white balance (should produce a fairly neutral image with a purple or yellow cast) or leave it on auto (will look intensely red)?
If it looks red, you'll need to white balance in-app, so hopefully you're working with RAW files! The white balance shift required for infrared is extreme, however, and Photo will only go down to 2000K. If your images are red, I'd really recommend white balancing at the scene—the issue with leaving it to post is that most software won't let you shift white balance as far as infrared requires (though it will happily read and apply the in-camera value from the image's metadata), and your camera will also expose for the red channel, which underexposes the blue and green channels. It's much better to take a white balance reading either from a neutral white card or from some foliage (even the overall scene would be much better than leaving white balance on auto).
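To see why that matters: white balancing is essentially a per-channel multiplication, and when the red channel dominates, the gains needed for green and blue become extreme. A quick sketch with illustrative numbers (not measured from any real file):

```python
import numpy as np

# A typical auto-WB infrared capture: the red channel dominates
# (illustrative linear values only).
pixel = np.array([0.80, 0.12, 0.10])   # R, G, B

# White balancing scales each channel so a neutral reference comes out equal.
gains = pixel[0] / pixel
balanced = pixel * gains

# Green and blue need roughly 7-8x amplification, which also multiplies
# their noise -- hence the advice to balance (and expose) at the scene.
print(gains)
print(balanced)
```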
If you have white balanced your images, all the better. From here, people usually mix and blend the channel information depending on the look they want.
As a starting point, add a Channel Mixer adjustment. Target Reds, and set Red to 0 and Blue to 100. Now target Blues, and set Red to 100 and Blue to 0—so you're effectively swapping the red and blue channels.
For another look, switch across to Greens and set Green to 0 and Blue to 100—so now you've eliminated the Green channel entirely and replaced it with Blue.
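In pixel terms, the two Channel Mixer recipes above are just a small matrix applied to each RGB triplet. A hedged NumPy sketch of both looks, where 1.0 stands in for the 100 shown in Photo's UI:

```python
import numpy as np

rgb = np.random.rand(4, 4, 3)  # stand-in for a white-balanced IR image

# Rows are output channels; columns are the source Red/Green/Blue sliders.
mixer = np.array([[0.0, 0.0, 1.0],   # Reds:   Red 0,   Blue 100
                  [0.0, 1.0, 0.0],   # Greens: unchanged
                  [1.0, 0.0, 0.0]])  # Blues:  Red 100, Blue 0
swapped = rgb @ mixer.T              # the red/blue swap

# "Another look": also replace Green entirely with the source Blue channel.
mixer[1] = [0.0, 0.0, 1.0]
variant = rgb @ mixer.T
```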
You can also use a variety of other adjustments to increase the contrast, boost certain tones etc. Something you might also want to try is converting to black and white (for which you can use a Black & White adjustment) as infrared monochrome imagery has a distinct look compared to traditional black and white.
The rest is more or less experimentation, just like if you were editing a traditional photograph. Hope that helps—there are several Photo for iPad video tutorials that you could look at to give you some more ideas, and even more desktop videos whose techniques are also applicable on iPad.
-
James Ritson got a reaction from Dan C in Spyder 4 Calibration
Hi John, you shouldn't need to change any settings in Affinity Photo—it is colour managed based on the active monitor profile (which should be the profile created by the Spyder 4 software). If you're still seeing a difference between Affinity Photo and Preview, it may be that Preview is interpreting the profile incorrectly. Have you tried previewing the exported image in another app?
Preview is famously picky about ICC profiles. For example, DisplayCAL now changes its default settings to compensate: see https://hub.displaycal.net/forums/topic/dark-images-in-mac-photos-preview-with-displaycal-generated-profile/ and https://apple.stackexchange.com/questions/271825/bug-macos-sierra-preview-quick-look-issues-with-rendering-colors-of-images-when/272115 (the second link offers a solution).
Let us know how you get on!
-
James Ritson got a reaction from Gabe in Export not match Affinity Photo preview
Hi Mareck, could you check what your 32-bit preview options are set to? Go to View>Studio and choose 32-bit Preview Panel. Because you're using an OCIO configuration the display transform will be set to "OCIO Display Transform". If you could provide a couple of screenshots of this panel and the available options that would help. It's possible you need to be using a different view transform: most OCIO configurations will include linear options like "Non-Colour Data", "Linear Raw" etc, but these aren't suitable if your workflow is simply to edit in Photo and export to JPEG. Unless you're passing the render between different software and need explicit colour management, you may be better off simply using the ICC Display Transform option, as that will provide the best match when you export the result as JPEG with a non-linear sRGB profile.
Hope that helps—any info you can give on your workflow would be useful, as we can then help you further!
-
James Ritson got a reaction from Bigby in LEGACY: Official Affinity Photo (Desktop) Video Tutorials
Hey all, another video for you, this time covering the lens corrections in the Develop Persona to correct skew (and demonstrating a non-destructive approach using live filters):
Perspective Skew Correction - YouTube / Vimeo (New: 20th April) Hope you find it useful!
-
James Ritson got a reaction from Boxbrownie in Panorama Hell/Help
Hmm, unless I'm missing something here, Photo's Panorama persona has an equivalent of control points. In fact, it has two options: source image masking and source image transforming.
I had a go with the JPEGs: it took a couple of minutes, but I was able to sort the alignment (as far as I can tell). I've attached a video to demonstrate using the two tools.
Unfortunately, although you can stitch EXR files, it seems to be hit and miss depending on the subject material. I might suggest HDR merging and tone mapping each panorama segment, then exporting as 16-bit TIFF, which is not the greatest solution but would allow you to re-align successfully.
Out of interest, which drone did you use? (The EXIF data lists a camera model but not the drone itself!)
Here's the video file:
bridge.mp4
-
James Ritson got a reaction from Mithferion in Your login sucks noobs
Browser autofill. It's a wonderful thing.