Reputation Activity
-
James Ritson got a reaction from walt.farrell in Colors in exported images differ from editor
Hopefully this can clarify the issue and also clear up the concerns that Photo is discarding or throwing away anything outside of sRGB:
RAW files opened in Develop go through an image processing pipeline, during which the colour information is extracted and processed. The colour space used for these operations is ROMM RGB, because its large gamut allows colour values to be saturated and manipulated without clipping to white. This choice of colour space was introduced in version 1.5 and brought a marked improvement for RAW files with intense colour values (e.g. artificial lighting).
However, the actual document profile is sRGB, which means the final colour values sent to the screen are restricted to sRGB. Is this deficient? Yes, and there have been discussions about how to tackle it without risking further complication for people who don't use wide colour profiles.
There is a silver lining, though. RAW files are developed in an unbounded (float) colour space, which means values that fall outside of sRGB are not clipped or discarded. If you then set your output profile to a larger colour space like ROMM RGB, these out-of-gamut values can be accommodated by that space's wider gamut. Essentially, you can avoid clipping values outside of sRGB when clicking Develop, and you can get them back once you're in the Photo Persona; the catch is that you can't see these values within the Develop Persona.
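To illustrate the point in a rough, hypothetical way (this is a sketch of the concept only, not how Photo is implemented internally), here is what an unbounded float pipeline preserves that an integer sRGB document would clip:

```python
import numpy as np

# Hypothetical channel values after RAW development, normalised so that
# 0.0-1.0 is the displayable sRGB range. Saturated artificial lighting can
# easily produce values beyond that range.
developed = np.array([0.20, 0.95, 1.37, 1.82], dtype=np.float32)

# Integer sRGB document: out-of-range values are clipped and lost for good.
srgb_16bit = (np.clip(developed, 0.0, 1.0) * 65535).astype(np.uint16)
print(srgb_16bit)   # 1.37 and 1.82 both collapse to 65535

# Unbounded float document: nothing is clipped, so a later conversion to a
# wider profile such as ROMM RGB can still accommodate those values.
print(developed)    # 1.37 and 1.82 survive intact
```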
I've experimented with one of my photographs of some intense lighting to back this up, and have attached it to this post for people to experiment with. I've also compared the results versus Photoshop CC 2019 (where you can set the colour space and it will actually affect the view transform) and, minor processing differences aside such as sharpness and lens distortion, have been able to match the intensity of colours. For Photoshop I also used ROMM RGB and increased saturation directly in the Camera Raw module.
Here's the RAW file for you to try out:
_1190462.RW2
Steps for this experiment:
1. Enable Shadows/Highlights and drag the Highlights slider to -100%.
2. Avoid any colour or saturation adjustments; add other adjustments to taste (e.g. noise reduction).
3. Enable the Profiles option and set the output profile to ROMM RGB.
4. Click Develop.
5. Once in the Photo Persona, add an HSL adjustment and increase Saturation all the way. You'll be able to dramatically saturate the image without the colours clipping.
If you close and re-open the RAW file and try to increase the saturation within Develop, you'll notice that the colour values are restricted to sRGB. However, even with values at the limit of sRGB, you can still set the output profile to ROMM RGB and then increase them further once in the Photo Persona.
Below are two images, one still in ROMM RGB, the other converted to sRGB. I'm not sure how they will display on the forum (and whether the forum software will process and convert the colour profile, hopefully not!), but feel free to download both and view them in a colour-managed application or image viewer. If your display is capable of reproducing wide colour gamuts, you should see a noticeable difference between the two.
[Edit] OK, that didn't work, the forum software converts to sRGB and ruins the comparison. Here's a Dropbox link to the JPEGs and RAW file where you can download the original files: https://www.dropbox.com/sh/aof74w94f6lm3d2/AABXE2OJMfk__kjA_jb6vwmia?dl=0
Hope that helps!
James
-
James Ritson got a reaction from Athrul in Best Resampling Method
Hi Phil, the resampling methods are ordered by sharpness. For photographic purposes you can safely ignore Nearest Neighbour. Bilinear will produce the softest result, whereas Lanczos 3 non-separable will produce the sharpest (the difference between separable and non-separable is very slight). Bicubic is somewhere in the middle and will probably suffice for the majority of your images.
One reason to use a softer resampling method might be if your image has lots of high frequency content - from excessive noise to very fine detail like chainlink fences and difficult patterning in architecture. Using a sharper resampling method may exacerbate these parts of the image, and may even produce resampling artefacts (though they are more noticeable in video content to be honest - stills, not so much).
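If you want to compare the methods for yourself outside of Photo, here's a rough sketch using Pillow (9.1 or later for the Resampling enum); the filter names are Pillow's rather than Affinity's, and "image.jpg" is just a placeholder:

```python
from PIL import Image

img = Image.open("image.jpg")                 # placeholder path
target = (img.width // 4, img.height // 4)    # downsample to 25%

# Ordered roughly from softest to sharpest, mirroring the advice above.
filters = {
    "nearest": Image.Resampling.NEAREST,      # avoid for photographic work
    "bilinear": Image.Resampling.BILINEAR,    # softest photographic option
    "bicubic": Image.Resampling.BICUBIC,      # good middle ground
    "lanczos": Image.Resampling.LANCZOS,      # sharpest, may ring on fine detail
}
for name, method in filters.items():
    img.resize(target, resample=method).save(f"resized_{name}.jpg")
```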
As always, experiment! But for a good safe bet with most of your images, try Bicubic. Hope that helps!
-
James Ritson got a reaction from MattP in Ap 1.7.0.100 Clarity Filter dosen't work ...
If you want to achieve the same effect as the old Clarity filter, just use a live Unsharp Mask, drag its radius to 100px and set the blend mode to Lighten.
Alternatively, create a luminosity mask (CMD-Shift-click on a pixel layer) then add the Unsharp Mask and drag its radius to 100px.
That's basically what the old Clarity was: local contrast enhancement using unsharp mask with luminosity blending. The new version is more complex and is far more effective, but if you preferred the old look you should be able to follow the above instructions. Hope that helps!
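For anyone curious what "local contrast enhancement using unsharp mask" means in practice, here is a minimal sketch of the idea with NumPy/SciPy. It's a conceptual analogue only, assuming a single-channel float image in 0-1; the parameter names and values are stand-ins, not Photo's actual implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast(luma, radius=100.0, amount=0.5):
    """Crude 'clarity': an unsharp mask with a very large radius."""
    blurred = gaussian_filter(luma, sigma=radius / 3.0)  # large-radius blur
    detail = luma - blurred                              # low-frequency contrast
    return np.clip(luma + amount * detail, 0.0, 1.0)

# The old Clarity's Lighten blend would then keep only the brighter result:
# result = np.maximum(luma, local_contrast(luma))
```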
-
James Ritson got a reaction from Polygonius in Levels opacity dropdown crash
Hi, this occurs with most adjustments at the moment, it's related to an HSL dialog fix—it should be fixed for the next build. In the meantime, if you wish to keep editing with the current build, just add an HSL adjustment before you add any other adjustments. You don't have to do anything with the HSL adjustment (in fact, you can just delete it), just adding it is enough to mitigate the crash.
Thanks!
-
James Ritson got a reaction from CliveYoung in Is there a way to select Highlights ?
Hi Anstellos, have you tried either:
1. Select the pixel layer from the Layers tab (e.g. Background), then go to Select > Tonal Range > Select Highlights. This will create a selection of just the highlights.
2. Shift+Cmd-click the pixel layer to select areas of luminance. This will usually select more of the image than Select Highlights.
You can then add adjustments/filters/masks to manipulate these highlight areas; for example, I like to add a Gaussian Blur with an Overlay blend mode, which creates a really nice diffuse light effect.
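For the curious, here is roughly what a luminosity selection boils down to, sketched with NumPy. The Rec. 709 luma weights and the 0.75 threshold are assumptions for illustration, not necessarily what Photo uses:

```python
import numpy as np

def luminosity_mask(rgb):
    """rgb: float array (H, W, 3) in 0-1. Returns a 0-1 mask weighted towards
    bright areas, roughly what a Shift+Cmd (luminance) selection gives you."""
    return rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luma weights

def highlights_mask(rgb, threshold=0.75):
    """A harder selection of just the highlights, akin to Select Highlights."""
    return (luminosity_mask(rgb) > threshold).astype(np.float32)
```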
Hope that helps!
-
James Ritson got a reaction from mark117h in Official Affinity Photo V1 (iPad) Tutorials
Official Affinity Photo iPad Tutorials
New to the latest update of the app (1.6.9), we've got a brand new set of tutorials that follow a more structured approach. You can access them from the Tutorials option in-app or by following this link:
https://affinity.serif.com/tutorials/photo/ipad
They are sorted into categories:
- Basic Operations
- Advanced
- Corrective and Retouching
- Creative Tools
- Filters and Adjustments
- Export Persona
Just some quick info about the new videos:
- They're all shot in 4K resolution (supported on desktop machines) for extra clarity.
- There are localised subtitles for all the languages supported by the app (English, German, Spanish, Italian, French, Russian, Brazilian Portuguese, Chinese Simplified, Japanese).
Hope you find them useful!
James
-
James Ritson got a reaction from Dan C in Develop persona appears to be limited to the gamut of sRGB!
To clarify, the general Develop process is done in sRGB float; this includes the initial conversion from the camera's colour matrix to a designated colour space. The colour data itself, however, is processed in ROMM RGB, which produces enormous benefits for some types of imagery, particularly low-light images with lots of artificial lighting sources. Despite this, the document is in sRGB, so this is the colour profile used when converting to the display profile.
As the pipeline is in float, this does mean that you can convert to a wider colour profile on output and avoid clipping to sRGB, which is the recommended course of action for now.
Do you mean the final developed image? This isn't my experience—choosing an output profile of ROMM RGB and then using adjustments and tools within the Photo Persona allows me to push colours right up to the limit of P3.
-
James Ritson got a reaction from LeonD in Questions on exposure merging?
Hi Timber, thanks for posting, hope these answer your questions:
Stacking automatically aligns your images and is a quicker process, that's all there is to it! Manual exposure merging involves aligning the images by hand then masking specific areas. It's more precise but also more time consuming.
HDR is a different beast entirely. With exposure merging you're simply merging exposures in an 8-bit or 16-bit integer document to "average" the exposures. HDR will work in 32-bit floating point and you'll be able to use tone mapping and exposure fusion to achieve the results HDR is more typically associated with.
Yes, you can stack any image format that Photo supports, including RAW. Stacking RAW files takes a little longer as Photo has to decode them first.
TIFFs are lossless (I'm a quality control freak!).
No, you can just add your RAW files straight into the Stacking file dialog, no need to spend time developing them beforehand.
None, it just takes a little longer. It's worth noting that your RAW files will be decoded using whichever RAW engine you've chosen in the Develop persona (see video Maximising Raw Latitude for more info).
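If it helps to see the difference spelled out, here is a minimal sketch of what a simple mean stack amounts to, assuming already-aligned float images (this shows the concept only, not Photo's implementation):

```python
import numpy as np

def mean_stack(frames):
    """frames: list of aligned float arrays (H, W, 3) in 0-1.
    Averaging the exposures suppresses random noise; HDR merging, by
    contrast, keeps the result in 32-bit float and then tone maps it."""
    return np.mean(np.stack(frames, axis=0), axis=0)
```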
Hope that helps!
Thanks
-
James Ritson got a reaction from Patrick Connor in Official Affinity Photo V1 (iPad) Tutorials
Hello all, the 1.6.9 update made quite a few interface and gesture changes that rendered the tutorial videos out of date—thus, you will now find a new set of structured tutorials here: https://affinity.serif.com/tutorials/photo/ipad
The first post of this thread has been updated, but the above link will take you straight to the new tutorials which feature localised subtitles for all 9 languages the app is supported in. Hope you find them useful!
-
James Ritson got a reaction from Dan C in Dust & Scratches filter is always disabled ( windows version )
Hi Tapatio, are Bilateral and Median blur disabled as well? If so, you may be editing in 32-bit—these three filters won't function correctly and so were disabled for 32-bit work. You would need to convert your document to 16-bit or 8-bit for them to be accessible.
If you're developing RAW files, you may have switched over to 32-bit development using the assistant. This video shows how to access it and set it back to 16-bit if required:
Alternatively, if that's not your issue, is there any way you could attach a screen grab of what you're seeing and how the filter is disabled? Thanks in advance!
-
James Ritson got a reaction from SrPx in Affinity Photo Using 700+% of CPU
@owenr there is absolutely no need to be snarky. It's counterproductive to any discussion.
@PhotonFlux live filters are indeed very taxing—in your screen grab you're using several stacked on top of one another including convolutions and distortions (plus something nested into your main pixel layer, not sure what that is?) This is why live filter layers are set to child layer by default, otherwise the temptation is to stack them as top level layers. This doesn't help much in your case, but if you were working on a multiple layer document you would typically try to child-layer live filters into just the pixel layers you were trying to affect.
The Affinity apps use render caching in idle time, but this is invalidated any time the zoom level is changed. What you should be able to do in theory is set a particular zoom level, wait a couple of seconds (or longer depending on how many live filters you have active) and the render cache will kick in. Now if you start to use tools or adjustments on a new top level layer, the performance should be improved. In honesty, this works better for multiple pixel layers where you're using masking—less so for live filters that have to redraw on top of one another.
The live filters are also calculated entirely in software—however, as you're running a MacBook with a recent Intel iGPU you may want to dabble with enabling Metal Compute in Preferences>Performance. This enables hardware acceleration for many of the filter effects. There are still some kinks to be worked out (I believe you might see some improvements to hardware acceleration in 1.7) as it's a port from the iPad implementation, but it may speed things up if you're consistently using multiple stacked live filters. It also significantly decreases export times too. There are some caveats (Unsharp Mask behaves differently and you may need to tweak values on existing documents) but I would say it's worth a try at least.
Hope that helps!
-
James Ritson got a reaction from ekeen4 in Confused with RAW/Affinity compatibility (split)
Hi Iggy, sorry for the wall of text, I got into it and kept writing..
A RAW file is just digitised information captured from the camera's sensor. When you load a RAW file into Photo, it performs all the usual operations—demosaicing, converting from the camera's colour space to a standardised colour space, tone mapping, etc—in an unbounded 32-bit pixel format, which means you achieve a lossless editing workspace.
This process is equivalent to developing a RAW file in any other software, and it all takes place in the Develop Persona. You mention the Canon RAW developer—not sure what you mean here, are you referring to how the RAW file is captured in camera, or Canon's own developing software? As long as you pass Affinity Photo the .CR2 file, you're working with what was captured in camera: no development has been done prior to that.
The reality is that the RAW data has to be interpolated into something meaningful in order for us to edit it. I'd recommend checking out my article "RAW, actually" on the Affinity Spotlight site if you're interested, as I believe it'll help explain why the RAW information has to be "developed": https://affinityspotlight.com/article/raw-actually/
If you use Canon's RAW converter, it should allow you to export in several formats such as JPEG and TIFF. A 16-bit TIFF in a wide colour space would be an appropriate high quality option here, which you could then open in Affinity Photo and edit. However, as I've said above, Affinity Photo's Develop stage is the equivalent to any other software's RAW development stage—so no, you won't be losing any more information if you simply choose to develop your RAW files in Affinity Photo.
When you actually click the blue Develop button, that's when things change. By default, you go from the unbounded 32-bit environment with a wide colour space to an integer 16-bit format with whatever your working colour space is set to—usually sRGB, which is a much more limited gamut. This is basically the same process as when you export from your RAW development software to a raster format like TIFF or JPEG. You can of course change both these default options if you wish to continue working in 32-bit (not recommended for the majority of users) and in a wider colour space (you can set the output profile to something like ROMM RGB which is a version of ProPhoto that ships with the app).
As regards quality loss—technically, there will always be a loss, but it is exactly the same loss you would experience when you export from any other RAW development software. As soon as you export to JPEG or TIFF, you're converting the bit depth and also the colour profile, both of which are typically lossy operations. In most use cases, however, this loss is negligible. The exception to this might be if you don't recover extreme highlight detail in the Develop Persona before developing—this detail is then clipped and lost. Similarly, if you have a scene with very intense colours and you convert to sRGB, you're potentially throwing away the most intense colour values because they can't be represented in sRGB's smaller gamut.
Here's a workflow I use for the majority of my images, and if you follow it I can pretty much promise you needn't worry about loss of quality:
1) Open RAW file in Develop Persona
2) Access the Develop Assistant and remove the default tone curve (see https://youtu.be/s3nCN4BZkzQ)
3) If required, pull the highlights slider back to recover extreme highlight detail (use the clipping indicators top right of the interface to determine this)
4) Check the Profiles option, and set the output profile to ROMM RGB (i.e. ProPhoto)
5) Add any other changes e.g. noise reduction, custom tone curve (I usually add a non-linear boost to the shadows and mid-tones)
6) Develop. This will now convert the image to 16-bit with a wide colour space
7) Edit further in the main Photo Persona. This is where I'll typically do the tonal work as well (instead of doing it in the Develop Persona) using Curves, Brightness/Contrast etc
8) Export! For sharing (e.g. if exporting to JPEG), don't forget to click the More button and set ICC Profile to sRGB—this will convert the output to sRGB and ensure it displays correctly under conditions that are not colour-managed.
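For anyone who wants to rough out the same shape of workflow in code, here is a hedged sketch using the rawpy and imageio libraries. It's a different RAW engine, so the results won't match Photo's, and the filename is a placeholder:

```python
import rawpy
import imageio.v3 as iio

with rawpy.imread("photo.CR2") as raw:           # placeholder filename
    rgb16 = raw.postprocess(
        output_bps=16,                           # 16-bit output, as in step 6
        output_color=rawpy.ColorSpace.ProPhoto,  # wide gamut, as in step 4
        no_auto_bright=True,                     # roughly "no default tone curve"
        use_camera_wb=True,
    )

iio.imwrite("photo.tif", rgb16)                  # lossless 16-bit TIFF
```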
Hope that helps.
-
James Ritson got a reaction from MEB in TVs vs monitors
If you're not playing games or watching video then I definitely wouldn't recommend a TV just for production work. As far as colour accuracy, bear in mind that TVs are generally designed to enhance content, not portray it accurately. If you were determined on using a TV you would absolutely need to profile it using a colourimeter like the i1Display Pro and also disable the various features like contrast enhancement, "extended dynamic range", local dimming, etc—the downside is that these features are often used to reach claimed specifications like the high contrast ratio and peak brightness. By the time you have disabled all the TV's picture-based features, you may as well pay the same amount for a decent monitor. There are plenty of 4K 27" options (and some 32") that will cover 99/100% of sRGB and Adobe RGB, plus 60-80% of P3 and Rec.2020—more than enough for image editing.
There's also the size—50" at a desk might look impressive at first, but after the first day will seem a bit ridiculous as you'll basically have to move your head just to pan across areas of the screen. I work on a Dell 32" and that's honestly the maximum I would recommend—I do find myself sometimes moving my head just to look at the right hand side of the screen.
Gabriel also mentioned the pixel density, which is important too. With a typical 21-32" monitor, you'll get denser pixels (especially at 4K) which will give you a better representation of how your images look and allow you to evaluate their sharpness. A 50" isn't going to give you a typical view of how your images actually look—when you think about it, what size do you typically see images at, unless you print them at huge sizes? Definitely not 50"! A large TV may well skew your perception, unless you sit far away from it, but that doesn't sound like a great way to actually do detailed image and design work.
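A quick back-of-the-envelope calculation shows the density gap (the sizes below are just examples):

```python
import math

def ppi(diagonal_inches, width_px, height_px):
    """Pixels per inch for a display of the given diagonal and resolution."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(50, 3840, 2160)))  # ~88 PPI  for a 50" 4K TV
print(round(ppi(32, 3840, 2160)))  # ~138 PPI for a 32" 4K monitor
print(round(ppi(27, 3840, 2160)))  # ~163 PPI for a 27" 4K monitor
```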
Put it this way—speaking from personal experience, if you're only wanting to do production work in apps like Photo and Designer, I would honestly recommend a decent 24-32" monitor—there are plenty of options available. Profiling them is easy if you have a decent colourimeter, and you would be able to have a conventional desk setup where everything is within easy reach. A big TV just isn't the answer—and if you're looking at a smaller TV (eg the 24-32" range), then an actual computer monitor at an equivalent price would likely be just as good if not better.
-
James Ritson got a reaction from Arp_148 in colours in Affinity Photo washed out
Hi dho, you shouldn't be setting Affinity Photo's colour profile to your new display profile, that's not how you colour manage. The software will automatically colour manage based off the current monitor profile, which should be set to your custom profile created by the Spyder5Pro software (is it i1Profiler?).
The colour profile options in Affinity Photo's Preferences are for the document profile, not your display. Your document profile wants to be a standardised profile like sRGB (the default), Adobe RGB, ProPhoto etc. Photo will then colour manage by converting the colour values using your display profile so that what you're seeing is accurate.
What you're currently doing is converting from the image's original colour profile to your display profile, which may explain why the colours look washed out and incorrect. Try setting Affinity Photo's colour dropdown options back to their defaults (e.g. sRGB) and you should see an improvement.
In other words, you're overcomplicating things! All you need to do is make sure the OS (Windows or macOS) is using your display profile. In Windows, you'd right click the desktop and choose Display Settings, then make sure the Spyder profile is being used in the dropdown. On macOS, you'd go to System Preferences>Displays>Colour and set the display profile there. You shouldn't need to change anything in Affinity Photo, just leave the colour profile options on their default settings (e.g. sRGB).
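If it helps to see what that colour management step looks like conceptually, here is a small sketch with Pillow's ImageCms module. The profile path is a hypothetical placeholder; in practice the OS and the application do this for you:

```python
from PIL import Image, ImageCms

img = Image.open("photo.jpg")                     # assume the document is sRGB
document_profile = ImageCms.createProfile("sRGB")

# The display profile is whatever the OS has assigned to the monitor,
# e.g. the custom profile the Spyder software generated (path is hypothetical).
display_profile = ImageCms.ImageCmsProfile("SpyderMonitorProfile.icc")

# Colour management converts document values through the display profile
# purely for viewing; the document itself stays in sRGB.
on_screen = ImageCms.profileToProfile(img, document_profile, display_profile)
```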
Hope that helps!
-
James Ritson got a reaction from PaulAffinity in Please sort out your ICC colour management
Hi Chris, did you see my reply in your previous thread? (https://forum.affinity.serif.com/index.php?/topic/60677-colour-profile-recognition-and-handling/&tab=comments#comment-314285)
It doesn't address your first point (.icm vs .icc)—I can't vouch for this as I haven't tried profiles with an .icm extension. It does however highlight that the profiles available are dependent on the document's current colour model (e.g. RGB or CMYK). I've also pointed out a specific directory unique to Affinity where you can put your custom profiles. Because you haven't replied to the thread I'm not sure if the suggestions have worked for you, or whether you're posting this thread because they haven't? Hope it helps!
