
Saturation Mask



I've been tinkering with the idea of saturation masks for a while and I've just found a very simple way to create one. It is based on the principle that we want all tones to go black, including whites and greys, leaving fully saturated colours white and partially saturated colours in shades of grey.

The steps are as follows:

  1. Duplicate the layer.
  2. Add an HSL adjustment layer, check HSV, slide Saturation fully left, and set Blend Mode to Difference (this removes tones - blacks, whites and greys all go black).
  3. Add another HSL adjustment layer, check HSV, slide Saturation fully left (this turns saturated colours white).
  4. Layer/Merge Visible and Layer/Rasterise to Mask, then use the mask as you like (or use the pixel layer otherwise).
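
If it helps to see what those four steps boil down to per pixel: the HSV desaturation sets every channel to max(R,G,B), the Difference blend then gives max minus each channel, and the second desaturation takes the largest of those, which is max(R,G,B) - min(R,G,B). Here's a minimal sketch of that calculation (my own illustration in numpy, not anything built into Affinity; it assumes an RGB array normalised to 0-1):

import numpy as np

def hsv_difference_mask(rgb):
    # rgb: float array of shape (height, width, 3), values 0..1
    # Steps 2-3 work out as chroma = max(R,G,B) - min(R,G,B):
    # greys, whites and blacks -> 0 (black); fully saturated pure colours -> 1 (white)
    return rgb.max(axis=-1) - rgb.min(axis=-1)

So mid-grey and pure white both come out at 0, while pure red comes out at 1, which is exactly the behaviour described above.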

Here's a video tutorial I've just uploaded to my InAffinity YouTube channel:

 

 

Dave Straker

Cameras: Sony A7R2, RX100V

Computers: Win10: Chillblast i9 Custom + Philips 40in 4K & Benq 23in; Surface Pro 4 i5; iPad Pro 11"

Favourite word: Aha. For me and for others.


  • 1 month later...

Thanks for sharing this! I’ve used this for a while now - the only issue is that it underestimates saturation somewhat when the luminosity is not 50 (on the HSL scale). I found a way to make a layer where hue/saturation is preserved but luminosity is set to 50, which resolves the issue. (Note: blending with a 50-luminosity layer using the “luminosity” blend mode does not work - nor do any similar approaches. The resulting luminosity is close to 50 but varies, and the hue/saturation values are affected.)

 

Here’s how to map saturation values exactly to luminosity values:

 

1. Create an HSL adjustment layer, and set saturation to -100. (Don’t turn on HSV!)

2. Merge visible. The result is a B&W layer that contains only luminosity values. (Note: desaturation, setting color to grey, etc. using pixel/fill layers/curves does not get this quite right!)

3. Invert the B&W luminosity layer, and set blend mode to “vivid light”.

4. Merge visible. The result is a layer containing only hue and saturation values. Luminosity is constant (always 50 by HSL). 
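
For anyone who prefers arithmetic: per pixel, steps 1-4 rescale the channels so that (max+min)/2 lands exactly at 0.5, while hue and HSL saturation are preserved. A rough numpy sketch of the equivalent calculation (my own illustration; the function name and the 0-1 normalised input are my assumptions, not anything Affinity exposes):

import numpy as np

def pin_hsl_lightness_to_half(rgb, eps=1e-6):
    # rgb: float array (height, width, 3), values 0..1
    L = (rgb.max(axis=-1, keepdims=True) + rgb.min(axis=-1, keepdims=True)) / 2
    # dark pixels are scaled up, light pixels are scaled down towards 1,
    # so the new (max+min)/2 is 0.5 in both cases
    return np.where(L <= 0.5,
                    rgb * 0.5 / np.maximum(L, eps),
                    1 - (1 - rgb) * 0.5 / np.maximum(1 - L, eps))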

 

Now that you have separated luminosity and hue/saturation into their own layers, apply the instructions you shared above to the hue/saturation layer:

 

5. Create an HSL adjustment layer, and set saturation to -100. (Turn on HSV!) Set blend mode to “difference”.

6. Create another HSL adjustment layer, and set saturation to -100. (Turn on HSV!) Set blend mode to “normal”.

7. Merge visible. The result is a B&W layer containing only saturation values, mapped to luminosity values. This is your saturation mask.
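
With lightness pinned at 50, that final merged layer is simply the HSL saturation channel drawn as grey values. If you want to check a pixel numerically, this is roughly the calculation (again just my own numpy sketch, ignoring clipping and rounding; the function name is mine):

import numpy as np

def hsl_saturation_mask(rgb, eps=1e-6):
    # rgb: float array (height, width, 3), values 0..1
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    L = (mx + mn) / 2
    # standard HSL saturation: chroma / (1 - |2L - 1|), guarded so greys give 0
    return (mx - mn) / np.maximum(1 - np.abs(2 * L - 1), eps)

For example, a dark pure red like RGB (0.2, 0, 0) comes out at 1.0 here, whereas the plain chroma mask from the earlier recipe would only show it at 0.2 - which is exactly the underestimation at luminosities away from 50 mentioned above.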

 


  • 4 weeks later...

I have just tried the @Fotoloco version on two different images (below). After step 5 (create an HSL adjustment layer ...), the image goes black and stays black for the rest of the procedure. That would imply uniform zero saturation? I note that in sat_mask_new.afphoto, the visibility of all the layers except the final one is turned off. I tried that, with no effect.

[Attached images: TewkesburyHalfTimbering47.png and WestWindowHDR.png]

John

 

Windows 10, Affinity Photo 1.10.5 Designer 1.10.5 and Publisher 1.10.5 (mainly Photo), now ex-Adobe CC

CPU: AMD A6-3670. RAM: 16 GB DDR3 @ 666MHz, Graphics: 2047MB NVIDIA GeForce GT 630


21 hours ago, John Rostron said:

I have just tried the @Fotoloco version on two different images (below). After step 5 (create an HSL adjustment layer ...), the image goes black and stays black for the rest of the procedure. ...

 

@John Rostron Here's your church interior with the @Fotoloco method (done in 1.7), and then with my HSV method (done in 1.6) to get the mask, ending with curves applied to the original image using the fotoloco-dmstraker mask.

I'm a bit puzzled as to how it actually works - something to do with the algorithm of Vivid light with an inverted desaturated blend layer...

Vivid Light is:

For each of RGB:

If Blend < 0.5:
    Result = 1 - (1 - Base) / (2 * Blend)              [Colour Burn]
Else (Blend >= 0.5):
    Result = Base / (2 * (1 - Blend))                  [Colour Dodge]
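
Having stared at it a little longer, I think the trick is this (my own back-of-envelope working, so treat with due caution): the blend layer is 1 - L in every channel, where L = (max + min)/2 of the base. For a dark pixel (L <= 0.5) the blend is >= 0.5, so the Colour Dodge branch fires and every channel gets divided by 2L, making the new (max + min)/2 equal to 2L/(4L) = 0.5. For a light pixel the Colour Burn branch does the mirror-image scaling from the top, and the new (max + min)/2 again comes out at 0.5. Because all three channels get the same treatment, hue and HSL saturation are untouched - which is what lets @Fotoloco's step 4 work. A quick numerical check (my own sketch in numpy, nothing official):

import numpy as np

def vivid_light(base, blend, eps=1e-6):
    # standard Vivid Light per channel: Colour Burn below 0.5, Colour Dodge at or above
    return np.clip(np.where(blend < 0.5,
                            1 - (1 - base) / np.maximum(2 * blend, eps),
                            base / np.maximum(2 * (1 - blend), eps)), 0, 1)

pixel = np.array([0.10, 0.30, 0.05])           # a dark, fairly saturated green
L = (pixel.max() + pixel.min()) / 2            # HSL lightness = 0.175
out = vivid_light(pixel, 1 - L)                # blend with the inverted B&W lightness
print(out, (out.max() + out.min()) / 2)        # new lightness comes out at 0.5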

Regards

Dave

fotoloco method on rostron church image.afphoto

fotoloco-straker method on rostron church image.afphoto

Dave Straker

Cameras: Sony A7R2, RX100V

Computers: Win10: Chillblast i9 Custom + Philips 40in 4K & Benq 23in; Surface Pro 4 i5; iPad Pro 11"

Favourite word: Aha. For me and for others.


I notice in the attached .afphoto files that the various masks are embedded (grouped?) into the layers. This is something that I am trying to learn. I attach here firstly my Layers panel and the History panel:

[Screenshots: Layers panel and History panel]

Here is the original image (at 15%):

[Screenshot: original image]

And after step Three (Invert):

[Screenshot: after step 3 (Invert)]

After step Four:

[Screenshot: after step 4]

And after step Five:

[Screenshot: final Merge Visible result after step 5]

Can you explain why I do not get the same result?

John

Windows 10, Affinity Photo 1.10.5 Designer 1.10.5 and Publisher 1.10.5 (mainly Photo), now ex-Adobe CC

CPU: AMD A6-3670. RAM: 16 GB DDR3 @ 666MHz, Graphics: 2047MB NVIDIA GeForce GT 630


No photo attached, @John Rostron.

Which process are you seeking to emulate: @fotoloco's, my original, or a combined version?

If you are using the @fotoloco version, the third step from the bottom should be HSL, not HSV, and the two HSL adjustments above it should be HSV.

Groups are very handy for several reasons, including:

  • Keeping things tidy and understandable
  • Adding extra blend mode at group level
    • Spot the Overlay blend in my 'fotoloco-straker method on rostron church image.afphoto', in the 'Colours' subgroup of the 'Fotoloco method' group, which makes it react with the 'Tones' subgroup below to show the original image.
  • Constraining actions such as Erase blend mode.

Think of them as a non-destructive 'merge visible'; they often act that way.

Dave Straker

Cameras: Sony A7R2, RX100V

Computers: Win10: Chillblast i9 Custom + Philips 40in 4K & Benq 23in; Surface Pro 4 i5; iPad Pro 11"

Favourite word: Aha. For me and for others.


47 minutes ago, dmstraker said:

No photo attached, @John Rostron.

The photos I was referring to here were the ones in the links at the end of the previous message from @dmstraker.

47 minutes ago, dmstraker said:

Which process are you seeking to emulate: @fotoloco's, my original, or a combined version?

I was following the one posted by @Fotoloco fairly precisely, as far as I can tell. I was very careful to tick the HSV box only when instructed to do so. I did this three times: once on the window (Gloucester Cathedral Great West Window), then on the half-timbered building (in Tewkesbury), then again on the window for these images. Each time I got the same result. I was using version 1.6 for all of these.

I will try to emulate the layers in the .afphoto files which I downloaded. I shall also practise using groups and nested adjustments.

Thanks for your comments.

John

Windows 10, Affinity Photo 1.10.5 Designer 1.10.5 and Publisher 1.10.5 (mainly Photo), now ex-Adobe CC

CPU: AMD A6-3670. RAM: 16 GB DDR3 @ 666MHz, Graphics: 2047MB NVIDIA GeForce GT 630


1 hour ago, John Rostron said:

The photos I was referring to here were the ones in the links at the end of the previous message from @dmstraker. ... I will try to emulate the layers in the .afphoto files which I downloaded. I shall also practise using groups and nested adjustments.

 

Ok, John. Let us know how you get on.

Dave Straker

Cameras: Sony A7R2, RX100V

Computers: Win10: Chillblast i9 Custom + Philips 40in 4K & Benq 23in; Surface Pro 4 i5; iPad Pro 11"

Favourite word: Aha. For me and for others.


The unfortunate thing is that I have not been able to create a macro for saturation masks. It appears that Affinity Photo's macro capability does not allow you to reorder or nest layers within the macro. Can that be allowed in the next version, 1.7?


18 hours ago, hanshab said:

It appears that Affinity Photo's macro capability does not allow you to reorder or nest layers within the macro.

See: Macros: Layer Behaviour

If you meant reordering layers in an already-recorded macro instead, then yes, editing or changing it afterwards is pretty limited.

☛ Affinity Designer 1.10.8 ◆ Affinity Photo 1.10.8 ◆ Affinity Publisher 1.10.8 ◆ OSX El Capitan
☛ Affinity V2.3 apps ◆ MacOS Sonoma 14.2 ◆ iPad OS 17.2


I stumbled across a web page by Tony Kuyper, and he describes a really simple (and allegedly better) method of creating saturation masks. I have posted a link to his web page along with a bunch of macros that automate this process. My post is in the Resources section, here:

 

Affinity Photo 2, Affinity Publisher 2, Affinity Designer 2 (latest retail versions) - desktop & iPad
Culling - FastRawViewer; Raw Developer - Capture One Pro; Asset Management - Photo Supreme
Mac Studio with M2 Max (2023}; 64 GB RAM; macOS 13 (Ventura); Mac Studio Display - iPad Air 4th Gen; iPadOS 17


There is no precise way to do saturation masks in any RGB color space, because lightness affects saturation and lightness is not directly used in any RGB saturation-mask approach I have seen. Using the HSL and HSV adjustments in Affinity is a rather poor choice, as this will give significantly different results for brighter and darker images. I do not recommend this approach; it does not take lightness into account. (I am working on a solution in the LAB color space that includes luminosity masks and uses color-inversion masks and the difference and/or subtract blend modes, but I am still working on it.)

In LAB, luminosity is directly available and much easier to use. The formula there is

s_{ab} = \frac{C^*_{ab}}{L^*} = \frac{\sqrt{(a^*)^2 + (b^*)^2}}{L^*}

which is not recommended, as there is no relationship from LAB chrominance (C*) to other gamuts. An approximation to what the eye sees is given by

S_{ab} = \frac{C^*_{ab}}{\sqrt{(C^*_{ab})^2 + (L^*)^2}} \times 100\%.

This is still an approximation; the explanation can be found on Wikipedia here: https://en.wikipedia.org/wiki/Colorfulness or in any book on color theory, such as https://www.amazon.com/Color-Vision-Colorimetry-Applications-Monograph/dp/0819483974/ref=sr_1_13?ie=UTF8&qid=1547755006&sr=8-13&keywords=The+physics+of+color+theory
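
To make that second formula concrete, here is a small numerical sketch of S_ab (my own illustration; the LAB conversion itself is left to whatever tool or library you prefer, and the function name is mine):

import numpy as np

def lab_saturation(L, a, b, eps=1e-6):
    # L, a, b: CIELAB values (L* roughly 0..100, a*/b* roughly -128..127)
    C = np.sqrt(a**2 + b**2)                       # chroma C*_ab
    return 100.0 * C / np.sqrt(C**2 + L**2 + eps)  # S_ab as a percentage

For example, L* = 50, a* = 40, b* = 30 gives C* = 50 and S_ab of roughly 71%.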

Tony Kuyper, who is cited in the dialogue above, has done an excellent job of getting to the best practical solution for saturation masks, and I have been using his approach up to now. He has also done a superb job on luminosity masks.

 


20 hours ago, hanshab said:

There is no precise way to do saturation masks in any RGB color space, because lightness affects saturation and lightness is not directly used in any RGB saturation-mask approach I have seen. ... Tony Kuyper, who is cited in the dialogue above, has done an excellent job of getting to the best practical solution for saturation masks, and I have been using his approach up to now.

Wups. That's telling me. But then I don't mind being told, so it's all good.

Looks like you know what you're talking about, @hanshab. Maths and all! What does the * in C* (etc.) mean? Maybe something is possible in Apply Image equations, setting the document to LAB beforehand (or using an RGB-to-LAB conversion in the formula)? And why are all books with 'Colorimetry' in the title so expensive?

I still find the HSV approach has value as it gives a practical output that I've happily used in photo editing.

I'll also look again at the Kuyper method.

Thanks again.

Dave Straker

Cameras: Sony A7R2, RX100V

Computers: Win10: Chillblast i9 Custom + Philips 40in 4K & Benq 23in; Surface Pro 4 i5; iPad Pro 11"

Favourite word: Aha. For me and for others.


FYI, I am a former professor of astrophysics and former director of the Hughes GPS (WAAS) program, with a specialty in relativity theory. C* is the chrominance in LAB and is not separable in any RGB color space. I would not convert RGB to LAB; you lose something, as the RGB color space is smaller than LAB. I always go directly to LAB, then only convert to ProPhoto when all the calculations are done. Again, there is a loss associated with this, but it's what you have to do, and to print you have to go further to sRGB. I am still experimenting with going directly to a 32-bit color space and working from there. In the formulas, L is the luminosity, and one can use a luminosity mask for this; again I refer to Kuyper.


Cool. I studied electronics then taught maths up to 18 year-olds before drifting into programming, business and psychology, mostly in HP (doing a couple of masters' along the way). Not rocket science nor astrophysics, so I guess you win on credibility. I'm now joyfully retired and a photo geek.

I believe files from cameras come in RGB - I use RAW with Adobe RGB, which is the widest colour space my camera will give, so yes, I guess any conversion will have some kind of loss (even if it's just rounding error). The Affinity RAW engine seems happy to output in 16 / 32 bit, though the photo persona seems to work in 8 bit (?), which is still ok for much of what I do. My highest-res output is to an Epson P600. File size is also of course an amusing conversation, especially when saving in .afphoto.

Dave Straker

Cameras: Sony A7R2, RX100V

Computers: Win10: Chillblast i9 Custom + Philips 40in 4K & Benq 23in; Surface Pro 4 i5; iPad Pro 11"

Favourite word: Aha. For me and for others.


39 minutes ago, >|< said:

Your Sony cameras and the vast majority of digital cameras have a Bayer array sensor. A raw file from such a camera is not RGB - a single value, not three, is recorded for each photosite of a Bayer sensor. An RGB image can be derived from that data in a process known as demosaicing.

The camera colour profile (typically sRGB or Adobe RGB) is for JPEGs produced by the camera, and it has no bearing on the raw data.

The Photo persona has a choice of unsigned integer 8 and 16 bpc, and signed floating point 32 bpc, when working in RGB.

 

Helpful. Thanks. A question: when you set 16 bit output from the develop persona assistant then switch to the photo persona, does it edit in 8 or 16 bit? Colour palettes default to showing 8 bit 0-255. Is this a scaling of an actual 16 bit? How do you ensure you keep the higher bit depth?

Dave Straker

Cameras: Sony A7R2, RX100V

Computers: Win10: Chillblast i9 Custom + Philips 40in 4K & Benq 23in; Surface Pro 4 i5; iPad Pro 11"

Favourite word: Aha. For me and for others.


In Affinity Photo, raw development is done in 32-bit unbounded float in a linear space, according to Serif. My camera takes pictures in Adobe 14 bit, not 8 bit, which is adequate for a raw picture when you record the raw file in camera as NEUTRAL. I find cameras that record raw in 8 bit less than desirable. 8 bit makes the file size smaller, but you cannot easily recover out-of-gamut data, especially in landscape photography, which is what I do. It's when you do manipulations on the camera raw file in any persona that you need the extra precision. So when you go to the Photo persona you could select a 32-bit space with the assistant. Serif went to 32 bit in the Develop persona, I believe with version 1.5; with version 1.4, even tonal adjustments would blow through the available precision and give very undesirable results. The HDR persona is of course in 32 bit as well, and I tend to do a lot of adjustments there. Then you can continue working in 32-bit mode when you go to the Photo persona. Of course you have to scale to whatever format when you output the file. There is a video on how to do that scaling as well.

 

 


@>|<, @hanshab: many thanks. My grey matter is enhanced, as is my confidence in AP. My concern about bit depth reduction was whether AP might simplify and not tell me.

Disk space is an ongoing question. It seems disk technology has pretty much reached its limit. I've got 2x 2TB drives on a simple RAID, and a NAS with 16TB reduced to 10 for longer-term storage (read speed for big photo files is relatively slow). It also seems AP will crash if you throw too much at it.

Later in the year I'm giving a talk to my local camera club on all aspects of colour, including physics, display, print and psychology. Your notes are helping -- thanks!

Dave Straker

Cameras: Sony A7R2, RX100V

Computers: Win10: Chillblast i9 Custom + Philips 40in 4K & Benq 23in; Surface Pro 4 i5; iPad Pro 11"

Favourite word: Aha. For me and for others.

