
Understanding inverting a selection (technical with practical application)



I am trying to understand what is going on under the hood when I invert a selection.

Situation: I have an image with a selection. The selection consists of two parts: one (on the left-hand side) made with the freehand selection tool, the other (on the right) with the elliptical marquee tool. The freehand part has a blurry, feathered boundary; the elliptical part has a sharp boundary.

I now want to create two layer masks, one from the selection and the other from whatever is outside of it; I use Invert Selection for that. The idea is to stack two layers with the original image, with one of the masks applied to each of them. Hopefully, each mask will let through just enough so that together they blend back into the original image. I expect the Normal blend mode to do the trick.

My way of working is the following:

- I save the original selection into a channel that I call "base selection".

- I then invert the selection, and save the result in a channel called "base selection inverted".

- I create a pixel layer and fill it with pink colour.

- I duplicate that layer.

- For each of the two layers I create a mask. I load the "base selection" channel into the alpha of the first mask, and the "base selection inverted" channel into the alpha of the second mask.

- I stack the two layers with normal blending mode.

- I activate both layers and their masks.

- Finally, to see clearly what is happening I create an extra blue pixel layer that I use as background.

This is the resulting layer panel:

[screenshot: the resulting Layers panel]

I would now expect, perhaps naively, that I would see the original pink image back, since the masks are complementary and both let their own part of the original through. Instead, I see the following:

[screenshot: the resulting image]

Whereas the boundary of the elliptical selection on the right is invisible, the blurry freehand selection on the left lets part of the blue background shine through.

I now have the following questions:

- How exactly is the selection converted into a mask through the channel? As I understand it, a mask is a mapping that tells what opacity to assign to each pixel of the (pixel) layer it is attached to: white is 100% opaque, black is fully transparent. Fine. For black and white positions in the mask, this is clear. But if a position in the mask has a grey tonal value, an opacity between 0 and 1 is assigned. What is the formula for that? How is this grey tonal value determined, and once you have it, how does it translate to an opacity number? (See the small sketch after these questions for the mapping I am assuming.)

- How do the opacity numbers relate between a mask derived from a selection and a mask derived from its inverse? Are they complementary, i.e., do they add up to 1 at every pixel?

- From what I am seeing, it looks like inverting a selection does not necessarily lead to masks that are complementary in the sense that they can work together to restore the full image. If selections have blurry edges, they may leave spots with less than full opacity. This leads to the following question: is there a way to achieve what I am aiming for? Or am I missing the obvious here?
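
For concreteness, here is the simple linear mapping I am assuming the mask uses, written as a small Python sketch (this is my guess, not something I found documented):

```python
# My assumption (not documented behaviour): an 8-bit mask value maps linearly to opacity,
# white (255) -> fully opaque, black (0) -> fully transparent, grey -> somewhere in between.
def mask_to_opacity(grey_value: int) -> float:
    """Map an 8-bit mask/channel value (0-255) to an opacity in [0, 1]."""
    return grey_value / 255.0

def invert(grey_value: int) -> int:
    """Invert an 8-bit mask value."""
    return 255 - grey_value

# If this is right, a mask and its inverse should be complementary at every pixel:
for g in (0, 100, 128, 255):
    assert abs(mask_to_opacity(g) + mask_to_opacity(invert(g)) - 1.0) < 1e-9
```

Is something like this what actually happens?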

Thanks to everyone who read through all this. I do hope you can shed some light. For reference, I attach the project (done with AP 1.7.0.128).

Jeroen.

 

 

 

selection inversion test.afphoto


Just curious, but the inversion test file can't be opened in the retail Affinity apps, only the betas, so I assume it was created or edited in one of the 1.7 beta versions of Affinity Photo. If so, is there any difference if you duplicate the method using the retail version?



1 hour ago, MEB said:

This is not a bug in the Beta. It always worked like this in the retail as well.

Thanks for passing this to the dev team, and for coming back to me.

For clarity, I was not saying it is a bug. I was asking how it works, and I still am curious. Maybe once I know, I can use it to my advantage. And I would like to understand what the program is doing for me anyway. See my more detailed questions in my posting.

But now that you mention it, I wish it were different. I wonder if there is a reason to have it this way? It seems so logical that inverting a selection would also "invert" the mask, and I cannot think of a compelling reason to do it like this. I would like to know what I am missing here, if anything.

Also, I am wondering if there is another way to achieve what I was trying to do? It seems like a natural thing to wish for. Again, I would like to understand where I may be wrong.

Thanks, Jeroen.


1 hour ago, R C-R said:

Just curious, but the inversion test file can't be opened in the retail Affinity apps, only the betas, so I assume it was created or edited in one of the 1.7 beta versions of Affinity Photo. If so, is there any difference if you duplicate the method using the retail version?

Sorry, I cannot duplicate this in 1.6. The option to load a channel into a mask is not available there, and I do not know of any other way.


  • Staff

Hi Jeroen,
My reply above was regarding R C-R's comment. I was not implying you said this is a bug - you didn't.

Regarding replicating this, you don't need to save the selections into a channel (or use channels at all, unless I'm misunderstanding something). You can generate masks directly from selections, then duplicate and invert them (click the Mask icon at the bottom of the Layers panel with a selection active to create a mask from the selection). A simple way to see/check the same issue is to create a solid fill pixel layer with two masks nested inside it - one generated from a small feathered selection and the other a duplicate of that mask, inverted - with another pixel layer filled with a contrasting colour at the bottom of the layer stack. In theory, the two masks together should completely hide the pixel layer they are nested in, but - like in your construct - there is some residue left behind. Here's a sample file showing this for reference: maskplusinvertedmask_test.afphoto.

All this was passed to the dev team for checking/comments.


Just a random thought (from a non-programmer). Since the selection(s) are feathered, the amount of selection at any given pixel in the feathered region varies between 0 and 100 percent. Inverting the selection seems like it should give exactly the opposite selection. However, since the residue is a really thin "line", is it possible that it represents a rounding error?



19 hours ago, Jeroen said:

I am trying to understand what is going on under the hood when I invert a selection. […]

The app is producing the correct result. The opacity of overlapping semi-opaque regions is not, and should not be expected to be, added when you consider the mathematics of alpha compositing layerB on top of layerA: 

result = alphaB * colourB + (1 - alphaB) * colourA

Consider a pixel where each of the masked pink layers has 50% opacity:

Blending the lower pink layer with the opaque blue layer: 
result = 0.5 * [0.6666, 0.3251, 0.3251] + (1 - 0.5) * [0, 0, 1]
= [0.3333, 0.1626, 0.6626]

Blending the top pink layer with the first result: 
result = 0.5 * [0.6666, 0.3251, 0.3251] + (1 - 0.5) * [0.3333, 0.1626, 0.6626]
= [0.5, 0.2439, 0.4939]
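
For reference, the same two compositing steps as a short Python sketch (the colour triplets are the ones used above; the 0.5 alpha is the mask value assumed in the feathered zone):

```python
# Normal ("over") compositing of a semi-transparent layer B onto an opaque background A:
# result = alphaB * colourB + (1 - alphaB) * colourA
def over(colour_b, alpha_b, colour_a):
    return [alpha_b * cb + (1 - alpha_b) * ca for cb, ca in zip(colour_b, colour_a)]

pink = [0.6666, 0.3251, 0.3251]  # the pink fill layer
blue = [0.0, 0.0, 1.0]           # the opaque blue background

step1 = over(pink, 0.5, blue)   # lower masked pink layer over the blue background
step2 = over(pink, 0.5, step1)  # upper masked pink layer over that result
print(step1)  # ~ [0.3333, 0.1626, 0.6626]
print(step2)  # ~ [0.5, 0.2439, 0.4939] - not pure pink, so some blue still shows through
```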

 


8 hours ago, >|< said:

The app is producing the correct result. The opacity of overlapping semi-opaque regions is not, and should not be expected to be, added when you consider the mathematics of alpha compositing […]

Thanks for pointing this out. It explains why things do not work out as I wished. Next question: how do I achieve that goal?

You seem to be explaining how the Normal blend mode is supposed to work on the alpha channel, and to be arguing that there is no bug in the way it is implemented in AP. Fine, I accept that. But my point is not to hunt for a supposed bug; it is to understand what goes wrong and, from that, to achieve my goal.

As I found out by experimenting, and as you explain in terms of the mathematical operations, my original approach does not work. Applying the Normal blend mode to the image masked by a selection and the same image masked by the inverse selection does not recreate the original image if there are partially opaque pixels in the selection. That is not to say that Normal blending is implemented incorrectly (a "bug"), only that this particular blend mode does not achieve what I am looking for - which is, in my view (and, if I understand the discussion correctly, in that of others), a natural thing to ask for when selections represent parts of an image.

Now for a possible solution to the problem: combining a selection and its inverse, taken from the same image, back into the original image.

In fact, for presentation purposes, I created this as a simplified version of a more general situation: an image with a number of selections, derived from the full image by repeatedly creating a selection, setting it apart, and continuing in the same fashion with its inverse. Eventually the whole image is "covered" by a number of selections that are mutually exclusive, except that they may overlap on their boundaries, where the opacity of each of them is below 1. At every position, however, all the opacities together add up to 1.

So, for the simple case, what would composing the two masked images need to come down to?

- colour: unchanged from the underlying image.

- alpha: the sum of the alphas of the parts; in the two-selection case, always 1.

So, I need a blend mode where alpha comes out as the sum of alphas from the selections being combined. Also, on the colour channels one can have any reasonable blend mode as long as it leaves identical pixels alone.

A simple example would be a blend mode that acts as Normal in the colour channels, but as Add in the alpha channel.
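
A minimal sketch of what I have in mind, per pixel and with straight (non-premultiplied) alpha - purely an illustration of the idea, not of how AP implements blending:

```python
def normal_colour_add_alpha(colour_top, alpha_top, colour_bottom, alpha_bottom):
    """Hypothetical blend mode: Normal on the colour channels, Add on the alpha channel."""
    # Normal blend for the colour channels (leaves identical pixels unchanged)...
    colour = [alpha_top * ct + (1 - alpha_top) * cb
              for ct, cb in zip(colour_top, colour_bottom)]
    # ...while the alphas are simply summed (clamped to 1).
    alpha = min(alpha_top + alpha_bottom, 1.0)
    return colour, alpha

# Two copies of the same pink, masked by complementary 50% alphas,
# come back as pink at full opacity:
pink = [0.6666, 0.3251, 0.3251]
print(normal_colour_add_alpha(pink, 0.5, pink, 0.5))  # ([0.6666, 0.3251, 0.3251], 1.0)
```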

I tested my small example by taking the result, with the two selections and their partially transparent boundary, and setting alpha to 1 everywhere (using Filter/Apply Image on the combined selections layer with the formula DA=1). The boundary, and with it the problem, was gone.

So, in summary, there certainly seems to be a working solution to the simplified problem and it is not difficult to achieve by hand.

I am now looking for an elegant way to set alpha to 1 everywhere without touching the colour information, and to generalize to more than two selections, which is my real use case.
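
And here is a sketch of that generalization, assuming straight alpha and masks whose alphas sum to 1 at every pixel (again just to illustrate what I am after, not an existing AP feature):

```python
def reassemble(pieces):
    """Recombine pieces of one image, each given as (colour, alpha), whose alphas are
    assumed to sum to 1 at every pixel: the colour becomes the alpha-weighted average
    (which leaves identical pixels alone), and the alpha becomes the sum of the alphas."""
    total_alpha = sum(alpha for _, alpha in pieces)
    n_channels = len(pieces[0][0])
    colour = [sum(alpha * col[i] for col, alpha in pieces) / total_alpha
              for i in range(n_channels)]
    return colour, min(total_alpha, 1.0)

# Three masked copies of the same pink pixel, with alphas 0.2 + 0.3 + 0.5 = 1:
pink = [0.6666, 0.3251, 0.3251]
print(reassemble([(pink, 0.2), (pink, 0.3), (pink, 0.5)]))  # (pink again, alpha 1.0)
```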


 


On 5/24/2019 at 7:21 PM, MEB said:

Hi Jeroen,
My reply above was regarding R C-R's comment. I was not implying you said this is a bug - you didn't.

Regarding replicating this, you don't need to save the selections into a channel (or use channels at all unless I'm misunderstanding something). You can generate masks directly from selections, duplicate and invert them (click the Mask icon on the bottom of the Layers panel with a selection active to create a mask from the selection). 

 

Thanks for pointing this out. It is simpler than what I did. Using channels was actually an attempt to keep the selections around in case I need to revisit them later, but for this demonstration they are not needed.

