
MBd I know exactly what you are talking about. I've thought about this before as well, for automated silhouetting of images. I know there are some turnkey hardware / light-table setups that automate this process, but I've never seen a software-only approach. It should be feasible: an algorithm that subtracts image 1 (the background image) from image 2 (the background + subject), leaving only the subject...
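Just to make the idea concrete, here is a rough sketch in plain Python/NumPy (nothing to do with Affinity's internals) of what that subtraction could look like. The file names and the threshold are placeholders, and it assumes the two frames already line up exactly:

```python
# Hypothetical sketch: subtract the background-only frame from the frame with
# the subject, and treat any pixel that changed noticeably as "subject".
import numpy as np
from PIL import Image

background = np.asarray(Image.open("background.jpg"), dtype=np.float32)       # placeholder file
with_subject = np.asarray(Image.open("with_subject.jpg"), dtype=np.float32)   # placeholder file

# Per-pixel absolute difference, collapsed to a single grayscale change map.
diff = np.abs(with_subject - background).mean(axis=2)

# Anything above the threshold is assumed to belong to the subject.
threshold = 20.0  # out of 255; would need tuning for sensor noise and light drift
mask = (diff > threshold).astype(np.uint8) * 255

Image.fromarray(mask).save("subject_mask.png")
```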

 

I would move this to the feature requests. If Affinity can do this, well... that's huge.

 

I wonder whether it would be possible to eliminate the background of an image by taking two photos of the same scene.

 

One with the object in place and one without it (using a tripod).

 

I thought about inverting the top layer, setting it to the Subtract blend mode, merging the layers, and then inverting the result.

This should give the outline of the subject. Then I could Cmd+click on this layer, make a mask from the resulting selection, and use it to separate the subject from the background.
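To spell out what that layer stack would compute per channel (assuming 8-bit values and that the Subtract blend clamps at zero), here is the arithmetic in NumPy, plus a straight difference (|bottom − top|) for comparison, which reaches the change map in one step:

```python
import numpy as np

def invert_subtract_invert(bottom: np.ndarray, top: np.ndarray) -> np.ndarray:
    """Per-channel math of the described stack: invert the top copy,
    Subtract-blend it onto the bottom copy, merge, then invert the result."""
    b = bottom.astype(np.int16)
    t = top.astype(np.int16)
    subtracted = np.clip(b - (255 - t), 0, 255)   # Subtract blend with inverted top
    return (255 - subtracted).astype(np.uint8)    # final invert

def difference(bottom: np.ndarray, top: np.ndarray) -> np.ndarray:
    """For comparison: a plain Difference blend, |bottom - top|,
    which is black wherever the two shots agree."""
    return np.abs(bottom.astype(np.int16) - top.astype(np.int16)).astype(np.uint8)
```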

 

I haven't actually tried it yet, and I think one problem is that the two images won't be exactly pixel-perfect, and especially outdoors the luminosity might not stay perfectly constant.
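If one were to prototype it outside AP, something like OpenCV could register the two frames and even out a global exposure drift before differencing. This is purely an illustration, with placeholder file names and parameters:

```python
# Sketch of compensating the two practical problems mentioned above:
# small misalignment between the shots and a global brightness drift.
import cv2
import numpy as np

ref = cv2.imread("background.jpg")       # placeholder file names
mov = cv2.imread("with_subject.jpg")

ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
mov_gray = cv2.cvtColor(mov, cv2.COLOR_BGR2GRAY)

# Estimate a small shift/rotation between the two shots (ECC image alignment).
warp = np.eye(2, 3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
_, warp = cv2.findTransformECC(ref_gray, mov_gray, warp,
                               cv2.MOTION_EUCLIDEAN, criteria, None, 5)
aligned = cv2.warpAffine(mov, warp, (ref.shape[1], ref.shape[0]),
                         flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

# Compensate a global exposure drift by matching the mean brightness.
gain = ref_gray.mean() / max(cv2.cvtColor(aligned, cv2.COLOR_BGR2GRAY).mean(), 1.0)
aligned = np.clip(aligned.astype(np.float32) * gain, 0, 255).astype(np.uint8)
```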

 

So I guess it doesn't work quite this way, but I really think it could be implemented in AP as a filter with an additional algorithm.

 

Does something like this already exist in PS or similar software?

Did I miss a certain point?

Are there other workflows that would allow this approach? 

 

Thanks in advance!


2017 15" MacBook Pro 14,3 w/ Intel 4 Core i7 @ 2.8 GHz, 16 GB RAM, AMD 455 @ 2 GB, 512 GB SSD, macOS High Sierra


Hey MBd, good work! I think locking the exposure so that both shots are identical will also help. If I may ask, what method did you use to generate the mask?

 

Looks like a promising technique. Glad you took the time to test it out and it works!

 

So here is an example:

 

iPhone on a cheap tripod; I used Layer Stack to align the layers and then copied them into a new document.

 

Sidenote:

I don't get the new nesting behavior yet: what's the difference between seeing the adjustment layer next to the layer icon and having it nested so that you can't see it when the layer is collapsed?

I just noticed that I can't apply a blend mode when the adjustment layer is visible next to the pixel layer icon; it only takes effect if I drag it under the pixel layer again, and then I also can't see the adjustment layer anymore (when the pixel layer is not expanded).


2017 15" MacBook Pro 14,3 w/ Intel 4 Core i7 @ 2.8 GHz, 16 GB RAM, AMD 455 @ 2 GB, 512 GB SSD, macOS High Sierra


I think the trick would be to come up with an algorithm that looks at the difference between the two images in terms of areas of change, and not the individual changes in each pixel's values, in order to automate the mask generation part of the process, which the user could then Refine....
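One guess at how such a "regions of change" filter might work under the hood, sketched with OpenCV (all parameter values are placeholders to tune): blur the per-pixel difference so isolated noise drops out, threshold it, then clean it up with morphology so connected areas survive rather than single-pixel speckle. The result would be a rough selection the user could then Refine by hand.

```python
import cv2
import numpy as np

def region_change_mask(background: np.ndarray, with_subject: np.ndarray,
                       blur: int = 7, threshold: int = 25) -> np.ndarray:
    diff = cv2.absdiff(with_subject, background)            # per-pixel change
    diff = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    diff = cv2.GaussianBlur(diff, (blur, blur), 0)          # suppress single-pixel noise
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)

    # Open/close: drop tiny islands and fill small holes so the mask follows
    # areas of change rather than individual pixel values.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```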


2017 15" MacBook Pro 14,3 w/ Intel 4 Core i7 @ 2.8 GHz, 16 GB RAM, AMD 455 @ 2 GB, 512 GB SSD, macOS High Sierra

