
Photoshop has traditionally had the Select > Color Range command. There is also HSL keying inside the Hue/Saturation adjustment toolset, which, annoyingly, cannot be used to create selections for use with other commands without clumsy workarounds (workarounds I know far better than I would care to admit).

 

What I am hoping to see is a non-destructive way to do masking based on luminance and HSL tools, as well as possibly the more sophisticated keying algorithms found in video software (Primatte, Keylight, etc.). Most video-focused color correction software (DaVinci Resolve, SpeedGrade, Lustre, Baselight, FilmMaster, Assimilate Scratch) already allows for non-destructive HSL keying.

 

The most elegant way I can imagine this working inside Photo would be a new procedural mask layer type, similar to regular layer masks, for non-destructive HSL/Select Color Range masks with adjustable parameters. It would behave like an adjustment layer, except that the result would be a mask instead of an RGBA image, usable just like a regular layer mask.

 

That way, it would be easy to adjust the selection parameters interactively after the fact and, if need be, rasterize the result destructively to a regular raster mask layer. In addition to keying on Luma, RGB, HSL, YUV and LAB, parameters for blurring the result and for expanding/contracting the mask would be useful. The exact same functionality could then double as a vastly improved "Select Color Range" command as well.
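To make the idea concrete, here is a rough sketch of what such a procedural mask could compute, written in Python with NumPy purely for illustration (the function names and parameters are all invented, not anything Affinity exposes): a soft hue key with adjustable width and softness, plus a simple expand step.

```python
import numpy as np

def hsl_key(rgb, hue_center, hue_width, softness=0.1):
    # Soft hue key: 1.0 inside the hue range, linear falloff over `softness`.
    # rgb: float array (..., 3) in [0, 1]; hues measured in turns [0, 1).
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx = rgb.max(axis=-1)
    c = mx - rgb.min(axis=-1)                 # chroma
    safe_c = np.where(c == 0, 1.0, c)         # avoid division by zero
    hue = np.select(
        [mx == r, mx == g],
        [((g - b) / safe_c) % 6, (b - r) / safe_c + 2],
        default=(r - g) / safe_c + 4,
    ) / 6.0
    hue = np.where(c == 0, 0.0, hue)          # grays have no meaningful hue
    # circular distance between each pixel's hue and the target hue
    d = np.abs(((hue - hue_center + 0.5) % 1.0) - 0.5)
    inner, outer = hue_width / 2, hue_width / 2 + softness
    return np.clip((outer - d) / softness, 0.0, 1.0)

def expand(mask, px):
    # Grow a 2-D mask by `px` pixels (repeated 4-neighbour dilation).
    out = mask.copy()
    for _ in range(px):
        out = np.maximum.reduce(
            [out, np.roll(out, 1, 0), np.roll(out, -1, 0),
             np.roll(out, 1, 1), np.roll(out, -1, 1)])
    return out
```

A real implementation would keep these parameters live so the mask re-evaluates whenever they change, exactly like an adjustment layer.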


I am interested in this as well. If nothing else, at least make the HSL and LAB channels selectable as filters in Blend Ranges, so that it becomes possible, for example, to make a layer transparent where the underlying layer is more saturated, in a non-destructive way.


Even if HSL blend range curves were added, they wouldn't let you mix images based on which pixel has the higher saturation. They would let you mask areas where values exceed or fall below a certain saturation, but not compare the two layers: blend ranges always operate on the value in just one of the two layers. What you are asking for would require something like a new "Maximum Saturation" blend mode.
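For illustration, a hypothetical "Maximum Saturation" blend could be sketched like this (Python with NumPy, names invented; saturation approximated HSV-style as chroma relative to the brightest channel):

```python
import numpy as np

def max_saturation_blend(top, bottom):
    # Per pixel, keep whichever layer's pixel has the higher HSV-style
    # saturation. top, bottom: float arrays (H, W, 3) in [0, 1].
    def sat(img):
        mx = img.max(axis=-1)
        mn = img.min(axis=-1)
        return np.where(mx > 0, (mx - mn) / np.where(mx == 0, 1.0, mx), 0.0)

    choose_top = sat(top) >= sat(bottom)       # boolean decision per pixel
    return np.where(choose_top[..., None], top, bottom)
```

Unlike a blend range, this actually compares the two layers against each other instead of thresholding one of them.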

 

You might be able to hack something like that together using Apply Image, but it is destructive and currently doesn't have HSL support, so you'd have to whip up your own one-line formula to determine or approximate saturation based on RGB values.
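As a sketch of what such a formula might look like (shown here as plain Python one-liners rather than Apply Image syntax, which I'm not going to reproduce from memory), two common approximations:

```python
# HSV-style saturation: chroma relative to the brightest channel
def sat_hsv(r, g, b):
    mx, mn = max(r, g, b), min(r, g, b)
    return (mx - mn) / mx if mx else 0.0

# HSL-style saturation: chroma relative to the distance from black/white
def sat_hsl(r, g, b):
    mx, mn = max(r, g, b), min(r, g, b)
    return (mx - mn) / (1 - abs(mx + mn - 1)) if mx != mn else 0.0
```

Either one would be enough to build a crude saturation key from plain RGB channels.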

 

For these types of tasks, I wish there were something like support for SeExpr, GLSL shaders (like Matchbox in flame or the ill-fated AIF/Pixel Bender in Photoshop/After Effects), or even GLUAS, which would let you accomplish this kind of thing easily yourself.


Maximum Saturation sounds interesting; however, I probably didn't express myself clearly enough. What I was trying to achieve is to have my upper layer blended only into areas where the lower layer is more saturated, which is the same thing you can already do with Blend Ranges based on luminosity; only the channel selection for saturation is missing. I would think this is very much doable without major effort, as the software already works with HSL in other places.


Except that Resolve has no RGB or YUV keyer, which can come in very handy as well. The Discreet Keyer in flame and combustion even has the option to use HSL, RGB and YUV in conjunction, but the results could be easily emulated by simply multiplying different keys/masks together.
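The multiplication itself is trivial; as a sketch (Python with NumPy, illustrative only), combining soft keys this way means a pixel survives only where every key passes it:

```python
import numpy as np

def combine_keys(*masks):
    # Emulate a combined HSL x RGB x YUV key by multiplying the
    # individual soft masks together, element-wise.
    out = np.ones_like(masks[0])
    for m in masks:
        out = out * m
    return out
```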

 

Prelight doesn't seem to allow toggling individual channels on and off, from what I can tell from their website. That can be a very handy feature for telling what your key is doing in each channel without manually resetting, say, the hue part and then undoing the reset (while remembering which changes you made to the other channels, because those will inevitably be reverted by the undo as well).


It'd be great if additional selector modes were available: RGB, LAB and CMYK.

BTW Resolve does have an RGB qualifier.

 

A really cool option would be to have less common representations like RGBW (RGB with white column), Munsell, and Kelvin. Especially if you could combine keys like the stack tool combines images.

 

To be useful, though, the output needs to be a dynamic mask that you can assign to a couple of tools. Easy to do in a node graph, not so trivial in a layer stack.


I think YUV/YIQ/YCbCr (and to an extent LAB) would essentially cover the Kelvin Temperature and Tint system. I'm not sure how useful RGBW would be for keying, but it would certainly be interesting to test that.

 

Affinity already supports combining multiple masks non-destructively, and applying Curves to masks works as well; that's already a step up from Photoshop. However, the mask system is extremely quirky: when adding Curves, it doesn't auto-select the Alpha channel in the channel dropdown; mask layers ignore blend modes (so the only mode available is essentially Multiply); layer effects like Blur are ignored; changing the Opacity of mask layers has the opposite effect of what you'd expect; and so on.

 

You can in fact even add effect layers like Gaussian Blur to masks, but it doesn't work while the mask is applied to a layer. You have to first drag the mask out so it becomes a root-level layer, then nest a blur effect inside it, and then drag it back onto the layer you want to apply it to. It works, but the nested effect is then hidden in the Layers panel; to edit its parameters, you need to drag the mask out again and then re-apply it to the layer.

 

Of course node-based compositing would make a lot of things much easier and solve the long-standing Photoshop issue of keeping lots of duplicated layers around, but I don't expect this fundamental approach to change any time soon. You could use Symbols, but I fear that might make things even more confusing. I would be happy already if the current quirks were worked out. That being said, I do like how flame offers a layer-based node (Action) inside a node-based system (Batch) so you essentially get the best of both worlds.


3D LUT Creator has the RGBW model (and some others) if you want to play with it in general. In curves mode it is most apparent how it would behave as a qualifier. It could make tricky keys easier as part of a stack, and Affinity's batch system could be used to run through a bunch of frames with that as a preset.

 

If a setup like that could be saved as a symbol with a few controllers exposed to the user, like Motion setups in FCPX or Gizmos/Groups in Nuke, that would really make a difference and give Adobe something to chew on. Affinity is node based under the hood, so if anyone has a chance of doing this, they do.


Interesting, do you have any sources for the information that Affinity is node-based internally that go into a bit of detail? My impression is that its object model is based on a node structure like Xara, in such a way that each document element is a node of a tree, not necessarily a directed acyclic processing graph like a compositing software would use. I agree that bringing node-based compositing to the still image world would be a great achievement.

 

While I think a good keyer is a valuable tool for still images, I'm not sure using Affinity for keying image sequences via batch processing is such a good idea. I'd expect it to be much easier to run the images through dedicated video compositing software like Nuke, Fusion, After Effects or even Natron, as that would give you options like animating garbage masks, and you could much more easily check whether your settings work across the clip without re-processing everything a bunch of times. If Affinity ever gets a Photoshop-type video timeline (and I suppose some form of video layer is coming for use with digital publishing in Publisher), applying a key to a video that way would be a much more feasible workflow in my view.



https://forum.affinity.serif.com/index.php?/topic/15873-coming-2016-32bit-hdr-editing-sneak-preview/?p=81085

 

If they are not acyclic, that would certainly make for great processing times... infinitely long  :D

And if they are not directed... Affinity would randomly change the layer processing order and read the export as the import  :lol:

Edited by MBd

 

 


That quote could mean a lot of things – it doesn't mean that it is necessarily a processing graph in the sense of a node-based compositor. Would be very cool if it was and if that power was exposed in the UI at some point.

 

If it's not just a normal processing graph, it doesn't necessarily have to be acyclic – for instance, Maya's document model has a part that can include cycles (called the "DG"). And a node-based data structure in an object model doesn't necessarily have to be directed if it just describes certain relationships between objects, such as, say, linking attributes together to be changed in sync.

 

"Directed Acyclic Graph" (or "DAG" for short) is just the standard term for the type of graph we usually see in compositing applications. By talking about a DAG, I'm also implying that it is not a subtype with further restrictions. In our case, having no further restrictions would mean the object model allows things like splitting branches and merging them again, whereas more specific subtypes of a DAG, like Photoshop's layer model or Xara's document tree model, might prevent that.
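A tiny illustrative example of the difference (Python, with node names invented for a hypothetical compositing graph):

```python
# Each node lists its downstream outputs. "merge" ends up with two inputs,
# so the branch that split at "source" joins up again: legal in a general
# DAG, impossible in a strict tree, where every node has at most one parent.
graph = {
    "source": ["key", "blur"],
    "key":    ["merge"],
    "blur":   ["merge"],
    "merge":  [],
}

def parents(node):
    # All nodes that feed directly into `node`.
    return [n for n, outs in graph.items() if node in outs]
```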

 

But I don't think we need to get this mathematical about it  :lol:



Yeah, sorry for propagating misinformation; I just read up on that and it seems legit  :) Let's hope they have a DG, otherwise it would be a shame  :)


 

 

