
Support perceptual color spaces like Oklab



It would be great if Affinity products started supporting perceptual color spaces for image processing, like Björn Ottosson's Oklab.

Let me quote two sections from Björn's post:

Quote

 

Motivation and derivation of Oklab

What properties does a perceptual color space need to satisfy to be useful for image processing? The answer to this is always going to be a bit subjective, but based on my experience, these are a good set of requirements:

  • Should be an opponent color space, similar to for example CIELAB.
  • Should predict lightness, chroma and hue well. L, C and h should be perceived as orthogonal, so one can be altered without affecting the other two. This is useful for things like turning an image black and white and increasing colorfulness without introducing hue shifts etc.
  • Blending two colors should result in even transitions. The transition colors should appear to be in between the blended colors (e.g. passing through a warmer color than either original color is not good).
  • Should assume a D65 whitepoint. This is what common color spaces like sRGB, rec2020 and Display P3 use.
  • Should behave well numerically. The model should be easy to compute, numerically stable and differentiable.
  • Should assume normal well lit viewing conditions. The complexity of supporting different viewing conditions is not practical in most applications. Information about absolute luminance and background luminance adaptation does not normally exist and the viewing conditions can vary.
  • If the scale/exposure of colors is changed, the perceptual coordinates should just be scaled by a factor. To handle a large dynamic range without requiring knowledge of viewing conditions all colors should be modelled as if viewed under normal viewing conditions and as if the eye is adapted to roughly the luminance of the color. This avoids a dependence on scaling.

What about existing models?

Let’s look at existing models and how they stack up against these requirements. Further down there are graphs that illustrate some of these issues.

  • CIELAB and CIELUV – Largest issue is their inability to predict hue. In particular, blue hues are predicted badly. Other smaller issues exist as well.
  • CIECAM02-UCS and the newer CAM16-UCS – Does a good job at being perceptually uniform overall, but doesn’t meet other requirements: Bad numerical behavior, it is not scale invariant and blending does not behave well because of its compression of chroma. Hue uniformity is decent, but other models predict it more accurately.
  • OSA-UCS – Overall does a good job. Unfortunately, the transformation to OSA-UCS lacks an analytical inverse, which makes it impractical.
  • IPT – Does a great job modelling hue uniformity. Doesn’t predict lightness and chroma well unfortunately, but meets all other requirements. Is simple computationally and does not depend on the scale/exposure.
  • JzAzBz – Overall does a fairly good job. Designed to have uniform scaling of lightness for HDR data. While useful in some cases this introduces a dependence on the scale/exposure that makes it hard to use in general cases.
  • HSV representation of sRGB – Only on this list because it is widely used. Does not meet any of the requirements except having a D65 whitepoint.

So, all in all, all these existing models have drawbacks.

Out of all of these, two models stand out: CAM16-UCS, for being the model with the best properties of perceptual uniformity overall, and IPT, for having a simple computational structure that meets all the requirements besides predicting lightness and chroma well.

For this reason it is reasonable to try to make a new color space, with the same computational structure as IPT, but that performs closer to CAM16-UCS in terms of predicting lightness and chroma. This exploration resulted in Oklab.

 

References:

https://bottosson.github.io/posts/oklab/
https://news.ycombinator.com/item?id=25525726

https://raphlinus.github.io/color/2021/01/18/oklab-critique.html
https://news.ycombinator.com/item?id=25830327

Affinity v2 suite [Designer 2.4.0.2301 | Photo 2.4.0.2301 | Publisher 2.4.0.2301]
NUC11PHKi7C [Core i7-1165G7 @ 2.8GHz, DDR4-3200 32GB RAM, GeForce RTX 2060] | 32UN880-B | Windows 11 Pro (10.0.22631)

Affinity v1 suite [Designer 1.10.6.1665 | Photo 1.10.6.1665 | Publisher 1.10.6.1665]
ThinkPad T430 (2344-56G) [Core i7 3520M @ 2.9GHz, DDR3-1600 8GB RAM, NVS 5400M] | Windows 7 Pro 64-bit (6.1.7601)


Interesting, but from what I have seen the focus is on much more core/central/infrastructure features, so I think this is unlikely, especially given a relatively small team. I believe there are references on the forum to others using G'MIC as a plug-in on Windows, so that might work.


  • 2 weeks later...

Adobe has implemented Oklab -- for good reason.

https://helpx.adobe.com/photoshop/using/gradient-interpolation.html

I see no reason why Affinity should not make that relatively small change (the code is not complicated or long at all) to also support this visually better way to pick colors.

See the four blog posts on his homepage. Here is the one explaining Oklab: https://bottosson.github.io/posts/oklab/
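
To give an idea of how small the change is, below is a rough C sketch of interpolating a gradient stop in Oklab instead of in raw sRGB. The conversion coefficients are copied from Björn's post linked above (please verify them against it); the sRGB transfer curve (gamma) and gamut clipping are left out for brevity, and the blue-to-white gradient in main() is just an arbitrary example.

#include <math.h>
#include <stdio.h>

typedef struct { float L, a, b; } Lab;
typedef struct { float r, g, b; } RGB;   /* linear sRGB components in 0..1 */

/* Linear sRGB -> Oklab. Coefficients copied from the post above -- verify there. */
static Lab linear_srgb_to_oklab(RGB c)
{
    float l = 0.4122214708f*c.r + 0.5363325363f*c.g + 0.0514459929f*c.b;
    float m = 0.2119034982f*c.r + 0.6806995451f*c.g + 0.1073969566f*c.b;
    float s = 0.0883024619f*c.r + 0.2817188376f*c.g + 0.6299787005f*c.b;

    float l_ = cbrtf(l), m_ = cbrtf(m), s_ = cbrtf(s);

    Lab out = {
        0.2104542553f*l_ + 0.7936177850f*m_ - 0.0040720468f*s_,
        1.9779984951f*l_ - 2.4285922050f*m_ + 0.4505937099f*s_,
        0.0259040371f*l_ + 0.7827717662f*m_ - 0.8086757660f*s_
    };
    return out;
}

/* Oklab -> linear sRGB (inverse of the above). */
static RGB oklab_to_linear_srgb(Lab c)
{
    float l_ = c.L + 0.3963377774f*c.a + 0.2158037573f*c.b;
    float m_ = c.L - 0.1055613458f*c.a - 0.0638541728f*c.b;
    float s_ = c.L - 0.0894841775f*c.a - 1.2914855480f*c.b;

    float l = l_*l_*l_, m = m_*m_*m_, s = s_*s_*s_;

    RGB out = {
         4.0767416621f*l - 3.3077115913f*m + 0.2309699292f*s,
        -1.2684380046f*l + 2.6097574011f*m - 0.3413193965f*s,
        -0.0041960863f*l - 0.7034186147f*m + 1.7076147010f*s
    };
    return out;
}

/* Interpolate between two linear-sRGB colors in Oklab instead of in RGB. */
static RGB mix_in_oklab(RGB x, RGB y, float t)
{
    Lab a = linear_srgb_to_oklab(x);
    Lab b = linear_srgb_to_oklab(y);
    Lab m = { a.L + t*(b.L - a.L), a.a + t*(b.a - a.a), a.b + t*(b.b - a.b) };
    return oklab_to_linear_srgb(m);
}

int main(void)
{
    RGB blue  = { 0.0f, 0.0f, 1.0f };   /* arbitrary example gradient stops */
    RGB white = { 1.0f, 1.0f, 1.0f };
    for (int i = 0; i <= 4; ++i) {
        float t = i / 4.0f;
        RGB c = mix_in_oklab(blue, white, t);
        printf("t=%.2f  r=%.3f g=%.3f b=%.3f\n", t, c.r, c.g, c.b);
    }
    return 0;   /* compile with: cc oklab_mix.c -lm */
}

A production implementation would of course decode/encode the sRGB transfer curve around this and gamut-map out-of-range results, but the core really is just two 3x3 matrices and a cube root per channel in each direction.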

 

And do not forget to finally implement ICC color management more completely (see my relevant posts, which list the missing parts and why they are needed).


  • 1 month later...

I would also love to see Oklab make its way into the Affinity software. It's also pretty easy to implement: I previously ported it to Java without much difficulty (except for the gamut clipping) and recently implemented it in Desmos in an hour or two. Most of the struggle I had with it in Java was with seeing the results, and I do plan on trying gamut clipping again in Desmos. Having it as an option for any effects, ideally as the default, would make them more approachable and understandable, since they would simply work better. Shifting the hue around would work as a novice user would expect, without significantly affecting the luminosity the way it does with HSL/HSV.
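
For illustration, here is a minimal C sketch of such a hue shift in Oklab's polar form (lightness, chroma, hue): the hue angle is rotated while L and C are left alone, which is exactly why the lightness does not drift the way it does with HSL/HSV. The starting values are only rough, approximate Oklab coordinates for a saturated sRGB red, picked for the example.

#include <math.h>
#include <stdio.h>

#define DEG2RAD 0.01745329252f   /* pi / 180 */

/* Rotate the hue of an Oklab color by `degrees`.
 * Lightness L is untouched and chroma C = sqrt(a^2 + b^2) is preserved,
 * so only the hue angle h = atan2(b, a) changes. */
static void oklab_rotate_hue(float L, float a, float b, float degrees,
                             float *L_out, float *a_out, float *b_out)
{
    float C = sqrtf(a*a + b*b);
    float h = atan2f(b, a) + degrees * DEG2RAD;
    *L_out = L;              /* perceived lightness stays the same */
    *a_out = C * cosf(h);
    *b_out = C * sinf(h);
}

int main(void)
{
    /* Roughly the Oklab coordinates of a saturated sRGB red (approximate). */
    float L = 0.63f, a = 0.22f, b = 0.13f;
    for (int deg = 0; deg < 360; deg += 90) {
        float L2, a2, b2;
        oklab_rotate_hue(L, a, b, (float)deg, &L2, &a2, &b2);
        printf("hue %+4d deg:  L=%.3f  a=%+.3f  b=%+.3f\n", deg, L2, a2, b2);
    }
    return 0;   /* every line of output keeps the same L */
}

Every output line keeps the same L, so a hue slider built this way leaves the perceived lightness alone.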


