
Adjustment layer effect changes appearance when rasterizing or grouping


CM0


Not a bug, and can be explained as follows.

 

A (Initial state): the Pixel object is blended with black fill of the Artboard, and then the Curves Adjustment is applied to that result.

B (Merge Visible): the Pixel object is blended with black fill of the Artboard, and then the Curves Adjustment is applied to that result.

C (Group the Curves Adjustment with Pixel object or nest the Curves Adjustment in Pixel object): the Curves Adjustment is applied to the Pixel object, and then that result is blended with black fill of the Artboard.

 

A and B will appear to differ because A involves resampling to the view pixel density before applying the adjustment, whereas B involves applying the adjustment before resampling to the view pixel density.

A and C really will differ because of the different sequence of adjustment and blending.
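The non-commutativity behind the A/C difference can be sketched in a few lines. This is not Affinity's actual code; the tone curve and blend are hypothetical stand-ins, chosen only to show that a nonlinear adjustment and source-over blending against a black Artboard do not commute.

```python
# Sketch (not Affinity's actual pipeline): a hypothetical nonlinear
# "curves" adjustment and source-over blending against a black
# Artboard give different results depending on their order.

def curve(v):
    # stand-in tone curve: a simple gamma lift
    return v ** 0.5

def blend_over_black(v, alpha):
    # source-over compositing against a black backdrop (backdrop = 0)
    return v * alpha

v, alpha = 0.8, 0.5  # a semi-transparent pixel value

# Case A/B: blend with the black Artboard first, then apply the adjustment
a = curve(blend_over_black(v, alpha))   # about 0.632

# Case C: apply the adjustment to the Pixel object first, then blend
c = blend_over_black(curve(v), alpha)   # about 0.447

assert abs(a - c) > 0.1  # the two orders give visibly different results
```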


It is never expected behavior for the display to differ from what you export or rasterize.

It is impossible to create an output to the desired specifications under these conditions.

So what is the solution to having a perfect representation of the current view? Also, where is the documentation that defines in detail the rendering pipeline sequence for layers, groups, effects etc?

Note:

I took the same image and applied the same effect in Krita. It does not have this problem. Each method of rasterization results in an image that matches the view precisely. It works exactly as a user would expect. An export or rasterization that doesn't match what you see can only be a bug.

 


21 hours ago, CM0 said:

It is never expected behavior for the display to differ from what you export or rasterize.

I understand and sympathise with your point of view. However, Serif will consider the differing display to not be a bug and will describe it as "by design" because it is the expected behaviour of the software design, regardless of what a user might expect.

 

21 hours ago, CM0 said:

It is impossible to create an output to the desired specifications under these conditions.

So what is the solution to having pixel perfect representation of the current view?

Set the view zoom so each pixel of the document is displayed by one corresponding pixel of the display device. That one-to-one pixel mapping happens when the document unit of measure is pixel and zoom is 100%.

That view will be equivalent to the Pixel object that can be produced by Merge Visible* command, but it is not guaranteed to match a 100% scale raster export of the non-merged document. As a non-destructive editor, Affinity enables raster objects to be unaligned with the document raster grid, and resampling to the view raster grid utilises different code than the resample methods provided for exports. Of course, you can do a 100% scale export of the result of Merge Visible to ensure the export matches the 100% zoom view.

* Merge Visible is not available in all Affinity apps, but the same result can be produced by selecting everything in the document, duplicating, grouping the set of duplicates and then rasterising the group.
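The reason the 100% zoom rule works can be sketched numerically. This assumes a simple box-filter average as the view resample and a hypothetical nonlinear curve; the point is only that averaging to the view pixel grid and applying a nonlinear adjustment do not commute, so A and B can only be expected to match when no resampling happens at all.

```python
# Sketch, assuming a box-filter resample and a hypothetical nonlinear
# curve: resampling and adjusting do not commute, so the view only
# matches the merged result at 1:1 (100%) zoom, where no resampling occurs.

def curve(v):
    return v ** 0.5  # stand-in nonlinear adjustment

doc_pixels = [0.0, 1.0]  # two document pixels covered by one view pixel

# A: resample to the view density first, then adjust
resample_then_adjust = curve(sum(doc_pixels) / len(doc_pixels))

# B: adjust at document resolution first, then resample
adjust_then_resample = sum(curve(v) for v in doc_pixels) / len(doc_pixels)

assert resample_then_adjust != adjust_then_resample
# At 100% zoom there is no resampling, so the two pipelines agree.
```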

 

21 hours ago, CM0 said:

Also, where is the documentation that defines in detail the rendering pipeline sequence for layers, groups, effects etc?

It's not in public documentation, as far as I'm aware.

It can be worked out by contrived experiments, of course.

 


Zooming to 100% isn't very helpful with higher-resolution projects. We can never confirm from the outside what is intended or working as designed if it is not documented.

As pointed out previously, it may be by design, but the design is flawed: other applications do not exhibit this behavior, and under no circumstances is it desirable. As a developer, I know the perspective of trying to protect a flawed design rather than call it a bug. As a developer, I am also aware of the terrible customer relationship that causes: the customer does not care why their use case is broken, it is simply broken, and dismissing it as working as designed is tone-deaf to the customer's needs.

Therefore, it would be in the best interest of everyone to fix this undesirable behavior.


59 minutes ago, CM0 said:

Zooming to 100% isn't very helpful with higher-resolution projects. We can never confirm from the outside what is intended or working as designed if it is not documented.

As pointed out previously, it may be by design, but the design is flawed: other applications do not exhibit this behavior, and under no circumstances is it desirable. As a developer, I know the perspective of trying to protect a flawed design rather than call it a bug. As a developer, I am also aware of the terrible customer relationship that causes: the customer does not care why their use case is broken, it is simply broken, and dismissing it as working as designed is tone-deaf to the customer's needs.

Therefore, it would be in the best interest of everyone to fix this undesirable behavior.

I agree with much of what you say.

Was just providing some info and explanation.

Maybe you'll get a response from an involved developer of the software.


On 8/25/2023 at 3:00 PM, CM0 said:

When moving layers to a group...

Your frustration with that is completely understandable.

Consider two categories of objects/layers:

  • A - Live Filters and Adjustments
  • B - all other objects/layers except Masks (a Group can contain stand-alone Masks in Affinity, but I want to leave that out of this message since that is an unusual, although useful at times, feature.)

When a Passthrough Group contains only A, it's the same as Pass Through in Photoshop and other apps - the content can "interact" with the scene below the Group.

When a Passthrough Group contains only B, it's the same as Pass Through in Photoshop and other apps - the content can "interact" with the scene below the Group.

Now here's the problem you came across. When a Passthrough Group contains both A and B, it behaves inconsistently and unlike Pass Through in Photoshop and other apps. In Photoshop and other apps, all content of a Pass Through Group can "interact" with the scene below the Group. However, in Affinity, the B content can "interact" with the scene below the Group, but the A content cannot. Affinity considers the Group to have Normal blend mode for the A content while having Passthrough mode for the B content.

I realised and posted a workaround for the problem scenario a few years ago, but it isn't often repeated. Because the B content can "interact" with the scene below the Passthrough Group, put a white object/layer with Multiply blend mode and covering the entire canvas/page as the bottom member inside the Group. That provides a representation of the background scene with which all other content of the Group can "interact". The white object/layer can be a vector Rectangle or Fill Layer or Pixel object, whichever you prefer, but I generally use a vector Rectangle because there can be problems with the display of a Fill Layer in apps other than Photo.
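The reason the white Multiply workaround functions can be shown with the standard Multiply formula: multiplying by white (1.0) is the identity, so the white object reproduces the backdrop inside the Group. This is a minimal sketch with made-up channel values; the `invert` adjustment is just a hypothetical example of something acting on the reproduced backdrop.

```python
# Sketch of why the workaround functions: Multiply blend with a white
# source leaves the backdrop unchanged, so a full-canvas white object
# in Multiply mode acts as a stand-in for the scene below the Group.

def multiply_blend(src, backdrop):
    # standard Multiply blend mode for normalised channel values
    return src * backdrop

backdrop = 0.37  # arbitrary channel value from the scene below the Group
white = 1.0

# The white Multiply object reproduces the backdrop exactly...
assert multiply_blend(white, backdrop) == backdrop

# ...so an Adjustment above it inside the Group has the backdrop to act
# on; e.g. a hypothetical invert adjustment now sees the real scene value:
invert = lambda v: 1.0 - v
assert invert(multiply_blend(white, backdrop)) == 1.0 - backdrop
```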

Serif representatives have repeatedly insisted the dual personality of Affinity's Pass Through blend mode is "by design" and not a bug.

Still leaving aside stand-alone Masks contained in a Group, you'll be familiar with masked Groups. Well, you will soon come across indisputable, and never denied, many years old bugs in the rendering of masked Groups, but that's a story for another day.

 


Yes, although maybe inconsistent and maybe not intuitive, at least for the grouping problem there is a workaround that will give you the desired output.

However, for the rasterization sampling issue, there is no sufficient workaround that I know of. So, that would be my priority as far as something to be addressed at present.


15 minutes ago, CM0 said:

Could someone please have this issue logged with development?

You've posted in the Bugs forum. Serif will handle this topic when it reaches the front of the queue.

-- Walt
Designer, Photo, and Publisher V1 and V2 at latest retail and beta releases
PC:
    Desktop:  Windows 11 Pro 23H2, 64GB memory, AMD Ryzen 9 5900 12-Core @ 3.00 GHz, NVIDIA GeForce RTX 3090 

    Laptop:  Windows 11 Pro 23H2, 32GB memory, Intel Core i7-10750H @ 2.60GHz, Intel UHD Graphics Comet Lake GT2 and NVIDIA GeForce RTX 3070 Laptop GPU.
    Laptop 2: Windows 11 Pro 24H2,  16GB memory, Snapdragon(R) X Elite - X1E80100 - Qualcomm(R) Oryon(TM) 12 Core CPU 4.01 GHz, Qualcomm(R) Adreno(TM) X1-85 GPU
iPad:  iPad Pro M1, 12.9": iPadOS 18.1.1, Apple Pencil 2, Magic Keyboard 
Mac:  2023 M2 MacBook Air 15", 16GB memory, macOS Sequoia 15.0.1


  • Staff
On 8/28/2023 at 1:15 PM, CM0 said:

Could someone please have this issue logged with development?

Which issue is it you feel needs logging with development? What Lepr has explained below is true and is the cause of this issue; this isn't a bug, it's just the way Affinity works with live layers, I'm afraid.

 

On 8/26/2023 at 1:42 PM, lepr said:

Not a bug, and can be explained as follows.

 

A (Initial state): the Pixel object is blended with black fill of the Artboard, and then the Curves Adjustment is applied to that result.

B (Merge Visible): the Pixel object is blended with black fill of the Artboard, and then the Curves Adjustment is applied to that result.

C (Group the Curves Adjustment with Pixel object or nest the Curves Adjustment in Pixel object): the Curves Adjustment is applied to the Pixel object, and then that result is blended with black fill of the Artboard.

 

A and B will appear to differ because A involves resampling to the view pixel density before applying the adjustment, whereas B involves applying the adjustment before resampling to the view pixel density.

A and C really will differ because of the different sequence of adjustment and blending.

 

Please tag me using @ in your reply so I can be sure to respond ASAP.


1 hour ago, Callum said:

Which issue is it you feel needs logging with development? What Lepr has explained below is true and is the cause of this issue; this isn't a bug, it's just the way Affinity works with live layers, I'm afraid.

 

 

I'm not concerned about the order of rendering. That can be worked around.

The bug is that there is no correct combination: no matter how you arrange the layers or fill layers, you still cannot get a result that looks like it does on screen. It is impossible to produce the desired output under these conditions.

Why would it be implemented this way, apparently by design as you say, when I can perform the same actions in something like Krita and get a perfect representation of what I see on the screen? It is a terrible experience for the user, so much so in such cases I simply have to use another application for such work.

If not accepted as a bug, I would hope it could be accepted as an enhancement as I don't care about the classification, just that I can do my work in Affinity. That I can trust what I see on the screen will be accurate.

Thank you.


  • Staff

Hey CM0,

When you are using a Live filter, it is just that: live. This means that when you pan or zoom in the document, the image behind the Live layer may 'adjust' itself. The same is true when you rasterise the image: the filter gets baked into the pixel layer and will give you a final result which may slightly differ.

This is only true for Live layers. If you use the destructive filter (from the Filters menu) this will not happen, as the filter is baked straight into whatever pixels you are applying it to at the moment you apply it.

The adjustment should be minimal when you perform the merge but I appreciate that it can change slightly from what you may have been expecting. Perhaps you may be better in certain circumstances to use the destructive filter (Filters Menu) instead.

I am not au fait with Krita but I suspect it does destructive filters as opposed to Live ones like we do and that is why you are not seeing the same thing as you are in Affinity.


Just now, Chris B said:

not au fait with Krita but I suspect it does destructive filters as opposed to Live ones like we do and that is why you are not seeing the same thing as you are in Affinity.

Krita has live layers as well, called filter layers, and that is what I used. It is open source, maybe you can take some inspiration for how they have achieved this.


  • Staff

Thanks, I did not know it was open-source. I'm happy to download it and check it out. Perhaps there is room to improve or perhaps there is a specific reason we do it how we do it but that's a question (which I'm happy to ask) for the developers.


  • Staff
7 minutes ago, CM0 said:

I could give you my Krita file if you are interested. Although, not sure this is the right place to share such 🙂

Sure thing, that could be helpful. If at any point you need to share something with tech/qa for the developers you can just request a private Dropbox link if it needs to remain private - https://www.dropbox.com/request/lKWfbmWHSJpoUh0Mgh12


I added the file to the dropbox. Named "krita rasterize test".

It is the same image as used for the Affinity example.

What you should notice is that when you zoom in and out, there are no artifacts to the image that you can perceive. I've always noticed that Affinity sometimes subtly changes the image when zooming in and out: just small artifacts. But you do not notice that when doing the same in this Krita example.

You can rasterize this example in Krita by right clicking on the group and selecting either "flatten image" or "merge group". When you do so, the tone of the red seems to be preserved perfectly, unlike in Affinity where it seems a bit faded after the rasterize.

If you want to make changes to the live adjustment, right-click on the filter layer and select "properties".

FYI, I did this using Krita 5.1.5 on Windows 10.

Hopefully this will be helpful. Thanks.


  • Staff
10 minutes ago, CM0 said:

Affinity sometimes subtly changes the image when zooming in and out

This is because we use mipmaps, for speed and efficiency reasons, although I do not know whether that is exactly why you see those slight changes.

I'll pull the file down and have a look - thank you :) 
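For readers unfamiliar with mipmaps, here is a rough illustration. It assumes a mip chain built by 2x2 box averaging, which is a common construction; Affinity's actual filtering is not documented. Each level halves the resolution and discards high-frequency detail, so a zoomed-out view drawn from a mip level is an approximation of the full-resolution image.

```python
# Rough illustration of a mipmap chain built by 2x2 box averaging
# (a common construction; Affinity's exact filtering is not documented).
# Each level halves the resolution and discards high-frequency detail.

def next_mip(img):
    # average each 2x2 block of the image into one pixel of the next level
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x + 1] +
              img[2*y + 1][2*x] + img[2*y + 1][2*x + 1]) / 4.0
             for x in range(w)]
            for y in range(h)]

full = [[0.0, 1.0],
        [1.0, 0.0]]          # a 2x2 checkerboard at full resolution

mip1 = next_mip(full)        # the checker pattern collapses to flat grey
assert mip1 == [[0.5]]
```

A view rendered from `mip1` can never reproduce the checkerboard, which is one way a zoomed-out display can subtly diverge from the full-resolution document.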


4 minutes ago, Chris B said:

This is because we use mipmaps, for speed and efficiency reasons, although I do not know whether that is exactly why you see those slight changes.

I'll pull the file down and have a look - thank you :) 

That may be a valid tradeoff, but if that is the cause of the visible discrepancy in rasterization, then I would hope you could at least have a button toggle like the "view in retina mode" or something to give you an accurate view when you need it.

Also, FYI, just an additional minor point: there are no destructive options for the adjustment filters, so there are no workarounds in Affinity that I'm aware of, as suggested above.

Note, I also tried adding a layer on top and using blend modes to get a nearly identical effect to the color curves. However, it results in the same problem: once rasterized, the colors change slightly. So it is not just a live filter issue, FYI.

Once again, thank you for taking a look.


1 hour ago, CM0 said:

That may be a valid tradeoff, but if that is the cause of the visible discrepancy in rasterization, then I would hope you could at least have a button toggle like the "view in retina mode" or something to give you an accurate view when you need it.

There is a kind of toggle already. As stated many times in the forums, you should zoom to 100% when viewing your document to validate your filter settings, and that should give you an accurate view. Then when you Rasterize the filter, you should get the results you expect.

There are additional concerns if you have pixel layers that are not aligned to the document pixel grid or don't match the document DPI, of course 

-- Walt


3 minutes ago, walt.farrell said:

There is a kind of toggle already. As stated many times in the forums, you should zoom to 100%

I clearly stated above why this is not a workaround. It is impossible to work on a high-resolution document and have a correct perspective of the output; I need to see the forest, not the trees. Furthermore, I have no idea why this position is even defended, as it is undesirable in any context. If it has been stated many times, then clearly this is an important issue, as it has been brought up many times. Why not advocate for the improvement of Affinity?


25 minutes ago, CM0 said:

Why not advocate for the improvement of Affinity?

I am all for the improvement of the Affinity suite, this would be a good feature request. And it's one I'm sure has been requested before.

-- Walt


@Chris B FYI, you are probably already aware, but an easy way to check rendering accuracy is to merge visible, then set that layer's blend mode to Difference.

In Affinity, you will see that it becomes less accurate as you zoom out: the image begins to appear once you zoom out from 100%.

FYI, In Krita, the difference mode is nested under 'Negative' group of blend modes. 

Doing the same in Krita, you can see the display is pure black no matter the zoom level.
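The Difference-mode test described above can be sketched numerically: the difference of two identical renders is pure black, and any non-zero pixel exposes a mismatch. The channel values below are made up for illustration.

```python
# Sketch of the test described above: set a merged copy to Difference
# blend mode over the live stack. Identical renders give pure black;
# any non-zero pixel exposes a rendering mismatch.

def difference_blend(src, backdrop):
    # standard Difference blend mode for normalised channel values
    return abs(src - backdrop)

live_render  = [0.20, 0.55, 0.90]
merged_copy  = [0.20, 0.55, 0.90]   # rasterisation matched the view
drifted_copy = [0.21, 0.55, 0.90]   # rasterisation shifted one value

# matching renders: every difference pixel is 0.0 (pure black)
assert all(difference_blend(m, l) == 0.0
           for m, l in zip(merged_copy, live_render))

# mismatched renders: at least one pixel is non-black, revealing the drift
assert any(difference_blend(d, l) > 0.0
           for d, l in zip(drifted_copy, live_render))
```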

