
Merge visible at top of layer stack changes image



I am trying to merge the layers of an image I processed. When I use the merge option with the top of the layer stack selected, the resulting new pixel layer is different from the image before the merge. The first image below is a screenshot before doing a Merge Visible and the second one is after I did the Merge Visible (Ctrl+Shift+Alt+E). As can be seen, there is an obvious difference in the histograms of the two images. I have noticed slight shifts a couple of times before, but this is quite dramatic. Can you explain what is going on and how to remedy it?

I also noticed that the Merge Down option is unavailable in the Layers panel for most layers in the layer stack for this image. Again, why?

Appreciate help with this.

Cheers,

 

Tom

[Screenshots: histograms before and after Merge Visible]


Without the document to experiment with, and try on different systems, it might be difficult to diagnose this problem.

Can you share the document at a stage before the merging?

(I notice that the top layer in your second screenshot has a Screen Blend Mode, which could affect things, but I don’t know how that came about or what it might affect as I can’t see what’s in the group.)


  • Staff

It could be the live High Pass filter: spatial filters may appear to render differently when merging/flattening if you are previewing at a zoom level below 100%. Ignoring the histogram changing, if you zoom to 100% (CMD+1 / Ctrl+1) before merging, is there any visible difference?
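To illustrate the general principle (just a rough sketch with NumPy/Pillow, not how Affinity's render pipeline is actually implemented), filtering a downscaled preview is not the same as filtering at full resolution and then downscaling, which is why a live spatial filter previewed below 100% zoom can disagree with the merged result:

```python
# Rough sketch (not Affinity's actual pipeline): applying a spatial filter
# to a downscaled preview gives different pixels than filtering at full
# resolution and then downscaling.
# Assumes Pillow and NumPy; the noise image is a stand-in for a detailed photo.
import numpy as np
from PIL import Image, ImageFilter

def high_pass(img, radius=4):
    """Simple high-pass: original minus Gaussian blur, offset to mid grey."""
    blurred = img.filter(ImageFilter.GaussianBlur(radius))
    hp = np.asarray(img, dtype=np.int16) - np.asarray(blurred, dtype=np.int16)
    return Image.fromarray(np.clip(hp + 128, 0, 255).astype(np.uint8))

rng = np.random.default_rng(0)
full = Image.fromarray(rng.integers(0, 256, (400, 400), dtype=np.uint8))
half = (full.width // 2, full.height // 2)

merged_like = high_pass(full).resize(half, Image.BILINEAR)    # filter, then downscale
preview_like = high_pass(full.resize(half, Image.BILINEAR))   # downscale, then filter

diff = np.abs(np.asarray(merged_like, np.int16) - np.asarray(preview_like, np.int16))
print("max per-pixel difference:", int(diff.max()))  # non-zero: the two paths disagree
```

Zooming to 100% takes the downscaling step out of the preview, which is why it is the fairest way to compare.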

What's going on in those groups with masks? Do you have more live filters in them?

As for Merge Down, you can only use it if the layer below your current layer is a mutable Pixel layer. Image layers (placed images) and RAW layers are considered immutable, and you have to explicitly rasterise them if you wish to merge layer content into them. You'll find the same with vector content such as Curve layers and shapes: the user is prevented from merging content down into them as this would have to rasterise the vector data.

Hope that helps!

Product Expert (Affinity Photo) & Product Expert Team Leader

@JamesR_Affinity for tutorial sneak peeks and more
Official Affinity Photo tutorials


Thanks for the replies.

The effect is not due to the High Pass filter. I deleted it and still get the same effect. There are no other High Pass filters. The two mask layers are simple masks with Normal blend modes: in one there is a Curves adjustment and a Levels adjustment, and in the other there is an HSL adjustment. There is another mask layer lower down which has two contrast adjustments, one with a Darken blend mode and the other with a Lighten one. Deleting all of these groups with masks does not resolve the problem. In fact, it is only when I delete all of the adjustment layers that I no longer see the effect. There are three pixel layers at the bottom of the stack: the first is the original, the second is a copy of the original with, as far as I can remember, the haze removal filter applied, and the third is a copy of the second with a bit of burning and dodging. Removing these two pixel layers and leaving only the original still results in a similar effect when merging. Also, if I export as a TIFF file before merging, the TIFF looks like the merged layer. Attached is the Affinity Photo file for the image. Rather baffled.

Hope you can clarify.

Thanks and Cheers,

 

Tom

 

PS: My connection is very slow tonight and I have not been able to upload the file. Will try tomorrow.


The images are identical when I do a Merge Visible on iPad and compare the layer content numerically (Difference blend mode).
 

Visible (temporary) differences while rendering the file in Photo are mainly due to mipmap rendering, which gives half-resolution results temporarily before reaching full resolution after a few seconds of waiting. Try changing the performance settings (View Quality: Best).

I often get incorrect rendering after some time and need to restart my Mac/PC/iPad to get consistent rendering again.

 

Use a zoom level of 100% when inspecting files, or an integer multiple of it. All other zoom levels give wrong results in fine details.

Mac mini M1 A2348 | Windows 10 - AMD Ryzen 9 5900x - 32 GB RAM - Nvidia GTX 1080

LG34WK950U-W, calibrated to DCI-P3 with LG Calibration Studio / Spider 5

iPad Air Gen 5 (2022) A2589

Special interest into procedural texture filter, edit alpha channel, RGB/16 and RGB/32 color formats, stacking, finding root causes for misbehaving files, finding creative solutions for unsolvable tasks, finding bugs in Apps.

 


Thanks for the reply.

 


When I do the Merge Visible with the Difference blend mode, there is a visible difference - see the screenshot below. The image is not pure black, as can be seen both in the image and in the histogram. The difference does seem to be a screen rendering effect in Affinity Photo. If I export the image as a TIFF before merging and as a TIFF after merging, the two TIFF images are identical (based on the resulting histograms, or by loading the two into the same file and using the Difference blend mode). In both cases the TIFF corresponds to the merged image and not the image prior to merging. I haven't tried printing the unmerged and merged versions, but using the soft proof layer option the two display differently on screen (including the histograms). It would seem that this is not a screen/display issue, because the software, going by the histograms, is finding a difference between the merged and unmerged image (I am using a PC with a calibrated BenQ monitor). This is disturbing, as it means one cannot be confident that the adjustments made in adjustment layers will translate into the final image. It would seem one can only trust the merged image as what one will get when exporting or printing. This can become a bit of a tail-chasing scenario of doing a merge and then further adjustments to achieve the effect of the non-merged image. Thankfully, this does not happen too frequently and probably reflects the combination of layers I used in editing this image (and others where I have experienced this).

[Screenshot: Difference blend mode result and its histogram]
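For completeness, this is the kind of check I mean by "identical": comparing the two exported TIFFs numerically outside Affinity. A quick sketch assuming Pillow and NumPy; the file names are just placeholders for my own exports:

```python
# Quick sketch: compare two exported TIFFs pixel by pixel, independent of
# any on-screen rendering. File names are placeholders.
import numpy as np
from PIL import Image

before = np.asarray(Image.open("export_before_merge.tif"), dtype=np.int32)
after = np.asarray(Image.open("export_after_merge.tif"), dtype=np.int32)

diff = np.abs(before - after)
print("identical:", bool(np.all(diff == 0)))
print("max channel difference:", int(diff.max()))
```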


7 hours ago, running tide said:

When I do the Merge Visible with the Difference blend mode, there is a visible difference - see the screenshot below. The image is not pure black, as can be seen both in the image and in the histogram. The difference does seem to be a screen rendering effect in Affinity Photo. If I export the image as a TIFF before merging and as a TIFF after merging, the two TIFF images are identical (based on the resulting histograms, or by loading the two into the same file and using the Difference blend mode). In both cases the TIFF corresponds to the merged image and not the image prior to merging

Your observations are correct.

Photo is unable to render images correctly and fails to give an accurate preview due to multiple issues:

  • forced dithering of gradients at export
  • no 16-bit color depth rendering
  • mipmaps lead to wrong rendering when the zoom is not 100% (noise, sharpness, resampling of layers)
  • anti-aliasing issues with vector shapes
  • resampling issues at the edges of pixel layers
  • numerous bugs in handling the alpha channel of child layers
  • etc.

please add your vote to 

 

 


 


On 7/8/2024 at 2:50 AM, running tide said:

I was hoping to get some response and insight as to what is going on with this now that I have uploaded the file. Any help, much appreciated.

In your uploaded file...

Delete the top layer (the merged layer)
In the Layers panel, select the topmost background layer (3rd layer from the bottom), right-click & select Rasterise & Trim
Delete or deselect the Curves Adjustment layer immediately above that background layer

Select the very top Layer and do a Merge Visible

Set the blend mode to Difference. Do you see an improvement now?

 

To save time I am currently using an automated AI to reply to some posts on this forum. If any of "my" posts are wrong or appear to be total b*ll*cks they are the ones generated by the AI. If correct they were probably mine. I apologise for any mistakes made by my AI - I'm sure it will improve with time.


There is nothing wrong with the file, the layer structure, or anything else.
It is a design limitation: Affinity renders fine details like noise, sharpening, or any 1 px structure (below 2 px) incorrectly. Not intentionally, but as collateral damage from the performance optimization by mipmaps.
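As a toy illustration (plain NumPy, nothing to do with Affinity's actual code) of why anything finer than 2 px cannot survive a half-resolution mipmap:

```python
# Toy example: averaging a 1 px checkerboard down to half resolution
# flattens it to mid grey, so structure finer than 2 px is lost in a
# half-resolution mipmap preview.
import numpy as np

# 1 px checkerboard: alternating 0 and 255
tile = (np.indices((8, 8)).sum(axis=0) % 2) * 255

# 2x2 box average as a stand-in for a half-resolution mipmap level
half = tile.reshape(4, 2, 4, 2).mean(axis=(1, 3))

print(tile.min(), tile.max())   # 0 255       -> full-contrast detail
print(half.min(), half.max())   # 127.5 127.5 -> the detail is gone
```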

Affinity renders somewhat more correctly (still with many unfixed bugs) in the export preview, and many complain that that preview is very slow and want to get rid of the currently enforced preview in the export dialog.

So the choice is between mostly fast rendering that is wrong in fine details, or the mostly correct but extremely slow rendering of the export preview.

If you compare with other Apps: Canon DPP can take multiple minutes to render a single CR3 raw image.

I am not aware of any app that renders both correctly and fast. Nevertheless, the current "compromise" in Photo is too far on the "wrong" side, and I miss an option to get more accurate rendering even if it is much slower.


 


Thanks for the explanation "No Fault". I am finding this more and more of a problem. For me it seems to occur when I have lots of mask layers, and also with the live band pass filter, which I have been finding quite useful in some specific cases. It does result in a lot of work correcting the final image so that it matches the on-screen rendering. I agree a slower rendering option would be very helpful in the final stages of editing. Does Photoshop do a better job in this case, and does it have any option for more accurate rendering?


52 minutes ago, running tide said:

Thanks for the explanation "No Fault".

Just as an aside, claims of “Not My Fault” and “No Fault” are worlds apart!

Alfred
Affinity Designer/Photo/Publisher 2 for Windows • Windows 10 Home/Pro
Affinity Designer/Photo/Publisher 2 for iPad • iPadOS 17.5.1 (iPad 7th gen)


After playing a bit with your uploaded file and looking up close (quite zoomed in), it appears to me that (maybe) while the live filters, contrast adjustments, etc. are still "live" they are actually of lower quality than once merged: looking at it in detail, at pixel level, the merged image seems of higher quality, even if different in colors and tones, and less "pixelated". In this particular image, with tons of small detail (horrible for mipmapping when zoomed out, though I always tested at 100% or lower), the image before merging/flattening is of lower quality. So the problem would perhaps lie more in the filters and live stuff (maybe for optimization) than in the actual final merge, which does seem to merge "to higher quality", but then the result is different from what you were visually controlling (which, yes, can be a problem). It is probably more noticeable with some images than with others, and this one seems to have the combination of factors that produces it more.

It could also be that the merge, or more likely the filters, are not fully considering the color profile or image depth (16 bits), and the operation is happening with less range or a different color profile. (Until the revamp called "space invasion", based on the GEGL library, GIMP suffered from all or most of its layer effects happening in sRGB, and I don't know if in 8 bits; something similar in PaintShop Pro, if I recall correctly.) I don't know.

I tested freeing up all the live stuff (no visual changes, by the way), then tried different ways of merging (always producing the problem), and also dragged the live stuff over the one background layer (the one on top, as it is at 100%); while doing so no visual change happened, but the final merge or rasterization always generates the problem.

What I couldn't explain is why, when I merged the 3 background layers (which should have no effect, as the one on top of the 3 is at 100% opacity and all 3 are plain pixel layers!), without touching anything else and at 100% zoom, I already saw some changes that in my opinion should never occur when doing that (darkening of some subtle areas). Could this be a bug?

I suspect more and more that the live stuff is in a kind of temporary mode, not fully the final render, and that this is what introduces the difference. I am not a photographer but a digital painter, so I had not noticed this, as I apply filters non-live or only one by one, so I kind of control it more (and it is also less of an issue when illustrating or painting).

As for whether it happens in Photoshop... well, the last time I used it (some years ago, but not many, and I had been using it since 1995), you could run into issues when using complex groups of layers with different blend modes and some live stuff. But I learned some tricks (typically rendering to intermediate pixel layers, among others) to avoid them. I still haven't gone into that with Affinity, as I haven't needed to. I don't know if Adobe has fixed those problems, though; I know I could avoid any issue with certain workflows, so it was a non-issue for me.

I would always love a priority on viewing quality and "pixel perfect" accuracy, or at least, even if "high performance" mode stays on by default for the mass of users (especially the many people who don't fine-tune things for professional work), some preference setting for "full accuracy" that we could configure for viewport accuracy. It would also be lovely to have more control over the Lanczos resampling at export (in every file format), like a slider with a preview window, or even just the slider (we could do our own tests), as one of the modes ('separable' or 'non-separable', I never remember which is which) is too "smooth", as in a bit blurry, and the sharper one is way too sharp, forcing halos similar to when you go too heavy with an unsharp mask (or any sharpening filter) setting.

I still can't find an explanation for why merging (Ctrl+Shift+E), or grouping and then rasterizing the group of just the 3 pixel layers (the 3 backgrounds, with the top one at 100% opacity), alters what I am viewing (the live stuff above should not matter). This would never happen in Photoshop or many other image editors, and it also complicates workflows quite a bit. My only explanation is that it is all working for a "perception", but it is not tied to the raw pixels in the way we usually think of them (in most 2D raster applications).

If someone can throw some light on this, I'd be quite grateful.

I personally have no huge issues with this (as I said, several live filters and layer effects in Photoshop also had important "usability" issues), as I mostly paint and can keep things "under control", but any improvement in this whole area would be really good.

 

 

AD, AP and APub V2.5.x. Windows 10 and Windows 11. 
 

 


4 hours ago, SrPx said:

What I couldn't explain is why, when I merged the 3 background layers (which should have no effect, as the one on top of the 3 is at 100% opacity and all 3 are plain pixel layers!), without touching anything else and at 100% zoom, I already saw some changes that in my opinion should never occur when doing that (darkening of some subtle areas). Could this be a bug?

These Pixel objects are not aligned with the document pixel grid - they are slightly rotated. Merge Selected and Merge Down give a more blurred result (hence the darkening you saw) than Merge Visible when the lowest object of the merge is not congruent with the document pixel grid.
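As a rough illustration of the resampling involved (a toy Pillow/NumPy sketch, not Affinity's merge code): resampling a slightly rotated pixel layer back onto the document grid interpolates neighbouring pixels, which softens 1 px detail.

```python
# Toy example (not Affinity's merge code): a slightly rotated pixel layer
# has to be resampled onto the document pixel grid when merged, and the
# interpolation softens 1 px detail.
import numpy as np
from PIL import Image

# Crisp 1 px vertical lines: every pixel is either 0 or 255
layer = np.zeros((64, 64), dtype=np.uint8)
layer[:, ::2] = 255
img = Image.fromarray(layer)

# Rotate by a tiny angle (layer no longer congruent with the pixel grid),
# forcing a bilinear resample, as a stand-in for what the merge has to do
rotated = img.rotate(0.5, resample=Image.BILINEAR)

before, after = np.asarray(img), np.asarray(rotated)
mid_before = np.count_nonzero((before > 0) & (before < 255))
mid_after = np.count_nonzero((after > 0) & (after < 255))
print("intermediate-value pixels before:", mid_before)  # 0    -> lines are crisp
print("intermediate-value pixels after: ", mid_after)   # many -> lines are softened
```

After the resample there are many intermediate grey values where the crisp lines used to be, which is consistent with the slight darkening of subtle detail noted above.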

