
I am simply unable to get an unsharp mask Live filter layer to merge



1 minute ago, firstdefence said:

Ok, I'll be the dissenter, I don't really get what you think you're seeing, whether I apply a Gaussian Blur as a destructive filter or a Live filter the effect for me is the same. The blur is the same, it doesn't maintain the level of blur when the image is larger as the smaller image would blur to nothing.

Same here. Other than the difference in the edges of the live filter when the "Preserve alpha" option is not ticked, both the live & destructive versions look exactly alike at any zoom level.

All 3 V1.10.8 & all 3 V2.4.1 Mac apps; 2020 iMac 27"; 3.8GHz i7, Radeon Pro 5700, 32GB RAM; macOS 10.15.7
Affinity Photo 1.10.8; Affinity Designer 1.10.8; & all 3 V2 apps for iPad; 6th Generation iPad 32 GB; Apple Pencil; iPadOS 15.7


Remember that the problem in question concerns a merged layer not a composite layer. A merged layer looks different to the composite of layers from which it came at virtually all levels of zoom. It is only at 100% zoom that the merged layer appears correct. This is what we are seeing in the short videos I posted above.

One way of getting around this problem is never to perform a merge until the very last moment. However, this can mean we have to put up with slower performance as a large number of composites slow down the editing (whereas a merged layer made up of those composites does not).


31 minutes ago, peanutLion said:

However, this can mean we have to put up with slower performance as a large number of composites slow down the editing (whereas a merged layer made up of those composites does not).

On each change of zoom level, everything has to be re-rendered before it can be displayed. Since zoom is continuously variable over an enormous range (from 1 to as much as 1 billion %), it is not practical to cache the display at each pre-rendered zoom level -- that would require an insanely large amount of real or virtual memory.
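To put rough numbers on that claim, here is a minimal sketch; the 6000 x 4000 px 8-bit RGBA document and the 1%-400% integer zoom range are illustrative assumptions, not anything Affinity actually does:

# Back-of-envelope estimate of caching one rendered bitmap per zoom level.
# Document size and zoom range are assumed purely for illustration.
DOC_W, DOC_H = 6000, 4000        # hypothetical document pixel dimensions
BYTES_PER_PIXEL = 4              # 8-bit RGBA

total_bytes = 0
for zoom_percent in range(1, 401):           # only integer zoom levels 1%..400%
    scale = zoom_percent / 100.0
    w = max(1, round(DOC_W * scale))
    h = max(1, round(DOC_H * scale))
    total_bytes += w * h * BYTES_PER_PIXEL

print(f"{total_bytes / 2**30:.0f} GiB")      # roughly 190 GiB for this coarse cache

Even that coarse cache of only 400 integer zoom levels needs on the order of 190 GiB, and since zoom is continuously variable it would still miss almost every zoom value actually used.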

That's why screen updates can take so long when there are many non-destructive layers (like adjustments, live filters, & masks) in the document -- every time you change zoom levels, everything those layers affect has to be re-rendered on the fly. Obviously, that can take a lot of time in complex, multi-layer documents, so to speed things up you can create 'merge visible' layers & hide the layers they merge until you need to edit those hidden layers again. Of course, these merged layers are only static 'snapshots' of the appearance of the layers at the moment they are merged into one image -- if they were dynamically updated there would be no improvement in performance, since that would not avoid the need to re-render those layers.
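As a minimal sketch of that trade-off, the snippet below compares re-evaluating a stand-in live filter on every redraw with resampling a baked snapshot; the layer stack, filter, and sizes are illustrative assumptions, not Affinity's internal pipeline:

# Why a static 'merge visible' snapshot speeds up redraws: the expensive
# filter is evaluated once, and only cheap resampling happens per zoom change.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom as nd_zoom

base = np.random.rand(2000, 3000).astype(np.float32)   # hypothetical pixel layer

def render_live(display_zoom: float) -> np.ndarray:
    """Re-evaluate every non-destructive layer on each zoom change (slow)."""
    composited = gaussian_filter(base, sigma=5.0)       # stand-in live filter
    return nd_zoom(composited, display_zoom, order=1)   # resample for display

snapshot = gaussian_filter(base, sigma=5.0)             # 'merge visible', done once

def render_merged(display_zoom: float) -> np.ndarray:
    """Only resample the cached snapshot -- the filter is already baked in."""
    return nd_zoom(snapshot, display_zoom, order=1)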

But since 'merge visible' does not destroy the layers it merges, you can delete the merged layer at any time & continue to edit the remaining layers.

Regardless of all that, because at anything other than 100% zoom the on-screen rendering must be interpolated to map document pixels onto screen pixels, 100% zoom is the only zoom level that gives a fully accurate representation of the document's pixels. There is no way around that.



I'm beginning to understand what's going on when we notice that the merged layer alone looks different to the composite alone.

It's the merged layer that properly represents our edits, and it does so at any level of zoom. The composite only does so when the zoom is at 100%. I had incorrectly thought it was the other way around.

So it seems to me that merging the layers of a composite is actually what we should be doing as we go along. Doing so will let us zoom by any amount and still see (in the merged layer) what our use of layers in the composite has actually achieved.

Merging is a more attractive way of working than looking at the composite and then always zooming to 100% to see the effect of our use of layers (because if the file is large, then 100% zoom will put only a small part of the image in the app window, such as pretty much just the eyes in a head & shoulders portrait).

It's still a pity, though, that we have to work like that! I wonder what other image-editing apps do.

It would be interesting to know why AP's use of low-res versions of our image (ie mipmaps) in a composite at zooms below 100% leads to an inaccurate representation on screen of our use of the layers of the composite.


9 hours ago, peanutLion said:

It would be interesting to know why AP's use of low-res versions of our image (ie mipmaps) in a composite at zooms below 100% leads to an inaccurate representation on screen of our use of the layers of the composite.

From the Wikipedia article on mipmaps (emphasis added):

Quote

mipmaps {...} are pre-calculated, optimized sequences of images, each of which is a progressively lower resolution representation of the same image. {...} They are intended to increase rendering speed and reduce aliasing artifacts.

Mipmaps are no more inaccurate at less than 100% zoom levels than would be 'on-the-fly' (IOW, not pre-calculated) versions of those images. All raster images viewed at less than 100% image size are inherently inaccurate in the sense that there cannot be a 1 to 1 relationship between image pixels & screen pixels. This is because there simply are not enough available screen pixels to do that. This is true for all raster images, whether merged or not.

There is no way around this limitation, not in the Affinity apps or in any other app. Affinity just uses pre-calculated mipmap images to improve performance, & (I suppose) to avoid some kinds of aliasing artifacts, although that would depend on the algorithms used to create the mipmaps, how many mipmaps are created, & how they are combined when the image display size does not match one of those mipmaps.
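To make the mipmap idea concrete, here is a minimal sketch of building such a chain by successive halving; the file name and the halving scheme are assumptions for illustration, not necessarily how Affinity generates its mipmaps:

# Build a mipmap chain: pre-calculated, progressively lower-resolution copies.
from PIL import Image

def build_mipmaps(path: str) -> list[Image.Image]:
    levels = [Image.open(path)]                    # level 0 = full resolution
    while min(levels[-1].size) > 1:
        w, h = levels[-1].size
        levels.append(levels[-1].resize((max(1, w // 2), max(1, h // 2)),
                                        Image.Resampling.LANCZOS))
    return levels

# mips = build_mipmaps("photo.png")               # hypothetical file
# At, say, 25% zoom a renderer can start from the level closest to 1/4 size
# and interpolate only the remaining difference, instead of resampling the
# full-resolution image on every redraw.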

If this limitation is still not clear, consider a hypothetical example of displaying an 8000 x 5000 px raster image on a computer monitor with a native resolution of 2000 x 1250 px. The monitor has 2.5 million pixels but the image has 40 million of them, 16 times more than the monitor can display at once. Even if the app used every one of those 2.5 million screen pixels to display the image, you would have to zoom to 25% to fit all of it on the screen at once. At that zoom level, each screen pixel would have to display some interpolated average of 16 of the image's pixels, making it impossible to display the image with per-pixel accuracy.

This is basically the same issue that occurs when resampling a raster image to smaller pixel dimensions -- to reduce the number of pixels, pixel perfect accuracy must be sacrificed. There is no way to avoid this.
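As a tiny illustration of that information loss, averaging a 4 x 4 block of 16 distinct pixel values into one screen pixel (roughly what happens at 25% zoom) is irreversible:

# Downsampling discards information: 16 distinct values collapse into one.
import numpy as np

block = np.arange(16, dtype=np.float32).reshape(4, 4)   # 16 distinct image pixels
screen_pixel = block.mean()                             # single surviving value: 7.5

# Upscaling that one value back to 4 x 4 cannot recover the original block.
reconstructed = np.full((4, 4), screen_pixel, dtype=np.float32)
print(np.array_equal(block, reconstructed))             # False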



The problem I'm on about has more to do with something other than interpolation (if by interpolation we mean the manner in which AP decides how to show many image pixels in much fewer screen pixels).

Suppose we zoom out of an image containing just the original Background layer. Visually it's pretty much as if we are walking backwards away from a print on the wall. The interpolation, however AP does it, seems to work well. Only when we zoom quite far out does the image look odd here and there. But that's what we would expect because, after all, AP has to show far fewer pixels than are actually available in the image, and it makes a decision about how to do that. No problems there. Hats off to AP.

We get the same "walking away from a print on the wall" feeling if we zoom out from a solely visible merged layer instead. The interpolation again seems pretty good. Hats off again.

The problem I'm on about arises with composites, which AP displays using mipmaps. At zooms other than 100% the composite alone is not a good representation of the edits we make in the layers that make up that composite (earlier I thought it was merged layers that were incorrect, but I now realise that isn't the case). The problem isn't so much the interpolation itself. What's causing the problem I'm interested in is the low-res mipmaps and (it seems to me) the way AP combines them with a composite's layers.

Here's what Serif said on 7 Dec 2018, 2.28pm (moderator MEB, paraphrased):

" Only 100% zoom gives you an accurate preview of how the filter will affect the image if flattened/merged with other layers or exported. "  [peanutLion: MEB is talking about 100% zoom of a composite]

[peanutLion: MEB helpfully goes on:]
" … If you create a merged copy you will get more accurate preview of the filter because its effect is now baked to the image and was calculated using the original image dimensions … when merged, the filter effect is always calculated/baked using the original image (not the low-res versions - called mipmaps - that we use to display/render the image at zoom levels below 100% more quickly).   "

 


52 minutes ago, peanutLion said:

The problem I'm on about has more to do with something other than interpolation (if by interpolation we mean the manner in which AP decides how to show many image pixels in much fewer screen pixels).

It has everything to do with trying to display many image pixels in fewer screen pixels. One way or another, the image pixels have to be interpolated & resampled to be rendered to the screen -- there is literally no other way to display the image at anything other than 100% zoom. At lower zoom levels, the image must be displayed at a lower resolution, whether by using pre-calculated mipmaps or any other method.

1 hour ago, peanutLion said:

... MEB helpfully goes on:]
" … If you create a merged copy you will get more accurate preview of the filter

Note that as he said, you only get a more accurate preview, & that the only way to get a completely accurate one is at 100% zoom. In that same topic, he also said: 

Quote

Since some filters also display specific rendering issues when not displayed at 1:1 px (which includes zoom levels above 100%) I recommend to always check everything at 100% zoom.

So the bottom line is that there is literally only one way to get a totally accurate preview, & that is at 100% zoom level -- period, end of statement. There is nothing Serif or any other app maker can do to work around that limitation. At best, Serif could forgo the use of mipmaps & use some more computationally intensive method to build the preview on the fly each time the zoom level is changed, but it still would not be 100% accurate. In complex multi-layer documents, it would also take considerably longer to render that significantly more accurate preview. So until that process completed, all you would see at best is a blocky, highly pixelated image, similar to how progressive JPEG images first appear on web pages over slow internet connections before the blocks are gradually replaced with less pixelated versions.
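For what it's worth, a mipmap-based renderer typically just picks the nearest pre-built level for the current zoom, along these lines; this is a hypothetical sketch, not Serif's actual code:

# Hypothetical mipmap level selection: level 0 is full resolution and each
# further level is half the size of the previous one.
import math

def mip_level_for_zoom(display_zoom: float, num_levels: int) -> int:
    """Pick the coarsest pre-built level still at least as large as the display size."""
    if display_zoom >= 1.0:
        return 0                                  # at or above 100%, use full resolution
    level = math.floor(-math.log2(display_zoom))  # e.g. 50% -> 1, 25% -> 2, 20% -> 2
    return min(level, num_levels - 1)

# mip_level_for_zoom(0.25, 8) -> 2: the preview is built from a quarter-size
# copy, which is fast but can no longer show pixel-radius effects accurately.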

If you consider how much that would slow down typical workflows, I think you may better understand why they decided to use those less accurate, low resolution mipmaps.



  • 4 years later...

I guess I'm having a dense moment because after reading this, I can't tell if this was solved with something that can be adjusted. I have the same issue. One unsharp mask seems to work until it's merged, then it looks much the same as without the mask. I've resorted to duplicating the mask a couple of times and merging one by one until the image is sharpened to an acceptable level. I'm cleaning up old comic book covers, and between the wear and tear, the bad print jobs, and the not-great download quality, they need to be sharpened quite a bit.


When using sharpening, the essential requirement is to use a zoom level of 100%, or an integer multiple of 100%.

Otherwise Affinity will create a false preview where sharpening (or noise) effects look dramatically stronger.
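For context, unsharp masking boils down to original + amount x (original - blurred original), which is why its preview depends so heavily on the pixel scale it is computed at. A minimal sketch follows; the radius and amount values are illustrative, not Affinity's defaults:

# Unsharp mask: sharpened = original + amount * (original - blurred).
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img: np.ndarray, radius: float = 2.0, amount: float = 1.0) -> np.ndarray:
    blurred = gaussian_filter(img, sigma=radius)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

# Because the radius is measured in pixels, evaluating it on a half-size
# preview effectively doubles its reach relative to the picture content,
# which is why previews away from 100% zoom can look dramatically
# over- or under-sharpened.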

Mac mini M1 A2348 | Windows 10 - AMD Ryzen 9 5900x - 32 GB RAM - Nvidia GTX 1080

LG34WK950U-W, calibrated to DCI-P3 with LG Calibration Studio / Spider 5

iPad Air Gen 5 (2022) A2589

Special interest in procedural texture filters, editing alpha channels, RGB/16 and RGB/32 color formats, stacking, finding root causes for misbehaving files, finding creative solutions for unsolvable tasks, and finding bugs in apps.

 

