peanutLion

I am simply unable to get an unsharp mask Live filter layer to merge


1 hour ago, firstdefence said:

Ok, I'll be the dissenter, I don't really get what you think you're seeing, whether I apply a Gaussian Blur as a destructive filter or a Live filter the effect for me is the same. The blur is the same, it doesn't maintain the level of blur when the image is larger as the smaller image would blur to nothing.

I think the effect is akin to having a printed blurred image: the blur is static and cannot be changed. You are standing a foot away from it, but if you start to walk backwards the image starts to look less blurred.

The blur relative to the size of the printed blurred image remains constant as you walk backward from it. The same thing happens when you zoom out the display of an image that has a baked in blur.

 

1 minute ago, firstdefence said:

Ok, I'll be the dissenter, I don't really get what you think you're seeing, whether I apply a Gaussian Blur as a destructive filter or a Live filter the effect for me is the same. The blur is the same, it doesn't maintain the level of blur when the image is larger as the smaller image would blur to nothing.

Same here. Other than the difference in the edges of the live filter when "Preserve alpha" option is not ticked, both the live & destructive versions look exactly alike at any zoom level.


Affinity Photo 1.6.7 & Affinity Designer 1.6.1; macOS High Sierra 10.13.6 iMac (27-inch, Late 2012); 2.9GHz i5 CPU; NVIDIA GeForce GTX 660M; 8GB RAM
Affinity Photo 1.6.11.85 & Affinity Designer 1.6.4.45 for iPad; 6th Generation iPad 32 GB; Apple Pencil; iOS 12.1.1


Remember that the problem in question concerns a merged layer not a composite layer. A merged layer looks different to the composite of layers from which it came at virtually all levels of zoom. It is only at 100% zoom that the merged layer appears correct. This is what we are seeing in the short videos I posted above.

One way of getting around this problem is never to perform a merge until the very last moment. However, this can mean putting up with slower performance, as a large number of composited layers slows down editing (whereas a single merged layer made from those layers does not).

31 minutes ago, peanutLion said:

However, this can mean we have to put up with slower performance as a large number of composites slow down the editing (whereas a merged layer made up of those composites does not).

On each change of zoom level, everything has to be re-rendered before it can be displayed. Since zoom is continuously variable over an enormous range (from 1 to as much as 1 billion %), it is not practical to cache the display at each pre-rendered zoom level -- that would require an insanely large amount of real or virtual memory.
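To put rough numbers on that, here's a minimal Python sketch (the document size and 4 bytes per RGBA pixel are my assumptions, not figures from this thread) comparing the memory cost of a single halving mipmap chain with the full-resolution image. The chain adds only about a third on top of the full image, whereas caching a finished render for every zoom level a user might visit clearly cannot scale:

```python
# Rough memory estimates for a hypothetical 8000 x 5000 px RGBA
# document at 4 bytes per pixel, illustrating why a halving mipmap
# chain is cheap while caching a render per arbitrary zoom level
# would not be.

BYTES_PER_PX = 4
W, H = 8000, 5000

full = W * H * BYTES_PER_PX  # full-resolution image, ~153 MiB

# Mipmap chain: halve each dimension repeatedly down to 1 x 1.
chain = 0
w, h = W, H
while w > 1 or h > 1:
    w, h = max(1, w // 2), max(1, h // 2)
    chain += w * h * BYTES_PER_PX

print(f"full-res image : {full / 2**20:.0f} MiB")
print(f"mipmap chain   : {chain / 2**20:.0f} MiB (~{chain / full:.0%} of full-res)")
```

The geometric series of halvings converges to roughly one third of the original size, which is why a pre-calculated chain is affordable while per-zoom-level caching is not.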

That's why screen updates can take so long when there are many non-destructive layers (like adjustments, live filters, & masks) in the document -- every time you change zoom levels, everything whose appearance those layers affect has to be re-rendered on the fly. Obviously, that can take a lot of time in complex, multi-layer documents, so to speed things up you can create 'merge visible' layers & hide the layers they merge if or until you need to edit those hidden layers. Of course, these merged layers are only static 'snapshots' of the appearance of the layers at the moment they are merged into one image -- if they were dynamically updated, there would be no improvement in performance, since that would not avoid the need to re-render those layers.

But since 'merge visible' does not destroy the layers it merges, you can delete the merged layer at any time & continue to edit the remaining layers.

Regardless of all that, at anything other than 100% zoom the on-screen rendering of the document must be interpolated to fit the available screen pixels, so the 100% zoom view is the only 100% accurate representation of the document's pixels. There is no way around that.



I'm beginning to understand what's going on when we notice that the merged layer alone looks different to the composite alone.

It's the merged layer that properly represents our edits, and it does so at any level of zoom. The composite only does so when the zoom is at 100%. I had incorrectly thought it was the other way around.

So it seems to me that merging the layers of a composite is actually what we should be doing as we go along. Doing so will let us zoom by any amount and still see (in the merged layer) what our use of layers in the composite has actually achieved.

Merging is a more attractive way of working than looking at the composite and then always zooming to 100% to see the effect of our use of layers (because if the file is large, then 100% zoom will put only a small part of the image in the app window, such as pretty much just the eyes in a head & shoulders portrait).

It's still a pity, though, that we have to work like that! I wonder what other image-editing apps do.

It would be interesting to know why AP's use of low-res versions of our image (ie mipmaps) in a composite at zooms below 100% leads to an inaccurate representation on screen of our use of the layers of the composite.

9 hours ago, peanutLion said:

It would be interesting to know why AP's use of low-res versions of our image (ie mipmaps) in a composite at zooms below 100% leads to an inaccurate representation on screen of our use of the layers of the composite.

From the Wikipedia article on mipmaps (emphasis added):

Quote

mipmaps {...} are pre-calculated, optimized sequences of images, each of which is a progressively lower resolution representation of the same image. {...} They are intended to increase rendering speed and reduce aliasing artifacts.

Mipmaps are no more inaccurate at zoom levels below 100% than 'on-the-fly' (IOW, not pre-calculated) versions of those images would be. All raster images viewed at less than 100% image size are inherently inaccurate, in the sense that there cannot be a 1 to 1 relationship between image pixels & screen pixels. This is because there simply are not enough available screen pixels to do that. This is true for all raster images, whether merged or not.
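As an illustration of that inherent inaccuracy, here is a toy Python sketch (entirely my own example, not anything from Affinity's code) of one mipmap reduction step using a simple 2 x 2 box filter. A full-contrast checkerboard collapses to flat mid-grey, because four image pixels must be summarised by one:

```python
# One mipmap reduction step: each output pixel is the average of a
# 2 x 2 block of input pixels (a simple box filter). Real apps may use
# better filters, but the principle -- several image pixels collapse
# into one screen/mipmap pixel -- is the same.

def downsample_2x(img):
    """Halve a grayscale image (list of rows) with a 2x2 box filter."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h - 1, 2):
        row = []
        for x in range(0, w - 1, 2):
            block = (img[y][x] + img[y][x + 1] +
                     img[y + 1][x] + img[y + 1][x + 1])
            row.append(block / 4)
        out.append(row)
    return out

# A 4x4 checkerboard of 0s and 255s averages to flat mid-grey at half size,
# so the pixel-level detail is unrecoverable from the smaller image:
checker = [[0, 255, 0, 255],
           [255, 0, 255, 0],
           [0, 255, 0, 255],
           [255, 0, 255, 0]]
print(downsample_2x(checker))  # [[127.5, 127.5], [127.5, 127.5]]
```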

There is no way around this limitation, not in the Affinity apps or in any other app. Affinity just uses pre-calculated mipmap images to improve performance, & (I suppose) to avoid some kinds of aliasing artifacts, although that would depend on the algorithms used to create the mipmaps, how many mipmaps are created, & how they are combined when the image display size does not match one of those mipmaps.

If this limitation is still not clear, consider a hypothetical example of displaying an 8000 x 5000 px raster image on a computer monitor with a native resolution of 2000 x 1250 px. The monitor has 2.5 million pixels but the image has 40 million of them, 16 times more than the monitor can display at the same time. Even if the app uses every one of those 2.5 million screen pixels to display the image, you would have to zoom to 25% to fit all of it on the screen at once. At that zoom level, each screen pixel would have to display some interpolated average of 16 of the image's pixels, making it impossible to display the image accurately.
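That arithmetic can be checked directly; a tiny Python sketch of the same hypothetical numbers:

```python
# The hypothetical example above, worked through: how far you must zoom
# out to fit an 8000 x 5000 px image on a 2000 x 1250 px display, and
# how many image pixels each screen pixel then has to summarise.

img_w, img_h = 8000, 5000
scr_w, scr_h = 2000, 1250

# Fit zoom is limited by whichever axis runs out of screen pixels first.
zoom_to_fit = min(scr_w / img_w, scr_h / img_h)   # 0.25 -> 25%

# At that zoom, each screen pixel covers a (1/zoom) x (1/zoom) block
# of image pixels.
px_per_screen_px = (1 / zoom_to_fit) ** 2          # 16 image px per screen px

print(f"zoom to fit: {zoom_to_fit:.0%}")
print(f"image pixels per screen pixel: {px_per_screen_px:.0f}")
```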

This is basically the same issue that occurs when resampling a raster image to smaller pixel dimensions -- to reduce the number of pixels, pixel perfect accuracy must be sacrificed. There is no way to avoid this.



The problem I'm on about concerns something other than interpolation (if by interpolation we mean the manner in which AP decides how to show many image pixels in far fewer screen pixels).

Suppose we zoom out of an image containing just the original Background layer. Visually it's pretty much as if we are walking backwards away from a print on the wall. The interpolation, however AP does it, seems to work well. Only when we zoom quite far out does the image look odd here and there. But that's what we would expect because, after all, AP has to show far fewer pixels than are actually available in the image, and it makes a decision about how to do that. No problems there. Hats off to AP.

We get the same "walking away from a print on the wall" feeling if we instead zoom out from a merged layer that is the only visible layer. The interpolation again seems pretty good. Hats off again.

The problem I'm on about arises with composites, which AP displays using mipmaps. At zoom levels other than 100%, the composite alone is not a good representation of the edits we make in the layers that make it up (earlier I thought it was the merged layers that were incorrect, but I now realise it's not). The problem is not so much the interpolation; what's causing it is the low-res mipmaps and (it seems to me) the way AP combines them with a composite's layers.

Here's what Serif said on 7 Dec 2018, 2.28pm (moderator MEB, paraphrased):

"Only 100% zoom gives you an accurate preview of how the filter will affect the image if flattened/merged with other layers or exported."  [peanutLion: MEB is talking about 100% zoom of a composite]

[peanutLion: MEB helpfully goes on:]
"… If you create a merged copy you will get more accurate preview of the filter because its effect is now baked to the image and was calculated using the original image dimensions … when merged, the filter effect is always calculated/baked using the original image (not the low-res versions - called mipmaps - that we use to display/render the image at zoom levels below 100% more quickly)."
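MEB's distinction can be demonstrated with a toy 1-D model (entirely my own sketch, not Affinity's algorithm: a 3-tap average stands in for a blur filter, and pair-averaging stands in for a mipmap step). Filtering the low-res version does not give the same result as filtering at full resolution and then reducing — the two operations do not commute — which is exactly why the below-100% composite preview and the merged layer can disagree:

```python
# Toy 1-D demonstration that "filter the mipmap" (what a <100% live
# preview effectively shows) differs from "filter at full resolution,
# then reduce" (what merging bakes in). Signal values are made up.

def blur(sig):
    """Simple 3-tap average with clamped edges, standing in for a blur."""
    n = len(sig)
    return [(sig[max(i - 1, 0)] + sig[i] + sig[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def downsample(sig):
    """Halve resolution by averaging pairs (a 1-D box 'mipmap' step)."""
    return [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig) - 1, 2)]

signal = [0, 0, 0, 255, 255, 0, 0, 0]

preview_path = blur(downsample(signal))  # filter applied to the mipmap
merged_path = downsample(blur(signal))   # filter baked at full res, then reduced

print(preview_path)                 # [42.5, 85.0, 85.0, 42.5]
print(merged_path)                  # [0.0, 127.5, 127.5, 0.0]
print(preview_path == merged_path)  # False: the preview disagrees with the merge
```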

 

52 minutes ago, peanutLion said:

The problem I'm on about has more to do with something other than interpolation (if by interpolation we mean the manner in which AP decides how to show many image pixels in much fewer screen pixels).

It has everything to do with trying to display many image pixels in fewer screen pixels. One way or another, the image pixels have to be interpolated & resampled to be rendered to the screen -- there is literally no other way to display the image at anything other than 100% zoom. At lower zoom levels, the image must be displayed at a lower resolution, whether by using pre-calculated mipmaps or any other method.

1 hour ago, peanutLion said:

... MEB helpfully goes on:
" … If you create a merged copy you will get more accurate preview of the filter

Note that as he said, you only get a more accurate preview, & that the only way to get a completely accurate one is at 100% zoom. In that same topic, he also said: 

Quote

Since some filters also display specific rendering issues when not displayed at 1:1 px (which includes zoom levels above 100%) I recommend to always check everything at 100% zoom.

So the bottom line is there is literally only one way to get a 100% totally accurate preview, & that is at 100% zoom level -- period, end of statement. There is nothing Serif or any other app maker can do to work around that limitation. At best, Serif could forgo the use of mipmaps & use some more computationally intensive method to build the preview on the fly each time the zoom level is changed, but it still would not be 100% accurate. In complex multi-layer documents, it would also take considerably longer to render a significantly more accurate preview. So, until that process completes, at best all you would see is a blocky, highly pixelated image, similar to how progressive JPEG images first appear on web pages over slow internet connections before the blocks are gradually replaced with less pixelated versions.

If you consider how much that would slow down typical workflows, I think you may better understand why they decided to use those less accurate, low resolution mipmaps.


