
"Transform selection" option, please!


Recommended Posts

  • Staff
On 7/9/2022 at 7:46 PM, Stokestack said:

The entire underlying image is degraded (and repeatedly) when you merge a small transformed patch down onto it.

If you have a new bug to report, please create a new thread for this and our team will be sure to provide you with any information/support you need.

Make sure to include examples with your report if you can, to help our team investigate any issue you may have.

On 7/9/2022 at 9:15 PM, Stokestack said:

And again, Photoshop doesn't have this problem. So it need not exist.

The Affinity apps are not the same as Adobe apps. They are designed and coded in completely different ways, so this comparison is not 'apples to apples'.

Please Note: I am now out of the office until Tuesday 2nd April on annual leave.

If you require urgent assistance, please create a new thread and a member of our team will be sure to assist asap.

Many thanks :)


10 hours ago, Dan C said:

The Affinity apps are not the same as Adobe apps. They are designed and coded in completely different ways, so this comparison is not 'apples to apples'.

Thanks for the reply, Dan. I have reported this and it has been discussed in at least one (and, I think several) lengthy threads. Here's one: 

My comparison is exactly "apples to apples." The function I'm talking about is a fundamental operation in both Photo and Photoshop, and in any image-compositing application. Unnecessarily degrading the user's work is not "coding it differently." It's coding it wrong.


  • Staff
16 hours ago, Stokestack said:

I have reported this and it has been discussed in at least one (and, I think several) lengthy threads.

The answer has not changed since your post was made, where both Affinity staff and users explained the behaviour and confirmed this is by design:

On 4/26/2021 at 10:16 AM, Gabe said:

What you see is somewhat expected. First, your pixel layers are not using integer values. So, when you merge down, things will shift slightly and will be blurred, because the merged layer will now be pixel aligned. Second, your images have different DPI values. If you want to have "clean" pixel layers when you merge down, you would need to make sure you have the same DPI for what you merge down, and that your image position/size is using whole pixels.
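The point about non-integer positions can be sketched numerically. The following is a minimal 1-D model (hypothetical code, not Affinity internals): a layer sitting at a fractional pixel offset must be interpolated to land on the pixel grid, and that interpolation softens a hard edge.

```python
# A minimal 1-D sketch (hypothetical code, not Affinity internals) of why a
# fractional pixel offset blurs on merge: baking the layer onto the grid
# requires interpolation, which invents in-between values.
import math

def shift_linear(pixels, offset):
    """Resample a 1-D row of pixel values shifted by `offset` pixels,
    using linear interpolation (the simplest resampling filter)."""
    out = []
    for i in range(len(pixels)):
        x = i - offset                       # source position for output pixel i
        lo = math.floor(x)                   # left neighbour in the source
        hi = min(lo + 1, len(pixels) - 1)    # right neighbour, clamped
        if lo < 0:
            out.append(pixels[0])            # clamp at the left edge
        else:
            t = x - lo
            out.append(pixels[lo] * (1 - t) + pixels[hi] * t)
    return out

edge = [0.0, 0.0, 1.0, 1.0]       # a hard dark/light edge
print(shift_linear(edge, 0.0))    # integer offset: [0.0, 0.0, 1.0, 1.0] -- edge stays hard
print(shift_linear(edge, 0.5))    # half-pixel offset: [0.0, 0.0, 0.5, 1.0] -- edge smeared
```

An integer offset reproduces the pixels exactly; any fractional offset forces interpolation, which is the "shift slightly and be blurred" behaviour described above.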

On 5/4/2021 at 1:34 PM, walt.farrell said:

You have two layers in it. One with a DPI of 165 and the other with 166. Merging down is going to cause issues due to the mismatched DPI even if the alignment to the pixel grid is correct. Further, the document DPI is 144, which probably also contributes to your issue.

On 9/10/2021 at 12:13 PM, NotMyFault said:

The better option is to Rasterize & Trim the lower layer before merge down.

(...)

And based on what was said above, it now absolutely makes sense that Photo works correctly, and this is no bug:

If you merge a sharp 300 dpi layer into a layer with a different/stretched DPI, it technically and unavoidably must become blurry. This blurriness then gets amplified by being stretched again for rendering at the document DPI setting.

From the same thread:

On 5/24/2022 at 2:00 AM, Stokestack said:

We shouldn't even be discussing this defect a year later. It should be fixed.

As confirmed there, and again here, this is not a bug in the Affinity apps. The Affinity apps allow for pixel precision, and for documents composited from layers with different DPIs, which may cause unexpected results if you're unsure how the Affinity app combines these layers; but that does not mean the application is incorrect, or cannot achieve what you're looking for.

17 hours ago, Stokestack said:

My comparison is exactly "apples to apples."

As confirmed in the below Adobe support thread, Photoshop sets a document DPI value that all layers adhere to, unlike Affinity Photo, which allows for additional control over each specific layer, hence they cannot be compared directly in this matter.

https://community.adobe.com/t5/photoshop-ecosystem-discussions/how-to-check-dpi-on-each-layer-or-object/m-p/11490020#M470869



I don't care how many times you float the same excuses. I am not the only one to encounter this problem.

5 hours ago, Dan C said:

First, your pixel layers are not using integer values

Why not? I would never set pixel values fractionally. So that's a problem right there. WHY are they non-integer? Why keep repeating the excuse without saying how it could possibly arise? Nor did I set the document DPI.

Quote

which may cause unexpected results if you're unsure how the Affinity app combines these layers

I'm the end user. Look at how long that thread was, full of speculation because clearly quite a few people are "unsure" of why layers are getting degraded. This excuse is a variation on the classic, "you're confused." No. There is no "confusion" here, because this entire mess is opaque.

Quote

If you merge a sharp 300 dpi layer into a layer with a different/stretched DPI, it technically and unavoidably must become blurry. This blurriness then gets amplified by being stretched again for rendering at the document DPI setting.

Which is not what I described doing. Again, if you have a layer that you're merging down onto an underlying layer, the top layer should be resampled to match the underlying one and blended. Why would you also resample the entire underlying layer? Again, this has never been explained. Why would an entire 4K image, for example, be blurred because you merged a 100x100-pixel patch onto it somewhere?

If this isn't a defect, then explain this: Why would a user want his image blurred over and over? You're advertising some kind of benefit here:

Quote

Affinity Photo that allows for the additional control over each specific layer...

 Really? Like what? What is the use case where all this implied "control" outweighs the likelihood of degrading your work? And, whatever it is, are you seriously going to argue that it's more common than taking an image fragment, transforming it, and compositing it into a larger image?


  • Staff
16 hours ago, Stokestack said:

I don't care how many times you float the same excuses.

I'm sorry to hear you feel this way, but I can only provide the same information I already have. There are no excuses being made here, and multiple staff & forum regulars have also confirmed this.

16 hours ago, Stokestack said:

Why not? I would never set pixel values fractionally. So that's a problem right there. WHY are they non-integer?

If you have 'Move By Whole Pixels' active and have placed or pasted a layer at a part-pixel value, then it will remain in this 'off-pixel' location due to the aforementioned snapping option.

I'd recommend ensuring Snapping is enabled, with Force Pixel Alignment active, and Move By Whole Pixels disabled to avoid this.

You can confirm this by checking the X & Y locations in the Transform Studio with your layer selected.

16 hours ago, Stokestack said:

This excuse is a variation on the classic, "you're confused." No. There is no "confusion" here, because this entire mess is opaque.

There should be no confusion, because Affinity staff members, including myself, have confirmed this is by design and provided the reasoning, alongside resolutions, in both this thread and the previous thread linked. My apologies, but I'm not sure what else I can say here to change your point of view if the explanation above has not done so.

16 hours ago, Stokestack said:

Which is not what I described doing

The thread you linked to confirmed that you were working in a document at 144DPI, with 2 pixel layers at 165DPI, which is exactly as NotMyFault has described.

16 hours ago, Stokestack said:

Again, if you have a layer that you're merging down onto an underlying layer, the top layer should be resampled to match the underlying one and blended

And as you've been informed, using Rasterise & Trim on the layer before merging down will resample that one layer only, to match the document's DPI, meaning this won't occur when merging down.

16 hours ago, Stokestack said:

Why would you also resample the entire underlying layer?

When merging layers that have different DPIs, both layers have to be resampled. There is no avoiding this if the DPI does not match, hence the above suggestion, which ensures any placed/pasted content matches the document's DPI before merging and avoids both layers being resampled.
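The rasterise-first workflow can be sketched in a few lines. This is a hypothetical 1-D model with illustrative names, not Affinity's API: resample only the top layer to the bottom layer's DPI, then composite, leaving the bottom layer's pixels untouched.

```python
# A hypothetical 1-D sketch of "rasterise first, then merge" -- illustrative
# names only, not Affinity's API. Only the top layer is resampled to the
# bottom layer's DPI, so the bottom layer's pixels survive the merge intact.

def resample_nearest(pixels, src_dpi, dst_dpi):
    """Nearest-neighbour resample of a 1-D strip from src_dpi to dst_dpi."""
    n_out = round(len(pixels) * dst_dpi / src_dpi)
    return [pixels[min(int(i * src_dpi / dst_dpi), len(pixels) - 1)]
            for i in range(n_out)]

def merge_down(top, top_dpi, bottom, bottom_dpi, offset=0):
    """Rasterize `top` to `bottom`'s DPI, then paste it over `bottom`.
    Bottom pixels outside the patch are returned unchanged."""
    patch = resample_nearest(top, top_dpi, bottom_dpi)   # only one layer resampled
    merged = list(bottom)
    for i, value in enumerate(patch):
        if 0 <= offset + i < len(merged):
            merged[offset + i] = value                   # opaque paste; alpha omitted
    return merged

bottom = [1, 2, 3, 4, 5, 6]    # 6 px strip at the document DPI
top = [9, 9]                   # 2 px patch at double the DPI -> 1 document px
print(merge_down(top, 600, bottom, 300, offset=2))   # [1, 2, 9, 4, 5, 6]
```

Only position 2 changes; the wholesale blur complained about in this thread would come from resampling `bottom` as well, which this ordering avoids.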

16 hours ago, Stokestack said:

Why would an entire 4K image, for example, be blurred because you merged a 100x100-pixel patch onto it somewhere?

It wouldn't be, if the DPI of both layers match before merging.

16 hours ago, Stokestack said:

Why would a user want his image blurred over and over? You're advertising some kind of benefit here:

Quote

Affinity Photo that allows for the additional control over each specific layer...

 Really? Like what? What is the use case where all this implied "control" outweighs the likelihood of degrading your work?

As mentioned in my last post, other image editors will force all layers to be the same DPI as the document; however, Affinity allows users control over each layer, independent of the document DPI.

The Affinity apps are designed around 'non-destructive' editing, meaning most users will take advantage of the above and won't ever 'merge' their layers, instead leaving each as its own object in the Layers Studio, with its own DPI value.

There would be no 'degrading' of the layer if you simply rasterise before merging the 2 layers. You might not like that behaviour and you're more than entitled to this opinion, but that does not make this a bug within the Affinity apps.

16 hours ago, Stokestack said:

whatever it is, are you seriously going to argue that it's more common than taking an image fragment, transforming it, and compositing it into a larger image?

I'm not here to argue anything; that isn't my job. I'm providing the facts regarding the Affinity apps, and as mentioned previously, I'm not sure what else you need to hear to change your opinion regarding how the Affinity apps are designed.

The bug you are reporting is not a bug, and the behaviour is not planned to be changed at this time. Information has been provided in abundance to reaffirm this, with resolutions for each concern raised.

Should you have any new questions / issues outside of this, please create a new thread.



4 hours ago, Dan C said:

... other image editors will force all layers to be the same DPI as the document, however Affinity allows users control over each layer, ...

This. I didn't purchase Photo for this feature but boy oh boy I do appreciate it.

Mac Pro (Late 2013) Mac OS 12.7.4 
Affinity Designer 2.4.0 | Affinity Photo 2.4.0 | Affinity Publisher 2.4.0 | Beta versions as they appear.

I have never mastered color management, period, so I cannot help with that.


Quote

The thread you linked to confirmed that you were working in a document at 144DPI, with 2 pixel layers at 165DPI

So what? You said:

Quote

If you merge a sharp 300 dpi layer into a layer with a different/stretched DPI, it technically and unavoidably must become blurry

And again, that is not what's going on. The layer being merged down isn't becoming blurry; the underlying one is. Furthermore, you again failed to answer how layers acquire apparently arbitrary DPIs. Somehow, with all source material coming from screen grabs at the same resolution, we have three different DPIs, one of which differs by ONE dot per inch. Nor has anyone explained why resampling happens over and over, blurring the underlying layer more and more with each "merge down."

Quote

When merging layers that have different DPIs, both layers have to be resampled

Why? Why would you not simply resample the lower-DPI element to match that of the higher-DPI element and then blend them?

Quote

There would be no 'degrading' of the layer if you simply rasterise before merging the 2 layers

Then why doesn't Photo simply DO THAT? Once the layers are merged, you've lost all of this highly-touted "control" over the now-merged fragment anyway. What exactly is the benefit of NOT doing this upon "merge down?"

Quote

Affinity allows users control over each layer

And again, you've yet to define exactly what this "control" entails or exactly what use case it serves or what benefit it provides to the user.


33 minutes ago, Stokestack said:

Furthermore, you again failed to answer how layers acquire apparently arbitrary DPI

If you use the Move Tool and change the size of the layer (or selection), the DPI will change; the number of pixels will stay constant. This function is essential to allow non-destructive editing. If Affinity rasterized the layer after each edit step, it would unavoidably degrade the quality by rasterizing multiple times, and lose the ability to re-edit the size later without quality loss.
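This point can be checked numerically, assuming the usual definition of effective resolution (an assumption here, not a documented Affinity formula): effective DPI = stored pixel width divided by the placed width in inches.

```python
# A quick numeric check: resizing a layer changes its effective DPI, not its
# pixel count. Formula assumed (standard DTP convention, not Affinity docs):
#   effective_dpi = stored pixel width / placed width in inches

def effective_dpi(pixel_width, placed_inches):
    """Effective resolution of a layer placed at a given physical width."""
    return pixel_width / placed_inches

print(effective_dpi(3000, 10.0))   # 300.0 -- a 3000 px layer placed 10 in wide
print(effective_dpi(3000, 20.0))   # 150.0 -- same 3000 pixels stretched to 20 in
```

The pixel count (3000) never changes; only the DPI bookkeeping does, which is what keeps the resize re-editable without quality loss.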

Not recognizing this fundamental difference (from PS and other apps) as a feature, and instead misinterpreting it as a bug, is the root cause of this heated discussion.

 

Mac mini M1 A2348 | Windows 10 - AMD Ryzen 9 5900x - 32 GB RAM - Nvidia GTX 1080

LG34WK950U-W, calibrated to DCI-P3 with LG Calibration Studio / Spider 5

iPad Air Gen 5 (2022) A2589

Special interest into procedural texture filter, edit alpha channel, RGB/16 and RGB/32 color formats, stacking, finding root causes for misbehaving files, finding creative solutions for unsolvable tasks, finding bugs in Apps.

 


35 minutes ago, Stokestack said:

And again, that is not what's going on. The layer being merged down isn't becoming blurry; the underlying one is.

When you merge down, these two layers are combined into one indistinguishable layer. It makes no sense to talk about a no-longer-existing layer which has been consumed and integrated into the "underlying one".


 


Quote

If Affinity rasterized the layer after each edit step, it would degrade the quality

Nobody said it should rasterize after every step, so I don't know why you'd make that comment.

The proposed workaround to the blurring was to rasterize and then merge down. The question is why doesn't Photo do that by itself, because as you agree:

Quote

It makes no sense to talk about a no-longer-existing layer which has been consumed and integrated into the "underlying one"

The layer being merged down is no longer its own entity and therefore the user loses the much-touted "separate control" over it anyway. So why isn't it automatically rasterized in the process of being merged? Why does Affinity choose to degrade the quality of the composition instead of taking the steps that will allegedly avoid doing so?


@Stokestack

I agree about fractions of pixels being a PITA.  I would gladly welcome a setting where absolutely everything is automatically rounded to whole pixels – I despise fractions of pixels.

Anyway, regarding the degradation of the image quality when merging down (i.e. making the image blurry): I haven't been following the issue, however, has anyone given an explanation why the bottom layer moves slightly when merging the top layer down?

In the below test file, when the top layer is merged down, the bottom layer expands slightly – which will obviously cause blurring.  I have no idea why it would be considered normal to do this when both layers are already perfectly aligned to the pixel grid.

Upper Layer Before Merge Down: [screenshot]

Lower Layer Before Merge Down: [screenshot]

Merged Down: [screenshot]

Before Merge Down (Close-up): [screenshot]

After Merge Down (Close-up): [screenshot]

Test File: Test.afphoto

 


1 hour ago, - S - said:

In the below test file, when the top layer is merged down, the bottom layer expands slightly – which will obviously cause blurring.  I have no idea why it would be considered normal to do this when both layers are already perfectly aligned to the pixel grid.

Your document is 300 dpi, and the black square is also 300 dpi (since I presume it was created inside that document), while that Notepad screenshot is 360 dpi. If you merge two layers with different DPIs, there will be resampling, hence the slight pixel offset.

If you want to avoid this, all you have to do is to rasterize the Notepad screenshot before merging with the black square.


1 hour ago, - S - said:

Anyway, regarding the degradation of the image quality when merging down (i.e. making the image blurry): I haven't been following the issue, however, has anyone given an explanation why the bottom layer moves slightly when merging the top layer down?

It is caused by the DPI difference (300 vs. 360). Consequently, both the starting position and the size/end position of the upper layer become (potentially) non-integer, leading to up to one additional pixel on each side, and the rendered pixels can shift their position by up to 1 px.

Take the example with 2 (top) or 3 (bottom) layer DPI, viewed in a document with 6 dpi and 6 px width. Color the lower layer pure green, and the upper layer one red and one blue pixel. Now merge down.

Try again with layer order switched.
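The toy example above can be worked through with a tiny nearest-neighbour renderer (illustrative only, not Affinity internals). At 2 dpi and 3 dpi in a 6 dpi document, each layer pixel covers a whole number of document pixels, so the layers only collide once they must share one grid.

```python
# Working through the toy example: a 2 dpi layer and a 3 dpi layer rendered
# into a 6 dpi document (illustrative sketch, not Affinity internals).

def render(pixels, layer_dpi, doc_dpi):
    """Expand layer pixels onto the document grid (nearest neighbour).
    Assumes doc_dpi is an integer multiple of layer_dpi."""
    scale = doc_dpi // layer_dpi             # document pixels per layer pixel
    return [p for p in pixels for _ in range(scale)]

top = ["R", "B"]              # 2 dpi: one red, one blue pixel
bottom = ["G", "G", "G"]      # 3 dpi: pure green
print(render(top, 2, 6))      # ['R', 'R', 'R', 'B', 'B', 'B']
print(render(bottom, 3, 6))   # ['G', 'G', 'G', 'G', 'G', 'G']
```

With a non-integer ratio (e.g. 300 vs. 360 dpi), the expansion factor is fractional, interpolation is required, and that is where the blur enters.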


 


Again, the regurgitation of the DPI excuse without explaining why BOTH layers would be resampled.

Quote

If you want to avoid this, all you have to do is to rasterize the Notepad screenshot before merging with the black square.

Which there's no way for the user to know. So why doesn't the software do that as a part of the "merge down?"


1 hour ago, Stokestack said:

Which there's no way for the user to know.

The user can either spend a few seconds to understand and acknowledge the way AP deals with placed graphics and what DPI and resampling are, or keep bitching about how AP is not Photoshop. It's their choice. That's the difference between a professional and an amateur.

The fact that AP preserves the original image data of placed graphics is simply awesome. I do image compositing for print and it's such a relief that I can freely resize imported images while maintaining their full resolution. AP continuously shows me the effective resolution of an imported graphic, and as long as that's above 150, 200 or 300 dpi, I know that the final, flattened result will look good.

1 hour ago, Stokestack said:

So why doesn't the software do that as a part of the "merge down?"

For pixel-precision work, I really don't want the software to do any resampling for me. I do it manually and only if/when needed, because resampling always destroys pixels.

The same with the merging. Merging is destructive. I only merge layers as a last resort, when I really can't do what I want with groups.


7 hours ago, tudor said:

The user can either spend a few seconds to understand and acknowledge the way AP deals with placed graphics and what DPI and resampling are

Hahah, they're just supposed to wait, for no apparent reason, for "understanding" to magically descend upon them? And that's assuming they realize that their work was degraded in the first place (which, as already pointed out, may depend on the type of imagery it is). Knowing what DPI and resampling are doesn't make any difference, except in being able to point out how defective Photo's logic is here.

7 hours ago, tudor said:

The fact that AP preserves the original image data of placed graphics is simply awesome. I do image compositing for print and it's such a relief that I can freely resize imported images while maintaining their full resolution.

Nobody is complaining about that. Of course that's desirable; I never said anything to the contrary. I'm complaining that Photo doesn't do that when you are satisfied with the imported imagery's placement and integrate it into what's underneath it. It decimates the underlying image data, and nobody is denying that. Looks like we agree here.

7 hours ago, tudor said:
8 hours ago, Stokestack said:

So why doesn't the software do that as a part of the "merge down?"

For pixel-precision work, I really don't want the software to do any resampling for me.

Me neither, and the only way to avoid resampling both layers, according to the almighty authorities in this thread, is to rasterize the overlying layer before the merge down. So you too must wonder why Photo shouldn't do this as part of the merge-down operation.


  • Staff
6 minutes ago, Stokestack said:

why Photo shouldn't do this as part of the merge-down operation

Again, this allows the user the choice/control. Although undesirable in your project, a user may wish to resample both layers on the merge operation. Equally, if the DPIs of the layers match, the rasterisation of the layer would not be necessary and therefore doesn't occur automatically.

If you wish to request a new feature that would combine Rasterise & Merge Down into one operation, you can do so here -

https://forum.affinity.serif.com/index.php?/forum/56-feedback-for-the-affinity-suite-of-products/



26 minutes ago, Dan C said:

If you wish to request a new feature that would combine Rasterise & Merge Down into one operation, you can do so here -

https://forum.affinity.serif.com/index.php?/forum/56-feedback-for-the-affinity-suite-of-products/

No need to file a new one, @Stokestack. Just add your vote to the existing one.


 


14 hours ago, Stokestack said:

Which there's no way for the user to know. So why doesn't the software do that as a part of the "merge down?"

And Affinity Photo already has a feature precisely for this type of thing [View > Assistant Manager], where it could be added.

But Serif will continue to ignore it – like they always do. The nerds will continue to gaslight and make excuses – like they always do. And people will have to keep paying through the nose for Photoshop on their main machine, because nothing comes close to it and it just works.

Serif really need to get out of this nerd-bubble and do some real-world usability testing. How they think the majority of users use their software is not how they actually use it. Nobody wants blurry layers, or hairline seams at the edges of their vectors, or blend modes that don't work properly, or to wrestle with the countless other idiosyncrasies of the software that keep trying to trip them up; they just want to get on with what they are doing. A constant stream of "workarounds" impresses nobody.

 


I really wanted to support Serif for developing new art software (especially Designer, because Illustrator is dead), and I hate the software-rental rip-off trend. But their designers' detachment from reality, not to mention ignorance of good UI standards and design that already existed, has led to bizarre functional gaps and gaffes. I mean... this whole thread started with a simple request that should never have been necessary: Let us transform a selection. I mean... WTF? How does a NEW photo-editing/compositing application make it out the door without that in the last 20 years? The FREE software I got with my Epson scanner in the '90s had that (and still does).

You can point out defects and make sure you suggest solutions that don't impede anyone else's workflow, and serve the most-common use cases and reflect industry standards. You can mock up a complete UI revision and grant free use of it, which again impedes no one's workflow. Yet the same apologists will endlessly attack it and defend dysfunction, like that which gradually degrades users' work through innocuous operations that are harmless in every similar piece of software (and should be in this one).

I sincerely appreciate the fact that an employee or representative engaged with us in the thread. But I don't appreciate the insulting excuses and technically incorrect statements (peppered with citations of speculation from other customers as some kind of "proof"), and the condescending "You simply [undertake a number of steps that there was no reason to guess were necessary and are not indicated as necessary in the UI and aren't in competing software], so it's not a bug."

Several people have independently demonstrated the blurring defect reported here, and nobody has been able to show why they should have guessed that their work was being ruined. Dan claimed he's sorry that I don't feel this software can be trusted, but that didn't hold up in further conversation. If Serif cared about users' work, it would either prevent it from being degraded, with the simple step we've suggested here; or at the very least WARN THAT IT'S GOING TO HAPPEN. There is no excuse for not doing at least one of those things.

And that's that. It's sad to see that these applications are already moribund, hobbled by the same design defects, functional defects, and missing functionality that have dragged them down since their inception.

Over and out.

