
Posted

Trying to wrap my head around the delete function in Affinity. Why oh why does it delete an entire layer? Surely the most obvious behaviour is for any selected pixels to be deleted. There appears to be no function to select pixels and clear them. Crazy.

How does one create transparent backgrounds around objects if you can't delete the surrounding pixels?

Posted

@influxx If you select pixels on an "Image Layer" and hit Delete, it deletes the entire layer. Why? Because "Image Layers" are considered IMMUTABLE. Yes, they are made up of pixels, but it is a special type of layer that is designed to be left intact (like a smart object in PS).

You need a "Pixel Layer", which CAN be changed or edited. You can rasterize an Image Layer to convert it to a Pixel Layer (right click on the Image Layer in the Layers panel, then select Rasterize). Any true pixel layer is "mutable", so if you select some pixels and hit delete, those selected pixels will be deleted. 

It does the basics, but not all things work the same. Part of the learning process. 

2024 MacBook Pro M4 Max, 48GB, 1TB SSD, Sequoia OS, Affinity Photo/Designer/Publisher v1 & v2, Adobe CS6 Extended, LightRoom v6, Blender, InkScape, Dell 30" Monitor, Canon PRO-100 Printer, i1 Spectrophotometer, i1Publish, Wacom Intuos 4 PTK-640 graphics tablet

Posted

Thank you thank you Ldina

Yes I'm going thru the learning curve. Slowly working thru tutes and docs.

So an Image Layer is like a Smart Object and a Pixel Layer is like a regular layer. Got it. That makes sense now. Totally understand. This was a great help.

Posted
2 minutes ago, influxx said:

Yes I'm going thru the learning curve. Slowly working thru tutes and docs.

So an Image Layer is like a Smart Object and a Pixel Layer is like a regular layer. Got it. That makes sense now. Totally understand. This was a great help.

You're welcome. Image vs Pixel Layers was one of the "gotchas" when I first started using Affinity, so it's important to know. Another biggie (for me) was the handling of Alpha channels, masking, etc, which also have some major differences from PS. Learning to manage layer positioning, (masking, clipping, grouping, etc) is also important, so spend some time learning about that. Many things work the same way, but some are very different. Glad that helped. 


Posted (edited)

I don't use Photoshop, but like you I struggled for a while, until I found that an image layer is not a pixel layer. 

[Old answer: Obsolete] When in the latter, only what you select is erased when you press Delete. But if you are in an image layer, the whole layer will be deleted. [/Old answer]

I soon realized that keeping an eye on the Layers panel was essential, much as an altimeter is for an aviator… 

17 minutes ago, Ldina said:

Learning to manage layer positioning, (masking, clipping, grouping, etc) is also important

I found this cheat page useful:

P.S. An important difference between a Pixel layer and an Image layer is that, as a linked or embedded file, an Image layer appears in the Resource Manager, where you can replace the file and see various information about it.

Edited by Oufti

Affinity Suite 2.5 – Monterey 12.7.5 – MacBookPro 14" 2021 M1 Pro 16Go/1To

I apologise for any approximations in my English. It is not my mother tongue.

Posted
1 hour ago, Oufti said:

I soon realized that keeping an eye on the Layers panel was essential, as could be an altimeter for an aviator… 

Personally, I long ago set the tooltip delay to around 1 to 2 seconds when I was first learning the app's ins & outs, & I have gotten so used to them popping up quickly that I never changed it back to the considerably longer default delay time. (I suggested long ago that the default should be short to help new users but nothing came of it.)

Anyway, even when the icons are small & hard to distinguish from one another, the short delay can really help.

All 3 1.10.8, & all 3 V2.6 Mac apps; 2020 iMac 27"; 3.8GHz i7, Radeon Pro 5700, 32GB RAM; macOS 10.15.7
All 3 V2 apps for iPad; 6th Generation iPad 32 GB; Apple Pencil; iPadOS 15.7

Posted
4 hours ago, Ldina said:

You're welcome. Image vs Pixel Layers was one of the "gotchas" when I first started using Affinity, so it's important to know. Another biggie (for me) was the handling of Alpha channels, masking, etc, which also have some major differences from PS. Learning to manage layer positioning, (masking, clipping, grouping, etc) is also important, so spend some time learning about that. Many things work the same way, but some are very different. Glad that helped. 

Indeed. I've had to figure out the masking functionality here. Invested too many years in one system. I'm too old to learn a new UI lol :)

Posted
2 hours ago, R C-R said:

Personally, I long ago set the tooltip delay to around 1 to 2 seconds

Wow, didn't even know you could do that. To be fair, I haven't even begun to customize the prefs yet, but that is a cool option.

 

Cheers

  • Staff
Posted

Hi @influxx, we've made some changes to the way Image layers are handled in the 2.6 beta which will hopefully improve this initial experience. The goal is to keep the "immutable" (non-destructive) aspect, so a mask will now be added to an Image layer if you delete from it with an active selection. If you're not bothered about the layers remaining non-destructive, this behaviour can be toggled to destructive in the assistant settings (the little robot icon on the top toolbar), and then it will behave exactly like you'd expect if coming from other image editing software. You can also duplicate with an active selection and it will create a new layer with just the selected portion.

As you work with Image layers in the current 2.5 release, you may find other scenarios where they need to be rasterised manually. These have also been addressed in 2.6, so you'll be able to enter the different personas (Develop, Tone Mapping, Liquify) and apply plugins to them, although the latter has just gone in so won't be in a public build yet.

To elaborate more on Image layers: think of them as "containers" for the bitmap data, which could be compressed and in a different format to the document you're working in. For example, you could have placed a JPEG which is 8-bit sRGB, but you're working in a document which is 16-bit Adobe RGB. Rather than convert the compressed image data to 16-bit and store it in the document (which will inflate the document file size needlessly), we just store the original compressed data, then decode it at runtime to show it on screen and composite with it. They can also be an external/linked resource.
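The container idea described above can be sketched in a few lines of Python. This is a purely illustrative sketch; the class and method names are hypothetical, not Affinity's actual internals. The point is only the distinction: the Image layer keeps the original encoded bytes verbatim, while rasterising trades that container for mutable pixels in the document's working format.

```python
# Purely illustrative sketch; these classes are hypothetical,
# not Affinity's actual internals.

class PixelLayer:
    """Mutable pixels stored in the document's working format."""
    def __init__(self, pixels, bit_depth, color_space):
        self.pixels = pixels
        self.bit_depth = bit_depth
        self.color_space = color_space

    def delete_selection(self, selected_indices):
        # Deleting with an active selection clears only the selected pixels.
        for i in selected_indices:
            self.pixels[i] = None  # None stands in for "fully transparent"

class ImageLayer:
    """Container: keeps the original encoded bytes verbatim (immutable)."""
    def __init__(self, encoded_bytes, bit_depth, color_space):
        self._encoded = encoded_bytes   # e.g. original JPEG data, never modified
        self.bit_depth = bit_depth      # source format, e.g. 8-bit
        self.color_space = color_space  # source profile, e.g. "sRGB"

    def _decode(self):
        # Stand-in for the real JPEG/PNG decoding done at runtime for display.
        return list(self._encoded)

    def rasterize(self, doc_bit_depth, doc_color_space):
        # Destructive conversion: produces a mutable PixelLayer in the
        # document's working format (color conversion elided here).
        return PixelLayer(self._decode(), doc_bit_depth, doc_color_space)

# A placed 8-bit sRGB image inside a 16-bit "Adobe RGB" document:
img = ImageLayer(b"\xff\xd8\x00\x10", bit_depth=8, color_space="sRGB")
px = img.rasterize(doc_bit_depth=16, doc_color_space="Adobe RGB")
px.delete_selection([0, 1])  # only the selected pixels are cleared
```

Note that the original container (`img`) is untouched by edits to the rasterised copy, which is the whole point of keeping the two layer types separate.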

This does mean that Image layers are treated differently—immutable objects as opposed to mutable. It has been one of the long-standing differences that causes a lot of initial confusion, so we're addressing it for 2.6 to make the initial experience less frustrating. Hope that helps.

@JamesR_Affinity for Affinity resources and more
Official Affinity Photo tutorials

Posted
34 minutes ago, James Ritson said:

and apply plugins to them, although the latter has just gone in so won't be in a public build yet.

do you mean that plugins/scripting is coming in the next cycle? That would be awesome news!😁

New hardware

dell inspiron 3030 i5 14400/16GB DDR5/UHD 730 graphics

Acer KB202 27in 1080p monitor

Affinity Photo 1.10.6

Affinity photo 2 2.5.3 Affinity Designer 2 2.5.3 Affinity Publisher 2 2.5.3 on Windows 11 Pro version 24H2

Beta builds as they come out.

canon 80d| sigma 18-200mm F3.5-6.3 DC MACRO OS HSM | Tamron SP AF 28-75mm f/2.8 XR Di LD | Canon EF-S 10-18mm f/4.5-5.6 IS STM Autofocus APS-C Lens, Black

 

Posted
5 hours ago, James Ritson said:

The goal is to keep the "immutable" (non-destructive) aspect

It's a shame that Serif didn't listen to the requests for a specific "raster" layer that would convert the Image container into a layer of pixels usable for common pixel manipulations and operations. This would preserve the original, but allow manipulation of its contents.

Affinity Store (MSI/EXE): Affinity Suite (ADe, APh, APu) 2.5.7.2948 (Retail)
Dell OptiPlex 7060, i5-8500 3.00 GHz, 16 GB, Intel UHD Graphics 630, Dell P2417H 1920 x 1080, Windows 11 Pro, Version 24H2, Build 26100.2605.
Dell Latitude E5570, i5-6440HQ 2.60 GHz, 8 GB, Intel HD Graphics 530, 1920 x 1080, Windows 11 Pro, Version 24H2, Build 26100.2605.
Intel NUC5PGYH, Pentium N3700 2.40 GHz, 8 GB, Intel HD Graphics, EIZO EV2456 1920 x 1200, Windows 10 Pro, Version 21H1, Build 19043.2130.

Posted
1 hour ago, Pšenda said:

It's a shame that Serif didn't listen to the requests for a specific "raster" layer that would convert the Image container into a layer of pixels usable for common pixel manipulations and operations. This would preserve the original, but allow manipulation of its contents.

I do not understand what you mean. Isn't that what the Rasterize option does currently?


Posted
2 hours ago, R C-R said:

Isn't that what the Rasterize option does currently?

No, because the Rasterize command changes (destroys) the container Image layer by destructively converting it to a Pixel layer. The aforementioned "Rasterize layer" would rasterize only virtually, i.e. it would preserve the original Image layer, and provide the rasterization output for pixel manipulation.
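A rough illustration of what such a "virtual" rasterize could look like (a hypothetical sketch, not an existing Affinity feature or API): pixel edits are stored as an overlay on the rendered output while the source stays untouched.

```python
# Hypothetical sketch of a non-destructive "virtual rasterize";
# not an existing Affinity feature or API.

class VirtualRasterLayer:
    """Keeps the source pixels intact; edits live in an overlay."""
    def __init__(self, source_pixels):
        self.source = list(source_pixels)  # stands in for the untouched Image layer
        self.edits = {}                    # index -> edited pixel value

    def set_pixel(self, i, value):
        self.edits[i] = value              # the source is never modified

    def render(self):
        # Rasterization output = source with edits composited on top.
        return [self.edits.get(i, v) for i, v in enumerate(self.source)]

layer = VirtualRasterLayer([10, 20, 30])
layer.set_pixel(1, 99)
result = layer.render()  # [10, 99, 30]; layer.source is still [10, 20, 30]
```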


Posted
10 hours ago, James Ritson said:

Hi @influxx, we've made some changes ... Hope that helps.

Hi James. Thanks much for the detailed explanation. The construct does make sense once one understands the nomenclature. Equating the Image Layer with the familiar Smart Object - a container as you suggest - makes everything fall into line in my little head. It's learning a whole new ecosystem after decades of muscle memory with another. All good. I just gotta remember to be patient and adapt.

Looking forward to the updates.

Cheers

Posted
48 minutes ago, Pšenda said:

No, because the Rasterize command changes (destroys) the container Image layer by destructively converting it to a Pixel layer. The aforementioned "Rasterize layer" would rasterize only virtually, i.e. it would preserve the original Image layer, and provide the rasterization output for pixel manipulation.

I do not understand how this virtually rasterized output would work while still using the Image layer's color space & resolution if either or both is different from that of the document.


Posted
26 minutes ago, R C-R said:

I do not understand how this virtually rasterized output would work while still using the Image layer's color space & resolution if either or both is different from that of the document.

 

1 hour ago, Pšenda said:

No, because the Rasterize command changes (destroys) the container Image layer by destructively converting it to a Pixel layer. The aforementioned "Rasterize layer" would rasterize only virtually, i.e. it would preserve the original Image layer, and provide the rasterization output for pixel manipulation.

The only image editor that I know of that actually manages this is PhotoLine.

PhotoLine's Image layers do not differentiate between arbitrary "raster" and "image" layers or "smart objects" (Photoshop). In PhotoLine an image imported/dragged into a layer is by default an editable pixel layer, yet it retains its original bit depth, resolution, image mode, transparency setting, colour management profile, and transformation independent of the file's output intent.

And the pixel data remains editable without affecting the original intent of that image layer.

So it is entirely possible to have such a workflow. It's just that 99% of creatives aren't familiar with this, or can't even imagine how it is possible, because (as far as I am aware) outside of node-based editing workflows, no other layer-based image editor allows for such a workflow. Which explains your confusion about how that would work, @R C-R.

Nevertheless it is possible and a bit of a game changer in one's workflow, albeit that the workflow is a tad different, of course.

Posted
2 hours ago, Medical Officer Bones said:

In PhotoLine an image imported/dragged into a layer is by default an editable pixel layer, yet it retains its original bit depth, resolution, image mode, transparency setting, colour management profile, and transformation independent of the file's output intent.

OK, but how does it handle edits that for instance would change its transparency or color profile ... or is it not possible to do that on a per layer basis?


Posted
5 hours ago, R C-R said:

I do not understand how this virtually rasterized output would work

Exactly the same as the existing Rasterize command.


Posted
12 minutes ago, Pšenda said:

Exactly the same as the existing Rasterize command.

The existing command converts the Image container layer (with its own color space & resolution) to that of the document. So how could its output be used for 'pixel manipulation' when applied to something in the document's other pixel layers that have the document's resolution or color space?


Posted
3 hours ago, R C-R said:

OK, but how does it handle edits that for instance would change its transparency or color profile ... or is it not possible to do that on a per layer basis?

Every property can be individually controlled/maintained per layer, and that includes transparency (on or off) and colour management.

Of course, some tools rely on transparency to work (such as the auto transparency tool), and as such transparency will be activated for a specific layer when those are used.

Posted
2 minutes ago, Medical Officer Bones said:

Every property can be individually controlled/maintained per layer, and that includes transparency (on or off) and colour management.

So what happens when the resolution (DPI/PPI) of the layer doesn't match that of the document & the 'placed' resolution is changed? Is everything relating to its properties updated dynamically when used elsewhere in the document or what?


Posted
5 minutes ago, R C-R said:

So what happens when the resolution (DPI/PPI) of the layer doesn't match that of the document & the 'placed' resolution is changed? Is everything relating to its properties updated dynamically when used elsewhere in the document or what?

The native resolution of a layer is always maintained unless the user decides to "fix" the resolution of the layer.

When the document is scaled up or down as a whole, bitmap layers merely scale up and down as well and the effective PPI is adjusted for all. This is similar to how Affinity and image layers work (or how DTP apps also handle it) when the document is scaled.

This is also one of the reasons why PhotoLine has a "pixel view" mode. When assets are placed at (for example) 600ppi or higher in a low-resolution 100ppi document, zooming in resolves more and more detail. This is not always wanted, so switching to pixel view mode renders the higher-PPI assets at the document's resolution intent. And it is of course possible to edit pixels in any mode, so the creative user is able to seemingly work at the lower resolution, and preview the pixel edits at that lower resolution, yet in the background the higher-resolution version is actually updated with high-resolution edits.

Bitmap layers also have their own canvas settings, btw. Also quite handy, in particular when assets require transparent padding.

Export and output control over layers is, by the way, excellent in PhotoLine. Just like in Figma and other modern design tools, each layer can be configured quickly to export to different formats and varying resolutions. This works well with those higher-resolution placed assets: export a low-res, mid-res, and high-res asset using different file formats and optimization settings in one action. And destructive pixel edits do not negatively affect the high-res asset, even in a low-resolution document.

Posted
2 hours ago, Medical Officer Bones said:

The native resolution of a layer is always maintained unless the user decided to "fix" the resolution of the layer.

OK, but what happens if for example a marquee selection is made & copied to the clipboard? Is it copied at the document or layer resolution? If it is then pasted into some other layer (either a regular pixel bitmap layer or a different layer with its own set of properties independent of those of the document) what properties does it use?


Posted
15 hours ago, R C-R said:

OK, but what happens if for example a marquee selection is made & copied to the clipboard? Is it copied at the document or layer resolution? If it is then pasted into some other layer (either a regular pixel bitmap layer or a different layer with its own set of properties independent of those of the document) what properties does it use?

When a selection is made and copied, it copies the current layer's properties and resolution (and dimensions!).

Let's say we are working in an 8 bit RGB 1920x1080 image and drag a 3000x2000 32bit EXR file, an  A4 300ppi 8bit CMYK, and a 1200ppi A3 1bit black and white into this document. We would need to scale some of these down to fit the lower resolution document.

All three layers retain their original data. When a selection is made of the 1bit A3 1200ppi layer and duplicated or cut into a new layer, that copy will maintain the original data. When a merged copy is made of that marquee selection, it will be pasted at the image file's intent, i.e. 8 bit RGB at 1920x1080 resolution (but the canvas size of that layer will be the size of the marquee selection).

The layer's original data also informs how merging of layers functions, as well as how the tools work. If we merge down the CMYK layer into the 1bit layer, the CMYK layer's data is converted to the 1bit layer's bit depth and resolution.
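That merge-down rule can be sketched in a few lines. These are hypothetical data structures, not PhotoLine's actual API; the conversion here uses the naive 50% threshold, which (as noted later in the thread) is the poor default when merging into a 1bit layer.

```python
# Hypothetical data structures, not PhotoLine's actual API; illustrates
# "merged data adopts the target layer's bit depth" with a naive
# 50% threshold conversion.

def to_1bit(value, threshold=128):
    # Naive threshold conversion to 1-bit black/white (no dithering).
    return 255 if value >= threshold else 0

def merge_down(top_pixels, bottom_layer):
    # Merge top_pixels into bottom_layer: the data is converted to the
    # target layer's bit depth first (the compositing itself is elided).
    if bottom_layer["bit_depth"] == 1:
        top_pixels = [to_1bit(v) for v in top_pixels]
    bottom_layer["pixels"] = top_pixels
    return bottom_layer

bw_layer = {"bit_depth": 1, "pixels": [0, 255, 0]}
merged = merge_down([30, 200, 140], bw_layer)  # e.g. gray values from a CMYK layer
# merged["pixels"] is now [0, 255, 255]: everything snapped to black or white
```

A threshold adjustment or a dithered colour reduction before the merge, as suggested below, amounts to replacing `to_1bit` with a better-controlled conversion.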

If we use the brush tool to draw in one of these layers we cannot use colour in the 1bit layer: only black or white with aliased brushes. And no transparency (although it is actually possible to assign transparency to 1bit monochrome files/layers in PhotoLine - which is very handy when editing these and we can work in layers in 1bit monochrome mode too!).

Obviously certain blend modes will not have much effect (if any at all) when working with 1bit or greyscale bitmap layers.

At any rate, it's a different way of working compared to other layer-based image editors, and as always there are advantages and disadvantages. One caveat is that while this adds a new level of control, it also adds more complexity.

For example, when we merge that CMYK layer into a 1bit layer, the default conversion will be a very bad one without any dithering. A simple 50% threshold is used. So to prevent this we could first add a threshold adjustment layer to the CMYK layer to at least control the conversion a little better. Or we could convert the CMYK layer to a dithered black and white version with the reduce colors filter and then merge it into the 1bit layer for a good conversion.

Or (since the bottom image layer in the layer stack controls the final rendering intent) we could switch the bottom image layer to CMYK mode, then turn off the layer's CMY channels in the layer properties, and merge down the K channel only into the 1bit layer, then switch the bottom layer back to RGB, and voila: only the black of the CMYK layer is merged down without affecting any other layer but the bottom one (which would be preferably an empty bitmap layer just to control final rendering intent).

This last move is utterly alien to any other image editor, by the way.
