Machine Learning: Object Selection Tool


Recommended Posts

  • Staff

Apps: Photo
Platforms: All

A Machine Learning Feature

As you will see, there are two new features that use machine learning models for automatic object and subject selection. These features are optional and require downloading the relevant models before they will work (instructions are included in this post). We want to make it clear that these are installed as pre-trained models and they do not use any of your own data for further training. Furthermore, these operations all work ‘on device’, meaning none of your data leaves your device at any time.
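To give a rough sense of what ‘on device’ inference with a pre-trained model means in practice, here is a minimal, purely illustrative sketch using ONNX Runtime. The model file name and output layout are assumptions for the sake of the example, not a description of Affinity's internal implementation.

```python
# Purely illustrative: running a pre-trained segmentation model locally with
# ONNX Runtime. "segmentation.onnx" is a hypothetical, locally stored model
# file; this is NOT Affinity's internal implementation.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("segmentation.onnx")  # loaded from local disk, no network call
input_name = session.get_inputs()[0].name

# A dummy RGB image tensor (batch, channels, height, width), values in 0..1.
image = np.random.rand(1, 3, 512, 512).astype(np.float32)

# Inference runs in-process, so the image never leaves the machine.
outputs = session.run(None, {input_name: image})
labels = outputs[0].argmax(axis=1)  # assumes the first output holds per-pixel class scores
print(labels.shape)
```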

Object Selection Tool

It is now possible to automatically select objects in your image, as well as their parts, to speed up your targeted editing process.

Where to find it

Access the Object Selection Tool from the Tools panel.

Initial Setup

Automatic object and subject selection relies on Machine Learning Models, which need to be installed prior to using the feature.

ml-models-settings.png

To install the Machine Learning Models, visit Settings > Machine Learning Models. Both Segmentation and Saliency need to be installed. Since all machine learning inference happens on the device itself, the models need to be downloaded and stored locally. These models can be quite large, making it impractical to include them in the initial app installation. Users also have the option to uninstall these models to free up space if they are not needed.
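As a loose illustration of why the models live on disk (and can be removed again later), here is a sketch of a local model store. The directory and file names are hypothetical, not Affinity's actual layout.

```python
# Hypothetical sketch of a local model store: models are ordinary files on disk,
# so they take up space and can be deleted again when no longer needed. The
# directory and file names are made up, not Affinity's actual layout.
from pathlib import Path

MODEL_DIR = Path.home() / ".example-app" / "ml-models"
REQUIRED = ["segmentation.onnx", "saliency.onnx"]  # both are needed for the feature

def installed_models():
    if not MODEL_DIR.exists():
        return {}
    return {p.name: p.stat().st_size for p in MODEL_DIR.glob("*.onnx")}

def missing_models():
    return [name for name in REQUIRED if name not in installed_models()]

def uninstall(name):
    """Delete a model file to free up disk space."""
    (MODEL_DIR / name).unlink(missing_ok=True)

print("Missing models:", missing_models())
```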

Segmentation

The Segmentation model allows Photo’s Object Selection Tool to create precise, detailed pixel selections from pixel layers or placed images.

Saliency

Saliency, installed alongside Segmentation, allows Photo to understand what is visually significant on a pixel layer or placed image. It is required for the new one-click Select Subject to work.
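As a conceptual sketch only (synthetic data, not Serif's actual algorithm), one way to picture how saliency and segmentation combine for a one-click Select Subject is to pick the segmented region with the highest average saliency:

```python
# Conceptual sketch with synthetic data (not Serif's algorithm): Select Subject
# could be pictured as choosing the segmented region whose pixels have the
# highest average saliency.
import numpy as np

labels = np.zeros((64, 64), dtype=int)      # segmentation: 0 = background
labels[16:48, 16:48] = 1                    # 1 = a foreground object
saliency = np.zeros((64, 64), dtype=float)  # saliency map, values in 0..1
saliency[16:48, 16:48] = 0.9                # the object is visually significant

def select_subject(labels, saliency):
    candidates = [l for l in np.unique(labels) if l != 0]
    best = max(candidates, key=lambda l: saliency[labels == l].mean())
    return labels == best                   # boolean mask of the chosen subject

mask = select_subject(labels, saliency)
print("Subject covers", int(mask.sum()), "pixels")
```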

How to use it

Access the Object Selection Tool from the Tools panel.

object selection tool.png

 

image.png

The tool uses a segmentation model to identify different objects on a layer. After the inference process is finished, you will be able to hover over different areas of the document and see hatched outlines appear. By clicking once, an initial selection can be made, with subsequent clicks adding to the selection (the mode on the context toolbar will switch automatically to Add).
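Conceptually, the click-to-select behaviour can be pictured as looking up the label under the cursor in a precomputed label map and adding that object's pixels to the selection. The sketch below uses synthetic data and is not the app's actual code.

```python
# Conceptual sketch with synthetic data (not the app's code): once a per-pixel
# label map exists, a click looks up the object under the cursor and adds its
# pixels to the selection, mirroring the automatic switch to "Add".
import numpy as np

labels = np.zeros((64, 64), dtype=int)
labels[5:30, 5:30] = 1    # object 1
labels[35:60, 35:60] = 2  # object 2

selection = np.zeros(labels.shape, dtype=bool)

def click(y, x):
    """Add the object under the cursor (if any) to the current selection."""
    global selection
    picked = labels[y, x]
    if picked != 0:                       # clicks on background are ignored
        selection |= (labels == picked)

click(10, 10)   # first click selects object 1
click(40, 40)   # second click adds object 2
print(int(selection.sum()), "pixels selected")
```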

Note: the selection will be post-processed if Soft Edges is enabled, but for complex subjects with hair/fur etc. you will still want to use Refine on the context toolbar.

Context Toolbar options:

Multi-part Objects

Enabled by default, this option will include all parts of an object even if it is obstructed by another object. When disabled, the sampling of the object mask is bounded to the area the user is hovering over.

multi objects on.png

Multi-part Objects ON

 

multi objects off.png

Multi-part Objects OFF

 

Above, note how the selection result differs: with Multi-part Objects ON, the tool recognises similar areas of the branch as one object, whereas with it OFF the selection fragments where the snake is detected.
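As a rough illustration of the difference (synthetic data, not the app's implementation), Multi-part Objects ON behaves like taking every pixel that shares the clicked object's label, even across a gap, while OFF behaves like keeping only the connected piece under the cursor:

```python
# Rough illustration with synthetic data (not the app's implementation):
# Multi-part Objects ON takes every pixel sharing the clicked object's label,
# even across a gap; OFF keeps only the connected piece under the cursor.
import numpy as np
from scipy import ndimage

labels = np.zeros((20, 40), dtype=int)
labels[8:12, 2:15] = 1    # left section of the branch
labels[8:12, 25:38] = 1   # right section, visually split by the snake

def select(y, x, multi_part=True):
    obj = labels == labels[y, x]
    if multi_part:
        return obj                        # all parts of the object
    pieces, _ = ndimage.label(obj)        # split into connected components
    return pieces == pieces[y, x]         # only the piece being hovered

on_px = int(select(10, 5, multi_part=True).sum())
off_px = int(select(10, 5, multi_part=False).sum())
print(on_px, "px selected (ON) vs", off_px, "px selected (OFF)")
```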

 

Soft Edges

Enabled by default in most raster-based workflows, Soft Edges refine the selection by adding a small border value to help matte and soften the selection bounds. While it is recommended to keep Soft Edges enabled for the majority of projects, it may be beneficial to disable it for edge-case workflows such as pixel art.
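Conceptually, a soft edge is similar to feathering a hard mask so the boundary ramps from fully selected to unselected. The sketch below is an illustration with synthetic data, not how Photo implements it.

```python
# Illustration with synthetic data (not Photo's implementation): a soft edge is
# similar to slightly blurring a hard binary mask so the boundary ramps from
# fully selected (1) to unselected (0), which helps matte the selection.
import numpy as np
from scipy import ndimage

hard = np.zeros((64, 64), dtype=float)
hard[16:48, 16:48] = 1.0                              # hard-edged selection

soft = np.clip(ndimage.gaussian_filter(hard, sigma=1.5), 0.0, 1.0)

border = (soft > 0.0) & (soft < 1.0)                  # partially selected pixels
print("Feathered border pixels:", int(border.sum()))
```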

Soft Edges.png

 

Modifiers

  • When using Option (Mac) / Alt (Windows), you can break down a selection into components such as outfit, hands, and face.
  • Holding Shift while doing this will further break it down into parts, like separating an outfit into top and bottom halves.
  • Click-dragging can be used to make a partial selection of an object, which is helpful for selecting segmented areas like foreground detail when holding Option (Mac) / Alt (Windows).

Modifiers.png

(A) Object Selection at its default.

(B) Object Selection with Option (Mac) / Alt (Windows).

(C) Object Selection with Option-Shift (Mac) / Alt-Shift (Windows).

 

 Click-drag to select part of the main subject.
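As a conceptual sketch of how the modifiers above change selection granularity (the label maps below are synthetic and purely illustrative, not Serif's data model):

```python
# Conceptual sketch with synthetic label maps (not Serif's data model): no
# modifier selects the whole object, Option/Alt drops down to a component
# (e.g. outfit), and Option+Shift / Alt+Shift drops down to a part (e.g. top).
import numpy as np

object_id = np.zeros((32, 32), dtype=int)
object_id[4:28, 4:28] = 1        # the whole person
component = np.zeros_like(object_id)
component[4:16, 4:28] = 1        # the outfit
part = np.zeros_like(object_id)
part[4:10, 4:28] = 1             # the top half of the outfit

LEVELS = {"object": object_id, "component": component, "part": part}

def select(y, x, level="object"):
    m = LEVELS[level]
    if m[y, x] == 0:
        return np.zeros_like(m, dtype=bool)
    return m == m[y, x]

for level in ("object", "component", "part"):
    print(level, int(select(6, 6, level).sum()), "pixels")
```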

All of the above functionality is also available on iPad.

COMPATIBILITY: Sorry, but due to the functions we need to run the Object Selection and Subject Selection functionality, the Machine Learning features are only available on Apple Silicon iPads and on Macs running a recent version of macOS (Catalina, Big Sur and Monterey do not support the required calls). On Windows, Machine Learning will run on both x64 and Arm64 hardware running Windows 10 or Windows 11.

 

Patrick Connor
Serif Europe Ltd

"There is nothing noble in being superior to your fellow man. True nobility lies in being superior to your previous self."  W. L. Sheldon

 


  • Staff

Known Issues in this feature
A list of unresolved issues for this feature, reported by users

  • AF-3596 - iPad > App Settings > Machine Learning Models > No progress bar shows when installing models
  • AF-4763 - Machine Learning section of iPad Settings is truncated when in Portrait
  • AF-3864 - Object Selection Tool and Select Subject: Appears to use 5.5GB of VRAM when used/selected on Windows
  • AF-4821 - App crashes when using Object Selection tool on stock image converted to RGB/16
  • AF-5182 - App hangs when making a large selection using Object Selection tool on a large image size

Released Fixes
A list of issues for this feature, available in the current beta build

  • AF-4319 - ML: Cannot swap to CPU Only, app restarts using CPU and GPU
  • AF-4720 - Crash after installing Machine Learning Models when Install is clicked multiple times
  • AF-4813 - App will crash when using Object Selection tool on RAW layers
  • AF-4825 - Object Selection icon does not appear in customised toolbar after update

Amazing, can't wait to test and use this!

2021 16” Macbook Pro w/ M1 Max 10c cpu /24c gpu, 32 GB RAM, 1TB SSD, macOS Sequoia 15.1

2018 11" iPad Pro w/ A12X cpu/gpu, 256 GB, iPadOS 18.1


I haven't used these machine learning models yet, but I just wanted to say how nice a surprise it was to see that these were optional downloads, and not simply bundled into the existing applications. I really appreciate Serif giving creatives the respect to choose whether or not they want to integrate any machine learning into their workflows. 

Also, kudos for side-stepping the blanket AI hype train and calling these 'Machine Learning Models', which is not only more accurate, but also helps to frame the discussion in a much more informed manner.


I think that at least on low-end machines, the Object Selection Tool could use some sort of progress indicator (e.g. in the status bar) while initially analyzing the image. While it might be available instantly on modern computers, my experience here (on an old Intel MacBook Pro) is that while the tool basically works as expected, it takes a considerable amount of time until selections can be made. Currently, this is only indicated by a clock cursor (as opposed to the crosshair once the ML model has finished doing its job), and this doesn't give users any clue about the remaining waiting time. I understand that this might be considered an edge case given that Intel Macs are slowly going extinct. I still think it should be considered as long as Affinity still runs on older hardware.

Edit: The same is true for the Select Subject feature.


New issues arise.
When trying to duplicate a selection on a pixel layer, it duplicates the complete image as if it were an image layer.
Also, the selected areas are still present although completely deselected or invisible; see the video.

(Suggestions) Perhaps a 'current layer and below' option could be implemented.
Also, a contextual menu to copy/paste flattened/merged would be useful, as the tool also selects on image layers.

23 minutes ago, Return said:

When trying to duplicate a selection on a pixel layer, it duplicates the complete image as if it were an image layer.

At 0:14-0:15 in your video you have something highlighted, but when you move to the Layers panel and open the Context Menu there doesn't seem to be an active selection showing on the screen (around 0:18). And I never saw the marching ants appear. Did you click the tool while the correct object was highlighted, to actually make the selection?

New video (above) makes it much clearer.

-- Walt
Designer, Photo, and Publisher V1 and V2 at latest retail and beta releases
PC:
    Desktop:  Windows 11 Pro 23H2, 64GB memory, AMD Ryzen 9 5900 12-Core @ 3.00 GHz, NVIDIA GeForce RTX 3090 

    Laptop:  Windows 11 Pro 23H2, 32GB memory, Intel Core i7-10750H @ 2.60GHz, Intel UHD Graphics Comet Lake GT2 and NVIDIA GeForce RTX 3070 Laptop GPU.
    Laptop 2: Windows 11 Pro 24H2,  16GB memory, Snapdragon(R) X Elite - X1E80100 - Qualcomm(R) Oryon(TM) 12 Core CPU 4.01 GHz, Qualcomm(R) Adreno(TM) X1-85 GPU
iPad:  iPad Pro M1, 12.9": iPadOS 18.1, Apple Pencil 2, Magic Keyboard 
Mac:  2023 M2 MacBook Air 15", 16GB memory, macOS Sequoia 15.0.1


15 minutes ago, walt.farrell said:

At 0:14-0:15 in your video you have something highlighted, but when you move to the Layers panel and open the Context Menu there doesn't seem to be an active selection showing on the screen (around 0:18). And I never saw the marching ants appear. Did you click the tool while the correct object was highlighted, to actually make the selection?

It wasn't a good representation of what I saw; I will create another video to show my findings and replace the one currently in that post.

Thanks, @Return. Much clearer.

-- Walt
Designer, Photo, and Publisher V1 and V2 at latest retail and beta releases
PC:
    Desktop:  Windows 11 Pro 23H2, 64GB memory, AMD Ryzen 9 5900 12-Core @ 3.00 GHz, NVIDIA GeForce RTX 3090 

    Laptop:  Windows 11 Pro 23H2, 32GB memory, Intel Core i7-10750H @ 2.60GHz, Intel UHD Graphics Comet Lake GT2 and NVIDIA GeForce RTX 3070 Laptop GPU.
    Laptop 2: Windows 11 Pro 24H2,  16GB memory, Snapdragon(R) X Elite - X1E80100 - Qualcomm(R) Oryon(TM) 12 Core CPU 4.01 GHz, Qualcomm(R) Adreno(TM) X1-85 GPU
iPad:  iPad Pro M1, 12.9": iPadOS 18.1, Apple Pencil 2, Magic Keyboard 
Mac:  2023 M2 MacBook Air 15", 16GB memory, macOS Sequoia 15.0.1


  • Staff

Hi @Return

The behaviour in your video appears to be expected and not specific to this selection tool.

40 minutes ago, Return said:

When trying to duplicate a selection on a pixel layer, it duplicates the complete image as if it were an image layer.

Duplicate will always duplicate the whole layer; Duplicate Selection (Ctrl/Cmd+J) or Copy & Paste will copy only the selected area.

40 minutes ago, Return said:

Also, the selected areas are still present although completely deselected or invisible; see the video.

The marching ants are not tied to any particular layer; they are just a selection that remains on screen. Changing the layer in the Layers panel will change what you're copying from. You can use the Deselect button on the Toolbar (just to the right of the Select All button you showed in your video) to remove the marching-ants selection.

There is already an option to Copy Flattened/Merged in the Edit Menu. 

I will check with the devs on the expected behaviour of the Object Selection Tool showing the preview areas when the selected layer is hidden.


13 minutes ago, EmT said:

There is already an option to Copy Flattened/Merged in the Edit Menu. 

I meant right-clicking the selected section/segment to duplicate the selection, like the Ctrl/Cmd+J option; you are correct that the regular "Duplicate" doesn't work on pixel selections.

Now select a segment > Ctrl+J > a new pixel layer is created = OK.
Now select a different segment > oops, I have to reselect the original layer in the Layers panel to create a new selection from it, because that part doesn't exist on the current layer and the tool auto-selects the newly created layer (see video).

The merged/flattened suggestion is because you can use this tool on image layers, but you have to go to Edit > Copy Flattened/Merged and paste to make the selection appear as a pixel layer, hence my suggestion to add a contextual right-click menu.

So, I installed the 2.6 beta and installed the ML models on my desktop Mac. I've got to say I'm underwhelmed. I tried the Object Selection tool on a couple of stock photos (from Pexels) and got less-than-spectacular results. I've attached a video of one such attempt.

To me, the benefit of "object selection" ought to be that it makes my life quicker and easier. It ought to be able to select the branches of a tree, the hairs on a model's portrait, and so forth (or at least give a really good starting point). But, the attempt I'm including below seems fairly typical (albeit on its first iteration, and with only a few attempts on my part). I've sped up parts of the video, but the time between clicking on the tree (with the Object Selection tool active) and the eventual disappearance of the dialog box was nearly a minute and a half. And this is the result?

It really should be better than this.

I must add that I love the idea that Subject and Object Selection is being pursued, and the manner in which it has been implemented seems really nice. I also like the idea that it is entirely local, without any cloud involvement, and hence can be reliably called private and safe from copyright infringement issues.

I've also got to add that I would not be averse to downloading a much bigger machine learning model in order to get better selections. The way it stands now, the feature is unusable (for me, at least). But I love the idea that it is being done, and I applaud the developers for letting the cat out of the bag (even at its "not ready for prime time" current state).

Affinity Photo 2, Affinity Publisher 2, Affinity Designer 2 (latest retail versions) - desktop & iPad
Culling - FastRawViewer; Raw Developer - Capture One Pro; Asset Management - Photo Supreme
Mac Studio with M2 Max (2023); 64 GB RAM; macOS 13 (Ventura); Mac Studio Display - iPad Air 4th Gen; iPadOS 18


Oh dear 😉

Actually a great feature. But a few extra laps on the training track are still required. The three real-life examples were already cut out on white or transparency. So the tool should be able to recognize the shapes even better. Product shots like these are part of everyday layout work.

 

 

Hardware: Windows 11 Pro (23H2, build 22631.4317, Windows Feature Experience Pack 1000.22700.1041.0), Intel(R) Core(TM) i9-14900K, 32 Core@3.20 GHz, 128 GB RAM, NVIDIA RTX A4000 (16GB VRAM, driver 551.61), 1TB + 2TB SSD. 1 Display set to native 2560 x 1440.
Software: Affinity v1 - Designer/Publisher/Photo (1.10.6.1665), Affinity v2 (universal license) - Designer/Publisher/Photo, v2 betas.


Very cool implementation! My quick experiments with different images were quite promising. I especially loved the ability to select different components and parts, even though it didn’t always separate eyes from face or nails from hands.

I also wish you could leave the usual selection modifier shortcuts for add and delete in place (and use something else for components).

Affinity 2.6.0 Beta | macOS Sequoia 15.1 | MacBook Pro 14" M1 Pro/16GB (2021) | XPPen Artist Pro 16 (Gen 2)


Seems that the dialog on iPad is out of bounds.

IMG_0513.png

Both PC’s Win 11 x64 System
PC1 ASUS ROG Strix - AMD Ryzen 9 6900X CPU @ 3.3GHz. 32GB RAM

- GPU 1: AMD Radeon integrated. GPU 2: NVIDIA RTX 3060, 6GB
PC2 ASUS ProArt  PZ13 - Snapdragon X Plus X1P42100 (8 CPUs), 16GB RAM
- Neural Processor - Qualcomm® Hexagon™ NPU up to 45TOPS 

- GPU 1: Qualcomm Adreno Graphics, 

 


Okay, found it.

So far it doesn't seem like machine learning understands objects it doesn't recognize, either here or in Gigapixel AI.

After I clicked on the larger image excerpted here it hung for 30 seconds, gave me the cursor back for a second, then hung again... at which point I force-quit Affinity Photo.

I restarted and tried it again, and it did make some marginally intelligent selections when I dragged the tool around, but when I clicked on this one it hung again for a good 15 seconds before making the selection.

The idea is good, but it seems to be early days still, and I don't know whether it'll ever be useful on the pictures I take (which I use to "paint" with, i.e. they don't make much sense on their own).

But I can't say that bothers me, because it just reinforces that no machine will ever be able to produce the kind of abstract art I use Affinity Photo for!

 

Untitled.jpg


For what it's worth, this is what the macOS "remove background" command did in milliseconds (the marching ants are from the Affinity screenshot). It looks like it also took the liberty of increasing the saturation and contrast without asking me.

Pretty arrogant, eh?

 

Untitled Background Removed.png


1 hour ago, nickbatz said:

Where do you find the Object Selection tool?

If it does not show up in Tools automatically, you can use View > Configure Tools to add it.

-- Walt
Designer, Photo, and Publisher V1 and V2 at latest retail and beta releases
PC:
    Desktop:  Windows 11 Pro 23H2, 64GB memory, AMD Ryzen 9 5900 12-Core @ 3.00 GHz, NVIDIA GeForce RTX 3090 

    Laptop:  Windows 11 Pro 23H2, 32GB memory, Intel Core i7-10750H @ 2.60GHz, Intel UHD Graphics Comet Lake GT2 and NVIDIA GeForce RTX 3070 Laptop GPU.
    Laptop 2: Windows 11 Pro 24H2,  16GB memory, Snapdragon(R) X Elite - X1E80100 - Qualcomm(R) Oryon(TM) 12 Core CPU 4.01 GHz, Qualcomm(R) Adreno(TM) X1-85 GPU
iPad:  iPad Pro M1, 12.9": iPadOS 18.1, Apple Pencil 2, Magic Keyboard 
Mac:  2023 M2 MacBook Air 15", 16GB memory, macOS Sequoia 15.0.1


1 minute ago, walt.farrell said:

If it does not show up in Tools automatically, you can use View > Configure Tools to add it.

Thanks, yeah, I found it.

I knew you had to add it, but there are 50,001 icons with no labels until you mouse over them. :)  But I looked more closely at one of the videos above.


49 minutes ago, AiDon said:

Seems that the dialog on iPad is out of bounds.

It's fine with the iPad in landscape orientation, but I agree it has issues in Portrait orientation.

image.thumb.png.9d43146008073ab543e549d3fb2e16e4.png

-- Walt
Designer, Photo, and Publisher V1 and V2 at latest retail and beta releases
PC:
    Desktop:  Windows 11 Pro 23H2, 64GB memory, AMD Ryzen 9 5900 12-Core @ 3.00 GHz, NVIDIA GeForce RTX 3090 

    Laptop:  Windows 11 Pro 23H2, 32GB memory, Intel Core i7-10750H @ 2.60GHz, Intel UHD Graphics Comet Lake GT2 and NVIDIA GeForce RTX 3070 Laptop GPU.
    Laptop 2: Windows 11 Pro 24H2,  16GB memory, Snapdragon(R) X Elite - X1E80100 - Qualcomm(R) Oryon(TM) 12 Core CPU 4.01 GHz, Qualcomm(R) Adreno(TM) X1-85 GPU
iPad:  iPad Pro M1, 12.9": iPadOS 18.1, Apple Pencil 2, Magic Keyboard 
Mac:  2023 M2 MacBook Air 15", 16GB memory, macOS Sequoia 15.0.1

