
Does Affinity Photo support dual GPUs? And if not, is this planned?
Adobe disappointed users of the new Mac Pros because they still do not support the two GPUs. However, Pixelmator does...


That's a bit sad to hear. Multi-threading on CPUs however is welcome...

It's just that I work with huge files with hundreds of layers, and I think any tiny bit of speed would help, even on my new Mac Pro... and the demands will only get bigger.


I think you'll find that Affinity is quicker with most documents than other offerings... even those that support Dual GPUs. I can definitely rig together documents to crucify each application (including Affinity) but in the real world of just actually using the software to achieve something, you should see that Affinity's approach is scalable and effective. GPUs ultimately bring restrictions with them that are hard to get around and as soon as you need to start 'getting around them' you incur terrible penalties in terms of performance.

 

I would not argue against using GPUs if the user's requirements were always a known quantity - video editing, for example, will always be at a sane pixel size that you can plan for and work with - but we aim to allow gigapixel documents to be worked on, and getting data to/from the card is too costly (as you can't actually store this much data on the card itself).
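To put rough numbers on the memory point, here is a minimal back-of-envelope sketch. The 8-bit RGBA pixel format and the 4 GB card size are illustrative assumptions, not figures from Affinity:

```python
# Back-of-envelope: can a gigapixel document fit in GPU memory?
# Assumes 8-bit RGBA (4 bytes/pixel); each layer multiplies the cost.

def document_bytes(pixels, bytes_per_pixel=4, layers=1):
    """Raw memory needed to hold a layered raster document."""
    return pixels * bytes_per_pixel * layers

gigapixel = 1_000_000_000
vram = 4 * 1024**3  # an assumed 4 GB card

single_layer = document_bytes(gigapixel)
ten_layers = document_bytes(gigapixel, layers=10)

print(single_layer / 1024**3)  # ~3.73 GiB - barely fits, with nothing spare
print(ten_layers > vram)       # True - a modestly layered document cannot fit
```

Even a single flattened layer nearly fills the assumed card; add a handful of layers, undo buffers, and intermediates, and the data has to live in main memory with constant transfers to and from the GPU.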


Further to this, in order to offer features with significant algorithmic complexity (inpainting, performant blurs, etc.), the GPU is not an option.

 

We take advantage of it for some things though - the super smooth panning / zooming of documents is entirely GPU based :)

 

Andy.


...we aim to allow gigapixel documents to be worked on and getting data to/from the card is too costly (as you can't actually store this much data on the card itself)

When you say "too costly" what is that cost? Does that mean the amount of time it takes to transfer the data from RAM (or storage) to the card and back, can be more than the amount of time saved applying the GPU to the data?


Transferring data *to* a GPU is fast - no problem there.. The GPU can work on the data very fast - no problem there either..

 

The problem comes in getting the data back into main memory - that's slow..

 

Take a simple Gaussian blur for example - we can perform the blur many times faster just using CPU in main memory than roundtripping to the GPU - because of how long it takes to get the result back.
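Andy's trade-off can be sketched as a simple cost model. All the bandwidth and timing figures below are illustrative assumptions (readback has historically been much slower than upload on discrete cards), not measurements of any real hardware or of Affinity itself:

```python
# Cost model for GPU offload: the kernel may be fast, but the
# readback to main memory can dominate the total time.

def gpu_roundtrip_ms(megabytes, upload_gb_per_s, kernel_ms, readback_gb_per_s):
    """Total time to upload a buffer, run the kernel, and read the result back."""
    upload = megabytes / 1024 / upload_gb_per_s * 1000
    readback = megabytes / 1024 / readback_gb_per_s * 1000
    return upload + kernel_ms + readback

# A hypothetical 256 MB tile: fast upload, fast kernel, slow readback.
total = gpu_roundtrip_ms(256, upload_gb_per_s=12, kernel_ms=2, readback_gb_per_s=3)
cpu_only_ms = 40  # hypothetical well-optimised CPU blur on the same tile

print(round(total, 1))      # 106.2 - dominated by the readback term
print(total > cpu_only_ms)  # True - the roundtrip loses despite the 2 ms kernel
```

Under these assumed numbers the GPU kernel itself is twenty times faster than the CPU, yet the end-to-end GPU path still loses because the result has to come back before the next operation can use it.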

 

Thanks,

 

Andy.


The hype around GPU processing, particularly that coming from AMD and Apple, is just noise to support their various decisions. Noise that is neither accurate nor informative about the real-world implications and opportunities of GPU processing.


 


Bullshit, by any other word.


I'm not so sure..

 

See, sometimes, you don't need to "come back" from the GPU - you can send your data there (fast), do loads of stuff (fast) - then only bring the data back right at the end for saving..

 

Video stuff can work well like this, but yeah - photo editing needs the data back from the card all the time, so it is often ill-suited.

 

A lot of products wheel out the "GPU accelerated" line because it sounds shiny - and often they don't really use much GPU at all, or are slow when they do. That, yes, is bullshit - but the GPU cannot be dismissed completely.

 

Also, here's another interesting thing - some architectures (iPad being a good example) share the same memory between the GPU and CPU, so there is no penalty in either direction.. this can be very useful.

 

There are a few filters in Photo which perform *faster* on an iPad Air 2 than on my 12 core new Mac Pro 32GB in the office.. because we can use the GPU without penalty - but that's all stuff for another day ;)
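The unified-memory point can be sketched with a simple illustrative cost model (all figures below are assumptions for the sake of the comparison, not measurements): with shared memory the two transfer terms vanish, so the GPU wins whenever its kernel beats the CPU.

```python
# With unified CPU/GPU memory there is no upload or readback: the GPU
# works on the data in place. All figures are illustrative assumptions.

def discrete_gpu_ms(megabytes, kernel_ms, upload_gb_per_s=12, readback_gb_per_s=3):
    """Discrete card: pay for the transfer in both directions."""
    transfer = megabytes / 1024 * (1 / upload_gb_per_s + 1 / readback_gb_per_s) * 1000
    return transfer + kernel_ms

def unified_gpu_ms(kernel_ms):
    """Shared memory: the kernel time is the whole cost."""
    return kernel_ms

cpu_ms = 40     # hypothetical CPU time for the same filter
kernel_ms = 5   # hypothetical GPU kernel time, identical on both architectures

print(discrete_gpu_ms(256, kernel_ms) > cpu_ms)  # True - the discrete card loses
print(unified_gpu_ms(kernel_ms) < cpu_ms)        # True - unified memory wins
```

The same kernel, on the same data, flips from a loss to a clear win purely because the transfer cost drops to zero - which is how a tablet can beat a workstation on certain filters.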

 

Thanks,

 

Andy.


Absolutely agree. Metal is a prime example of taking advantage of that shared memory, along with a bunch of other optimisations for offloading things like physics to the GPU.

 

PhysX was the other great loss from the fallout around Nvidia and CUDA, unfortunately.


 

Also, here's another interesting thing - some architectures (iPad being a good example) share the same memory between the GPU and CPU, so there is no penalty in either direction.. this can be very useful. …There are a few filters in Photo which perform *faster* on an iPad Air 2 than on my 12 core new Mac Pro 32GB in the office.. because we can use the GPU without penalty - but that's all stuff for another day ;)

Thanks for answering my earlier question. Your answer leads to another question:

 

If memory sharing between CPU and GPU can make GPU acceleration of some operations practical, where they might not have been if you had to go out across a bus to get to the discrete GPU, is there ever a case on a Mac where GPU acceleration of a feature is more practical with Iris/Iris Pro level integrated graphics than with a discrete GPU?

 

I ask because if that were true, it would go against the conventional wisdom that a discrete GPU is always better. But I don't know whether Mac integrated graphics allow the same penalty-free GPU use as your iPad example of GPU/CPU shared memory.


Hi Leftshark,

 

That's a good question - and yes, in theory, integrated graphics cards should be able to work like that.

 

Unfortunately though, while Iris shares main memory with the CPU, the ability to allocate memory which is accessible by both the CPU and GPU is not exposed through any drivers I have seen - we are still forced to do a real copy of all the bits to and from the card - from one area of main memory to another area of main memory and back again!

 

I've always been surprised that this wasn't something which is made available..

 

Thanks,

 

Andy.


Hi Leftshark,

 

That's a good question - and yes, in theory, integrated graphics cards should be able to work like that.

 

Unfortunately though, while Iris shares main memory with the CPU, the ability to allocate memory which is accessible by both the CPU and GPU is not exposed through any drivers I have seen - we are still forced to do a real copy of all the bits to and from the card - from one area of main memory to another area of main memory and back again!

 

I've always been surprised that this wasn't something which is made available..

 

Thanks,

 

Andy.

YEAH, I'd definitely love to have my MBP running faster than a Mac Pro  :D  :rolleyes:

 

Sometimes one is just scratching one's head about what engineers don't include.

Sony software - start recording with a photo trigger, false-colour overlay, voice transmission over the Sony Wi-Fi app with a lav mic attached to a smartphone instead of a €500 Sennheiser "cough cough" :blink:


 

 


Here's a free image editor that uses GPU and it seems pretty fast:

https://pencilsheep.com/

 

So, somehow they figured out how to use GPU for blur, glow, distort and all the other live filters.

The number of filters is pretty impressive, it's definitely worth a try.

Although a web version is available, I would encourage trying the desktop version.

 

Edit: I noticed it's for Windows/Linux only.


Andrew
-
Win10 x64 AMD Threadripper 1950x, 64GB, 512GB M.2 PCIe NVMe SSD + 2TB, dual GTX 1080ti
Dual Monitor Dell Ultra HD 4k P2715Q 27-Inch


It's just an example of the GPU being used in an image editor.

There's been some talk lately here on the forum about how the GPU is not ideal for an image editor; Pencilsheep seems to prove image editors can benefit from the GPU too.




I ran the web version on my iPhone 7. VERY impressive, I have to say! Of course the UI sucks on a mobile device, but to be fair it's definitely a desktop app versus a mobile app...

 

Here's a free image editor that uses GPU and it seems pretty fast:

https://pencilsheep.com/

 

So, somehow they figured out how to use GPU for blur, glow, distort and all the other live filters.

The number of filters is pretty impressive, it's definitely worth a try.

Although a web version is available, I would encourage trying the desktop version.

 

Edit: I noticed it's for Windows/Linux only.


2017 15" MacBook Pro 14,3 w/ Intel 4 Core i7 @ 2.8 GHz, 16 GB RAM, AMD 455 @ 2 GB, 512 GB SSD, macOS High Sierra

2018 11" iPad Pro 256 GB, latest iPadOS public beta

On 2/16/2015 at 9:26 AM, Andy Somerfield said:

Hi Leftshark,

 

That's a good question - and yes, in theory, integrated graphics cards should be able to work like that.

 

Unfortunately though, while Iris shares main memory with the CPU, the ability to allocate memory which is accessible by both the CPU and GPU is not exposed through any drivers I have seen - we are still forced to do a real copy of all the bits to and from the card - from one area of main memory to another area of main memory and back again!

 

I've always been surprised that this wasn't something which is made available..

 

Thanks,

 

Andy.

 

And the question is, Andy, don't you use Metal on the iPad?

That's the main reason Apple created Metal: to avoid data copying between the CPU and the GPU...

 

And if you do use it on the iPad, why not use it on the Mac too?

 



2 minutes ago, Nikos said:

 

And the question is, Andy, don't you use Metal on the iPad?

That's the main reason Apple created Metal: to avoid data copying between the CPU and the GPU...

 

And if you do use it on the iPad, why not use it on the Mac too?

 

Already using Metal.


 

 

