
Support of Dual GPU (New Mac Pro)?



  • Staff

I think you'll find that Affinity is quicker with most documents than other offerings... even those that support Dual GPUs. I can definitely rig together documents to crucify each application (including Affinity) but in the real world of just actually using the software to achieve something, you should see that Affinity's approach is scalable and effective. GPUs ultimately bring restrictions with them that are hard to get around and as soon as you need to start 'getting around them' you incur terrible penalties in terms of performance.

 

I would not argue against using GPUs if the user's requirements were always a known quantity - video editing, for example, will always be at a sane pixel size that you can plan for and work with. But we aim to allow gigapixel documents to be worked on, and getting data to/from the card is too costly (as you can't actually store this much data on the card itself).
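To put rough numbers on that cost (back-of-envelope assumptions only, not Affinity measurements): a one-gigapixel document in 16-bit RGBA is about 8 GB of pixel data - more than the card itself can hold, as noted above - and even the bus transfer alone is significant. A quick sketch:

```swift
import Foundation

// Back-of-envelope only: an assumed document size and an assumed effective PCIe
// bandwidth, not measurements from Affinity or from any specific card.
let pixels = 1_000_000_000.0                 // a 1-gigapixel document
let bytesPerPixel = 8.0                      // 16-bit RGBA
let documentBytes = pixels * bytesPerPixel   // ~8 GB of pixel data

let assumedBusBandwidth = 12.0e9             // ~12 GB/s effective over PCIe 3.0 x16 (assumption)
let oneWay = documentBytes / assumedBusBandwidth

print(String(format: "one-way transfer ~%.2f s, round trip ~%.2f s", oneWay, oneWay * 2))
// Roughly 0.7 s each way, over a second round trip -- before the GPU has done
// any work, and assuming the card could even hold the whole document at once.
```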


...we aim to allow gigapixel documents to be worked on and getting data to/from the card is too costly (as you can't actually store this much data on the card itself)

When you say "too costly" what is that cost? Does that mean the amount of time it takes to transfer the data from RAM (or storage) to the card and back, can be more than the amount of time saved applying the GPU to the data?


  • Staff

Transferring data *to* a GPU is fast - no problem there.. The GPU can work on the data very fast - no problem there either..

 

The problem comes in getting the data back into main memory - that's slow..

 

Take a simple Gaussian blur for example - we can perform the blur many times faster just using CPU in main memory than roundtripping to the GPU - because of how long it takes to get the result back.
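For illustration, those three stages can be timed separately with Metal and MetalPerformanceShaders. This is a rough sketch, assuming an RGBA8 image already sitting in main memory; the use of MPSImageGaussianBlur here is an assumption for the example, not Affinity's actual pipeline:

```swift
import Metal
import MetalPerformanceShaders
import QuartzCore

// Times the three stages separately on macOS: upload, GPU blur, readback.
// Illustrative only. `pixels` is assumed to hold width * height * 4 bytes of RGBA8 data.
func timeGPUBlur(pixels: [UInt8], width: Int, height: Int, sigma: Float) {
    guard let device = MTLCreateSystemDefaultDevice(),
          MPSSupportsMTLDevice(device),
          let queue = device.makeCommandQueue() else { return }

    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                        width: width, height: height,
                                                        mipmapped: false)
    desc.usage = [.shaderRead, .shaderWrite]
    guard let src = device.makeTexture(descriptor: desc),
          let dst = device.makeTexture(descriptor: desc) else { return }

    let bytesPerRow = width * 4
    let region = MTLRegionMake2D(0, 0, width, height)

    let t0 = CACurrentMediaTime()
    pixels.withUnsafeBytes {
        src.replace(region: region, mipmapLevel: 0,
                    withBytes: $0.baseAddress!, bytesPerRow: bytesPerRow)
    }
    let t1 = CACurrentMediaTime()   // upload done (the driver may defer the actual
                                    // PCIe copy of managed storage until first GPU use)

    guard let blurCmd = queue.makeCommandBuffer() else { return }
    MPSImageGaussianBlur(device: device, sigma: sigma)
        .encode(commandBuffer: blurCmd, sourceTexture: src, destinationTexture: dst)
    blurCmd.commit()
    blurCmd.waitUntilCompleted()
    let t2 = CACurrentMediaTime()   // GPU blur done

    // Readback: with managed storage on macOS, the GPU's writes must be
    // synchronised back to main memory before the CPU can see them.
    guard let syncCmd = queue.makeCommandBuffer(),
          let blit = syncCmd.makeBlitCommandEncoder() else { return }
    blit.synchronize(resource: dst)
    blit.endEncoding()
    syncCmd.commit()
    syncCmd.waitUntilCompleted()

    var result = [UInt8](repeating: 0, count: pixels.count)
    result.withUnsafeMutableBytes {
        dst.getBytes($0.baseAddress!, bytesPerRow: bytesPerRow,
                     from: region, mipmapLevel: 0)
    }
    let t3 = CACurrentMediaTime()   // readback done

    print("upload \(t1 - t0)s  blur \(t2 - t1)s  readback \(t3 - t2)s")
}
```

On a discrete card, the readback stage is typically the part that dominates - which is the penalty being described here.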

 

Thanks,

 

Andy.


The hype around GPU processing, particularly that coming from AMD and Apple, is just noise to support their various decisions. Noise that isn't anywhere near accurate or informative about the real-world implications and opportunities of GPU processing.


 


Bullshit, by any other word.


  • Staff

I'm not so sure..

 

See, sometimes, you don't need to "come back" from the GPU - you can send your data there (fast), do loads of stuff (fast) - then only bring the data back right at the end for saving..

 

Video stuff can work well like this, but yeah - photo editing needs the data back from the card all the time, so it's often ill-suited.
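As a sketch of that "stay on the card" pattern - the particular MPS filters below are placeholders chosen for illustration, not anything Affinity does:

```swift
import Metal
import MetalPerformanceShaders

// Video-style pipeline: upload once, chain several GPU passes texture-to-texture,
// and only read the final result back (if at all). Filters chosen for illustration.
func runGPUChain(device: MTLDevice, queue: MTLCommandQueue,
                 input: MTLTexture, scratch: MTLTexture, output: MTLTexture) {
    guard let cmd = queue.makeCommandBuffer() else { return }

    MPSImageGaussianBlur(device: device, sigma: 4.0)
        .encode(commandBuffer: cmd, sourceTexture: input, destinationTexture: scratch)
    MPSImageSobel(device: device)
        .encode(commandBuffer: cmd, sourceTexture: scratch, destinationTexture: output)
    // ...more passes could be appended here, all staying in GPU memory...

    cmd.commit()
    cmd.waitUntilCompleted()
    // Only at this point -- e.g. when exporting -- would `output` be copied back
    // to main memory. Interactive photo editing can't defer that readback, which
    // is exactly the problem described above.
}
```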

 

A lot of products wheel out the "GPU accelerated" line because it sounds shiny - and often they don't really use much GPU at all, or are slow when they do. That, yes, is bullshit - but the GPU cannot be dismissed completely.

 

Also, here's another interesting thing - some architectures (iPad being a good example) share the same memory between the GPU and CPU - so there is no penalty in either direction.. this can be very useful.

 

There are a few filters in Photo which perform *faster* on an iPad Air 2 than on my 12 core new Mac Pro 32GB in the office.. because we can use the GPU without penalty - but that's all stuff for another day ;)
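In today's Metal terms, that unified-memory case looks roughly like the sketch below, assuming hardware where the CPU and GPU share RAM (iOS devices then, Apple silicon Macs now):

```swift
import Metal

// One allocation, visible to both CPU and GPU: no upload, no readback.
// Sketch only -- assumes unified memory (e.g. iPad / Apple silicon).
func makeSharedPixelBuffer(device: MTLDevice, pixelCount: Int) -> MTLBuffer? {
    guard let buffer = device.makeBuffer(length: pixelCount * 4,
                                         options: .storageModeShared) else { return nil }

    // The CPU writes straight into the pages the GPU will operate on...
    let bytes = buffer.contents().bindMemory(to: UInt8.self, capacity: pixelCount * 4)
    bytes[0] = 255

    // ...GPU kernels are then encoded against `buffer`, and once the command
    // buffer completes the CPU reads the results back through contents() --
    // there is no blit and no second copy of the data in either direction.
    return buffer
}
```

On a discrete card the same .storageModeShared buffer still works, but the GPU then reads it across the bus, so the penalty comes back in a different form.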

 

Thanks,

 

Andy.


Absolutely agree. Metal is a prime example of taking advantage of that shared memory, along with a bunch of other optimisations for offloading things like physics and other work to the GPU.

 

PhysX was the other great loss from the fallout around Nvidia and CUDA, unfortunately.


 

Also, here's another interesting thing - some architectures (iPad being a good example) share the same memory between the GPU and CPU - so there is no penalty in either direction.. this can be very useful. … There are a few filters in Photo which perform *faster* on an iPad Air 2 than on my 12 core new Mac Pro 32GB in the office.. because we can use the GPU without penalty - but that's all stuff for another day ;)

Thanks for answering my earlier question. Your answer leads to another question:

 

If memory sharing between CPU and GPU can make GPU acceleration of some operations practical, where they might not have been if you had to go out across a bus to get to the discrete GPU, is there ever a case on a Mac where GPU acceleration of a feature is more practical with Iris/Iris Pro level integrated graphics than with a discrete GPU?

 

I ask because if that was true, that would go against the conventional wisdom that a discrete GPU is always better. But I don't know if Mac integrated graphics result in the same penalty-free GPU use as your iPad example of GPU/CPU shared memory.


  • Staff

Hi Leftshark,

 

That's a good question - and yes, in theory, integrated graphics cards should be able to work like that.

 

Unfortunately though, while Iris shares main memory with the CPU, the ability to allocate memory which is accessible by both the CPU and GPU is not exposed through any drivers I have seen - we are still forced to do a real copy of all the bits to and from the card - from one area of main memory to another area of main memory and back again!

 

I've always been surprised that this isn't something which is made available..
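For context, Metal on the Mac did later expose this kind of allocation: an existing page-aligned block of main memory can be wrapped as a GPU-visible buffer with no copy, via makeBuffer(bytesNoCopy:). A minimal sketch - not what was available through the drivers described above at the time:

```swift
import Metal
import Foundation

// Wraps an existing page-aligned block of main memory as a GPU buffer with no copy.
// Illustrative only; bytesNoCopy requires a page-aligned pointer and a length that
// is a multiple of the page size.
func wrapWithoutCopying(device: MTLDevice, byteCount: Int) -> MTLBuffer? {
    let pageSize = Int(getpagesize())
    let length = ((byteCount + pageSize - 1) / pageSize) * pageSize

    var raw: UnsafeMutableRawPointer?
    guard posix_memalign(&raw, pageSize, length) == 0, let memory = raw else { return nil }

    // The returned buffer aliases `memory` directly, so no explicit copy is needed.
    // (On unified-memory hardware there's no transfer at all; on a discrete card
    // the GPU still reads these system-memory pages over the bus.)
    return device.makeBuffer(bytesNoCopy: memory,
                             length: length,
                             options: .storageModeShared,
                             deallocator: { pointer, _ in free(pointer) })
}
```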

 

Thanks,

 

Andy.


  • 1 year later...


YEAH, I'd definitely love to have my MBP running faster than a Mac Pro  :D  :rolleyes:

 

Sometimes one is just scratching one's head about what engineers don't include:

Sony software - start recording with a photo trigger, false color overlay, voice transmission over the Sony WiFi app and a LAV mic attached to the smartphone instead of a 500€ Sennheiser "cough cough" :blink:

 

 


  • 1 year later...

Here's a free image editor that uses GPU and it seems pretty fast:

https://pencilsheep.com/

 

So, somehow they figured out how to use GPU for blur, glow, distort and all the other live filters.

The number of filters is pretty impressive, it's definitely worth a try.

Although a web version is available, I would encourage trying the desktop version.

 

Edit: I noticed it is for Windows/Linux only.

Andrew
-
Win10 x64 AMD Threadripper 1950x, 64GB, 512GB M.2 PCIe NVMe SSD + 2TB, dual GTX 1080ti
Dual Monitor Dell Ultra HD 4k P2715Q 27-Inch


It's just an example of the GPU being used in an image editor.

There's been some talk lately here on the forum about how the GPU is not ideal for an image editor; Pencilsheep seems to prove image editors can benefit from the GPU too.

Andrew


  • 2 weeks later...

I ran the web version on my iPhone 7. VERY impressive, I have to say! Of course the UI sucks on a mobile device, but to be fair it's a desktop app rather than a mobile app....

 


2021 16” Macbook Pro w/ M1 Max 10c cpu /24c gpu, 32 GB RAM, 1TB SSD, Ventura 13.6

2018 11" iPad Pro w/ A12X cpu/gpu, 256 GB, iPadOS 17


On 2/16/2015 at 9:26 AM, Andy Somerfield said:

Unfortunately though, while Iris shares main memory with the CPU, the ability to allocate memory which is accessible by both the CPU and GPU is not exposed through any drivers I have seen - we are still forced to do a real copy of all the bits to and from the card - from one area of main memory to another area of main memory and back again!

 

And the question is, Andy, don't you use Metal on the iPad?

That's the main reason Apple created Metal: to avoid data copying between the CPU and the GPU...

 

And if you do use it on the iPad, why not use it on the Mac too?

 




Already using Metal.

 

 

