
Affinity 2.0 is Hideously Slow/Buggy on Windows



I hate to be a nagging complainer, but I figured this would be worth a comment.

Is Serif a small company with limited resources? Yes. Does Serif have to maintain three different platforms with the same code-base? Yes.

But does Serif also have an obligation to make their paid Windows software as optimized and well maintained as the other two? Yes.

I've got an AMD 6700XT paired with a Ryzen 3800X. You might expect this to destroy a little passively cooled Apple laptop or iPad. Unfortunately, that's just not true. With a combined Multi CPU score of 524, this desktop workstation performs worse than an M1 iPad. I'm not sour or jealous of Apple for releasing an objectively amazing product. But the fact is, the M1 does not have a superior CPU to the 3800X: the M1 scores about 7,000 in Cinebench R23 while the 3800X scores about 13,000.

Let's consider something else: my desktop-class GPU. With a combined Single GPU score of 1114, and no Multi GPU score at all, the M1 Max comes out about 16X faster than my 6700XT. That's not a "wow, Apple is so fast they beat everyone" result; that is a FAILURE to optimize for Windows in any fashion. If anything, it's so slow that it could be considered a fault in the software or a quality assurance problem.

To add insult to injury, hardware acceleration adds so much flickering and stuttering in the UI and canvas that it may as well be left off. Can you blame AMD for this? I'm not so sure. Software like Photoshop or Blender works perfectly fine, with proper acceleration.

I'm convinced that if someone at Serif actually USED the Windows version regularly on different GPU platforms, they'd realize how far off the mark they are. Seriously, it's like Windows has been abandoned.

(I've owned Affinity 1 on Windows since it first came out of beta. It may have always been this slow, but I cannot recall.) I also own an iPad and recently sold my MacBook, so I know what the software is CAPABLE of. If I remember correctly, very fast performance and optimization was one of Affinity's selling points. Now it's one of its major downfalls, unless you own an Apple product.


Lemme do some comparisons to widen the dataset:

M1 MacMini - 573/2296/678/9348/-/647/9200/-

iPadPro 1 - 224/413/105/1885/-/105/1109/-

Ryzen 5 5600H + RTX3050mobile - 350/1887/300/7524//8616/404/4629/5223

 

The Apple M-series chips may not have the fastest GPUs, but they have one very helpful feature for gfx work: ultrafast unified memory that prevents a lot of waiting and copying.

Mac mini M1 / Ryzen 5600H & RTX3050 mobile / iPad Pro 1st - all with the latest non-beta release of Affinity
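For anyone who wants to compare these slash-separated result strings programmatically, here's a small helper sketch. The benchmark's own field names aren't shown in this format, so this just splits the fields in order and treats "-" (or an empty field) as a missing/unsupported test:

```python
def parse_scores(result_string):
    """Split a slash-separated Affinity benchmark result string into
    numbers, treating '-' or empty fields as missing (None)."""
    scores = []
    for field in result_string.split("/"):
        field = field.strip()
        scores.append(None if field in ("", "-") else float(field))
    return scores

# Example with the M1 Mac Mini result posted above:
print(parse_scores("573/2296/678/9348/-/647/9200/-"))
# → [573.0, 2296.0, 678.0, 9348.0, None, 647.0, 9200.0, None]
```

This also copes with the doubled slash in the Ryzen 5600H string, which parses as an empty (missing) field.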


Wonderful commentary here, @Zen22. I agree with you wholeheartedly on your general assumptions about hardware optimization. I don't necessarily know the reasoning myself, so I haven't provided much feedback; development was very hodgepodge through V1, with multiple new programs added (particularly Publisher), so I figured optimization wouldn't be possible by then anyway. (That all changes with V2.)

Re: GPU scores on AMD cards, there is the AMD driver situation complicating benchmarks, and a substantial thread about it. Personally I agree with you; this should not matter at all, as other apps have shown they can coexist in the AMD ecosystem "just fine" as far as design goes. Obviously, their drivers are optimized more for gaming than for productivity, but as you say, performance shouldn't dip so low... anyway, they have provided their explanation here:

As for the crazy-high M1 scores, especially in the context of GPU results... unfortunately that thread is locked, but some important details are still visible via a staff member...
 

[screenshot attachment]

Anyway, feel free to be naggy. I'm like you; these are puzzling design choices. I tend to think of Affinity as Mac-centric because that is where it planted its roots first, but my hardware has always run quite well. Though I did upgrade my rig under the impression that some big updates were coming eventually and that graphics software was leaning more heavily towards being GPU-driven. AMD GPUs were immediately off the table because of the performance problems in Affinity. Especially with things like the brush engine, it tends to run quite bloated compared to other apps on my machines. Just my experience with it (YMMV). I figure with a few updates, it might become clearer where things are heading as far as polishing up the experience. I would fully expect V2 to become a more fleshed-out/optimized experience compared to the previous version if they've learned anything developing V1. Just an opinion.

Microsoft Windows 10 Home (Build 19045)
AMD Ryzen 7 5800X @ 3.8Ghz (-30 all core +200mhz PBO); Mobo: Asus X470 Prime Pro
32GB DDR4 (3600Mhz); EVGA NVIDIA GeForce GTX 3080 X3C Ultra 12GB
Monitor 1 4K @ 125% due to a bug
Monitor 2 4K @ 150%
Monitor 3 (as needed) 1080p @ 100%

WACOM Intuos4 Large; X-rite i1Display Pro; NIKON D5600 DSLR


On 12/9/2022 at 4:07 PM, Tia Lapis said:

Lemme do some comparisons to widen the dataset:

M1 MacMini - 573/2296/678/9348/-/647/9200/-

iPadPro 1 - 224/413/105/1885/-/105/1109/-

Ryzen 5 5600H + RTX3050mobile - 350/1887/300/7524//8616/404/4629/5223

 

The Apple M-series chips may not have the fastest GPUs, but they have one very helpful feature for gfx work: ultrafast unified memory that prevents a lot of waiting and copying.

Quite eye-opening seeing these benchmarks together.

To add mine: Ryzen 5950X with PBO capable of ~5GHz on the best core + Gigabyte Aorus Master RTX 3070: 476/4782/982/13688/-/1266/7206/-

The M1 has good single-core performance, so it's not surprising to see it score as well as AMD and Intel desktop processors.

I did some benchmarks using Geekbench 5, which is a great benchmark application for this as its CPU tests contain a lot of image-manipulation routines - image compression, PDF rendering, Gaussian blur, ray tracing, etc. I then compared them to uploaded benchmarks for the M1 Mac Mini.

On single-core tests, Geekbench shows the Mac Mini is 5% faster. Affinity sees a 20% deficit for the 5950X (comparing @Tia Lapis's results posted above with mine).

On multicore tests, Geekbench shows the 5950X to be 219% faster than the Mac Mini. Affinity shows a much smaller 108% difference (4782 vs 2296).

So - single core performance for Affinity on Windows looks pretty unoptimized, which also affects the multicore performance to some extent. 

OpenCL performance is a whole other ballgame. The RTX 3070 literally blows the Mac Mini out of the water on Geekbench, being 639% faster on the compute benchmark. By comparison, Affinity on Windows shows the RTX 3070 to be only 46% faster than the M1 Mac Mini in the single-GPU raster test (13688 vs 9348).

I'm assuming for this purpose that the OpenCL benchmark is somewhat comparable to the GPU benchmark being done by Affinity (not the numeric results per se, just the workload it uses). 
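As a quick sanity check, here's a sketch that recomputes the Affinity ratios from the scores posted in this thread. The field labels (single-CPU, multi-CPU, single-GPU raster) follow the reading used above and are an assumption, not the benchmark's official names:

```python
# Scores quoted in this thread: M1 Mac Mini (Tia Lapis) vs
# Ryzen 5950X + RTX 3070. Labels are assumed, not official.
m1 = {"single_cpu": 573, "multi_cpu": 2296, "single_gpu_raster": 9348}
pc = {"single_cpu": 476, "multi_cpu": 4782, "single_gpu_raster": 13688}

for key in m1:
    ratio = pc[key] / m1[key]
    print(f"{key}: PC is {ratio:.2f}x the M1 ({(ratio - 1) * 100:+.0f}%)")
# → single_cpu: PC is 0.83x the M1 (-17%)
# → multi_cpu: PC is 2.08x the M1 (+108%)
# → single_gpu_raster: PC is 1.46x the M1 (+46%)
```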

 

[benchmark screenshot attachment]

This isn't definitive data, but it sure seems to point to a significant performance delta between the Apple and Windows platforms, both on CPU and even more markedly on GPU.

 

 

 


I don't think they're nearly as fast and performant on new Macs as they could or should be. It's just the advantage of the proximity and integration of CPU<->GPU on the M1 and M2 showing up, not full usage of all cores, nor full usage of the GPU.

Take away those architectural advantages of the M1/M2 and the Mac versions aren't significantly more performant.

For the longest time I've thought/felt that Affinity was never fast, except for light vector work. The only apparent advantage of V2 is that that light-vector performance now extends to medium-complexity vector work. Start adding effects and things go even slower than before, it seems.

 


On 12/9/2022 at 5:42 PM, debraspicher said:

As for the crazy-high M1 scores, especially in the context of GPU results

So for non-Apple users, is it the Resizable BAR feature, which is probably not supported by the Affinity product line?

Sketchbook (with Affinity Suite usage) | timurariman.com | gumroad.com/myclay
Windows 11 Pro - 22H2 | Ryzen 5800X3D | RTX 3090 - 24GB | 128GB |
Main SSD with 1TB | SSD 4TB | PCIe SSD 256GB (configured as Scratch disk) |

 


17 hours ago, myclay said:

So for non-Apple users, is it the Resizable BAR feature, which is probably not supported by the Affinity product line?

I get why it could be seen that way, but they're not the same. UMA is an architecture/implementation like integrated graphics, as far as I understand it, whereas ReBAR is a PCIe spec feature that lets the CPU address more of a discrete (non-integrated) GPU's VRAM at once, which allows it to transfer more at a time. Basically, it allows an app to make the most efficient use of resources possible.

I have it enabled in the BIOS and it shows in the driver, but as far as I'm aware only some apps use it; even in games that support it, it doesn't seem to make a ton of difference in FPS, and AFAIK nothing I use regularly supports it. It doesn't hurt to have it on though *shrug*

39 minutes ago, cgidesign said:

Question to those with AMD Ryzen CPUs.

Is your RAM at the BIOS default (around 2133) or at 3200 or faster?

Ryzen CPU speed is known to scale well with RAM frequency.

My current setting is in my signature. I ran benchmarks recently after I upgraded my RAM (it was 3000 prior), and it really did not seem to influence the results beyond their usual variance, so it's hard to tell if the averages were higher. In both cases, I used the XMP profile.


4 hours ago, Napkin6534 said:

Well, I have a Ryzen 5900X + Nvidia 3080 + 64 GB of 2666MHz RAM. My score from the Photo v2 benchmark is:

459 / 3853 / 662 / 10300 / N-A / 797 / 5947 / N-A

Maybe that will help.

This is the second person with an RTX 3080 I've noticed (@debraspicher is the other) who has a single-GPU raster benchmark score 30% lower than the score I get on my RTX 3070. A curious result, this: yours is 10,300 compared to my 13,688. Clearly one would expect the opposite.

I see you have slower RAM, which might account for some of the difference, but @debraspicher has RAM running at 3,600MHz, the same as mine, so it doesn't explain the delta in that case.


21 minutes ago, Napkin6534 said:

Aorus Pro, B550 chipset.

I have an x570. @debraspicher has an x470.

The B550 has an x16 PCIe 4.0 link for the GPU and PCIe 4.0 lanes for storage, but does not have any general-purpose PCIe 4.0 lanes or a PCIe 4.0 chipset uplink - those are PCIe 3.0 - compared to the x570, which has PCIe 4.0 general-purpose lanes and chipset uplink.

This would clearly make a difference, but as much as 30%?!!


6 hours ago, Napkin6534 said:

Well, I have a Ryzen 5900X + Nvidia 3080 + 64 GB of 2666MHz RAM. My score from the Photo v2 benchmark is:

459 / 3853 / 662 / 10300 / N-A / 797 / 5947 / N-A

Maybe that will help.

Turns out I already had the latest BIOS version. But I made sure Windows was updated, installed the latest Studio drivers, restarted Windows, and ran the test on a fresh system. The results are more or less the same.

458 / 4042 / 654 / 10877 / N-A / 841 / 6088 / N-A


Hi,

 

I can jump on the RTX 3080 / 5950X bandwagon here. V2 is atrociously slow compared to V1; I can't even zoom in on a 2K image without the software lagging.

I can already tell you what the problem is, but you won't like it.

It's 4 little letters - MSIX

Microsoft Store apps, by design, perform subpar on the Windows desktop - it's crazy, I know, right?

I am looking for an x86/x64 MSI installer for V2 at this point; that would immediately solve half the problem in terms of performance.

The other half, I think, is down to really poor threading optimization: I can see the process hogging 1-2 cores at most when I have 32 available.

 


When I tested on V1 (a different benchmark, but likely still applicable here) using various processors, the CPU+GPU pairing heavily influenced the GPU results. So imo, that is a major factor behind the disparities in GPU scores across different CPU combos. While the score is technically GPU-only, it's more like mostly-GPU-ish, influenced by the CPU as well.

The V1 beta thread is hidden now, but thankfully I can still access these screens from attachments...

Everything is the same in the below charts except the CPU. All the tests were on the 3000MHz RAM iirc.

Different CPU swaps with the same GPU on V1:

[benchmark chart attachment]

Undervolting my current processor bench changes:

[benchmark chart attachment]

1 hour ago, rvst said:

I have an x570. @debraspicherhas x470.

The B550 has an x16 PCIe 4.0 link for the GPU and PCIe 4.0 lanes for storage, but does not have any general-purpose PCIe 4.0 lanes or a PCIe 4.0 chipset uplink - those are PCIe 3.0 - compared to the x570, which has PCIe 4.0 general-purpose lanes and chipset uplink.

This would clearly make a difference, but as much as 30%?!!

As far as I understood it, 3000-series cards don't fully saturate PCIe 3.0? There were plenty of videos comparing the cards' performance on Gen 3 vs Gen 4 platforms, and the difference was minuscule.
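To put the Gen 3 vs Gen 4 point in numbers, here's a back-of-envelope sketch of the theoretical per-direction bandwidth of an x16 link (8 GT/s per lane on PCIe 3.0, 16 GT/s on 4.0, both with 128b/130b encoding; packet and protocol overhead are ignored, so real-world throughput is lower):

```python
# Rough theoretical bandwidth of a PCIe x16 link per direction.
# Assumes 128b/130b encoding (PCIe 3.0+); overhead is ignored.
def x16_bandwidth_gb_s(gigatransfers_per_s):
    per_lane = gigatransfers_per_s * (128 / 130) / 8  # GB/s per lane
    return 16 * per_lane

print(f"PCIe 3.0 x16: {x16_bandwidth_gb_s(8):.1f} GB/s")   # → 15.8 GB/s
print(f"PCIe 4.0 x16: {x16_bandwidth_gb_s(16):.1f} GB/s")  # → 31.5 GB/s
```

Gen 4 doubles the ceiling, but that only matters if the workload actually pushes transfers anywhere near ~16 GB/s in the first place.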


My husband's specs and his 3070 results (again, V1):

Mobo: ASUS ROG Strix B450-F Gaming
CPU: Ryzen 5 5600X
RAM: 16GB 3200 (DOCP Enabled)
GPU: Gigabyte Nvidia 3070 8GB OC Edition

[benchmark screenshot attachment]

 


I have Resizable BAR enabled. I doubt this test can hit the teraflop limit on my GPU; the test didn't even ramp my card's fans to 30%. It's a joke test imo, but since everyone likes to post results, here are mine. My card is thermally modded and stable when OC'd over 2000MHz core and +1000MHz on memory. I think the issue is also related to HDR combined with the MSIX package; that's a double danger zone, since performance dips really hard in MS Store HDR apps. Ask me how I know - painful months of gaming on MS Store apps - never doing that again, for sure.

 

PS: these results are on factory settings, no OC anything.

 

[attachment: benchmark affinity.png]


1 minute ago, Tia Lapis said:

I am pretty sure we see the impact of the copying to and from GPU memory.

I don't know what to say. Perhaps you are right. My guess is my results are better than the other RTX 3080 / 5900X/5950X results here partly because my RAM is 3600 CL14, but some other people have 3600 too, and if the timings alone account for a 2x difference in some scores, that's pretty nuts, isn't it?

