
Nomad Raccoon


  1. Yes, but everything else is still true. It's only marginally faster from the end-user perspective, and around 15-20% faster in synthetic benchmarks. It still hangs when it loads anything, hangs when the app starts up, freezes from time to time and sometimes crashes as described initially. Mind you, most of these were not problems in V1, or were so to a much smaller extent (hangs especially).
  2. Hi, I specifically said there are no crash reports generated; I am aware of the folders. The last one is from March in my case, not related to this IMO. If I disable hardware acceleration I might as well quit altogether. This software uses less than 10% of my 4090 as it is, and I don't want to work with high-megapixel images exclusively on the CPU. It's very slow as it is. See my other posts in the benchmark threads, bug reports on the slow MSIX version, etc. Only 1, at most 2 (very rarely), of the 32 available CPU threads are used. I consider this a bug in itself; you can't call this a feature. I'd like the software to use my hardware, not crash and fall back on CPU emulation. How about CUDA support in 2023? I won't ask for OptiX, that's too much. All the best, Your neighbourly Raccoon
  3. Any developer insight on this issue? Has anyone acknowledged the big performance disparity? And perhaps some insight into why only 1-2 CPU core threads are being used by the app?
  4. Two crashes like this in the past 24 hours, with no crash dump in Affinity to upload. Windows 11, RTX 4090, latest drivers, clean install; I doubt it's linked to that, the app just folds into itself like a small black hole. GPU errors leave traces 9 times out of 10.
  5. I honestly uninstalled the sandboxed version as soon as I could, but I ran tests in the unsandboxed one, and it looks fine. I am willing to guess that I will still see the same variance as Debra above, as that was the case with older benchmark versions as well. One thing I want to note: the unsandboxed version does not have "hiccups" in use. The random freezes are basically only present in the sandboxed app, and that's the main reason I was on this thread to begin with. That, and the really poor performance zooming in and out of regular images; this last part was fixed after fiddling with my own computer, but the freezes remained until I switched to the unsandboxed version. I think the sandboxed version is just unable to use system resources in the same way as the other one; there is no other explanation for what we are seeing.
  6. Yes, and both should be using OpenCL, meaning the OpenCL implementation in Affinity is really shoddy. Can we get a mod/dev to explain this?
  7. Hey @rvst My 3080 scored 201513, which safely ranks in the 3080 Ti range of the Geekbench OpenCL benchmark. Probably because I thermally modded it myself and it's nicely overclocked, but to be honest Geekbench does not really stress it either (max TDP 258 W, barely 70% load). I might have some time over the weekend to properly test the CPU part of Geekbench as well, but that will require at least a Curve Optimizer run.
  8. Back when CUDA was working on macOS, it was doing pretty well, I don't know what to say: https://community.adobe.com/t5/premiere-pro-discussions/shock-result-opencl-vs-cuda-vs-cpu/td-p/7354875 This seems like one of those arguments about hardware, not about the performance of said hardware. The issue at hand is that we have plenty of hardware to work with, and no software to put it to work.
  9. Also, what happened to CUDA, guys? Is it verboten? Why isn't CUDA a thing in Affinity? And don't tell me it's a complete rewrite of the app, because it's not. PS: regarding the cores, I have posted a graph showing the benchmark using only 50% of CPU power and 60% of my GPU. And that's A BENCHMARK; usual app usage is 5-10%.
  10. Second this: for a rework of the app it seems just as bad, if not worse, in performance on Windows-based systems, AMD especially. My 3080 is barely used, my CPU is barely used. And the savants on the thread are discussing UMA on Apple when Infinity Fabric is literally staring them in the face, along with PCIe 4.0 x16. It doesn't take a Geekbench run to realize this app is currently tuned to perform only on Macs/iOS. Will Serif do anything about this or not?
  11. In fact, I have an X570 Aorus Master motherboard; it's all PCIe 4.0, even the SSDs. I didn't OC my CPU that hard; I think it's on default PBO at the moment, and then again the 5950X doesn't OC that much, to be honest. I will check whether going to around 5 GHz single-core changes things, but it's still a silly test if you look at the CPU/GPU usage during it. The variance in results also supports this.
  12. Mine's pretty damn clean and running pretty high frequencies on everything; it would ace any other benchmark you can throw at it. There is a lot of variance in the results, first of all: I can repeat the test and get pretty wild swings from one run to another, even though they mention we should average them. But that's not my problem here; it's the fact that the software runs SLOW, and GPU-Z confirms that the GPU is not being used, while Process Explorer tells me that both GPU and CPU are mostly idle. The CPU barely reaches a 70% peak during the benchmark, and the test is over before it even starts as far as I can see. Same for the GPU. Maybe macOS handles this faster, but besides that I see absolutely no reason why these tests would mean anything at this point.
  13. Well, that's easy to verify: just find someone with a different GPU and run an older benchmark. But I doubt ONLY that one test would have its score halved; it would impact the Combined GPU scores too, wouldn't it?
  14. Yeah, I've restarted and added my normal settings, and the results remain similar: Multi CPU is lower unsandboxed, but every other result is higher, up to 20% higher in the case of GPU raster. I see the same trend in the Vector Multi CPU results for @debraspicher . What I don't understand is how someone with a 3070 reaches a higher score on this benchmark version than we do with our 3080s. And more interestingly, how did someone reach a score of 22372 with a 3080 in benchmark version 11021 when we can only reach 10-12k in version 20000?
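The "1-2 threads of 32" and "usual app usage is 5-10%" observations in the posts above are consistent with each other, because Task Manager's overall CPU% is normalized across all logical cores: a low total percentage translates to only a core or two of actual work. A minimal sketch of that arithmetic (the helper name is mine, not from any post):

```python
def cores_in_use(total_cpu_percent: float, logical_cores: int) -> float:
    """Convert a normalized overall CPU% reading into 'cores' worth of work'.

    Task Manager reports CPU usage averaged over all logical cores, so 5%
    on a 32-thread 5950X is roughly 1.6 cores busy, matching the claim
    that only 1-2 threads ever do anything.
    """
    return total_cpu_percent / 100.0 * logical_cores


# 5-10% reported usage on a 32-thread CPU:
low = cores_in_use(5.0, 32)    # ~1.6 cores
high = cores_in_use(10.0, 32)  # ~3.2 cores
```

This is only a back-of-the-envelope conversion; per-thread views in Process Explorer remain the more direct way to see which threads are actually busy.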
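On the run-to-run variance complained about above: since the benchmark's own guidance is to average repeated runs, reporting a mean together with the spread makes single-run comparisons less misleading. A small stdlib sketch (the function name and any scores you feed it are placeholders, not real results from the thread):

```python
import statistics


def summarize_runs(scores: list[float]) -> tuple[float, float]:
    """Return (mean, coefficient of variation) for repeated benchmark runs.

    The coefficient of variation (sample stdev / mean) is unitless, so it
    lets you compare how noisy the benchmark is across machines whose
    absolute scores differ. A high CV means a single run proves little.
    """
    mean = statistics.mean(scores)
    cv = statistics.stdev(scores) / mean
    return mean, cv
```

With a CV of, say, 0.15, two machines whose averages differ by 10% cannot be meaningfully ranked from one run each, which is the core of the variance objection in posts 11 and 12.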