
Qu4ntumSpin

Members
  • Content count

    36
  • Joined

  • Last visited

About Qu4ntumSpin

  • Rank
    Member

Recent Profile Visitors

189 profile views
  1. Qu4ntumSpin

    -

    Yes, moaaaarrrr performance! :D However, let's not forget the performance/feature trade-off; we can't always have both, because both depend on human resources :ph34r: Skilled ninjas!
  2. Qu4ntumSpin

    -

    I edited an older post related to the GPU, before someone breaks the internet over it :D Yes, absolutely. But then that changes the concept of "efficiency", no? Especially when you are talking about a heavily multi-threaded application :D Anyway, it was just a remark :) you're free to do what you want haha. Remind me to test again when 1.6 is out; I'll gladly do it. Best.
  3. Qu4ntumSpin

    -

    Thanks @harrym @MBd. I just went through the sheet you made with the results and the post where you explain your intention for the "efficiency" column. I do understand what you wanted, but I don't see why you count cores instead of threads. Not all CPUs have SMT or Hyper-Threading. Example: the FX8350 effectively has 8 "native" cores, not threads, while my CPU has 8 cores but 16 threads. As extra information, a thread is not as fast as a full-blown core, because it shares CPU cache with its "native" core; it's difficult for me to express this without going into the details of each architecture (see the sketch below for how to query both counts). And ultra-low-power cores are something of another level, where those GHz aren't that important. << But this you can forget for your sheet.
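    To make the cores-vs-threads distinction concrete, here is a minimal Windows-only sketch (my own illustrative example, not from the thread): the documented GetLogicalProcessorInformation API reports one RelationProcessorCore entry per physical core, with one mask bit per logical thread on that core.

    ```cpp
    #include <windows.h>
    #include <iostream>
    #include <vector>

    int main() {
        // First call only asks for the required buffer size.
        DWORD len = 0;
        GetLogicalProcessorInformation(nullptr, &len);
        std::vector<SYSTEM_LOGICAL_PROCESSOR_INFORMATION> info(
            len / sizeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
        GetLogicalProcessorInformation(info.data(), &len);

        int cores = 0, threads = 0;
        for (const auto& entry : info) {
            if (entry.Relationship == RelationProcessorCore) {
                ++cores;
                // One set bit per logical processor (thread) on this core.
                for (ULONG_PTR mask = entry.ProcessorMask; mask; mask >>= 1)
                    threads += static_cast<int>(mask & 1);
            }
        }
        // On a Ryzen 7 1700 this prints 8 cores / 16 threads;
        // on a CPU without SMT the two numbers are equal.
        std::cout << cores << " cores / " << threads << " threads\n";
    }
    ```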
  4. Qu4ntumSpin

    -

    I am quite generous... lol. More seriously, if this can in any way help build a better product, I'm all for it. I like all software as long as:
    - it works
    - it makes sense
    - it doesn't try to screw users.
    I think the Serif Affinity products match those rules. So the better they are, the quicker my workflow is, which, if you are a money maniac, means more money for you hahaha. What I meant with my comment is that building a computer takes a little bit of time (not much, but still), and if you need to overclock to get the most out of it, that also takes time. The sad part is that you can't do anything else while you do it; it is a very time-consuming task! So basically, while I'm writing here, I allow myself to distract my mind from my work, but I can go right back to it when I want. That is not the case when you OC: until you are stable, you are stuck trying all the settings again and again. PS: I love the idea of the Affinity benchmark, by the way. I would love to see them actually release something officially, directly in their tool. It might also be very valuable data for them.
  5. Qu4ntumSpin

    -

    All DDR4 is 2133; everything above that is "overclocking" via XMP profiles etc., so technically they are saying the right thing on their site. Now, not all motherboards are able to use every XMP/A-XMP profile; that's why motherboards differ in price and chipset. But yeah, 4x16GB at 3200 on a very, very young platform (things will get better as new UEFI versions come out and AMD releases more code) is extremely impressive. Now, I wonder about his system's stability... a one-shot run is not stability. This is why I don't overclock my CPU to 4GHz; I just don't have the time to do that and test all the minuscule settings. I paid less for a 1700 instead of an X part and overclocked a bit, so I get more for my money with very little time invested. If you can earn more than $100/hour, then saving $200 by spending more than 2-3 hours on this is pointless: at $100/hour, 3 hours of tuning already costs you more than the $200 you saved. Just purchase something that works for your needs right away :) This is how I do it. ^^ time = money.
  6. Qu4ntumSpin

    -

    All of them :)
  7. Qu4ntumSpin

    -

    You're absolutely correct. But it also loves very low-latency RAM, so you can live with slower speeds and a low CAS latency (see the latency sketch below). Yes, my RAM is rated 3200, but it doesn't use Samsung B-die chips; it's Hynix memory instead, and dual rank and all that, so there is no way to boot at 3200 even on the very newest UEFI BIOS. You can find all the info about RAM on the CH6 X370 board here: https://rog.asus.com/forum/forumdisplay.php?292-Crosshair-VI-Motherboards-(X370) and here: https://docs.google.com/spreadsheets/d/1YSZB70P71Kd4iAyxSpAZf0lc2GmyALKJOQ7vA1MhV2s/pub?output=html&widget=true
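    To show why a lower speed with a low CAS can compete, here is a small sketch using the standard first-word latency formula (the kit numbers are illustrative, not mine):

    ```cpp
    #include <cstdio>

    // First-word latency in nanoseconds. DDR transfers twice per clock,
    // so the real clock is half the data rate: latency = CL * 2000 / MT/s.
    double firstWordLatencyNs(double cl, double dataRateMTs) {
        return cl * 2000.0 / dataRateMTs;
    }

    int main() {
        // Illustrative kits: the "slower" low-CAS kit is nearly as quick.
        std::printf("DDR4-3200 CL16: %.2f ns\n", firstWordLatencyNs(16, 3200)); // 10.00
        std::printf("DDR4-2666 CL14: %.2f ns\n", firstWordLatencyNs(14, 2666)); // 10.50
    }
    ```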
  8. Qu4ntumSpin

    -

    Well, I kinda disagree here. Most graphical algorithms can be done on the GPU; in fact, GPUs are made for that explicitly. The problem is that there are many ways to code for a GPU, and sadly AMD/NVIDIA didn't come up with a standard that "works" for all, which means a ton of work to support both. Plus, NVIDIA is king on Windows, and AMD is Apple's choice for most of their devices; Apple sometimes uses NVIDIA cards, but generally only very low-end mobile ones :/
    CUDA -> NVIDIA
    OpenCL -> both
    The problem is that even though CUDA is technically "open", AMD chose to go with OpenCL, and they haven't really put effort into it and still only support the old OpenCL 1.1 standard... However, AMD might have come up with a solution, so if the Affinity developers are reading this: https://github.com/GPUOpen-ProfessionalCompute-Tools/HIP converts your CUDA code to portable C++ code that runs on both AMD and NVIDIA! Quite nice (a minimal sketch of what HIP code looks like is below). If this turns out to be a success (most of which depends on political issues, not engineering problems), more greatness is to come, just like the Vulkan API for computer graphics. But again, I doubt it; I just hope Affinity (Serif) is able to generate a budget good enough to target both main platforms. NOTE: I know the issues related to exchanging data between RAM and VRAM. I have done my own GPU optimizations, and I also know that it scales better when the data is big, and that the time spent on the round trip to the GPU beats the CPU doing the work. Now, if your representation of the data in RAM is terrible, nothing is going to help that. PCI-E gen3 x16 bandwidth is insane, new graphics cards are insanely fast, and yes, some work like filters might be better done on the GPU, just like most of the filters applied in this macro. Even 3 minutes to process this is very long! Andy Somerfield is perfectly right when he says that mobile architectures share memory, and with the APIs available that saves a lot of work and can result in awesome GPU usage compared to desktop. However, I wouldn't simply trash the GPU work because of that.
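    To give an idea of what HIP code looks like, here is a minimal vector-add sketch (my own illustrative example, not taken from the HIP repo): the kernel uses the same __global__ syntax as CUDA, and hipcc compiles it for either an AMD or an NVIDIA backend.

    ```cpp
    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    // Same kernel syntax CUDA uses; HIP keeps it portable across vendors.
    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);
        std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

        float *da, *db, *dc;
        hipMalloc((void**)&da, bytes);
        hipMalloc((void**)&db, bytes);
        hipMalloc((void**)&dc, bytes);
        hipMemcpy(da, ha.data(), bytes, hipMemcpyHostToDevice);
        hipMemcpy(db, hb.data(), bytes, hipMemcpyHostToDevice);

        // 256 threads per block, enough blocks to cover all n elements.
        hipLaunchKernelGGL(vecAdd, dim3((n + 255) / 256), dim3(256), 0, 0,
                           da, db, dc, n);
        hipMemcpy(hc.data(), dc, bytes, hipMemcpyDeviceToHost);

        std::printf("hc[0] = %f\n", hc[0]);  // expect 3.0
        hipFree(da); hipFree(db); hipFree(dc);
    }
    ```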
  9. Qu4ntumSpin

    -

    @MBd, check my post above; I edited it quite a bit, added screenshots, and even measured the GPU usage for you. None of the filters you apply in this macro test use the GPU. EDIT: My 2 cents on Ryzen 7: it feels exactly like Intel's X99 platform but a hell of a lot cheaper!!! ("affordable", one might say). If you do any editing, video, photo, etc., just get one of those.
  10. Qu4ntumSpin

    -

    @MBd Thanks. First, the results:
    SBOOK PERF - Surface Book Performance, 2TB version
    - GPU: GTX 965M
    - 16GB of RAM
    - Time: '00:19:51:05' - ouch... this is quite slow ^^
    - Plugged into the wall (no power saver active)
    Ryzen with "WARP" instead of the GTX 1070
    - Same as before, just WARP selected
    - Time: '00:03:31:31' - I guess the difference is mainly due to me having other software running, like a gazillion Chrome tabs, etc.
    Let me answer your questions then:
    1) All 16 threads are used at 100% (well, 99%...)
    2) Yes, my CPU is overclocked, but it is far from an extreme overclock; I'll explain Ryzen below
    3) The Ryzen 7 series aren't APUs (CPU+GPU in one chip) but just CPUs
    4) Concerning the dedicated GPU, I can use the "WARP" renderer, but this doesn't guarantee everything will run on the CPU. See https://msdn.microsoft.com/en-us/library/windows/desktop/gg615082(v=vs.85).aspx In short, it should run on the CPU whatever can't be done on the machine's GPU, but nowhere did I find that it enforces rendering on the CPU. Raster rendering is generally done on the CPU; it's a technique that transforms 3D content into a flat image of pixels (rasterizing), but such behavior can be implemented in many ways. I have no clue how Microsoft's WARP works internally... (a sketch of requesting a WARP device is below).
    Ryzen 7 series explained:
    - All Ryzen 7 CPU SKUs are the same: 1700/1700X/1800X
    - They all have 8 cores and 16 threads
    - What changes is the "stock" speed at which they run, and therefore the power they consume.
    Why is that? Not all silicon is made equal; believe it or not, if you make 10 of the "same" CPU parts, they will behave very differently from one another. This is why AMD/Intel/NVIDIA/you_name_it do chip binning: they test every CPU/GPU/chipset against several criteria and sell it accordingly. In short, when you buy an 1800X you pay more because they already tested your CPU at a higher frequency and set it to run at that frequency no matter what, without you doing anything. Because of that, you can easily extrapolate the "stock" performance across the whole Ryzen 7 series. http://www.guru3d.com/articles-pages/amd-ryzen-7-1700-review,10.html << Check the graphs. Look at the "multi-threaded" runs; they use all the cores like Affinity Photo does. You will see the % difference between each Ryzen 7 part, and it will be the same % for your macros in AP. Ryzen CPUs, and the way they are manufactured (LPP), force them to run at "lower" frequencies; these CPUs are not meant to go above 4.1/4.2 GHz unless you go all nuts with liquid nitrogen, but that is something entirely different!
    NOTE on how I did things: I executed your tests with the settings on "Best Quality" and "Dither gradients" disabled, which might also affect the speed at which your "macro/bench" executes. I started the timer as soon as the process started and stopped it only when the image was rendered and properly displayed in AP.
    NOTE 2 on this macro test: none of the filters you apply use the GPU. Look at the GPU usage in the monitors: 0% usage.
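    For reference, here is a minimal sketch (my own, based on the documented D3D11 API) of how an application explicitly requests the WARP software rasterizer instead of the vendor GPU driver:

    ```cpp
    #include <d3d11.h>
    #include <cstdio>
    #pragma comment(lib, "d3d11.lib")

    int main() {
        ID3D11Device* device = nullptr;
        ID3D11DeviceContext* context = nullptr;
        D3D_FEATURE_LEVEL level;

        // D3D_DRIVER_TYPE_WARP asks for Microsoft's software rasterizer.
        HRESULT hr = D3D11CreateDevice(
            nullptr,                  // default adapter
            D3D_DRIVER_TYPE_WARP,     // software (WARP) rasterizer
            nullptr, 0,               // no software module, no flags
            nullptr, 0,               // default feature levels
            D3D11_SDK_VERSION,
            &device, &level, &context);

        if (SUCCEEDED(hr)) {
            std::printf("WARP device created, feature level 0x%x\n", level);
            context->Release();
            device->Release();
        }
    }
    ```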
  11. Qu4ntumSpin

    -

    Here are my results:
    - Time: '00:03:42:60'
    - OS: Windows 10
    - CPU: Ryzen 7 (1700 @ 3.8GHz), Kraken X62 AIO (silence! thank you...)
    - RAM: 32GB DDR4 @ 2666
    - MAIN DISK: system on a 500GB NVMe SSD (960 EVO)
    - Lots of hard drives... not worth mentioning (I wish I could live without them, super noisy!)
    - GPU: MSI Gaming X GTX 1070 @ 1.9GHz (totally silent)
    - PSU: Corsair RMx 750W (not using half of that, but most 80+ Gold PSUs remain totally silent at 50% load)
    AP runs great, tons of features. I have tested Capture One Pro; it just handles my uncompressed RAW files wayyyyyyyyyy better. It is really another level of precision and speed, but it is not a replacement for AD's features, more a Lightroom replacement. This is how I would score the Serif products across all my machines (X/5 stars):
    AP: 4.5 stars, definitely. To do: improve the RAW processing speed and quality, and create a DAM to go with it.
    AD: 3 stars (crashes too often), lots of small quirks here and there.
    Will post results for the Surface Book Performance later. Best
  12. Just a note related to "fail to start with 0xC0000005":
    - RivaTuner (which mostly comes with MSI Afterburner) must be turned off if you are using it
    - From what I have seen, if your "renderer" fails (meaning the GPU driver restarts, etc.), the application will crash like that.
    A little bit more on the GPU note, @Sean P:
    - Installing the drivers through GeForce Experience will reload the driver and make any Affinity product crash, because they don't handle that case. << This is a bug and should definitely be fixed; most applications handle this (a sketch of the usual recovery pattern is below).
    - If a bug happens on the GPU, it will also restart the driver; this is valid on both the AMD and NVIDIA side. (It's mostly the reason systems don't fail nowadays: not because they are bug-free, but because they handle crashes better.)
    - If you overclocked your card, CPU, or PCI-E, it's possible that your system is a bit unstable for some use cases like AD or AP.
    PS: I'm running 1703, and all the previous Insider builds for that matter, and they all work fine. Crashes are either due to AD/AP having a weird particular issue (that happens) or some conflict with your system.
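    By "handle this" I mean something like the following sketch (illustrative only; presentAndCheckDeviceLoss is a hypothetical helper name, but the DXGI error codes and GetDeviceRemovedReason are the documented API):

    ```cpp
    #include <d3d11.h>
    #include <dxgi.h>

    // Returns true when the caller must rebuild the D3D device because the
    // driver was reset underneath the application (TDR timeout, a driver
    // reinstall via GeForce Experience, a GPU hang, ...).
    bool presentAndCheckDeviceLoss(IDXGISwapChain* swapChain, ID3D11Device* device) {
        HRESULT hr = swapChain->Present(1, 0);  // vsync'd present
        if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET) {
            // GetDeviceRemovedReason() says why (e.g. DXGI_ERROR_DEVICE_HUNG).
            HRESULT reason = device->GetDeviceRemovedReason();
            (void)reason;  // a real application would log this
            return true;   // caller recreates the device and all GPU resources
        }
        return false;
    }
    ```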
  13. Qu4ntumSpin

    Error when remove shadow color

    If you can edit your main post (with the full editor) and upload the Affinity Designer file for the QA team, they will hopefully be able to reproduce your bug and report it.
  14. Great that you could partially reproduce it. Concerning the crash, it is error 0xC0000005 (which seems to be every error your devs decide not to handle ^_^), so it doesn't give much info. Sadly, I can't share the exact file that leads to the crash with you... However, I made another post, including a test of the new beta version, that leads to the same 0xC0000005 crash: https://forum.affinity.serif.com/index.php?/topic/37644-unhandled-exception-has-occured-code-0xc0000005/ They might both be linked, because the "crashing artboards" file I can't share with you uses elements from the package I shared above. That package was made by me; all the content was created inside Affinity Designer, except the icon curves, which were retrieved from multiple SVG files. Best regards
  15. Qu4ntumSpin

    Lots of bugs when working with constraints

    What I figured out is that constraints tend to follow their container. So if you set a constraint before grouping, the same "movement" constraint will be applied when you group, resulting in your objects being moved all across the screen, at least in my version (1.5.2.58). What I ended up doing is:
    - Group
    - Reapply the constraints as I like on the group itself and all the sub-elements
    - Create a symbol
    Then use the symbols instead.