Qu4ntumSpin

Members
  • Content count

    40
  • Joined

  • Last visited

About Qu4ntumSpin

  • Rank
    Member

  1. Hi everyone, just to share some information related to the questions above, because it doesn't only apply to Serif products but to basically anything you want to run.

    Concerning VM performance, you must understand that there are multiple types of VMs. If you plan to just start VirtualBox and that's it, then you can expect very poor graphical performance for any application. To get near-native performance (yes, you can absolutely play AAA games, run your CAD app, etc. this way without much performance loss), I am using QEMU with a dedicated video card (yes, I have two cards in my system). The process to set up VFIO + KVM is not complicated; it is built into Linux, and there are enough video tutorials out there. If you know how to modify a file and install programs on Linux, and you have the hardware, this will likely take you about an hour the first time, then 5 to 10 minutes the following times.

    What you need to know:
    - VFIO allows you to pass your GPU through to your VM (all of this depends on your machine's IOMMU support).
    - KVM gives you near-native performance inside your VM, provided you match architectures (64-bit Linux with 64-bit Windows).
    - If you don't have a monitor to spare, you can use SPICE to see what's going on in your VM, but for gaming performance you will want Looking Glass (it literally copies the framebuffer from one GPU to the other, and it now allows full copy/paste between your VM and Linux).
    - You are running two systems, so 8 GB of RAM is a no-go (16 GB at least for a great experience, 32 GB if you want to go all in).
    - At least 4 cores, but preferably 6 or 8, so you can split them 4/4 between the systems. Yes, 4 cores and 8 threads each will let you play your Apex / Fortnite at high settings, or whatever else is trending nowadays. And if you can game on it, you can compute and work on it.
    - More complicated, but you can also do this with macOS (though it is against their TOS, so it's up to you).

    How to set up QEMU + KVM (a very old video, 3 years ago!): https://www.youtube.com/watch?v=16dbAUrtMX4
    Learn more about QEMU: https://www.qemu.org
    Learn more about KVM: https://www.redhat.com/en/topics/virtualization/what-is-KVM
    From the creator of Looking Glass, Nov 2018 (the demo starts around 4:30): https://www.youtube.com/watch?v=a322V4yo3nY

    Yes you can; it's complicated, and it depends on the driver most of the time. I do not recommend it for the majority of users, and you lose the ability to go back and forth between your VM and your Linux machine without interrupting your Linux work, which is exactly why this QEMU + VFIO setup is nice. People often forget that you might already have two GPUs: the one in your Intel/AMD processor, plus your dedicated graphics card. Here, for example, this guy runs his display from the motherboard output (the Intel GPU) and shares his main dedicated GPU with the VM: https://www.youtube.com/watch?v=ssfvpLXK8po

    If you are really going into this stuff: https://www.reddit.com/r/VFIO/
    Gaming on Linux (native & VM) from Linus Tech Tips and Wendell: https://www.youtube.com/watch?v=SsgI1mkx6iw << If you watch only one, look at this one.
    5 minutes of VFIO/IOMMU on Arch: https://www.youtube.com/watch?v=z_2dtnU4Awk

    There are so many resources out there that I can't share everything. But now you have all you need to run all the Serif applications and more, without leaving the comfort of your Linux desktop.

    EDIT: I didn't mention it, but you can of course pass through any device you want. I am passing through a Wacom tablet and a keyboard, but that's entirely optional. You can even pass through a sound card (a USB headset, for example), etc.

    EDIT 2: If you have a pro card from either AMD or Nvidia, it is likely that you have SR-IOV: https://www.youtube.com/watch?v=11Fs0NHgzIY Pro cards allow you to share a single card across multiple VMs at the same time.
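A quick sanity check before attempting passthrough is to inspect your IOMMU groups under `/sys/kernel/iommu_groups`: the GPU (and its HDMI audio function) should not share a group with devices you need on the host. Here is a minimal Python sketch of that check; the sysfs layout is the standard kernel one, but the helper name is my own.

```python
from pathlib import Path

def iommu_groups(base="/sys/kernel/iommu_groups"):
    """Map each IOMMU group number to the PCI addresses it contains.

    VFIO can only pass through whole groups, so a GPU that shares its
    group with, say, a SATA controller is a problem.
    """
    groups = {}
    for group in sorted(Path(base).glob("*"), key=lambda p: int(p.name)):
        devices = sorted(d.name for d in (group / "devices").iterdir())
        groups[int(group.name)] = devices
    return groups

# On a real system, call iommu_groups() and look for the group holding
# your dedicated card (e.g. 0000:01:00.0 and its audio 0000:01:00.1).
```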
  2. This starts to deviate a little, but... what!? Have you ever heard of a business plan, or did you just misunderstand my point? Of course you first build an MVP, then you go ask for money. I know of no startup that got money based on an "idea" alone.
  3. Hello @Mark Ingram, thank you for your post and the information. I've recently published:

    And I have a request for you and your team that implies a very small dev effort. Would it be possible to add a simple toggle to our accounts (Serif accounts, not forum) to indicate our target OS? For example, I would love to have my purchases count towards the Linux platform, not Windows. Since you already implicitly do that for Windows / macOS, it would be nice to have an explicit button to indicate that a Windows purchase is used by a mainly-Linux user. I know how complicated it can be to get real, meaningful stats, and with such a topic you might even end up with bots or false positives: people who would never actually buy your product. Have a little chat with your marketing & business team, but I think that metric could actually help. And even if you don't do anything with it, I would actually feel pretty great about it.

    I got this idea from Steam itself. When a user on Linux purchases a Windows game, it might or might not work on Linux, but that doesn't matter: Valve counts it as a Linux purchase, and seriously, that is awesome, because that's a proper metric that reflects reality.

    With that, have an amazing day, and keep up the good work! I can't wait to try Publisher. Don't get me wrong, I love Scribus, but I was used to InDesign, and your tools are simply AWESOME. This is why I vote with my wallet and keep purchasing your products.

    Note 1: I am absolutely against the crowdfunding idea; it is not the way to go for a closed-source project. If you want to go that route, let us vote with our wallets and purchase a Linux version with no promise of delivery, right on your page next to the other versions.

    Note 2: To the people speaking about Akira... that project is a splendid demo of how NOT to build a modern design tool, especially in the Linux ecosystem. It is not cross-platform, uses a specific tech that very few devs work with (Vala), and is not desktop-agnostic. Just that should be a showstopper for anyone with a little love for all the diversity that exists in the Linux ecosystem. On top of that: no proper feature set, no roadmap, no real activity in the code base (all branches included). https://github.com/akiraux/Akira TL;DR: doing it right is not easy, and asking for money is not enough.

    Want a good example of how to do it right? Krita.org and their MANY successful crowdfunding campaigns:
    - 2014: 20k euros (accelerate development) https://www.kickstarter.com/projects/krita/krita-open-source-digital-painting-accelerate-deve
    - 2015: 30.5k euros (let's make it faster) https://www.kickstarter.com/projects/krita/krita-free-paint-app-lets-make-it-faster-than-phot
    - 2016: 38.5k euros (new text and vector tools, SVG2 support, etc.) https://www.kickstarter.com/projects/krita/krita-2016-lets-make-text-and-vector-art-awesome
    - 2018: 27k euros (squash bugs) https://krita.org/en/fundraising-2018-campaign/
    That's 116k euros over 4 years of crowdfunding (and I most certainly forget some others they did), and it doesn't include direct donations or Steam purchases. That's how you ask for money: you first build something great, then you lay down a great plan for growth, and then you ask for money.
  4. I just pre-ordered Affinity Publisher (I'm using Scribus atm). I am a proud owner of all your products. Right now I use a VM with VFIO (I share a GPU, mouse, keyboard, and Wacom tablet with a Windows VM) to use those products. It's far from ideal, but I am willing to support Serif, because you are actually building amazing products that are very nice to use! This post is simply to share my support for something happening someday on the Linux front, whatever it is. I hope all the posts and repeated questions don't get too much on your nerves; I'm a software engineer too, and I know how hard "support" can be to do properly.

    Concerning your choice of technologies, some are questionable, but I'm sure your CTO knows what he is doing. You are invested in .NET; try having a look at .NET Core 3, which also supports WPF and other things that can be installed across platforms and architectures. https://github.com/dotnet/wpf https://github.com/dotnet/core/blob/master/release-notes/3.0/preview/3.0.0-preview5.md

    @Redsandro Thanks for the recap post, it saved me a lot of reading. An additional note: I'm a happy user of DaVinci Resolve on Linux, and because of their enterprise support on Linux, I'm now moving my equipment and capture cards over to theirs. My company is by no means large enough to sponsor such a development on your side, but I'm happy to help if you want more info. Best regards,
  5. Yes, moaaaarrrr performance! :D However, let's not forget the performance/features trade-off; we can't always have both, because both depend on human resources :ph34r: Skilled ninjas!
  6. I edited an older post related to the GPU, before someone breaks the internet because of it :D Yes, absolutely. But then that changes the concept of "efficiency", no? Especially when you are talking about a heavily multi-threaded application :D Anyway, it was just a remark :) you're free to do what you want, haha. Remind me to test again when 1.6 is out; I'll gladly do it. Best.
  7. Thanks @harrym @MBd. I just went through the sheet you made with the results, and the post where you explain your intention for the "efficiency" column. I do understand what you wanted, but I don't see why you don't count threads instead of cores. Not all CPUs have SMT or Hyper-Threading. Example: the FX-8350 effectively has 8 "native" cores, not threads, while my CPU has 8 cores but 16 threads. As extra information, a thread is not as fast as a full-blown core, because it shares CPU cache with its "native" core. It's difficult for me to express this without going into the details of each architecture... And ultra-low-power cores are something else entirely, where those GHz aren't that important << but this you can forget for your sheet.
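To make the point concrete, here is a small sketch of how an "efficiency" score could normalize by hardware threads instead of cores. The formula and numbers are my own illustration, not the sheet's actual method.

```python
def efficiency(runtime_s, ghz, threads):
    """Illustrative score: inverse of runtime x clock x hardware threads.

    Lower runtime -> higher score. Normalizing by threads (not cores)
    avoids over-crediting SMT chips that expose 2 threads per core.
    """
    return 1.0 / (runtime_s * ghz * threads)

# FX-8350: 8 cores, 8 threads. Ryzen 1700: 8 cores, 16 threads.
# Hypothetical macro runtimes, for illustration only:
fx = efficiency(runtime_s=420.0, ghz=4.0, threads=8)
ryzen = efficiency(runtime_s=210.0, ghz=3.8, threads=16)
```

Per-core normalization would instead credit the Ryzen's 16 threads as if they were 8 full cores, roughly doubling its score relative to the FX chip.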
  8. I am quite generous... lol. More seriously, if this can by any means help build a better product, I'm all for it. I like all software as long as: it works, it makes sense, and it doesn't try to screw users. I think the Serif Affinity products match those rules. So the better they are, the quicker my workflow is, which, if you are a money maniac, means more money for you hahaha. What I meant with my comment is that building a computer takes a bit of time (not much, but still), and if you need to overclock to get the most out of it, that also takes time. The sad part is that you can't do anything else while you do it; it is a very time-consuming task! So basically, while I'm writing here I allow myself to distract my mind from my work, but I can go right back to it whenever I want. Which is not the case when you OC: until you are stable, you are stuck trying all the settings again and again. PS: I love the idea of the Affinity benchmark, by the way. I would love to see them actually release something official directly in their tool; it might also be very valuable data for them.
  9. All DDR4 is 2133; everything above that is "overclocking", XMP profiles, etc. So technically they are saying the right thing on their site. Now, not all motherboards are able to use all XMP/A-XMP profiles, which is why there are differences in price and chipsets between motherboards. But yeah, 4x16 GB at 3200 on a very, very young platform (things will get better as new UEFI firmware comes out and AMD releases more code) is extremely impressive. Now, I wonder about his system's stability... a one-shot run is not stability. This is why I don't overclock my CPU to 4 GHz: I just don't have the time to do that and test all the minuscule settings. I paid less for a 1700 instead of an X part and overclocked it a bit, so I get more for my money with very little time investment. If you can earn more than $100/hour, then saving $200 but spending more than 2-3 hours doing this is pointless. Just purchase something that works for your needs right away :) This is how I do it ^^ time = money.
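The time = money argument above is simple arithmetic; a tiny sketch (the function name and numbers are mine):

```python
def overclock_pays_off(savings, tuning_hours, hourly_rate):
    """True if the money saved beats the value of the time spent tuning."""
    return savings > tuning_hours * hourly_rate

# Saving $200 but spending 3 hours at $100/hour is a net loss:
print(overclock_pays_off(savings=200, tuning_hours=3, hourly_rate=100))  # False
```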
  10. All of them :)
  11. You're absolutely correct. But it also loves very low-latency RAM, so you can live with slower speeds and a low CAS latency. Yes, my RAM is rated at 3200, but it is not a Samsung B-die kit; it uses Hynix memory instead, so dual rank and all that... no way to boot at 3200, even on the very newest UEFI BIOS. You can find all the info about RAM on the CH6 X370 board here: https://rog.asus.com/forum/forumdisplay.php?292-Crosshair-VI-Motherboards-(X370) and here: https://docs.google.com/spreadsheets/d/1YSZB70P71Kd4iAyxSpAZf0lc2GmyALKJOQ7vA1MhV2s/pub?output=html&widget=true
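The speed-vs-CAS trade-off mentioned above can be quantified with the usual first-word-latency rule of thumb (cycle time x CAS cycles); a small sketch:

```python
def first_word_latency_ns(mt_per_s, cas):
    """Approximate first-word latency in nanoseconds.

    DDR transfers twice per clock, so the clock in MHz is MT/s / 2,
    and one cycle takes 2000 / MT/s nanoseconds.
    """
    return 2000.0 * cas / mt_per_s

# DDR4-3200 CL16 and DDR4-2666 CL14 land within ~0.5 ns of each other:
print(round(first_word_latency_ns(3200, 16), 2))  # 10.0
print(round(first_word_latency_ns(2666, 14), 2))  # 10.5
```

Which is the point: a slower kit with tight timings can match a faster kit with loose ones.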
  12. Well, I kinda disagree here. Most graphical algorithms can be done on the GPU; in fact, GPUs are made for that explicitly. The problem is that there are many ways to code for a GPU, and sadly AMD and Nvidia didn't come up with a standard that "works" for all, which means a ton of work to support both. Plus, Nvidia is king on Windows, while AMD is Apple's choice for most of their devices (they sometimes use Nvidia cards, but generally only very low-end mobile ones :/).

    CUDA -> Nvidia
    OpenCL -> both

    The problem is that even if CUDA is technically "open", AMD chose to go with OpenCL, and they haven't really put effort into it, still supporting only the old OpenCL 1.1 standard... However, AMD might have come up with a solution, so if the Affinity developers are reading this: https://github.com/GPUOpen-ProfessionalCompute-Tools/HIP Convert your CUDA code to portable C++ code that will run on both AMD and Nvidia! Quite nice. If this turns out to be a success (most of which depends on political issues, not engineering problems), more greatness is to come, just like the Vulkan API for computer graphics. But again, I doubt it; I just hope Affinity (Serif) can generate a budget good enough to target both main platforms.

    NOTE: I know the issues related to exchanging data between RAM and VRAM. I have done my own GPU optimizations, and I also know that the GPU scales better when the data is big and the time spent on the round trip to the GPU beats the CPU doing the work. Of course, if your representation of the data in RAM is terrible, nothing is going to help that. PCIe gen3 x16 bandwidth is insane, new graphics cards are insanely fast, and yes, some work like filters might be better done on the GPU, just like most of the filters applied in this macro. Even 3 minutes to process this is very long!

    Andy Somerfield is perfectly right that mobile device architectures share memory between CPU and GPU, and with the APIs available that saves a lot of work and can result in awesome GPU usage compared to desktop. However, I wouldn't simply trash the GPU work because of that.
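That round-trip argument fits a back-of-the-envelope model: offloading wins when GPU compute time plus the two PCIe copies still beats the CPU. The ~16 GB/s figure is PCIe gen3 x16's theoretical per-direction bandwidth; the function itself is my own illustration.

```python
def gpu_wins(data_bytes, cpu_s, gpu_s, pcie_bytes_per_s=16e9):
    """True if GPU time plus host<->device transfers still beats the CPU."""
    transfer_s = 2 * data_bytes / pcie_bytes_per_s  # upload + download
    return gpu_s + transfer_s < cpu_s

# A 100 MB image costs ~12.5 ms of PCIe traffic each way combined,
# negligible next to a multi-second CPU filter:
print(gpu_wins(100e6, cpu_s=3.0, gpu_s=0.3))  # True
```

For tiny, fast operations the transfer dominates and the CPU keeps winning, which is why big filters are the natural GPU candidates.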
  13. @MBd, check my post above; I edited it quite a bit, added screenshots, and even measured the GPU usage for you. None of the filters you apply in this macro test use the GPU. EDIT: My 2 cents on Ryzen 7: it feels exactly like Intel's X99 platform but a hell of a lot cheaper!!! ("Affordable", one might say.) If you do any editing (video, photo, etc.), just get one of those.
  14. @MBd Thanks. First, the results:

    SBOOK PERF - Surface Book Performance, 2 TB version
    - GPU: GTX 965M
    - 16 GB of RAM
    - Time: '00:19:51:05' - ouch... this is quite slow ^^
    - Plugged into the wall (no power saver active)

    Ryzen with WARP instead of the GTX 1070
    - Same as before, just the WARP renderer selected
    - Time: '00:03:31:31' - I guess the difference is mainly due to me having other software running, like a gazillion Chrome tabs, etc.

    Let me answer your questions then:
    1) All 16 threads are used at 100% (well, 99%...).
    2) Yes, my CPU is overclocked, but it is far from an extreme overclock; I'll explain Ryzen below.
    3) The Ryzen 7 series aren't APUs (CPU + GPU in one chip) but plain CPUs.
    4) Concerning the dedicated GPU, I can use the WARP renderer, but this doesn't guarantee everything runs on the CPU. See https://msdn.microsoft.com/en-us/library/windows/desktop/gg615082(v=vs.85).aspx In short, it should run on the CPU whatever can't be done on the machine's GPU, but nowhere did I find that it enforces rendering on the CPU. Raster rendering is generally done on the CPU; it's a technique that transforms 3D content into a flat image of pixels (rasterization), but such behavior can be implemented in many ways. I have no clue how Microsoft's WARP works internally...

    Ryzen 7 series explained:
    - All Ryzen 7 CPU SKUs are the same die: 1700/1700X/1800X.
    - They all have 8 cores and 16 threads.
    - What changes is the "stock" speed at which they run, and therefore the power they consume.

    Why is that? Not all silicon is made equal. Believe it or not, if you make 10 "identical" CPU parts, they will behave very differently from one another. This is why AMD/Intel/Nvidia/you_name_it do binning: they test every CPU/GPU against several criteria and sell it accordingly. In short, when you buy an 1800X you pay more because they already tested your CPU at a higher frequency, and it is set to run at that frequency no matter what, without you doing anything.

    Because of that, you can easily extrapolate the "stock" performance across the whole Ryzen 7 series: http://www.guru3d.com/articles-pages/amd-ryzen-7-1700-review,10.html << Check the graphs. Look at the "multi-threaded" runs; they use all the cores like Affinity Photo does, so the % difference between the Ryzen 7 SKUs there will be the same % for your macros in AP. Ryzen CPUs and the way they are manufactured (LPP) force them to run at "lower" frequencies; these CPUs are not meant to go above 4.1/4.2 GHz, unless you go all nuts with liquid nitrogen, but that is something entirely different!

    NOTE on how I did things: I executed your tests with the settings on "Best Quality" and "Dither gradients" disabled, which might also affect the speed at which your macro/bench executes. I started the timer as soon as the process started, and stopped it only when the image was rendered and properly displayed in AP.

    NOTE 2 on this macro test: none of the filters you apply use the GPU. Look at the GPU usage in the monitors: 0% usage.
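The extrapolation argument boils down to near-linear clock scaling within one die. A rough sketch of that reasoning (the runtimes and clocks below are hypothetical, for illustration only):

```python
def extrapolate_time(measured_s, measured_ghz, target_ghz):
    """Estimate macro runtime on a same-die SKU at another all-core clock.

    Assumes a CPU-bound, fully multi-threaded workload, so runtime scales
    inversely with frequency. Real results also vary with memory speed.
    """
    return measured_s * measured_ghz / target_ghz

# A 211 s macro measured at 3.0 GHz all-core would land around 176 s
# at an 1800X-like ~3.6 GHz:
print(round(extrapolate_time(211, 3.0, 3.6)))  # 176
```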
  15. Here are my results:
    - Time: '00:03:42:60'
    - OS: Windows 10
    - CPU: Ryzen 7 1700 @ 3.8 GHz, Kraken X62 AIO (silence! thank you...)
    - RAM: 32 GB DDR4 @ 2666
    - MAIN DISK: system on a 500 GB NVMe SSD (960 EVO)
    - Lots of hard drives... not worth mentioning (I wish I could live without them, super noisy!)
    - GPU: MSI Gaming X GTX 1070 @ 1.9 GHz (totally silent)
    - PSU: Corsair RMx 750 W (not using half of that, but most 80+ Gold PSUs remain totally silent at 50% load)

    AP runs great, tons of features. I have tested Capture One Pro; it just handles my uncompressed RAW files wayyyyyyyyyy better, it is really another level of precision and speed, but it is not a replacement for AP's features, more a Lightroom replacement.

    This is how I would score the Serif products across all my machines (X/5 stars):
    AP: 4.5 stars, definitely. To do: improve the RAW processing speed and quality, and create a DAM to go with it.
    AD: 3 stars (crashes too often), lots of small quirks here and there.

    Will post the results for the Surface Book Performance later. Best
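For anyone comparing runs: the timer stamps in these posts look like 'HH:MM:SS:cc', with the last field being hundredths of a second (that interpretation is my assumption). A small helper:

```python
def macro_time_seconds(stamp):
    """Convert an 'HH:MM:SS:cc' benchmark stamp to seconds.

    Assumes the last field is hundredths of a second.
    """
    h, m, s, cs = (int(part) for part in stamp.split(":"))
    return h * 3600 + m * 60 + s + cs / 100

print(round(macro_time_seconds("00:03:42:60"), 2))  # 222.6
# Ratio of the Surface Book run to this desktop run:
print(round(macro_time_seconds("00:19:51:05") / macro_time_seconds("00:03:42:60"), 1))  # 5.4
```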