TonyB

Moderators
  • Content count

    1,491
4 Followers

About TonyB

  • Rank
    Top Cat

Profile Information

  • Gender
    Male
  • Location
    Nottingham, England

  1. 32-fold oversampling would have to be done on the whole scene to solve blending issues. If coverage calculation is 25% of our render time, then doing it 32× oversampled would still likely cause an 8 times slower render. Please remember that all coverages are calculated by the CPU, so the GPU can't help here. Also, if we then do a colour render and blend with the 32-bit coverage mask, the line contributions due to alpha would be degraded. In 3D this often doesn't matter, but it can make a bigger difference with possibly thousands of blended alpha primitives. High-end 3D renderers moved over to raytracing for good reasons.
  2. 3D applications have the same issues with rendering multiple primitives. Your example of triangles is for a single primitive made of many triangles using the same colour or pixel shader. If it was simple to solve then we would have done it, as would Adobe. We can always improve single-primitive rendering, and we have plans to achieve this, but it will not improve multiple-primitive blending as in your example. The workaround for output is as you described - supersampling. We could add a supersample option on export to achieve this.
  3. It depends on the psd files but the best thing is to download the trial and have a look.
  4. These artefacts are well known but, as far as I know, are not solved in any vector application available, with the exception of Flash, but that creates other issues with fat lines etc. You can minimise the effect by changing the blending gamma, as some applications do, but the edge quality is compromised.
  5. Can you post a video and the photo file you are having issues with?
  6. Live layers are always slower but give a lot of flexibility. When doing a lot of heavy editing, users often switch their live layers off and then back on again when they need them.
  7. Any chance you could post a video of the performance issues on your 7 layer image?
  8. The RAW will be stored compressed, but Affinity can't use lossy compression as we don't know how much detail you would like to sacrifice. 200 MB is in line with what a 16-bit Photo file with edits should be.
  9. Why not just switch to LAB/16 colour mode? Doesn't this give you the results you are looking for?
  10. The point is that bilinear above 100% is inventing pixels, so you are not editing real pixels. The reason to edit zoomed to more than 100% is usually to edit pixels.
  11. Your image is only 736x432. If you zoomed to 200% instead of 130% then the screenshots would be the same. If you use bilinear over 100% then you are inventing pixels and editing pixels that don't actually exist. Why are you editing an image with such low resolution?
  12. The viewport will only use bilinear when you are zoomed out past 1:1 pixel view. This is consistent with Photoshop and other good photo editors. Why would you want to see bilinear sampling when zoomed in?
  13. Can you attach your simple CMYK document here so we can have a look.
  14. This is mainly an iOS issue as we use the native capability of the OS. iOS 11 will have other cloud drive providers available through a new system so all should be good. iOS 11 should be available later this year and we will update Affinity to support it.
  15. Sorry but Text on a path isn't included in Affinity Photo. Affinity Designer does have the feature but isn't available for the iPad yet.
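The slowdown figure in post 1 can be sanity-checked with simple arithmetic: if only the coverage pass is oversampled 32×, the rest of the pipeline stays at its original cost. A minimal sketch (the 25% figure comes from the post; the function name is made up):

```python
def slowdown(coverage_fraction, oversample_factor):
    """Total render time relative to baseline when only the
    coverage pass is oversampled by oversample_factor."""
    other_work = 1.0 - coverage_fraction
    return other_work + coverage_fraction * oversample_factor

# Coverage at 25% of render time, oversampled 32x:
print(slowdown(0.25, 32))  # 8.75, i.e. roughly the "8 times slower" quoted
```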
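The supersampled export mentioned in post 2 can be sketched as: render at N× the target resolution, then box-average each N×N block down to one output pixel. This is a pure-Python illustration; Affinity's actual pipeline is not public, and the names here are made up:

```python
def downsample(pixels, n):
    """pixels: 2D list of grey values rendered at n x the target size.
    Returns the box-filtered target-size image."""
    h, w = len(pixels) // n, len(pixels[0]) // n
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            block = [pixels[y * n + dy][x * n + dx]
                     for dy in range(n) for dx in range(n)]
            out[y][x] = sum(block) / (n * n)
    return out

# A 2x supersampled black/white edge collapses to one antialiased pixel pair:
hi_res = [[0, 0, 255, 255],
          [0, 255, 255, 255]]
print(downsample(hi_res, 2))  # [[63.75, 255.0]]
```

Because every subsample is blended in one step per output pixel, the multi-primitive seam artefacts discussed in the thread are averaged away in the export, at the cost of rendering N² as many samples.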
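The "blending gamma" trade-off from post 4 comes down to where compositing happens: blending directly on gamma-encoded values darkens antialiased edges, while blending in linear light lightens them. A sketch assuming a simple power-law gamma of 2.2 (a stand-in for the real sRGB curve):

```python
GAMMA = 2.2  # assumed approximation of the sRGB transfer curve

def blend_gamma(fg, bg, alpha):
    """Naive blend directly on gamma-encoded values in 0..1."""
    return fg * alpha + bg * (1 - alpha)

def blend_linear(fg, bg, alpha):
    """Linearise, blend, then re-encode."""
    lin = (fg ** GAMMA) * alpha + (bg ** GAMMA) * (1 - alpha)
    return lin ** (1 / GAMMA)

# A 50%-coverage edge pixel between white and black:
print(round(blend_gamma(1.0, 0.0, 0.5), 2))   # 0.5
print(round(blend_linear(1.0, 0.0, 0.5), 2))  # 0.73 -> visibly lighter edge
```

Neither choice is free: changing the blending gamma shifts where seams between adjacent antialiased primitives show, which is the edge-quality compromise the post refers to.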
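Posts 10 and 11 argue that bilinear filtering past 100% zoom shows values that exist in no source pixel. A minimal bilinear sampler makes the point concrete (illustrative only; function and variable names are made up):

```python
def bilinear(img, x, y):
    """Sample a 2D grey image at fractional coordinates (x, y)."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x0 + 1] * fx
    bot = img[y0 + 1][x0] * (1 - fx) + img[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bot * fy

img = [[0, 255],
       [255, 255]]
# Halfway between the four real pixels:
print(bilinear(img, 0.5, 0.5))  # 191.25 - a value present nowhere in img
```

Nearest-neighbour display at high zoom, by contrast, only ever shows values that are actually in the image, which is why it is the conventional choice for pixel-level editing.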
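The 200 MB figure in post 8 is plausible from first principles. Assuming a hypothetical 24-megapixel image (the post gives no dimensions) developed to 16 bits per channel in RGB:

```python
megapixels = 24          # assumed for illustration; not stated in the post
bytes_per_pixel = 3 * 2  # 3 channels x 16 bits (2 bytes) each
base_mb = megapixels * 1_000_000 * bytes_per_pixel / 1_000_000
print(base_mb)  # 144.0 MB of raw pixel data before snapshots and edit history
```

Lossless compression recovers some of that, but edit history, snapshots and masks add it back, so a file around 200 MB is unsurprising.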