
James Ritson

Staff

Posts posted by James Ritson

  1. 39 minutes ago, Keith Palmer said:

    The power of the hue range mask is very clear from the video. But I'm not clear how the luminosity blend range mask in v2 differs from an ordinary mask with the blend range cog employed in v1. Can you please advise me? Thank you.

    Hi Keith, in some ways it doesn't: both use a weighted intensity calculation to blend across the tonal range, but "Blend Ranges" was obscure terminology for users who were expecting to find luminosity masks.

    However, it does have some advantages over blend ranges:

    • You can save and load presets
    • It has an additional Gaussian blur option, which, as demonstrated in the tutorial, creates a nice edge-contrast effect
    • It's a mask layer, so you can paint on it as well, and also drop it into a Compound Mask to combine multiple masks together (e.g. you could have two luminosity range masks that 'intersect' with one another)
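    As a rough picture of the weighted intensity idea, here's a minimal sketch of how a luminosity range mask could be computed (illustrative only; it uses Rec. 709 luminance weights and a smoothstep ramp, and is not Affinity's actual implementation):

```python
# Minimal sketch of a luminosity range mask as a weighted intensity
# calculation (illustrative only, using Rec. 709 luminance weights and a
# smoothstep ramp; this is not Affinity's actual implementation).

def luminance(r, g, b):
    """Weighted intensity of an RGB pixel (channel values in 0..1)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def range_mask(lum, low=0.25, high=0.75):
    """Mask value: 0 below `low`, 1 above `high`, smooth ramp between."""
    if lum <= low:
        return 0.0
    if lum >= high:
        return 1.0
    t = (lum - low) / (high - low)
    return t * t * (3.0 - 2.0 * t)  # smoothstep easing

# Shadows are excluded, highlights fully selected, midtones feathered:
print(range_mask(luminance(0.1, 0.1, 0.1)))  # 0.0
print(range_mask(luminance(0.9, 0.9, 0.9)))  # 1.0
```

    The low/high thresholds would map to the mask's range controls, and blurring the resulting mask gives the edge-contrast effect mentioned above.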

    Hope that helps,

    James

  2. 7 hours ago, Scapa said:

    Hey James,

    Thanks for another set of great tutorials. Always loved them. One thing though: Can we please have them back on the affinity website? That was such a clean and clear overview with the well sized thumbnails. YouTube Playlist are - sorry - a mess, at best. Not to forget, Googles never ending experiment with changing the UI and UX of the YouTube Website. Using the Playlists I find it way harder to find what I'm looking for + more clicks than necessary (Click on basic > new page opens > first video starts playing > disturbs from searching through the playlist). 

    Again, nothing wrong with the content itself, great as always. Same goes for the new Suite itself. Great work!

     

    Hi Scapa, for V2's release we decided to try going directly to YouTube, as it allows us to iterate quicker on video work and be more reactive to what users are after; that was one of several factors behind these new videos. Hopefully, in the meantime, the list provided on this forum will help you find what you're looking for.

  3. 1 hour ago, Colin Red said:

    I am sorry to inform you I don't do You Tube, what happened to your own video platform. Worst point I need to sign in to You Tube, not happening ....... I would love to update to V2 and have been with Affinity since the early days. So I may have to pass on V2 .... unless there is a change ..... I am disappointed 

    Hi Colin, sorry to hear you feel that way—as far as I'm aware, however, you can watch YouTube videos without needing to create an account or sign in. The videos are all demonetised so there are no adverts that will encroach on the content.

  4. Hello all, we're proud to announce that alongside the V2 launch of the Affinity apps, we've produced a completely new set of video tutorials to complement the apps. These tutorials are all produced in-house by our Product Experts team.

    We've worked incredibly hard on these videos, and feel they represent a huge jump in both technical and presentational quality. Hopefully you all agree! The old V1 videos—now considered legacy—are still available on YouTube, consolidated into one playlist. The link for this playlist is available at the bottom of this post.

    Hope you all find the new tutorials useful!

    What's New (Overview of V2)

    Basics

    Advanced

    Layers

    Tools

    Filters and Adjustments

    Corrective & Retouching

    Workflows & Techniques

    iPad Tutorials

     

    Legacy V1 Tutorials

  5. Hello all, we're proud to announce that alongside the V2 launch of the Affinity apps, we've produced a completely new set of video tutorials to complement the apps. These tutorials are all produced in-house by our Product Experts team.

    We've worked incredibly hard on these videos, and feel they represent a huge jump in both technical and presentational quality. Hopefully you all agree! The old V1 videos—now considered legacy—are still available on YouTube, consolidated into one playlist. The link for this playlist is available at the bottom of this post.

    Hope you all find the new tutorials useful!

    What's New (Overview of V2)

    Basics

    Advanced

    Text Tools

    StudioLink - Designer and Photo Interworking

    Workflows & Techniques

     

    iPad Tutorials

     

    Legacy V1 Tutorials

  6. Hello all, we're proud to announce that alongside the V2 launch of the Affinity apps, we've produced a completely new set of video tutorials to complement the apps. These tutorials are all produced in-house by our Product Experts team.

    We've worked incredibly hard on these videos, and feel they represent a huge jump in both technical and presentational quality. Hopefully you all agree! The old V1 videos—now considered legacy—are still available on YouTube, consolidated into one playlist. The link for this playlist is available at the bottom of this post.

    Hope you all find the new tutorials useful!

    What's New (Overview of V2)

    Basics

    Advanced

    Vector Tools

    Raster Tools

    Text Tools

    Workflows & Techniques

     

    iPad Tutorials

     

    Legacy V1 Tutorials

  7. 6 hours ago, augustya said:

    Hi James,

    Seen a lot of your YouTube videos, like your way of explaining things 

    So can I ask you this question which I asked on the Mac Mini Forum.

    Would you say to use Affinity Photo on the M1 Mac Mini even 8GB of RAM should be good enough ? or should I go with the Mandatory 16 GB of RAM ?

    Hey again,

    I would definitely go for 16GB. Outgrowing 8GB is entirely possible with larger resolution documents that have multiple layers. You should also consider that Apple Silicon uses a shared memory architecture, so both the CPU and GPU draw from the same memory pool. Back when I had the Mac Mini, macOS seemed to restrict the amount of memory the GPU could allocate to roughly half of the available 16GB. Having since moved to an M1 Max with 64GB, I've managed to get Photo to use around 45GB, which is well over half, so I can't comment definitively on how that restriction works, especially since the two machines were on different OS revisions (Big Sur and Monterey).

    Bit of a long-winded way of saying always go for the maximum you can afford! With any kind of image editing workflow and any software, you may find you can easily max out 8GB nowadays. The super fast swap helps, certainly, but it's no substitute for actually having that extra memory headroom.

  8. Hi @augustya, I have previously used an M1 Mac Mini with 16GB memory.

    Affinity Photo was never really designed for loading many RAW files simultaneously: it's not a batch RAW editor, so it will treat each RAW image loaded in as a separate document with its own memory footprint, rather than trying to be conservative with memory. RAW files are composited in 32-bit floating point precision, which also requires more memory than 8-bit or 16-bit.
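    As a rough illustration of why each open RAW carries a significant footprint, the 32-bit float cost of a single full-resolution layer is easy to estimate (a back-of-the-envelope sketch; Photo's real usage will be higher once undo buffers, caches and so on are included):

```python
# Back-of-the-envelope memory footprint for one RAW document composited
# in 32-bit float RGBA (illustrative only; Photo's real footprint also
# includes undo buffers, caches and similar overheads).

def layer_bytes(width, height, channels=4, bytes_per_channel=4):
    """Bytes needed for one full-resolution 32-bit float layer."""
    return width * height * channels * bytes_per_channel

# A 24-megapixel RAW file (6000 x 4000 pixels):
mib = layer_bytes(6000, 4000) / 2**20
print(f"{mib:.0f} MiB per layer")  # roughly 366 MiB before any extra layers
```

    Load a handful of those at once and 8GB disappears quickly, which is why treating each RAW as a separate document adds up.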

    When used appropriately, Photo's memory management on M1 devices is honestly fine. macOS is very good at managing swap memory, and uses swap quite aggressively compared to Windows anyway. I rarely had instances when editing photographic images where things would slow down because of memory issues. Creating huge live stacks full of 16-bit images and working with 32-bit multi-layer renders would occasionally eat into swap memory, causing some hitching here and there, but for general image editing you really shouldn't be worried about it.

    You will find people making spurious claims that 16GB on Apple Silicon is 'like' 32GB on other devices (similarly, that 8GB is 'like' 16GB). I believe this may be a result of M1/M2's very fast storage architecture: loading in and out of swap is much faster than on older hardware generations, so out-of-memory situations are not as debilitating for productivity as they once were.

    What else would you be trying to do simultaneously that may require large pools of memory? If you're trying to multi-task with memory hungry applications, you may be better off looking at something with more memory, e.g. the Mac Studio with 32 or 64GB. I would be saying this regardless of what software you were trying to use, it's more about the workflow and what you expect to be doing.

    Hope that helps!

  9. Hi @cgidesign, hardware acceleration is used for all raster operations within Photo—that includes basic layer transforms through to filter/live filter and adjustment layer compositing. In cases where a GPU with strong compute performance is present, the performance difference can be huge. Take M1 Max for example: the CPU is no slouch, and scores very highly on our vector benchmark, but raster performance is only around 1000 points. In comparison, raster performance on the GPU is 30,000. That's a linear scale, so to say it's 30x faster than CPU is no exaggeration.

    The fact that all major raster operations are accelerated is a huge undertaking, and means it's not really appropriate to compare our hardware acceleration implementation with what other software has. I understand there's a lot of frustration around the choice to disable support for AMD's Navi cards, but Photo compiles new kernels every time you add an adjustment or filter, use a tool and so on for the first time during a session. It was simply unusable on Navi because kernel loading was unacceptably slow, to the point where it could take up to a minute for a RAW file to be initially processed and displayed in the Develop Persona (multiple kernels are loaded simultaneously in this scenario).

    If you start to introduce vector operations to your workflow, such as text, vector shapes, poly curves and so on, the GPU must copy memory back to the CPU, and this is where most bottlenecks occur. Unified memory architecture (e.g. Apple Silicon) reduces this penalty somewhat, but at present it's still not as fast as compositing purely on the GPU.

    The Benchmark (Help>Benchmark) will give you quite a clear idea of the performance improvement you can expect. In particular, focus on the Raster (Multi CPU), Raster (Single GPU) and Combined (Single GPU) scores. They will indicate CPU compositing performance, GPU compositing performance and shared performance (copying memory between them when using a mix of raster/vector operations) respectively.

    Hope that helps!

  10. 5 hours ago, Roland Schwarz said:

    Using Mac Studio M1 max with Mac Studio in P3-D50 mode. Same here - Photo works best with Open GL. It's weird.

    I think you mean the Apple Studio Display rather than a second Mac Studio. Have you tried disabling Automatically adjust brightness? That display has a brightness sensor and will automatically graduate its brightness according to ambient conditions, which is likely why you're running into the beach ball issue.

    I wouldn't advocate disabling Metal Compute and using an OpenGL view unless absolutely necessary, as you'll greatly reduce performance. Do try disabling automatic brightness first and see if that helps...

  11. 6 hours ago, Darksky said:

    I am new to Affinity Photo and use it mostly for Astrophotography stacking and processing.  I noticed and could not find info on how to rotate a FITTS file while in the Astronomy Stack feature.  Some of my photos had to experience a Meridian flip and must be rotated 180-deg.  The manual address rotating in the Document view but I do not think the view while in Stack is Document view.  It will rotate but the picture goes back.  Also, the View tab only allows a rotation of every 15-deg which takes too much time to rotate.  I read that in the Document view there is a rotation command giving more degrees to rotate at one time.  So, if you are viewing a FITTs file format; can it be rotated, or do I have to convert all my FITTs files to TIFF files to rotate in a Document view?  I doubt Affinity Photo will completely flip my pictures in the Stacking process.

    Hi @Darksky,

    I believe meridian flip was addressed in 1.10, so Photo will recognise this and flip the affected frames: it aligns on star features and transforms the individual subs to achieve this. I've definitely stacked a data set with a meridian flip successfully.

    There's another strange scenario I encountered when capturing via iTelescope, where the capturing software applied an additional flip. I've only had this happen once, but I passed the data on to the developer responsible for the stacking functionality and he has addressed it for future versions.

    Hope the above helps.

  12. Hi @Gary.JONES, Photo is not gamma correcting your initial colour values. The document view is gamma corrected but the actual document itself is composited in linear 32-bit RGB. The Digital Colour Meter you are using will be picking values from the transformed document view and not the original linear values.

    Try using Photo's native colour picker—you should hopefully find the values are much closer to those you are expecting to see.

    The reason the document view is gamma corrected is to match the output if the linear RGB 32-bit colour values were converted to gamma-encoded values, e.g. for typical TIFF/JPEG export. This way, you avoid the equivalent Photoshop workflow where you tone stretch, then have to flatten and convert to 16-bit with tone mapping.

    If you go to View>Studio>32-bit Preview to bring up the 32-bit preview panel, you can switch the display transform to Unmanaged, which is linear light. This will display the scene-referred values rather than display-referred.

    Do make sure you're using ICC Display Transform for editing and exporting, however—otherwise you will be surprised when your exported gamma-encoded image looks brighter and washed out.

    Don't forget that Photo also adds Levels and Curves adjustments to stretch the initial data—it does this both in the Astrophotography Stack persona and if you open a single FIT file. If you hide or delete these, you will be able to examine and colour pick the linear colour values.

    tl;dr Photo does not modify the original linear colour values: all compositing is in linear colour space, but a non-destructive gamma transform is applied to the document view by default, removing the need for tone mapping when exporting.
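    To illustrate the difference between the stored linear values and what an on-screen picker reads, here's a small sketch using the sRGB transfer function as a stand-in for the display transform (an assumption for illustration; the actual ICC transform depends on your display profile):

```python
# Why a system-level picker (like Digital Colour Meter) reads different
# numbers than the linear document: the view goes through a display
# transform. The sRGB transfer function is used here as a stand-in for
# the ICC display transform (an assumption for illustration).

def linear_to_srgb(value):
    """Encode a linear-light value (0..1) with the sRGB transfer curve."""
    if value <= 0.0031308:
        return 12.92 * value
    return 1.055 * value ** (1 / 2.4) - 0.055

# An 18% scene-referred grey stored in the linear document reads much
# higher once the display transform is applied:
print(round(linear_to_srgb(0.18), 3))  # ~0.461 is what the picker sees
```

    Photo's native colour picker samples before this transform, which is why its values match the linear document data.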

    Hope that helps!

  13. 19 hours ago, PatrickCM said:

    2 years late to the game here, but in one of the Nikon forums, it was recommended that I calibrate my camera, using this product.

    As far as I can tell, Affinity Photo is not supported. Is there an alternative?

    Hi @PatrickCM, the Affinity apps perform document-to-display colour management using bespoke display profiles created with X-Rite measurement devices (e.g. i1Display Pro), which is a large part of ensuring colour accuracy. However, they don't currently support custom camera profiles created by photographing and referencing the colour checker passport. I'm not sure which applications explicitly have support, but I suspect apps focused heavily on batch photographic development (such as DxO) will.

    You can, however, use the white balance portion of the colour checker passport, as Affinity Photo's Develop Persona has a white balance tool that can sample from an image with the passport in shot.
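    For what it's worth, the principle behind sampling a neutral patch is straightforward. Here's a generic sketch (not Affinity's internal method) of deriving per-channel gains from a grey patch and applying them:

```python
# Generic sketch of grey-card white balancing (an assumption for
# illustration; not Affinity's internal method): derive per-channel
# gains that neutralise a sampled patch, then apply them everywhere.

def wb_gains(patch_rgb):
    """Per-channel gains, normalised to green, that make the patch grey."""
    r, g, b = patch_rgb
    return (g / r, 1.0, g / b)

def apply_wb(pixel, gains):
    """Scale each channel of a pixel by its white balance gain."""
    return tuple(c * k for c, k in zip(pixel, gains))

# A neutral patch photographed under warm light might read (0.6, 0.5, 0.4);
# after applying the gains it becomes equal in all three channels.
gains = wb_gains((0.6, 0.5, 0.4))
print(apply_wb((0.6, 0.5, 0.4), gains))
```

    Every other pixel in the image is scaled by the same gains, which is what shifts the overall colour cast.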

    Hope the above helps!

  14. 1 minute ago, nitro912gr said:

    I dont know if benchmarking photo will help because I don't use it that match to notice any lag in my workflow. But since I literally used file A at office and then open it again in home to continue and it felt unbearable.

    But that was back in March and if I remember correctly I mostly had trouble in documents with lot of text, it was some restaurant menu I think. Does the font rendering go through GPU? Because I just opened a complex vector file with the bare minimum of text and it was fine now.

    The benchmark in Photo will be representative for all the Affinity apps—the CPU single and multi vector scores illustrate the kind of performance to expect in Designer as it primarily deals with vector operations (which are all CPU-based).

    Text will be rendered on CPU as it's vector. I believe layer effects (e.g. drop shadow, gaussian blur) are also rendered on CPU in Designer but I'll double check. There could be other factors at play—it's worth doing the benchmark first on both machines to compare the scores, as it may highlight an area where one is weaker...

  15. 3 hours ago, nitro912gr said:

    this can't be truth, I have noticed considerable lag in vector and fonts affinity designer files I work in similar systems in office and home, the only difference is the home one have the 5500XT and as of that work without GPU acceleration.

    Aside that the one in office a bit tad slower in CPU and the SSD, while a bit faster in RAM speed (2400 in home, 2666 in office).

    Vector operations are not processed on the GPU. Perhaps display resolution could be a factor as it will influence the document view rasterisation resolution, e.g. a 1920x1080 display versus 2560x1440 (or even 3840x2160, which requires rasterisation at 4x the spatial resolution of 1080p).
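    To put numbers on the resolution point, the rasterisation workload scales with the pixel count of the document view:

```python
# Pixel-count ratio showing how display resolution scales the document
# view rasterisation workload (a simple illustration of the point above).

def raster_cost_ratio(base, other):
    """How many times more pixels `other` (w, h) has than `base` (w, h)."""
    return (other[0] * other[1]) / (base[0] * base[1])

print(raster_cost_ratio((1920, 1080), (2560, 1440)))  # ~1.78x the work
print(raster_cost_ratio((1920, 1080), (3840, 2160)))  # exactly 4x the work
```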

    It may be worth benchmarking in Affinity Photo if you have it and comparing the CPU Vector Single and Multi scores (the top two entries) between your home and work machines to see which one is faster in practice?

  16. 17 minutes ago, ziplock9000 said:

    That's what I have, 5700XT Red Devil and I've had no issue with the many games I've played over the last ~2 year or with any GPU accelerated applications (of which I use a lot as a game developer). Affinity Designer is the ONLY software that has issues for me unfortunately.

    What issues are you having with Affinity Designer? OpenCL is only used to accelerate raster paint brushing in Designer, all the vector operations are performed on CPU anyway. As far as I'm aware there are no issues with DirectX view presentation on Navi cards.

  17. On 8/29/2022 at 3:21 PM, Nox said:

    Adobe products use OpenCL on AMD GPUs, they have zero issues running on there. DaVinci Resolve uses OpenCL, but surprisingly it doesn't have any issues on AMD either.

    Just to clarify this slightly, OpenCL is not utilised widely across an entire app and is mainly used for specific functionality. For example, Photoshop uses hardware acceleration for a small subset of its features such as the blur gallery, camera raw development, neural filters and some selection tools (source: https://helpx.adobe.com/uk/photoshop/kb/photoshop-cc-gpu-card-faq.html). In fact, on that page OpenCL appears to be used very sparingly:

    Quote

    Use OpenCL: Enable to accelerate the new Blur Gallery filters, Smart Sharpen, Select Focus Area, or Image Size with Preserve Details selected (Note: OpenCL is available only on newer graphics cards that support OpenCL v1.1 or later.)

    Affinity Photo leverages hardware acceleration for practically all raster operations in the software—from basic resampling to expensive live filter compositing—requiring many different permutations of kernels to be compiled on the fly. Every time you add a new live filter or apply a destructive filter, paint with a brush tool, add an adjustment layer or perform many other operations, these all need to load kernels for hardware acceleration. With the majority of GPU and driver combinations, this kernel compilation time is more or less negligible, but as Mark has posted previously with benchmarks, the Navi architecture with its current drivers appears to be problematic here. Any kind of comparison to Photoshop's OpenCL implementation is not appropriate, as the two apps use it very differently.

    I previously had a 5700XT, and loading a single RAW image was painfully slow because a number of kernels needed to be compiled simultaneously (for all the standard adjustments you can apply in the Develop Persona). We're talking almost a minute from loading in a RAW file to being able to move sliders around. The previous Polaris generation of cards is, to my understanding, absolutely fine with OpenCL kernel compilation.

  18. Hi @ABSOLUTE Drone and others in the thread, I've developed some non-destructive tone mapping macros that may help here: they're spatially invariant, so they're very useful for HDRIs/360 HDR imagery. You just apply the macro and it adds a new group; you can then go in and set the base exposure/white balance, among other things.

    They're available for free (just put "0" into the Gumroad "I want this" box) from here: https://jamesritson.co.uk/resources.html#hdr

    It may also be worth having a look at the Blender Filmic macros, available directly underneath the HDR tone mapping macros. They effectively do the same thing but are designed to emulate the Filmic transforms within Blender, so you have different contrast options. Hope that helps!

    Video tutorial here, at 11:38 in the video you'll see the 360 seam-aware tone mapping:

     

