James Ritson (Staff)

Everything posted by James Ritson

  1. Hi Keith, in some ways it doesn't—both use a weighted intensity calculation to blend tonal range, but "Blend Ranges" was obscure terminology for users who were expecting to find luminosity masks. However, it does have some advantages over blend ranges:
     • You can save and load presets
     • It has an additional Gaussian blur option, which, as demonstrated in the tutorial, creates a nice edge-contrast effect
     • It's a mask layer, so you can paint on it, and you can also drop it into a Compound Mask to combine multiple masks together (e.g. you could have two luminosity range masks that 'intersect' with one another)
     Hope that helps, James
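For readers curious what a luminosity range mask boils down to, here is a minimal NumPy sketch of the general idea. The Rec. 709 luma weighting and the linear falloff are assumptions chosen for illustration, not Affinity's actual implementation, and the function names are mine:

```python
import numpy as np

# Rec. 709 luma weights -- an assumption; Affinity's exact weighting isn't documented here.
LUMA_WEIGHTS = np.array([0.2126, 0.7152, 0.0722])

def luminosity_range_mask(rgb, centre=0.75, width=0.5):
    """Build a mask selecting pixels whose luminosity falls near `centre`.

    rgb: float array of shape (H, W, 3), values in 0..1.
    Returns a float mask of shape (H, W) in 0..1, peaking at `centre`
    and falling off linearly to zero at `centre` +/- `width`/2.
    """
    luma = rgb @ LUMA_WEIGHTS                   # weighted intensity per pixel
    dist = np.abs(luma - centre) / (width / 2)  # 0 at centre, 1 at the range edge
    return np.clip(1.0 - dist, 0.0, 1.0)

def intersect_masks(a, b):
    """Combine two masks as a compound 'intersect': a pixel is selected
    only to the extent that both masks select it."""
    return a * b
```

Multiplying two range masks is the simplest model of the Compound Mask 'intersect' behaviour mentioned above; painting on the mask would just mean editing the returned array directly.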
  2. Hi Scapa, for V2's release we decided to try going directly to YouTube, as it allows us to iterate quicker on video work and be more reactive to what users are after—it was one of several factors behind these new videos going forward. Hopefully, in the meantime, the list provided on this forum will help you find what you're looking for.
  3. Hi Colin, sorry to hear you feel that way—as far as I'm aware, however, you can watch YouTube videos without needing to create an account or sign in. The videos are all demonetised so there are no adverts that will encroach on the content.
  4. Hello all, we're proud to announce that alongside the V2 launch of the Affinity apps, we've produced a completely new set of video tutorials to complement the apps. These tutorials are all produced in-house by our Product Experts team. We've worked incredibly hard on these videos, and feel they represent a huge jump in both technical and presentational quality. Hopefully you all agree! The old V1 videos—now considered legacy—are still available on YouTube, consolidated into one playlist. The link for this playlist is available at the bottom of this post. Hope you all find the new tutorials useful!

     What's New (Overview of V2): What's New in Affinity Photo 2 | What's New in V2 of Affinity Photo for iPad

     Basics: Interface Overview | New Document (New: 22/04/24) | Placing Images | Transforming Layers | RAW Development | Resizing and Resampling | Undo, Redo and Undo History | Cropping | Exporting | Input Mouse Wheel Scrolling (New: 4/01/24) | Aligning, Distributing and Unifying Layers (New: 28/02/24)

     Advanced: Blend Options: Ranges, Antialiasing and Fill Opacity | Non-Destructive RAW | RAW Tone Curve | Layer States & Queries (Updated: 29/02/24) | HDR Merging and Tone Mapping | Panorama Stitching | Panorama Source Image Mask Tools | Liquify | Power Duplicate | PSD/PSB import, export and smart object functionality | Keyboard and Mouse Brush Modifier | Tool Cycling | Studio Presets | Scope Panel | Stock Panel | Channels | LAB Colour Model | Stacking: Object Removal | Stacking: Long Exposure Emulation | Export Persona | Focus Merging | Batch Processing | Macros | Luminosity Blend Mode | OpenColorIO Setup and Usage | HDR PNG Import and Export (New: 28/02/24)

     Layers: Mask Layers (Updated: 18/01/24) | Groups | Clipping and Masking Layers | Compound Masks | Soloing Layers | Linked Layers | Luminosity Range Masks | Hue Range Masks | Band Pass Masks | Layer Colour Tagging | Pattern Layers | Pattern Layers for Plans & Diagrams | Masking Adjustment & Live Filter Layers | Using Fill Layers (New: 11/10/23)

     Tools: Configuring the Tools Panel | Artistic and Frame Text Tools | Paint Brush Tool | Clone Brush Tool & Sources Panel (New: 12/01/24)

     Filters and Adjustments: HSL | Black and White | Invert Adjustment | Split Toning | Colour Balance | Lighting Filter | Normals Adjustment | Unsharp Mask | High Pass Filter | Clarity | Gaussian Blur | Median Blur / Dust & Scratches | Procedural Texture: Non Destructive HDR Tone Mapping | Displace Filter | Haze Removal | Recolour Adjustment | Vibrance Adjustment | Defringe Filter (New: 29/11/23) | Astrophotography: Linear Fit | Astrophotography: Remove Background (New: 05/12/23)

     Corrective & Retouching: Inpainting | Manual Lens Correction Profiles | Transforming Selections

     Workflows & Techniques: Basic Image Edit | Lock Children Layers for Masking (New: 28/02/24) | Colouring and Annotating Plan Diagrams | Basic Architectural Visualisation Edit | Basic Edits with the Develop Persona | Elevation Rendering Workflow | Archviz Composition Workflow | Digital Collage | Infrared Emulation | Creating Volumetric Light | Unassociated Alpha and Alpha Render Passes | External Linking for Compositing | High Dynamic Range Workflows (best viewed on an HDR supported display or device) | Astrophotography: File Groups | Moon Processing Workflow (New: 12/10/23) | Wildlife Editing Example (New: 15/12/23)

     iPad Tutorials: RAW Development | Adjustment Layers | Filters and Live Filters | Mask Layers (New: 23/01/24) | Exporting | Placing Images | Command Controller | File Management: Loading & Saving

     Legacy V1 Tutorials: Desktop | iPad
  5. Hello all, we're proud to announce that alongside the V2 launch of the Affinity apps, we've produced a completely new set of video tutorials to complement the apps. These tutorials are all produced in-house by our Product Experts team. We've worked incredibly hard on these videos, and feel they represent a huge jump in both technical and presentational quality. Hopefully you all agree! The old V1 videos—now considered legacy—are still available on YouTube, consolidated into one playlist. The link for this playlist is available at the bottom of this post. Hope you all find the new tutorials useful!

     What's New (Overview of V2): What's New in Affinity Publisher 2 | What's New in V2 for iPad

     Basics: Interface Overview (New: 22/03/24) | Placing Images | Transforming Layers | Adjustment Layers | Packaging | Collecting Resources | Linked and Embedded Resources | Page Numbering | Style Picker

     Advanced: Smart Master Pages | Studio Presets | Section Manager | Auto Flow | Notes | Quick Grid | Books | IDML Import | Opening, Editing and Importing PDFs | Marquee Select Intersection | Stroke Panel | Rulers, Columns & Column Guides | PDF Password Protection (New: 30/11/23) | Tags Panel for Alternative Text (New: 30/11/23) | Layer States (New: 20/03/24)

     Text Tools: Linked Text Frames | Text Wrapping | Text on a Path | Hyphenation and Spelling Dictionaries | Running Headers | Pinning (New: 05/07/23)

     StudioLink - Designer and Photo Interworking: Designer Persona: Vector Drawing | Photo Persona: Brushing | Photo Persona: Live Filters | Designer Persona: Multi Stroke and Fill | Placing and developing RAW images

     Workflows & Techniques: OpenAsset Integration

     iPad Tutorials: Photo Persona: Paint Brush Tool | Placing Images | Designer Persona: Vector Drawing | Command Controller | Auto Flow | Smart Master Pages | Text Wrapping | Linked Text Frames | IDML Import | Packaging | Exporting and PDF Publishing

     Legacy V1 Tutorials: Desktop
  6. Hello all, we're proud to announce that alongside the V2 launch of the Affinity apps, we've produced a completely new set of video tutorials to complement the apps. These tutorials are all produced in-house by our Product Experts team. We've worked incredibly hard on these videos, and feel they represent a huge jump in both technical and presentational quality. Hopefully you all agree! The old V1 videos—now considered legacy—are still available on YouTube, consolidated into one playlist. The link for this playlist is available at the bottom of this post. Hope you all find the new tutorials useful!

     What's New (Overview of V2): What's New in Affinity Designer 2 | What's New in V2 for iPad

     Basics: Interface Overview (New: 22/03/24) | Transforming Layers | Adjustment Layers | Layer Effects | Placing and Scaling Images | Colour Panel | Undo, Redo and Undo History | Copy, Paste and Duplicate | Blend Modes | Assets | Navigator Panel | View Modes | Styles Panel | Clipping and Masking | Aligning, Distributing and Unifying Objects (New: 28/02/24)

     Advanced: Expand Stroke | Artboards | DWG/DXF Import & Export (Updated: 28/02/24) | Placed Layer Visibility | Stock Panel | Ruler and Column Guides | Style Picker | Symbols | Pixel Grid | Layer States (New: 28/02/24)

     Vector Tools: Stroke Panel | Vector Warp | Vector Flood Fill | Pen Tool and Node Tool | Shape Builder | Knife and Scissor Tools | Pencil Tool | Corner Tool | Shape Tools | Artistic & Frame Text Tools | Booleans and Compounds | Data Entry | Spiral Tool

     Raster Tools: Flood Fill Tool | Smudge Brush Tool | Paint Brush and Erase Brush Tools | Symmetry and Mirroring | Live Perspective & Mesh Warp

     Text Tools: Text on a Path

     Workflows & Techniques: Vector Flood Fill with Bitmap Patterns | Vector Mandalas

     iPad Tutorials: Exporting | Layers Panel | Navigator Panel | Assets | Shape Builder | Vector Flood Fill Tool | Smudge Brush Tool | Symmetry and Mirroring | Pen & Node Tools | Paint Brush & Erase Brush Tools | Knife & Scissor Tools | Pencil Tool | Command Controller | Rulers and Column Guides | Style Picker

     Legacy V1 Tutorials: Desktop | iPad
  7. Hey again, I would definitely go for 16GB. Outgrowing 8GB is entirely possible with higher resolution documents that have multiple layers. You should also consider that Apple Silicon uses a shared memory architecture, so the CPU and GPU both draw from the same memory pool. Back when I had the Mac Mini, macOS seemed to impose a restriction on the amount of memory the GPU could allocate (roughly half of the available 16GB). Having since moved to an M1 Max with 64GB, I've managed to get Photo to use around 45GB (which is certainly over half), so I can't comment definitively on how that works, especially since the two machines were on different OS revisions (Big Sur and Monterey). It's a long-winded way of saying: always go for the maximum you can afford! With any kind of image editing workflow, in any software, you can easily max out 8GB nowadays. The super-fast swap certainly helps, but it's no substitute for actually having that extra memory headroom.
  8. Hi @augustya, I have previously used an M1 Mac Mini with 16GB memory. Affinity Photo was never really designed for loading many RAW files simultaneously—it's not a batch RAW editor, so it will treat each RAW image loaded in as a separate document with its own memory footprint, rather than trying to be conservative with memory. RAW files are composited in 32-bit floating point precision, which also requires more memory than 8-bit or 16-bit. When used appropriately, Photo's memory management on M1 devices is honestly fine. macOS is very good at managing swap memory, and uses swap quite aggressively compared to Windows anyway. I rarely had instances when editing photographic images where things would slow down because of memory issues. Creating huge live stacks full of 16-bit images and working with 32-bit multi-layer renders would occasionally eat into swap memory, causing some hitching here and there, but for general image editing you really shouldn't be worried about it. You will find people making spurious claims that 16GB on Apple Silicon is 'like' 32GB on other devices (and similarly that 8GB is 'like' 16GB)—I believe this comes from experiencing M1/M2's very fast storage architecture, where loading in and out of swap is much faster than on older hardware generations. This means that out-of-memory situations are not as debilitating for productivity as they may previously have been. What else would you be trying to do simultaneously that may require large pools of memory? If you're trying to multi-task with memory-hungry applications, you may be better off looking at something with more memory, e.g. the Mac Studio with 32 or 64GB. I would say this regardless of what software you were trying to use—it's more about the workflow and what you expect to be doing. Hope that helps!
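To make the memory point concrete, here is a quick back-of-envelope sketch. The function name is mine, and real usage adds undo history, mipmaps and tile overhead on top of this raw figure:

```python
def raw_document_footprint_mb(width, height, channels=4, bytes_per_channel=4):
    """Approximate memory for one layer of a document composited in
    32-bit float (4 bytes) per channel, RGBA. Back-of-envelope only --
    real usage adds undo history, mipmaps and tile overhead."""
    return width * height * channels * bytes_per_channel / (1024 ** 2)

# A 24-megapixel RAW (6000x4000) needs roughly 366 MB for a single
# 32-bit RGBA layer, versus roughly 92 MB at 8-bit -- which is why
# several open RAW documents add up so quickly.
photo_32bit = raw_document_footprint_mb(6000, 4000)
photo_8bit = raw_document_footprint_mb(6000, 4000, bytes_per_channel=1)
```

Even a handful of open RAW documents at this precision approaches the practical ceiling of an 8GB machine once the OS and compositing buffers are accounted for, which matches the advice above.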
  9. Hi @cgidesign, hardware acceleration is used for all raster operations within Photo—that includes everything from basic layer transforms through to filter/live filter and adjustment layer compositing. In cases where a GPU with strong compute performance is present, the performance difference can be huge. Take the M1 Max for example: the CPU is no slouch, and scores very highly on our vector benchmark, but its raster performance is only around 1,000 points. In comparison, raster performance on the GPU is 30,000. That's a linear scale, so to say it's 30x faster than the CPU is no exaggeration. The fact that all major raster operations are accelerated is a huge undertaking, and means it's not really appropriate to compare our hardware acceleration implementation with what other software has. I understand there's a lot of frustration around the choice to disable support for AMD's Navi cards, but Photo compiles new kernels every time you add an adjustment or filter, use a tool etc. for the first time during a session. It was simply unusable on Navi because kernel loading was unacceptably slow, to the point where it could take up to a minute for a RAW file to be initially processed and displayed in the Develop Persona (multiple kernels are loaded simultaneously in this scenario). If you start to introduce vector operations to your workflow, such as text, vector shapes, poly curves and so on, the GPU must copy memory back to the CPU, and this is where most bottlenecks occur—unified memory architecture (e.g. Apple Silicon) reduces this penalty somewhat, but at present it's still not as fast as compositing purely on the GPU. The Benchmark (Help > Benchmark) will give you quite a clear idea of the performance improvement you can expect. In particular, focus on the Raster (Multi CPU), Raster (Single GPU) and Combined (Single GPU) scores. They indicate CPU compositing performance, GPU compositing performance and shared performance (copying memory between CPU and GPU when using a mix of raster/vector operations) respectively. Hope that helps!
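Because the benchmark scores are on a linear scale, converting two of them into an expected speedup is simple division. A tiny sketch, with a function name of my own choosing:

```python
def speedup(fast_score, slow_score):
    """Scores on a linear scale: the ratio is the real-world speedup factor."""
    return fast_score / slow_score

# The M1 Max raster figures quoted above: ~30,000 on GPU vs ~1,000 on CPU.
m1_max_raster = speedup(30_000, 1_000)
```

The same division applies to comparing two machines' scores, e.g. a home versus a work machine on the same test.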
  10. Have you tried disabling Automatically adjust brightness? I think you meant to say you have the Apple Studio Display—that has a brightness sensor and will automatically graduate the display brightness according to ambient conditions, which may be why you are running into the beach ball issue. I wouldn't advocate disabling Metal Compute and using an OpenGL view unless absolutely necessary, as you'd be greatly reducing performance—do try disabling automatic brightness first and see if that helps.
  11. Hi @Darksky, I believe meridian flip was addressed in 1.10, so Photo will recognise this and flip them—it aligns on star features and will transform the individual subs to achieve this. I've definitely stacked a data set with meridian flip successfully. There's another strange scenario I encountered when capturing via iTelescope where the capturing software applied an additional flip. I've only had this happen once, but I passed the data onto the developer responsible for the stacking functionality and he has addressed it for any future versions. Hope the above helps.
  12. Hi @Gary.JONES, Photo is not gamma correcting your initial colour values. The document view is gamma corrected, but the actual document itself is composited in linear 32-bit RGB. The Digital Colour Meter you are using will be picking values from the transformed document view rather than the original linear values. Try using Photo's native colour picker—you should hopefully find the values are much closer to those you are expecting to see.

      The reason the document view is gamma corrected is to match the output you would get if the linear 32-bit RGB colour values were converted to gamma-encoded values, e.g. for typical TIFF/JPEG export. This way, you avoid the equivalent Photoshop workflow where you tone stretch, then have to flatten and convert to 16-bit with tone mapping. If you go to View > Studio > 32-bit Preview to bring up the 32-bit preview panel, you can switch the display transform to Unmanaged, which is linear light. This will display the scene-referred values rather than display-referred. Do make sure you're using ICC Display Transform for editing and exporting, however—otherwise you will be surprised when your exported gamma-encoded image looks brighter and washed out.

      Don't forget that Photo also adds Levels and Curves adjustments to stretch the initial data—it does this both in the Astrophotography Stack persona and if you open a single FIT file. If you hide or delete these, you will be able to examine and colour pick the linear colour values.

      tl;dr: Photo does not modify the original linear colour values—all compositing is in linear colour space, but a non-destructive gamma transform is applied to the document view by default so that tone mapping isn't required when exporting. Hope that helps!
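To illustrate why a screen-sampling tool reads different numbers than the linear data actually contains, here is the standard sRGB transfer function (IEC 61966-2-1) as a sketch. Affinity's actual display transform is ICC-profile driven, so treat this as an approximation of the idea rather than the app's exact maths:

```python
def linear_to_srgb(v):
    """Standard sRGB transfer function (IEC 61966-2-1): encodes a
    linear-light value in 0..1 to a gamma-encoded display value.
    The document *view* applies a transform like this; the underlying
    32-bit document data stays linear."""
    if v <= 0.0031308:
        return 12.92 * v          # linear segment near black
    return 1.055 * v ** (1 / 2.4) - 0.055  # power segment

# A linear value of 0.2 encodes to roughly 0.48 -- so a tool sampling
# the gamma-corrected view reads a much higher value than the 0.2
# actually stored in the linear document.
encoded = linear_to_srgb(0.2)
```

Switching the 32-bit preview to Unmanaged, as described above, effectively bypasses this encoding so the picked values match the scene-referred data.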
  13. Hi @PatrickCM, the Affinity apps will perform document to display colour management using bespoke display profiles that are created with X-Rite measurement devices (e.g. i1Display Pro), which is a large part of ensuring colour accuracy, but they don't currently support custom camera profiles that are created by photographing and referencing the colour checker passport. I'm not sure which applications explicitly have support, but I suspect apps focused heavily on batch photographic development will have it (such as DxO). You can use the white balance portion of the colour checker passport however, as Affinity Photo's Develop Persona has a white balance tool that you can use to sample from an image with the passport in shot. Hope the above helps!
  14. The benchmark in Photo will be representative for all the Affinity apps—the CPU single and multi vector scores illustrate the kind of performance to expect in Designer as it primarily deals with vector operations (which are all CPU-based). Text will be rendered on CPU as it's vector. I believe layer effects (e.g. drop shadow, gaussian blur) are also rendered on CPU in Designer but I'll double check. There could be other factors at play—it's worth doing the benchmark first on both machines to compare the scores, as it may highlight an area where one is weaker...
  15. Vector operations are not processed on the GPU. Perhaps display resolution could be a factor as it will influence the document view rasterisation resolution, e.g. a 1920x1080 display versus 2560x1440 (or even 3840x2160, which requires rasterisation at 4x the spatial resolution of 1080p). It may be worth benchmarking in Affinity Photo if you have it and comparing the CPU Vector Single and Multi scores (the top two entries) between your home and work machines to see which one is faster in practice?
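The resolution point above is simple pixel arithmetic; a quick sketch with a helper name of my own:

```python
def view_raster_pixels(width, height):
    """Pixels the document view must rasterise on each redraw
    at a given display resolution."""
    return width * height

# 3840x2160 rasterises 4x the pixels of 1920x1080, and 2560x1440
# about 1.8x -- so the same document costs notably more to redraw
# on a higher resolution display.
uhd_vs_fhd = view_raster_pixels(3840, 2160) / view_raster_pixels(1920, 1080)
qhd_vs_fhd = view_raster_pixels(2560, 1440) / view_raster_pixels(1920, 1080)
```

This is one reason two machines with similar benchmark scores can feel different in practice if their displays differ.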
  16. What issues are you having with Affinity Designer? OpenCL is only used to accelerate raster paint brushing in Designer, all the vector operations are performed on CPU anyway. As far as I'm aware there are no issues with DirectX view presentation on Navi cards.
  17. Just to clarify this slightly: OpenCL is typically not utilised across an entire app, and is mainly used for specific functionality—e.g. Photoshop uses hardware acceleration for a small subset of its features, such as the blur gallery, camera raw development, neural filters and some selection tools (source: https://helpx.adobe.com/uk/photoshop/kb/photoshop-cc-gpu-card-faq.html). In fact, on that page, OpenCL appears to be used very sparingly. Affinity Photo leverages hardware acceleration for practically all raster operations in the software—from basic resampling to expensive live filter compositing—requiring many different permutations of kernels to be compiled on the fly. Every time you add a new live filter or apply a destructive filter, paint with a brush tool, add an adjustment layer or perform many other operations, these all need to load kernels for hardware acceleration. With the majority of GPU and driver combinations, this kernel compilation time is more or less negligible, but as Mark has posted previously with benchmarks, the Navi architecture with its current drivers appears to be problematic here. Any kind of comparison to Photoshop's OpenCL implementation is not appropriate, as the two apps use it very differently. I previously had a 5700XT, and loading a single RAW image was painfully slow because a number of kernels needed to be compiled simultaneously (for all the standard adjustments you can apply in the Develop Persona). We're talking almost a minute from loading in a RAW file to being able to move sliders around. The previous Polaris generation of cards is, to my understanding, absolutely fine with OpenCL kernel compilation.
  18. Hi @ABSOLUTE Drone and others in the thread, I've developed some non-destructive tone mapping macros that may help here: they're spatially invariant, so they're very useful for HDRIs/360 HDR imagery. You just apply the macro and it adds a new group; you can then go in and set the base exposure/white balance among other things. They're available for free (just put "0" into the Gumroad "I want this" box) from here: https://jamesritson.co.uk/resources.html#hdr It may also be worth having a look at the Blender Filmic macros, available directly underneath the HDR tone mapping macros. They effectively do the same thing but are designed to emulate the Filmic transforms within Blender, so you have different contrast options. Hope that helps! There's also a video tutorial; at 11:38 you'll see the 360 seam-aware tone mapping.
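For anyone wondering why a spatially invariant (global) curve is seam-safe for 360 imagery, here is a generic extended-Reinhard sketch. This is a stand-in illustration of the technique, not the actual macros linked above:

```python
def reinhard_tonemap(v, white=4.0):
    """Spatially invariant (global) tone-mapping curve: extended Reinhard.
    Each pixel is mapped independently of its neighbours, so the left and
    right edges of an equirectangular 360 HDR image map identically and
    the wrap-around seam stays invisible (no local halos across it).
    `white` is the linear value that maps to full output (1.0)."""
    return v * (1.0 + v / (white * white)) / (1.0 + v)
```

By contrast, a local (spatially variant) operator considers neighbouring pixels, which can produce mismatched values on either side of the seam once the panorama is wrapped back into a sphere.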