
James Ritson


About James Ritson

  • Rank
    Product Expert

  1. @Danielcely @Russell_L @Griffy apologies for bumping this thread, but there's hopefully a solution. Do any of you happen to be using desktop mice, and if so, are they gaming mice, Intellimouse or other high-end devices? Big Sur has an issue with mice that have high polling rates, and it causes all sorts of stuttering and performance issues across the board. I have an Intellimouse Pro and brush work in the Affinity apps is unusable—in fact, anything relying on the mouse pointer, so manipulating layers, selecting UI elements etc. Trying to navigate around the Big Sur UI (especially the Dock) causes stutter and slowdown as well.

     The ham-fisted solution was to download the Microsoft Mouse and Keyboard Center app on Windows, change my mouse polling rate from 1000Hz to 125Hz, and now everything in macOS is fine, including performance in the Affinity apps. Worth a try on the off chance that you are all using non-Apple pointer devices? You should be able to find software for most manufacturers (e.g. Razer has a management app for both Mac and Windows) to change the polling rate and see if it improves things.

     Here are a few links to the polling rate issue:
     https://www.reddit.com/r/MacOS/comments/juj8zs/mouse_lagging_in_big_sur/
     https://discussions.apple.com/thread/252047056
     https://www.reddit.com/r/MacOS/comments/jzd8iq/macos_big_sur_1101_bug_with_mouse/
     https://www.reddit.com/r/MacOS/comments/k3880z/for_mouse_user_fixing_big_sur_lag_when_using_mouse/
  2. Thanks, we're aware of various unlinking issues; let us know if you find any more! So far it seems to be dragging layers up and down the layer stack (using the Arrange options is fine), and moving layers in or out of enclosures or to different enclosures. @Old Bruce, with your mention of dragging the pair of layers, it might be worth trying the Arrange menu options (or the shortcuts: CMD+brackets on Mac, CTRL+brackets on Windows) to see if that works.
  3. Hi @CJI, I'll do my best to address the issues you've raised. I do a lot of 3D retouching work from Blender and 3ds Max/V-Ray (EXR output), so I'll hopefully be able to give you a few pointers here. First of all, I assume you're using OpenEXR or Radiance HDR formats in 32-bit? 32-bit in Photo is a little different, as it's not just linear but also unbounded, whereas Photo's 16-bit and 8-bit formats are gamma corrected and bounded. This means there are two main points to address: tone mapping and compositing.

     How would you usually achieve tone mapping in Photoshop? In Photo, one option is to use the Tone Mapping Persona (not Develop), which gives you methods of mapping unbounded values to within the 0-1 range. You can also use OpenColorIO transforms—for example, with Blender, you can apply the Filmic view transform and looks. I did a video on that a couple of months ago. You can also try various adjustments for a more manual approach—the Exposure slider to shift extreme highlights down, for example, then Curves and Levels with a gamma adjustment.

     This brings me on to compositing—everything operates in linear space (scene linear) within 32-bit, with a gamma-corrected view transform applied afterwards. It does mean that adjustments in particular may behave differently or seem more "sensitive". Photo allows you to use pretty much the entire roster of tools, adjustments and filters (with the exception of median operators) in 32-bit, but there are a few caveats. Most adjustments should avoid clipping or bounding values, but pre-tone-mapping I would stick to Exposure, Curves, White Balance, Channel Mixer, Levels etc.

     Brightness & Contrast will only operate on a 0-1 value range, but won't clip any unbounded values. Curves operates on the 0-1 range by default, but you can change the minimum and maximum input values—so if you wanted to manipulate only bright values above 1, for example, you can set the minimum to 1 and the maximum to 100 (or whatever the brightest value is in your document). Adjustments like HSL, Selective Colour and others that focus on colour manipulation are best saved for post-tone-mapping, where your pixel values will be in the 0-1 range. If you use OCIO adjustment layers (or check out my Blender Filmic macros, which do away with the OCIO dependency), you can add these adjustments above the tone mapping layers or group. If you want to use the Tone Mapping Persona, I'd advise you to do Layer>Merge Visible, tone map the merged pixel layer, then put your adjustments above this.

     Hopefully by following the above advice you'll avoid the clipping and washing out that you describe. I suspect you may not have tone mapped your image and are trying to use adjustments like HSL and Selective Colour on the linear values? Converting to 16-bit at this point will not help, since unbounded values outside 0-1 will be clipped. You need to tone map first using the methods described above, then you can manipulate colour freely.

     That said, as I've covered above, there are certain colour manipulations you can do on the linear values pre-tone-map. Channel Mixer, for example, won't clip values, nor will White Balance. I also do a lot of stacked astrophotography editing in 32-bit linear, and sticking a White Balance adjustment before tone stretching is a really powerful way of neutralising background colour casts. It's useful with render compositing too, since you can completely shift the temperature and tint without introducing artefacting.

     One final caveat, then—have you configured OpenColorIO at all with your Photo setup? This throws people off, because when you do have it configured, opening an EXR or HDR document will default to the OCIO transform method (the final document-to-screen conversion) rather than ICC-based. This is great if you intend to export back to EXR/HDR and simply want to do some retouching work, but if you plan to export from Photo to a standard delivery format like TIFF/PNG/JPEG, you need to be using ICC Display Transform for an accurate representation. You can configure this on the 32-bit Preview panel (View>Studio>32-bit Preview).

     To follow on from this, you mentioned profiles at the bottom of your post—I think you might be referring to document profiles? Don't get too hung up on this: in linear unbounded, the colour profile is somewhat arbitrary, and is only used with ICC Display Transform to convert and bound the linear values you're working with into gamma-corrected values. With the Unmanaged or OpenColorIO view options, this profile does not matter. If you're aiming for web delivery, stick to sRGB (Linear) with ICC Display Transform set and everything will be fine!

     Apologies for the small essay, hope you find all the above useful.
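To make the bounded/unbounded distinction concrete, here's a minimal NumPy sketch. It is illustrative only (Affinity's Tone Mapping Persona uses its own, more sophisticated operators); a simple Reinhard-style curve stands in for a real tone map, and the pixel values are made up:

```python
import numpy as np

# Hypothetical unbounded scene-linear pixel values (e.g. from an EXR render):
# anything above 1.0 is highlight detail a bounded 8/16-bit format would clip.
linear = np.array([0.02, 0.18, 0.5, 1.0, 4.0, 16.0])

# Naive clip-and-gamma-encode: everything above 1.0 collapses to white.
clipped = np.clip(linear, 0.0, 1.0) ** (1.0 / 2.2)

# Simple global tone map (Reinhard, x / (1 + x)) compresses the unbounded
# values into [0, 1) *before* gamma encoding, so highlight detail survives.
tone_mapped = (linear / (1.0 + linear)) ** (1.0 / 2.2)

print(clipped)      # the 4.0 and 16.0 highlights both become 1.0
print(tone_mapped)  # the highlights remain distinguishable
```

This is why converting straight to 16-bit without tone mapping loses the highlights: the clip happens first, and no later adjustment can recover the difference between the 4.0 and 16.0 values.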
  4. Well, you would need to research whether the listed monitors will accept an HDR10 signal. If they advertise HDR specifically and you can find a peak brightness value listed, such as 1000 nits/600 nits, then chances are you will be good to go. Even better if you can find detailed specifications and confirm they will accept HDR10. I cannot speak to the USB-C connectivity on those monitors and whether it would also carry an HDR10 signal, but since you're using a Mac with a discrete GPU, I guess that's not an issue here. Unless a third-party display specifically advertises it—none that I know of—EDR is limited to Apple displays and (possibly) LG's third-party Thunderbolt displays. However, just to reiterate @Johannes, you would first need to update to Catalina or Big Sur. I believe EDR support is available on Mojave, but not HDR; you will need Catalina as a minimum for that. For all intents and purposes, though, they behave the same in the Affinity apps, and both are simply referred to as EDR in the user interface.
  5. Hi @Johannes, Apple opened up HDR support to most displays that accept an HDR10 signal with Catalina (EDR is their term for their own displays). I've tested it successfully on a Samsung CHG70, Acer XV273K and a couple of larger monitors/TVs. Apple displays themselves will typically have EDR, which is activated automatically and is available within the Affinity apps when working with 32-bit format documents. Displays include the 2019 MacBook Pro, 2020 iMac (I think?), Pro Display XDR etc. At one point the third-party LG 5K monitors supported it in a public beta, but I'm not sure if that's still the case.

     Any modern display with genuine HDR support should be fine—be aware of lower-end models that have some kind of "HDR picture processing"; they need to actually accept an HDR10 signal. If in doubt, look for VESA-certified models, e.g. HDR400, HDR600, HDR1000 etc. Looking at the models you've listed, as long as they actually accept an HDR signal they should theoretically work with macOS's HDR compositing.

     The bigger issue is making sure you have the right connectivity and OS updates—I noticed you're on Mojave; I believe you need Catalina as a minimum. The RX 580 is Polaris architecture, so you should be OK there. Use an up-to-date DisplayPort cable (1.4) or HDMI cable (2.0 or higher). Try to use DisplayPort, since HDMI 2.0 has various a and b permutations which affect the maximum chroma sampling and refresh rate you can achieve at 4K HDR. I don't have the figures to hand at the moment, but I think the RX 580 might only be HDMI 2.0, so just stick with DisplayPort to be safe. You're on a Mac Pro with a discrete GPU, so you don't need to mess around with dongles—good news there! (There are USB-C DisplayPort dongles that support 4K60 HDR, and Apple's HDMI adapter will do 4K HDR, but I'm not sure about chroma sampling limitations there.)

     Assuming all your hardware supports it, and you've got Catalina or Big Sur installed, you should have an HDR toggle in your display preferences. Then in your Affinity app (typically Photo or Designer), make sure your document is in 32-bit and open the 32-bit Preview panel (View>Studio). Enable EDR and values above 1.0 will be mapped to the extended dynamic range. And that should be it! Hope that was helpful.
  6. Hey @jomaro, hopefully this should give you enough info to work with. Looks like you have an OCIO configuration set up, and your EXR file is appended with "acescg"? So Photo will convert from ACEScg to scene linear because it's found a matching profile. It's just the way 32-bit linear in Photo works—everything is converted to scene linear, then the colour space is defined either by traditional display profile conversion (ICC) or via OCIO. You choose which using the 32-bit Preview panel (View>Studio>32-bit Preview). Bear in mind that if you plan to export from Affinity Photo as a gamma-encoded format (16-bit, 8-bit JPEG/TIFF etc), you need to preview using ICC Display Transform.

     So just to clarify: the document's colour profile (e.g. sRGB, ROMM RGB etc) is arbitrary and only applied when using ICC Display Transform—then the scene linear values are converted and bounded to that profile. You would need to install a given ICC profile for it to show up. If you choose Unmanaged, you'll be looking at the linear values. OCIO Display Transform will convert the scene linear values according to the view and device transforms selected in the combo boxes.

     However, I suspect you probably want to be managing with OpenColorIO. Looks like you've already managed to set it up, but here's a reference video just in case. Within the 32-bit Preview panel, you will want to choose OCIO Display Transform (it will be enabled by default with a valid OCIO configuration). Then you set your view transform on the left and device transform on the right. The OCIO adjustment layer is for moving between colour spaces within the same document—you might want to do this for compositing layers with different source colour spaces, for example. You can also bake in colour space primaries if you wish (typically going from scene linear to whichever colour space you require).

     Yes, Photo can convert to profiles on import/export by appending the file name with a given colour space. For example, if you imported a file named "render aces.exr", it would convert from the ACES colour space to scene linear. Similarly, if you append "aces" to your file name when you export back to EXR, it will convert the document primaries from scene linear back to ACES. Hopefully the above all helps? Let me know if you have any further questions!
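As a rough illustration of that filename convention, the matching logic might look something like this sketch. The function name and the set of recognised colour-space tokens are hypothetical (they would come from your OCIO configuration); Photo's actual implementation is not public:

```python
# Hypothetical sketch of filename-based colour space detection: the last
# space-separated token before the .exr extension names the source space.
# The recognised names below are assumed examples from an OCIO config.
KNOWN_SPACES = {"aces", "acescg", "lin_srgb"}

def colour_space_from_filename(filename):
    """Return the colour space token appended to the file name, if any."""
    stem = filename.rsplit(".", 1)[0]   # strip the extension
    token = stem.split()[-1].lower()    # last space-separated token
    return token if token in KNOWN_SPACES else None

print(colour_space_from_filename("render aces.exr"))  # "aces"
print(colour_space_from_filename("beauty_pass.exr"))  # None (no conversion)
```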
  7. Hi again @nater973, that's interesting regarding the monitor difference. I have a 4K monitor which is running nearer 5K (4608x2592) before being downsampled, and I saw no issues. I wonder if your BenQ has some artificial sharpening being applied, or whether it's some kind of retina scaling issue within Photo on Windows. I'll try to hook my 4K display up to my Windows box later and see if I can reproduce it. Just something to try if you have time—in Edit>Preferences>Performance you'll see a Retina Rendering option. Does the grid pattern change or disappear if you set this to Low or High quality rather than Auto? If it prompts you to restart, you can instead just go to View>New View for your settings to take effect.

     If that's the noise level from stacking, what did your original exposures look like? There's still a lot of chromatic noise in your final image, which really should have been cancelled out to some degree by stacking—are you sure the image stacked successfully?

     PS: if you don't need star alignment (I'm guessing you don't, due to it being a wide-field shot), you could always try pre-processing the NEF files to 16-bit TIFFs, then stacking them in Photo using File>New Stack (don't stack RAWs, as you get no tonal control). With a Mean operator (the default is Median) you might be surprised at the noise reduction you can achieve. Possibly worth a try anyway! Thanks again, James
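To illustrate why Mean stacking helps, here's a small NumPy simulation on synthetic data (not Photo's implementation): averaging N frames of uncorrelated noise reduces the noise level by roughly the square root of N.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate 16 noisy exposures of the same flat patch of sky:
# a true signal of 0.2 plus Gaussian sensor noise of sigma 0.05.
true_signal, sigma, n_frames = 0.2, 0.05, 16
frames = true_signal + rng.normal(0.0, sigma, size=(n_frames, 64, 64))

stacked = frames.mean(axis=0)  # the Mean stack operator

# Uncorrelated noise averages down by ~sqrt(n_frames), so ~4x here,
# while the underlying signal is preserved.
print(frames[0].std())  # roughly 0.05
print(stacked.std())    # roughly 0.0125
```

This is also why a lot of residual chromatic noise after stacking is suspicious: if the frames really were averaged, random per-frame noise should have dropped substantially.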
  8. Hi @nater973, there are a couple of things to dissect here (I'm internal staff, so the samples have been shared with me, hopefully that's OK?).

     Firstly, are you seeing the grid pattern with a particular image viewer, or perhaps when uploaded? I've checked your output JPEG in Photo and macOS Preview and cannot see any grid artefacting. I've also tried resampling the TIFF you've provided using different resampling methods (Bilinear for the softest resampling, for example) and cannot see any particular grid patterning. What's in your layer stack when editing? Are you using any live filters, for example? Perhaps try flattening your document before using Document>Resize Document to see if the results differ?

     Whilst I can't see any grid patterning, I can see shimmering when moving between different zoom levels. There is a lot of high frequency content in your document as a result of the noisy sky. Have you applied some additional sharpening as well? I would propose at the very least removing all the colour noise—this may help with your grid pattern issue, since it should hide obvious Bayer pattern noise, but it will also help greatly with compression efficiency when you come to export. You can de-noise destructively (Filters>Noise>Denoise) or non-destructively (Layer>New Live Filter Layer>Noise>Denoise). Whichever method you use, just bring the Colour slider up to about 15% and you should find it gets rid of the colour noise without sacrificing your star colour information.

     For export, I would also recommend a small amount of luminance de-noising as well (before resampling your document, if you choose to do so). This will just take the "edge" off all the high frequency detail and make it more compressible. If you want to go further, some kind of low pass may help as well (e.g. a small amount of Gaussian blur). Just by de-noising, I managed to halve the file size of the eventual export at full resolution (JPEG, at 90 quality): it went from 31MB to 15MB.

     Hope the above helps—if the issue persists, could you perhaps capture a screen grab of however you are viewing the image when the grid pattern is visible? Finally, to give this post some interesting visuals, here's a comparison of the image before/after de-noising in the frequency domain! You can see this by using the Scope panel (View>Studio>Scope) and selecting Power Spectral Density. It's useful for finding out the frequency "composition" of your image (and potentially how noisy it is).
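As a rough illustration of what the Power Spectral Density scope shows, here's a NumPy sketch on a synthetic image, with a simple box blur standing in for a real denoise filter; de-noising visibly reduces the high-frequency share of the spectrum, which is also what makes the file compress better:

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth gradient stands in for real image content; add high-frequency
# sensor-style noise on top of it.
x = np.linspace(0.0, 1.0, 256)
clean = np.tile(x, (256, 1))
noisy = clean + rng.normal(0.0, 0.1, clean.shape)

def high_freq_fraction(img):
    """Fraction of spectral power outside a low-frequency centre block."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    low = spectrum[cy - 16:cy + 16, cx - 16:cx + 16].sum()
    return 1.0 - low / spectrum.sum()

# Crude de-noise: a 3x3 box blur via slicing; enough to show the effect
# on the spectrum, though a real denoise filter is far more selective.
denoised = (noisy[:-2, :-2] + noisy[:-2, 1:-1] + noisy[:-2, 2:] +
            noisy[1:-1, :-2] + noisy[1:-1, 1:-1] + noisy[1:-1, 2:] +
            noisy[2:, :-2] + noisy[2:, 1:-1] + noisy[2:, 2:]) / 9.0

print(high_freq_fraction(noisy))     # large high-frequency share
print(high_freq_fraction(denoised))  # noticeably smaller after blurring
```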
  9. Hey all, recently I've been re-recording some old videos and producing some new ones too. They've been going up on YouTube, but I've now updated the original post with the new links. Here are all the new/re-uploaded videos:
     • New document with templates
     • Tool cycling
     • Keyboard and Mouse Brush Modifier
     • Stacking: Object removal
     • Panoramas
     • Sky replacement
     • Bitmap pattern fills
     • Selecting sampled colours
     • Selection Brush Tool
     • Freehand Selection Tool: Freehand, Polygonal and Magnetic modes
     • Procedural Texture: Nonlinear Transform Correction
     • Editing metadata
     • Retouching scanned line drawings
     • Applying Blender Filmic looks
     • Compositing 3ds Max and V-Ray render passes
     As always, hope you find them useful!
  10. You can create an intensity brush from a mask layer—bear in mind, however, that it will use the document bounds for the brush unless you make an active selection of the mask contents first (CMD-click). I've queried this behaviour, as I can't think of any scenario where you would want the entire document bounds being used for the brush contents.
  11. Hey @Roland Rick, it's worth asking whether you are applying tone mapping following on from an HDR merge, or whether you're just applying it to single-exposure images? A resultant HDR merge will typically be much cleaner (as it's equalising then merging exposures based on the most detailed pixel information); I'd be surprised if you were getting very noisy results from a bracketed set of images. If you're tone mapping single images, it would make more sense that local contrast especially would bring out noise.

      If that's the case, have you tried the Clarity filter instead? It's not quite the same effect, but it might be suitable for your requirements. You can use the destructive version (Filters>Sharpen>Clarity), which will let you apply it quite aggressively, or you can use it non-destructively (Layer>New Live Filter Layer>Sharpen>Clarity), which will apply it dynamically at the expense of overall strength (the effect will be less aggressive).

      You may notice on the HDR Merge dialog there's an option for noise reduction: this is typically recommended when merging RAW images, so perhaps ensure it's enabled if that's what you are doing. With JPEGs, however, there's typically some in-camera noise reduction that has already been applied, so I would avoid applying more (although every camera is different, so always try for yourself!). Hope that helps.
  12. Apologies if I've misunderstood this, but couldn't you just use 32-bit format for your document? 32-bit in all the apps uses linear compositing, but has a gamma-corrected view transform applied non-destructively, so the document view looks consistent with the exported result (as you would typically export to a non-linear format with gamma-encoded values). You can toggle this behaviour on the 32-bit Preview panel if you need the linear unmanaged view (or an OpenColorIO transform), but it defaults to ICC Display Transform, which is the exact workflow you describe: all colour blending operations happen in linear space, with a final view transform to non-linear space.

      If you wanted linear compositing within a non-linear RGB document format (so 8-bit or 16-bit), you could approximate this by using two live Procedural Texture layers: one underneath the layers you wish to blend in linear space, and one above. For the one beneath, you would create equation fields for each channel and raise the channel value to the power of 2.2 (e.g. R^2.2); for the one above, you would raise to the reciprocal power, so R^0.454 etc. Just an approximation, but perhaps close enough for your needs?

      I've attached some screen grabs: I exported two 16x16 images with a black triangle—one composited in 32-bit, one in 16-bit. To visualise them, I loaded them back into Photo and screen-grabbed the resulting document views. Finally, here is a 16-bit non-linear sRGB document using the live Procedural Texture technique I mentioned above. Hope that helps!
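The R^2.2 / R^0.454 trick above can be sketched in a few lines of Python. Illustrative values only; a real sRGB transfer function is piecewise rather than a pure power curve, which is why this is an approximation:

```python
# Decode the gamma-encoded values with ^2.2, blend in linear light, then
# re-encode with the reciprocal exponent (~0.4545), mirroring the two
# Procedural Texture layers (R^2.2 beneath, R^0.454 above).
GAMMA = 2.2

def blend_linear(a, b, t=0.5):
    """Blend two gamma-encoded values in linear light."""
    lin = (a ** GAMMA) * (1 - t) + (b ** GAMMA) * t
    return lin ** (1.0 / GAMMA)

def blend_gamma(a, b, t=0.5):
    """Naive blend directly on the gamma-encoded values."""
    return a * (1 - t) + b * t

# A 50% mix of black and white: linear compositing gives a lighter result
# than the naive gamma-space blend, matching what you see in 32-bit.
print(blend_gamma(0.0, 1.0))   # 0.5
print(blend_linear(0.0, 1.0))  # ~0.73
```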
  13. Well, I've waited almost nine years for an upgrade, so I finally pulled the trigger—swapped everything out this afternoon and did a fresh install. I was sceptical about how much faster a new CPU and RAM combo would be over my old setup, but... i9-10850K at stock speeds, 64GB RAM at 3200MHz, 2070 Super. I might push for an overclock and see if that raster GPU score goes any higher. I'm also surprised that the integrated Intel 630 graphics beefs up the multi-GPU score significantly; I really wasn't expecting that. I wonder if anyone's gotten hold of an Nvidia 3080 yet?
  14. Sooo... overclocked my 3930K CPU to 4.6GHz (from 3.2GHz) and my 2070 Super score has shot up considerably—talk about CPU-bound! Here's 3.2GHz: And 4.6GHz:
  15. MacBook—8-core i9, 64GB RAM, 5500M 8GB & Intel UHD 640: Windows: i7 3930K 3.2GHz, 32GB RAM, GeForce 2070 SUPER Not too bad given the age of the CPU (starting to feel like it's bottlenecking the GPU)—I see @jc4d has almost double the GPU raster score when paired with a beefy CPU! I did previously run a benchmark where my score was lower at 3743 as well.
