James Ritson

Staff
Everything posted by James Ritson

  1. Hey @Morten Lerager, you can indeed import FIT files into Affinity Photo and it will open them in a linear 32-bit pixel format. The data and compositing are genuinely linear, but there are a couple of things to note:
A gamma-corrected view transform is applied (but only to the view, not the actual document pixel values) so that the result on-screen looks consistent with a final export to a gamma-encoded format such as JPEG.
Levels and Curves adjustment layers are added by default for basic tone stretching. You can hide or delete these.
To go back to a linear format, Photo does offer the ability to save as a 32-bit TIFF. Alternatively, you can use JPEG-XL, OpenEXR or Radiance HDR (not sure if PixInsight supports these). I'm not sure how inserting Affinity Photo into a PixInsight workflow pre-tone-stretch would be beneficial though—wouldn't you just tone stretch in PixInsight then export to Photo for further editing? It may be worth pointing out that you can achieve all manner of tone stretching methods in Photo; they're just not readily available as easy filters that you can apply. Some of them (e.g. colour preserving tone stretch, similar to Arcsinh) are implemented via macros which you can download. Hope the above helps!
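The view-transform behaviour described above can be sketched in a few lines: the document stays linear, and a transfer function is applied only when producing the display image. A minimal sketch in Python/NumPy, assuming a standard sRGB encode for the view (the exact transform Photo applies is not specified here):

```python
import numpy as np

def srgb_encode(linear):
    """Piecewise sRGB transfer function (IEC 61966-2-1).

    Applied to a copy of the data for display only; the document's
    pixel values remain linear.
    """
    linear = np.clip(linear, 0.0, None)  # encode non-negative light only
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * linear ** (1.0 / 2.4) - 0.055)

doc = np.array([0.0, 0.18, 1.0])   # linear document values
view = srgb_encode(doc)            # what the screen shows
# doc itself is untouched: an export to EXR or 32-bit TIFF keeps the linear values
```

The point of the split is that tone stretching and compositing operate on `doc`, while `view` exists purely so the on-screen result matches a gamma-encoded export.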
  2. @DanAllen you need to be using a Metal view in order for EDR compositing to work (not to be confused with Metal Compute). Designer defaults to OpenGL, but you can change it by going to the title menu and Settings>Performance. Look under Display and it will be set to OpenGL. Change this to Metal and restart Designer, and you should then find that Enable EDR can be checked. Here's a screenshot: Hope that helps!
  3. This result could be that Windows 10 is tone mapping the HDR values to SDR, or it's falling back down the list of chunks in the metadata (the HDR signalling is contained in the cICP chunk, but if the decoder doesn't support the cICP or iCCN chunks it should use sRGB, iCCP, gAMA/cHRM in that order). I wonder if it's the latter, as the bright highlights on the shield don't look sufficiently tone mapped and are still slightly blown out. Did you screenshot the Windows photo viewer to post it here? That could add another layer of colour management that prevents us from seeing the exact result you're describing. That said, if the PNG imports fine back into Affinity Photo, then the HDR pixel values and the chunks signalling the transfer function and colour space are intact. You might get the intended result if you import the PNG into something like an NLE that supports HDR display mapping (perhaps Davinci Resolve?)
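If you want to check which colour-signalling chunks a PNG actually contains, the chunk layout (an 8-byte signature followed by length/type/data/CRC records) is simple enough to walk by hand. A sketch in Python; the chunk names to look for (cICP, iCCP, sRGB, gAMA, cHRM) come from the PNG specification:

```python
import struct

PNG_SIGNATURE = b'\x89PNG\r\n\x1a\n'

def list_chunks(data: bytes):
    """Return the chunk types in a PNG byte stream, in file order."""
    assert data[:8] == PNG_SIGNATURE, "not a PNG"
    chunks, pos = [], 8
    while pos < len(data):
        length, ctype = struct.unpack('>I4s', data[pos:pos + 8])
        chunks.append(ctype.decode('ascii'))
        pos += 12 + length        # 4 length + 4 type + data + 4 CRC
        if ctype == b'IEND':
            break
    return chunks
```

For example, `'cICP' in list_chunks(open('image.png', 'rb').read())` tells you whether HDR signalling is present in the file at all, before worrying about how any particular viewer interprets it.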
  4. Hi @BryanB, we did address the main issue with Navi/Big Navi architecture cards—in conjunction with driver updates issued by AMD—which was OpenCL kernel compilation times (they were interminably slow). This was last year, for 2.2 release I believe. Photo accelerates the majority of its raster operations with OpenCL, so many kernels need to be loaded quickly when required. As you can imagine, this caused a bit of a bottleneck with those GPUs. I did test on a typical AMD Ryzen/GPU combo last year (I think it was a 6000 series GPU) and found performance to be much more acceptable. I've also recently used one of the weaker workstation GPUs (7000 series) and that was OK as well. An nVIDIA card would still—for the time being—give you more peace of mind in terms of driver compatibility for the way OpenCL is leveraged in Photo. I've used a 2070, 3090 and 4090, and apart from a driver issue early last year which was mitigated in-app, they have all been reliable. This is, however, one person's experience, so do bear that in mind. I build my own PCs and start with a fresh Windows install, then keep any additional software outside of what I need to use to a minimum. There is often a recommendation to stick to the Studio drivers, but in practice I haven't found the general gaming ones to be any less reliable. Hope the above is helpful in some way!
  5. Hi @johnptd, the automatic creation of adjustment layers when opening a FIT file has always been present—from your description, it sounds like it's inadvertently being performed twice, however? I can't reproduce that here on macOS, are you on Windows? Regarding the FIT file crashing, that's something the developer responsible for the astrophotography features would likely want to look at—are you able to provide a sample file? If you're happy to upload one, we can provide a private Dropbox upload link. Thanks!
  6. Hey Irving, I downloaded the data from Telescope Live today and had a quick go at processing. I came up with the attached image. I've intentionally gone a bit over the top (especially with the colour processing) to hopefully demonstrate that Photo can indeed stack the data quite well and that you can bring out plenty of detail. It would be easy enough to tone the result down if the colours are too garish. I did the following:
Mono Log Stretch on the L-RGB data layers
L-RGB colour mapping
Linear fit using Blue data as the base scale
Star separation using StarXTerminator
Super Texture on the starless layer
HSL and Selective Colour to increase saturation and then skew some of the colours to make them more vibrant
Reduce Background Luminosity
Deepen Colour Detail
Enhance Red/Green/Blue/Magenta/Yellow signals
Enhance Blue/Green Detail
Boost Red/Yellow Detail
HSL to reduce saturation slightly
Merged to Pixel layer in order to run NoiseXTerminator (not particularly necessary for this image since noise was pretty good, but just to clean it up slightly)
Enhance Structure
Highlight Protected Tone Lift
Highlight Recovery
Bandpass Sharpening (Regular)
Stars layer at very top (with Add blend mode)
Finally, I did rotate the image 180 degrees to match the version you posted above.
  7. It was introduced last year: https://www.w3.org/TR/png-3/ So far, FCPX and Davinci Resolve appear to support this (I haven't tried other editors yet), as well as macOS Sonoma and various web browsers, so it's getting fairly decent adoption. It looks to be a good solution for interchanging HDR broadcast imagery in a lossless format. OpenEXR is supported by most NLEs but has non-user-friendly colour management. JPEG-XL was possibly going to be a good solution, but Chrome dropped it (not long before/after we released V2, I think) and support for it isn't widespread. The cICP chunk in this PNG format allows the image to be tagged and processed with various video-centric colour spaces, which is more robust for broadcast workflows where the imagery needs to integrate seamlessly with video content, rather than being dependent on ICC (particularly for HDR content where you have HLG/PQ colour spaces with different transfer characteristics).
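For reference, the cICP chunk itself is tiny: four one-byte fields (colour primaries, transfer function, matrix coefficients, video full-range flag) using the code points from ITU-T H.273, which is exactly how video streams signal the same information. A sketch of building the payload for Rec.2100 PQ and HLG; the code point values below are my reading of H.273 (check the spec's tables), and for PNG the matrix coefficients field must be 0 (RGB):

```python
import struct

# H.273 code points (assumed values, see the spec for the full tables):
#   primaries 9  = BT.2020/BT.2100
#   transfer 16  = PQ (SMPTE ST 2084), 18 = HLG (ARIB STD-B67)
#   matrix 0     = identity/RGB (required for PNG)
def cicp_payload(primaries: int, transfer: int, full_range: bool = True) -> bytes:
    """Four-byte cICP chunk payload for an RGB image."""
    return struct.pack('4B', primaries, transfer, 0, int(full_range))

rec2100_pq  = cicp_payload(9, 16)
rec2100_hlg = cicp_payload(9, 18)
```

Because these are plain enumerated code points rather than an embedded ICC profile, an HLG or PQ PNG can round-trip into broadcast pipelines that already speak H.273.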
  8. Hey @Greg Zaal, how different is the hardware between the two machines? Are you running identical versions of Photo? You can check by going to Help>About and looking at the specific version number. The EXR drag-drop/place crashing has been fixed—not sure when that will make it in, but we are currently running a 2.4 beta cycle. If it goes in, I'll let you know! Also, could I get you to check something based on some testing: if you go to Edit>Settings and choose the Performance category, what is your RAM usage limit set to? If you set it to a sensible value, such as half your installed RAM (e.g. 4096MB for 8GB), do you still experience crashing when opening multiple EXRs and copy-pasting between them?
  9. Hey again @Greg Zaal, thanks for the files. I've been able to reproduce the 'place' crash on both macOS and Windows—interestingly, this happens on macOS V1 as well for me (1.10.8), but Windows V1 appears to be fine. Opening two EXR documents side by side, then copy-pasting the RGB pixel layer from one to the other is fine on both platforms here though, I'm struggling to reproduce that (both V2 and V1). Is that consistently reproducible for you? (Perhaps with OpenCL disabled as well?) We'll get the place crash logged and see if anyone else internally can reproduce the copy-paste crash.
  10. Hi @Greg Zaal, is there any chance that in V1 you had disabled OpenCL acceleration via Edit>Preferences>Performance? V2 will have this enabled by default, and although OpenCL support has improved greatly since V1 (particularly with out of memory situations), your workflow may be exposing a shortcoming there, especially as Photo will be using your device's Intel HD integrated graphics: Intel HD devices and drivers have always proven problematic with the way OpenCL is used in Photo (fairly "aggressively", as almost all raster operations in the app are accelerated). Are you on a desktop or laptop machine? On desktop, you could disable the Intel HD graphics via Device Manager (or through the BIOS)—this would enable you to continue using the 4070 for OpenCL, which may be fine (or not—see below). On a laptop, however, you would have to disable OpenCL entirely to stop the integrated graphics being used. With OpenCL active, performance and responsiveness is very much dependent on the amount of available VRAM, so even though you have 96GB of RAM, you will be constrained by 8GB on the 4070. Disabling OpenCL and using software (CPU) rendering will remove that bottleneck, the downside being that CPU-based compositing is less performant. However, if you are primarily retouching HDRIs rather than stacking multiple live filters, this may be less of an issue anyway. Based on the bit depth and dimensions of what I can see in your screen recording, just one of those documents would require at least 3.5-4GB VRAM, without accounting for overheads. Trying to copy-paste or place another document with similar requirements will likely saturate the VRAM of your GPU entirely. This shouldn't ordinarily be an issue—as mentioned above, out of memory situations were handled poorly in V1 but improved for V2—but perhaps that is what is happening here. Hopefully the above is relevant to the issues you're having, as they can then be easily mitigated. 
If not, a member of tech support or QA should be able to read the crash dumps and see what's actually happening.
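The VRAM figure mentioned above is straightforward to estimate: an uncompressed 32-bit float RGBA document costs width × height × 4 channels × 4 bytes before any overheads. A quick sketch (the example dimensions are illustrative, not taken from the recording):

```python
def vram_estimate_gib(width: int, height: int,
                      channels: int = 4, bytes_per_channel: int = 4) -> float:
    """Raw buffer size for one uncompressed document, in GiB.

    Real usage is higher: mip levels, scratch buffers and the
    compositing pipeline all add overhead on top of this.
    """
    return width * height * channels * bytes_per_channel / 2**30

# A 16k x 8k 32-bit RGBA HDRI (illustrative size):
size = vram_estimate_gib(16384, 8192)   # 2.0 GiB for the pixel data alone
```

Two such documents open at once, plus overheads, quickly approach the 8GB on a 4070, which is why copy-pasting between large HDRIs is the point where things fall over.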
  11. Hey @djwalters, I haven't experimented with mosaic stitching but I would likely try one of the two following workflows:
Stack each mosaic individually with the relevant calibration frames (if required)
Commit the stack, tone stretch the data (using whichever method you wish) then export to gamma-encoded 16-bit TIFF
Use File>New Panorama and stitch the separate TIFF files together
Continue editing
Or—and I'm not sure whether the panorama stitching process would work properly here—try stitching the linear stacked data instead:
Stack each mosaic individually with the relevant calibration frames (if required)
Commit the stack, then export to 32-bit TIFF. On the export dialog, there is no explicit preset for this, so select TIFF and then under Pixel format choose RGB 32-bit
Use File>New Panorama and stitch the separate TIFF files together
Tone stretch the resulting panorama then do any further editing as necessary
Panorama stitching will use exposure equalisation, but depending on your tone stretching method, the first process may result in colour discrepancies that could look odd when the separate tiles are stitched together. That's why I think it may be worth giving the second method a try. Hope that helps!
  12. Hi, colour decontamination is actually performed during selection refinement, there's just no explicit option for it (the selection refinement preview already has it applied). You have to choose New layer or New layer with mask as the output option for it to apply: the Selection and Mask options don't destructively alter pixel content, so you won't get decontamination with those since the process involves modifying pixel colour values. I'm not aware of that one, do you mean the option under Layer>Matting?
  13. Hey @Maxbe, the Affinity apps have had EDR/HDR display support for quite a while, since 2017/2018 if memory serves. It may be worth watching this tutorial (available in HDR of course 😉) as it will cover the different scenarios for authoring HDR images, e.g. bracketed exposure merging, single exposure processing and other non-photographic workflows:
  14. Just a thought, are the TIFF files you're trying to access actually offline copies? I had this issue a few weeks ago—Publisher will report the images as "unsupported" if they're online only, as it seemingly won't trigger Dropbox's mechanism to download offline versions of the files when they're accessed. Worth a try, as apart from that I've never had any issues working off Dropbox across the apps.
  15. Hi @Ruyton, thank you for providing the files so I could experiment with the full stacked result. From a bit of reading on various forums (PixInsight, Cloudy Nights etc), there are generally a few things to try in order to correct the green cast, including an unlinked screen transfer function, photometric colour calibration, background neutralisation, SCNR etc. I have cobbled together a workflow that does the trick, although it does involve using my astrophotography macros (free download, link here). I've attached a screen recording to this post, and here are the steps:
Delete the Curves and Levels adjustments (these are providing rudimentary tone stretching, but we are going to use a colour preserving stretch instead)
Run the SCNR Green Max/Additive macro. This will mostly neutralise the green colour cast
Run the Colour Preserving Tone Stretch macro. This will colour balance and produce a decent stretched output
Now use Filters>Astrophotography>Remove Background and sample tones until you neutralise the background. In your data there is a gradient, so this requires slightly more work
Without the macros, you could achieve a similar result with a bit of manual work. I would use Remove Background on the Stacked Image 1 layer (sampling multiple points to remove the gradient), tweak the Curves adjustment to avoid blowing out the core of the nebula, then use a White Balance adjustment above Stacked Image 1 but below the other adjustments. Moving the Tint slider towards a magenta bias rather than green will help eliminate the green colour cast. Hope the above helps! Here is the screen recording: Screen Recording 2023-11-21 at 15.11.49.mp4
  16. Hopefully I've understood this correctly! @Kahrkura perhaps frustratingly, there is no brush within all the categories that exactly mirrors the default brush settings when you first run the app. This is something I found difficult for a workflow where you switch to a textured brush but then just want a basic round brush with tight spacing (e.g. for masking purposes). The round brush presets on the Basic category have wider spacing which is insufficient for this. Try the Masking category on the Brushes panel. They're designed to be as close as possible to the default brush, so you can pick something like "Tight Spacing - Medium Soft", then bring the Hardness up to 80% if you want to match the default brush value. These use a small spacing value so are good for painting with masks. Alternatively, under any brush category click the panel options icon (the three bars to the right) and choose New Round Brush. This will create a basic round brush with the default parameters: Hardness at 80%, Spacing at 15%, Flow at 100% and Width at 64px. Hope that helps.
  17. To expand on what @firstdefence has said here: A good non-destructive option would be to use a Channel Mixer adjustment layer and set its colour model to Gray, then use any blending methods you wish (e.g. Soft Light with a low opacity value). This produces the same weighted intensity/luminosity result as CMD+Option-clicking on a layer, except it will update dynamically if you modify layer content underneath it. If you just want to apply a blend mode to a duplicate of the image content, you can use an adjustment layer such as Channel Mixer rather than duplicating the image layer. Any adjustment that doesn't immediately modify the pixel values will suffice, e.g. Curves, White Balance, Levels (adding them but leaving the parameters alone will produce an identity result). This is useful for preventing redundant copies of your image content. @joneh as you experiment further with Photo, it's worth noting that it has some non-destructive functionality that isn't immediately obvious—so whilst your techniques from PS will generally port over, please don't hesitate to ask on the forums as there may be a more effective approach in Photo. Live filter layers, for example, let you apply various filters as layers rather than having to merge your work into a single layer and apply a smart filter to it. One example would be recreating something like the Orton Effect: instead of duplicating your layer and applying a blur to it, you can instead go to Layer>New Live Filter Layer>Blur>Gaussian Blur and choose a suitable radius value. You can then set the blend mode of this live gaussian blur layer to Overlay (or whichever blend mode you prefer), then fine tune its blending using Blend Ranges (similar to Blend If): https://affinity.help/photo2/English.lproj/pages/Layers/layerBlendRanges.html This will allow you to blend the effect away from the darker tones but keep it in the brighter tones. You can position this layer above your other layer work and modify it at any point. 
For photography, there are also live implementations of filters like Clarity and High Pass, which are useful for texture/structure enhancement and detail enhancement respectively. Hope the above is helpful!
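The "weighted intensity/luminosity" result mentioned above is just a weighted sum of the channels; for sRGB/Rec.709 primaries the standard weights are 0.2126, 0.7152 and 0.0722. (The exact weights a Gray Channel Mixer uses may differ, so treat this as an approximation of the idea rather than Photo's implementation.)

```python
import numpy as np

REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])  # weights sum to 1.0

def luminosity(rgb):
    """Weighted intensity of linear RGB using Rec.709 luma weights."""
    return np.asarray(rgb) @ REC709_LUMA

white = luminosity([1.0, 1.0, 1.0])   # 1.0, since the weights sum to one
green = luminosity([0.0, 1.0, 0.0])   # 0.7152: green dominates perceived brightness
```

This is why a Channel Mixer set to Gray gives the same result as a luminosity composite: it is computing exactly this kind of weighted sum, but dynamically over whatever lies beneath it.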
  18. Just a quick thread bump to inform you that I've improved the method used for this colour correction (it now uses gamut compression as detailed in an ACES paper). The result is noticeably higher quality and more 'pleasing' than the previous thresholded gamut shift method. I've updated the screen recording and .afmacros attachment in the post above. Hope it helps!
  19. The issue is related to complexity of colour and values falling outside of the gamut being used for colour processing (Photo uses ROMM RGB internally for raw development). I wouldn't advise underexposing whilst shooting to try and mitigate the issue, as that will compromise the quality of your images: rather, it's something that needs to be addressed by Photo's RAW development (it is being looked at). Magenta solarisation in saturated blue areas is typically seen when one or two of the components (R,G,B) have negative values—from user reports, this is the most common form of artefacting. Other issues can occur as well, such as worsening of colour fringing and banding around intense areas of light. There are a handful of solutions that exist for this issue, and thanks to the open nature of the VFX community they are fairly well documented. I've recreated one of these solutions (a colour matrix shift that protects 'core' values based on a threshold) as a macro. If you want to continue developing your RAW images in Affinity Photo, you're welcome to try it and see if it helps. To use it effectively, you need to develop your RAW images to a linear unbounded colour format. This is easily done via the Develop Assistant settings. Here are the steps:
Install the macros (drag-drop the .afmacros file onto Affinity Photo's user interface)
Without opening a file, go to the main assistant settings (the robot icon), then click the "Develop Assistant" button to go to the RAW development settings
Change RAW output format to "RGB (32 bit HDR)". Important: leave Tone curve set to "Take no action"
Open your RAW file and perform any initial editing, then develop it
Run the "Gamut Compression (sRGB)" macro. You can also try the Tone Mapping variant, which will compress the dynamic range. This may be useful for scenes with intense lighting. The ROMM variants are for when you want to edit in a wider colour space—the usual caveats with colour management apply here
The image will be corrected, then converted to 16-bit per channel precision so you can continue regular editing
Here is a screen recording of the process as well: Screen Recording 2023-11-09 at 11.58.06.mp4 I've included the old gamut shift method in the macros as well, as you may find that you prefer the result depending on your own imagery (it tends to produce a more saturated result). Hope that helps, James JR - Out of Gamut Colours Fix.afmacros
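To give a feel for what distance-based gamut compression does, here is a much-simplified sketch in the spirit of the ACES Reference Gamut Compression operator: it measures each component's distance from the achromatic axis and smoothly remaps out-of-gamut distances (which correspond to negative components) back below 1. This is a toy Reinhard-style curve with made-up parameter values, not the macro's actual implementation or the production ACES operator:

```python
import numpy as np

def gamut_compress(rgb, threshold=0.815):
    """Compress components so none end up negative (toy version).

    rgb: array-like of shape (..., 3), linear values.
    A distance d > 1 from the achromatic axis means a negative
    component; distances in (threshold, inf) are remapped into
    (threshold, 1) so every output component is >= 0.
    """
    rgb = np.asarray(rgb, dtype=float)
    ach = rgb.max(axis=-1, keepdims=True)                  # achromatic axis
    d = np.where(ach == 0, 0.0, (ach - rgb) / np.abs(ach)) # distance per component
    x = (d - threshold) / (1.0 - threshold)
    cd = np.where(d <= threshold, d,
                  threshold + (1.0 - threshold) * x / (1.0 + x))
    return ach - cd * np.abs(ach)

# A saturated blue with a negative red component (the magenta-solarisation case):
fixed = gamut_compress([-0.2, 0.4, 1.0])   # all components now >= 0
```

Values inside the threshold are untouched, so the 'core' colours stay put; only the out-of-gamut tail gets pulled in, which is what makes this approach look better than a hard clip or a global desaturation.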
  20. @Demys this may not address all the problems you're having, but there's an issue with the Blender configuration when combined with Affinity Photo at the moment because the mode we use with OCIO v2 (called Legacy mode) combines ops into a single low-precision LUT. Many of Blender's colour spaces, including AgX and Filmic, use log2 allocation transforms which require a lot of precision—so rolling these into a LUT is pretty catastrophic. This likely accounts for the clipped and almost "diffuse" highlight detail you are seeing. I adapted the configuration to work with Photo and posted a link to it in the comments of the OpenColorIO video tutorial available here: Here's a link to the configuration: https://www.dropbox.com/scl/fi/geo9h5wklq23fjge4habp/JR-Blender-4.0-OCIO-adapted-for-Affinity-Photo-2.2.zip?rlkey=dg9dxababy633erxgyin89efu&dl=0 You're welcome to download and try it with both Windows and iPad versions to see if it solves your problems—do let me know!
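The precision problem described above is easy to demonstrate: bake a log2-style shaper covering a wide dynamic range into a small, linearly sampled 1D LUT and the shadows fall apart, because almost all of the LUT's samples land on bright values. A toy illustration in Python (the 20-stop range and 64-entry LUT are made-up numbers, not what Photo's Legacy mode actually uses):

```python
import numpy as np

# A log2 shaper covering roughly 20 stops (2^-10 .. 2^10), similar in
# spirit to the allocation transforms in Blender's OCIO config.
def shaper(v):
    return (np.log2(np.clip(v, 2.0**-10, None)) + 10.0) / 20.0

# Bake it into a small LUT sampled linearly over the input range
lut_x = np.linspace(0.0, 2.0**10, 64)
lut_y = shaper(lut_x)

v = 0.01                                 # a deep-shadow value
exact = shaper(np.float64(v))            # ~0.168
approx = np.interp(v, lut_x, lut_y)      # ~0.0004: the shadows collapse
```

With linear sampling, the first LUT entry above zero already sits at an input of ~16, so the entire shadow range is interpolated from a single segment. This is the "clipped, diffuse highlights and crushed detail" failure mode in miniature.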
  21. Hey @cajhin, what you're saying is appreciated: there has always been an effort to maintain an 'offline' version of the help for users, the downside being that different search methods have to be implemented for each platform (let's not even talk about the help viewer frameworks and their issues). As you've discovered, due to OS updates and other factors, the experience with the search is not always wonderful, and easy (or rather, easier) access to an online version could be worth considering. Thank you for the feedback.
  22. Hi @110volts, Linear Fit was introduced in version 2.1 and isn't available in 1.10.x I'm afraid.
  23. Hey @Affinity Rat, yes, it is generally expected: the RAW file will be a greyscale 14-bit bitmap (so it only contains one channel), and may also have lossless or lossy compression depending on your camera settings. Once the RAW data is debayered to a full colour image and mapped to a colour space, the resulting .afphoto file will be (by default) a 16-bit per channel RGBA document, which is where the file size increase comes from. Adding more bitmap layer work will of course increase this. If file sizes are a concern, you could try using the RAW Layer (Linked) option on the Output dropdown when developing your RAW files. This will reduce the file size significantly (typically under 1MB until you start to add more layer work). Do be aware, however, that every subsequent load of the .afphoto file will then require access to the original RAW file so it can be 're-developed'. Hope that helps!
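The file-size difference above can be estimated with simple arithmetic: the sensor data is one 14-bit channel per photosite, while the developed document stores four 16-bit channels per pixel. A sketch assuming a 24-megapixel (6000 × 4000) sensor and no compression on either side:

```python
def raw_size_mb(width, height, bit_depth=14):
    """Uncompressed single-channel sensor mosaic, in megabytes."""
    return width * height * bit_depth / 8 / 1e6

def developed_size_mb(width, height, channels=4, bits_per_channel=16):
    """Uncompressed RGBA pixel layer after development, in megabytes."""
    return width * height * channels * bits_per_channel / 8 / 1e6

raw = raw_size_mb(6000, 4000)          # 42 MB of mosaic data
dev = developed_size_mb(6000, 4000)    # 192 MB before any extra layers
```

So even before compression enters the picture, the developed document is over four times the size of the sensor data, purely from going one 14-bit channel to four 16-bit channels.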
  24. Hey @Announcement, would you be able to post a screenshot of your layer stack? Are you using any live filters or adjustments? Regarding a difference in sharpness, if you flatten your Photo document temporarily (Document>Flatten) and view at 100% zoom (Ctrl/CMD+1), does it then look the same as the exported JPEG at the same 1:1 zoom level?
  25. Hi @Sonny Sonny, as I mentioned above it's not an issue. Possibly Photopea does not interpret unassociated alpha?