James Ritson

Posts posted by James Ritson


  1. On 7/6/2020 at 11:43 AM, Dmi3ryd said:

    Hello.

    Once again I want to thank the Affinity Photo development team for an amazing package for working with graphics.

    I have a very simple question.

    After rendering, I have an alpha channel as a separate file (render pass file).

    Could you please tell me how to quickly convert an image (layer) into a channel for selection?

    Most importantly, when converting the alpha image to a spot channel, there should be no loss in the white gradation.

    Or use this alpha image as a brightness mask, where white hides the image and black reveals it.

    Thanks.

    Hi @Dmi3ryd, the video Gareth posted would help with making selections from Material/Object ID render passes (or Cryptomatte in its bitmap form). However, it sounds like you just want to convert a pixel/image layer to a mask, is that correct? If so, copy/paste or place your alpha image into your document, then go to Layer>Rasterise to Mask. Alternatively, you can right click the layer thumbnail and Rasterise to Mask will be on that menu too.

    Once the layer is a mask, you can mask other layers with it (drag-drop over the layer thumbnail, not the label text), and you can CMD-click (Mac) / Ctrl-click (Windows) the mask layer to create a selection from it. Finally, with the mask layer selected, you can also go to the bottom of the channels panel and right click [Layer Name] Alpha then choose Create Spare Channel. This will create a channel from the mask which you can load into other masks or into your active selection.

    PS if you want to invert a mask, just select it and use Layer>Invert, or CMD+I (Mac) / Ctrl+I (Windows).

    One final note: if you need to multiply or divide alpha by the colour values, you need to flatten the mask into its parent pixel layer first. With a mask clipped to a layer, right click the parent layer and choose Rasterise. Now, on the Filters>Colours menu, you have Multiply by Alpha and Divide by Alpha. You can also do this non-destructively with a Live Procedural Texture filter, but that's for another day 😉

    Hope that helps!


  2. Hi all, I'm pleased to share with you some macros I've been working on—these are predominantly intended for users of Blender who are retouching their renders in Affinity Photo, but you can also use these macros with EXR/HDR renders from other 3D software and even merged HDR photographs from bracketed exposures. They enable you to easily achieve whichever Filmic look you require in Affinity Photo with just one click—no messing around with OpenColorIO configurations, no confusion with which colour management option to use, and certainly no flattening and converting to 16/8-bit to apply any kind of final transform!

    The macros and accompanying readme can be downloaded here: http://jamesritson.co.uk/resources.html

    Here are the main talking points of this macro set:

    • Emulates the Filmic view transform and looks (e.g. Very High Contrast, Low Contrast) that you can apply in Blender.
    • These macros are intended for HDR documents (OpenEXR, Radiance HDR). When saving to these formats, Blender writes out linear scene-referred values, so you do not get the Filmic view transform and looks applied.
    • Applying the Filmic view transform and looks in Affinity Photo is possible but complicated, and involves copying the Blender OpenColorIO configuration files and pointing Affinity Photo to them.
    • Instead, these macros can be added to a fresh install of Affinity Photo—no other dependencies like OpenColorIO—and you can apply the Filmic look you want non-destructively.
    • If you want, just apply the Filmic Log transform—no look—and shape the tones yourself using Affinity Photo's adjustment layers.
    • For convenience, move between different colour spaces non-destructively (Rec.709, Rec.2020, ROMM RGB, ACES CG, DCI-P3). Profiles are included in the macro file, so they are portable—no dependencies.

    I've also recorded an instructional video here:

    And some comparison images:

    comparison_03.jpg, comparison_01.jpg, comparison_02.jpg, comparison_04.jpg

    Thanks very much for reading, hope you find them useful!


  3. Hi @Steven T, just so I understand—using the manual white balance feature on your Nikon won't work correctly, even if you point it at some foliage (or whatever you want to be your artificial "white point")?

    Bear in mind that most cameras will actually report an error—my Sony A6000 does this and I previously used an Olympus Pen that did something similar—but will apply the white balance shift nonetheless. Perhaps just double check if that's the case? If your image looks fairly grey and neutral but the sky has an orange/red tint you're in the right ballpark.

    Which infrared conversion does your Nikon have, or is it full spectrum? Either way, you can shoot at 590nm and get plenty of false colour in the red spectrum—just use the White Balance picker in Develop and pick an area you want to set the white point from. No matter how extreme the shift, it will be applied. If you don't manually white balance (so your entire image will be mostly red), your camera's auto exposure will avoid clipping the red channel, which underexposes the blue and green channels, so your overall image may be noisier once white balanced. Not the end of the world, just not ideal!

    Filter strength depends entirely on your artistic choice, really—if you want false colour for that "Goldie" look, 590nm is great. 720nm is a nice balance but colours aren't as rich, then 800nm upwards is more tailored towards black and white conversion.

    Regardless, you shouldn't have any issues within Photo, it can white balance any type of infrared image.

    Hope that helps!


  4. Hi @norbre, try setting your output on the Selection Refinement dialog to New Layer or New Layer with Mask.

    The preview you are seeing includes colour decontamination—which means the background colour contribution to edge pixels is disregarded, eliminating the typical halo effect you would see. However, if you output to a selection or mask, Photo cannot perform this since it’s not manipulating the pixels of the layer—just masking them.

    Using New Layer or New Layer with Mask allows the edge pixels to be decontaminated—hopefully this should solve your issue. In fact, if you choose New Layer with Mask and then turn off the mask layer, you will see exactly how the edge pixels and surrounding areas have been treated!
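    The standard compositing (matting) model makes it clear why the halo appears and what decontamination recovers—note this is the general model, not necessarily Photo's exact algorithm:

```python
def decontaminate(observed, bg, alpha):
    """Invert the matting model observed = alpha*fg + (1-alpha)*bg
    to recover the true foreground colour at a soft edge pixel."""
    if alpha <= 0:
        return 0.0  # fully transparent: no foreground colour to recover
    return (observed - (1.0 - alpha) * bg) / alpha

# A half-transparent edge pixel of dark hair (0.2) over a white backdrop (1.0):
observed = 0.5 * 0.2 + 0.5 * 1.0            # the haloed value a plain mask keeps
recovered = decontaminate(observed, bg=1.0, alpha=0.5)  # back to the true 0.2
```

    With a plain selection or mask, the pixel keeps the brighter, background-contaminated value—which is exactly the halo you were seeing.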


  5. 12 hours ago, London said:

    I played with these macros a bit today. They worked great in ICC Display Transform mode. (After your earlier comments about ICC DT vs. OCIO DT, I kept with ICC. And I just read your note in the PDF about doing that if you've already set up OCIO, which I'd done :) ). If you have the time for it, I'd love to understand what each of the layers is doing in the transformation. I'm not sure I know what crosstalk does for this (or at all).

    In the process of testing this, I used an older file, one I'd previously toned and printed (love the soft proof node: I was able to use the printer's profile to make sure everything was how I wanted it). I realized that while I thought I'd saved the file, when using an .EXR, it doesn't seem to actually save the file. I just tested this and, indeed, the saved file was the same as the original (this is really obvious because my 3D app saves EXR files about 12 stops overexposed and I'd put in a LUT that changed the color to be much warmer). Is there a way to force Photo to save EXRs in afphoto format, or ask?

    Hi, it was a case of studying the OCIO configuration and picking it apart—that only gets you so far, unfortunately. Filmic uses a normalised log2 transform which has to be approximated at the moment in Affinity Photo, and there are transforms either side of that which are just basic maths. They're not exposed in the OCIO configuration file, however, so a bit of digging was required.
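    For anyone curious, the normalised log2 transform can be sketched in a few lines of Python. The -10/+6.5 stop range around middle grey (0.18) is based on Blender Filmic's commonly cited ~16.5-stop dynamic range, so treat the exact numbers as assumptions rather than Photo's internal values:

```python
import math

def filmic_log(x, low=-10.0, high=6.5, grey=0.18):
    """Express a linear value as stops of exposure around middle grey,
    then normalise that stop range into 0-1 (the 'log space')."""
    stops = math.log2(max(x, 1e-10) / grey)  # clamp to avoid log2(0)
    return min(max((stops - low) / (high - low), 0.0), 1.0)
```

    In this sketch, middle grey (0.18) lands at about 0.61 in log space, and anything 6.5 stops or more above it clips to 1.0.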

    The 3D LUT is what performs the crosstalk—essentially, emulating an organic film-like response. In the digital domain, a colour becoming more intense would just become more saturated. The LUT emulates an organic response: instead, as these colours become intense, the channels "crosstalk"—so an incredibly bright red colour would gradually start to desaturate as it mixes with the blue and green channels. That's my understanding, anyway—might not be correct!
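    As a purely illustrative toy model (the real behaviour is baked into that 3D LUT, so the numbers here are invented), you can see how intensity-dependent channel mixing desaturates bright colours:

```python
def crosstalk(rgb, scale=10.0):
    """Toy crosstalk: blend each channel toward the channel mean,
    mixing harder as the overall intensity rises."""
    mean = sum(rgb) / 3.0
    k = min(mean / scale, 1.0)  # mix strength grows with brightness
    return [c * (1 - k) + mean * k for c in rgb]

def saturation(rgb):
    return (max(rgb) - min(rgb)) / max(rgb)

dim_red = crosstalk([0.2, 0.0, 0.0])     # stays almost pure red
bright_red = crosstalk([8.0, 0.0, 0.0])  # green/blue creep in, desaturating it
```

    The dim red stays saturated while the very bright red starts to wash out—the same qualitative behaviour as the film-like response described above.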

    The "looks" are actually just 1D LUTs that are applied once the colour values are in Filmic Log space—then there's an sRGB device transform at the end (another LUT) which completes the chain.

    Photo has a write-back feature for most bitmap formats (JPEG, TIFF, PNG etc) when you Ctrl+S or CMD+S to save. Seems this applies to EXR files as well. The workaround is just to use File>Save As and save as a native .afphoto document—hope that helps!


  6. To clarify the information in this thread, based on a subsequent discussion in another thread: it's most likely caused by using certain blend modes whose mathematics do not interact well with unbounded floats—typically yielding negative values.

    An easy way to test this is to produce a 32-bit document with two rectangles that overlap. Set both rectangles to pure red, then increase their intensity—for example, increase one by +1 and the other by +2. You could use any float values here greater than 1, e.g. 2.54 and 5.43. Scroll through the blend modes on the top-most rectangle and you will quickly see which blend modes produce negative values: Screen, Overlay, Soft Light, Hard Light, Vivid Light, Exclusion, Subtract, Negation, Reflect and Glow. We've discussed adding warnings to these blend modes in the dropdown when a user is in 32-bit, and potentially clamping them as well, with the understanding that this would clip the unbounded value range.
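    If you want to verify this outside the app, here are the commonly published formulas for a few of those blend modes (Affinity's exact implementations may differ, but the failure mode is the same), evaluated with the float values from the rectangle test:

```python
def screen(a, b):    return 1 - (1 - a) * (1 - b)
def subtract(a, b):  return a - b                  # base minus blend
def exclusion(a, b): return a + b - 2 * a * b

base, blend = 2.54, 5.43  # both intensities pushed well past 1.0
results = {name: f(base, blend)
           for name, f in [("Screen", screen),
                           ("Subtract", subtract),
                           ("Exclusion", exclusion)]}
# Every one of these comes out negative once the inputs exceed 1.0
```

    In bounded 0-1 compositing these formulas behave; with unbounded floats the (1 - x) terms flip sign and the results go negative.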

    @London your issue should be somewhat mitigated by using those Filmic transform macros I've provided, since all of the transforms will tone map to 0-1 range. Just place any composite layers with the problematic blend modes above the Filmic transform group and you should be good to go!


  7. Just bumping this @London — I've revised the macros because I was working off an inconsistency. Looks like LUT transforms with GPU acceleration on Mac aren't performed correctly. I've adjusted all the macros to render correctly across Windows and Mac when in software (CPU rendering). Not sure which platform you're on?

    It may not be a huge deal, since the difference can be slight for most image cases, but if you're on Mac and you absolutely want a 1:1 recreation of the Filmic looks, you'll have to disable Metal compute for now. You can do that through Preferences>Performance, where you can uncheck "Enable Metal compute acceleration". If you're on Windows, no worries, continue as usual 🙂

    Revised macros are available at the same download link above.


  8. Hi @London, I'd appreciate your feedback: http://www.jamesritson.co.uk/downloads/macros/jr_macros_blender_filmic.zip

    I'll make this available low-key for now to see how people get on with it. There's an included PDF with installation and usage instructions. A couple of things to note:

    • The first run of a macro whenever you start up Affinity Photo will take a couple of seconds (or possibly longer) before you see a result—this is due to a complex 3D LUT being loaded. Subsequent macro applications will be instant.
    • Don't expect an absolute 1:1 match with Blender—this is designed to allow people to bake Filmic looks into their 3D renders when using ICC Display Transform (traditional document-to-screen colour management). This colour management differs from the sRGB EOTF device transform within Blender, so I don't believe you could ever get a 1:1 match. Even TIFF files with a Filmic look baked in look different in Affinity Photo when compared to Blender. In short, you can match any exported bitmap image from Blender (e.g. a TIFF with Very Low Contrast), but you might not be able to match exactly what you see in Blender's viewport.

    I could have gotten a slightly closer match to Blender by using an OCIO adjustment layer, but that would be dependent on the user setting up Blender's OpenColorIO configuration within Affinity Photo—something I wanted to avoid. Instead, these macros use a bunch of maths to approximate a log transform 🙂

    Let me know how you get on! Here's a comparison I'll use for forum posts etc:

    comparison 01.jpg


  9. On 6/16/2020 at 7:56 PM, London said:

    Thanks, @JamesR_Affinity, I was going to submit a bug report, but when I tried to reproduce it with new images, I couldn't. I opened up my early files and the problem is still there, so I don't know what's going on. With both the new, working and old, bugged images, I use lighten mode to stack the outputs from each light in the scene on top of the environmental pass. The attached image shows the problem with my original files. Re: the troubleshooting, since I'm using multiple light passes, things look different if I set one to normal, since it's only part of the light. However, the darker image is set to normal with only an exposure node above it (my renderer outputs EXRs at +12 stops)—the problem persists. Enabling EDR does show the overwhites as white in ICC mode but not in OCIO mode (without Enabling EDR, the overwhites are displayed as in the attached images with either OCIO or ICC)

    Thanks for the tip on ICC Display Transform vs OCIO. I am using Photo to composite and finish my renders before exporting them as JPEG for posting. Looking at the two files I opened to compare old vs. newer images, there wasn't a huge difference between them, but it explains why my exported images weren't always matching what I was seeing on screen. 

    I'll anticipate your macro pack…and am going to spend some time with your tutorials. My last project taught me my decade+ of Photoshop experience doesn't necessarily mean I know how to do things in Photo. That was often to my great frustration (the amount of time it took me to make a mask almost made me switch back), but also revealed some powerful new approaches (procedural textures, at least once I figure out how to use them properly).

    Hi again, the macro pack is pretty much finished, so I need to test it cross-platform and then I'll upload a test version—I would be really grateful if you could give it a try and let me know how you get on! It's quite exciting because I've implemented the log transform using existing Photo filters and functions, so you don't actually need the relevant OCIO configuration set up—anyone can use these macros and get the Filmic looks without having to extract the configuration files from Blender.

    Straight away, I suspect Screen mode is your problem. The maths involved mean that it would be very easy to produce negative values. For example, let's say you were blending two float pixel values that were both out of range (greater than 1). Take 1.323 and 5.12 as an example:

    1-(1-1.323)*(1-5.12)

    This results in a pixel value of -0.33076.

    If you want to blend light areas on top of your main pass (for example, I do this with the bloom pass you can now get using Eevee), just use Add. Simple addition means you won't end up with negative values. Use Opacity or an Exposure adjustment clipped into the layer to control the blending strength.
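    A quick sanity check of that maths—Screen goes negative with those two values, while Add stays safely positive:

```python
a, b = 1.323, 5.12               # two out-of-range float pixel values
screen = 1 - (1 - a) * (1 - b)   # -0.33076: negative, hence the artefacts
add = a + b                      #  6.443: addition stays positive for positive inputs
```

    That's why Add is the safer choice for stacking light passes in 32-bit.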

    By the way, regarding EDR—do you have an actual EDR/HDR capable display? If not, there's not much point to EDR, since it will simply "unclip" the pixel values from the document colour profile, which by default is sRGB. The overwhites you might be seeing will be outside the coverage of sRGB. If you are actually using an EDR display (or an HDR display with HDR compositing enabled) then ignore this 🙂

    Finally, procedural textures are awesome, especially for 3D and compositing workflows! I'll post back here once I've organised those macros.


  10. Hi @London, I've looked through your post history to get an idea of your workflow etc. I work with Blender and OCIO quite frequently, but I only come across strange colour artefacting when I'm using blend modes that don't work well with unbounded float values (e.g. Screen, Soft/Hard/Vivid Light). Certain tonal adjustments (e.g. Selective Colour, HSL) may also saturate colour values very quickly in linear space, which can often lead to unwanted results.

    Are you using any particular blend modes or adjustment layers to produce the dark blue? If you grab the colour picker and pick one of these offending colours, you will often find that one of the channels has a large negative value, or the readout may in fact say "nan" (not-a-number). This is normally a sign that you're using a blend mode which doesn't work well with unbounded values (typically >1).

    Just to troubleshoot, if you open up your EXR document and make no changes apart from configuring the OCIO device and view transforms, do you see any odd colour values or does everything look correct?

    One final thing worth mentioning: Affinity Photo's OCIO integration is mainly for pass-through editing (e.g. when you want to slot it into a post-production workflow chain with other software that uses OCIO). If your purpose is to export a retouched render from Photo to a non-linear format (e.g. 8-bit JPEG, 16-bit TIFF), you will instead want to use ICC Display Transform as this represents how your document will look with the typical document-screen profile conversion.

    This does of course complicate things when you want to match the Filmic transforms (or indeed any other OCIO transforms you might be using), since the transform chain for these usually involves a non-linear transform at the end—sRGB OETF, for example. Add this on top of your existing non-linear view transform (using ICC Display Transform) and everything ends up looking too bright. You can use a live Procedural Texture to perform a 2.2 power transform on the RGB channels which will counteract this. Alternatively, I'm almost finished with a small macro pack for adding Filmic transforms and will link to them here shortly...


  11. On 3/22/2020 at 10:16 AM, Murfee said:

    Hi, in 1.8.3.175 there is an annoying issue: when a greyscale mask is introduced, the greyness sliders take priority over the other colour adjustments when you select a normal layer. The only way that I have found to stop this is to unlock the greyness sliders, then lock the RGB sliders, then change to the colour wheel... the problem is that you need to go through this cycle at least twice before the colour wheel stays when you swap layers 🤔

    If I've understood correctly, this is the automatic switch over to the greyscale box colour model whenever you select a mask layer? (And the subsequent annoyance it can cause when trying to perform colour manipulations on other layers)

    I believe it's easier to fall foul of this on Mac because there's a UI issue—whenever an enclosure layer is created (e.g. new mask layer, live filter layer) it doesn't appear to be selected on the Layers panel UI, thus you naturally click it again to ensure it's highlighted. In doing so, you "reselect" the mask layer, and this breaks the toggle back to the previous colour model you may have been using.

    Whenever you add a mask layer, try selecting another layer type that isn't a mask without clicking again to actively highlight that mask layer. You should hopefully find that it toggles back to the previous colour model.

    This tends to be less of an issue on Windows, since the Windows UI correctly highlights an enclosure layer when it's added. The layer is still technically active on Mac (so you can paint on/off the mask) but is not reflected in the UI.

    I've also noted an inconsistency with the lock colourspace feature and the way it interacts with the model switching.

    Is this the correct issue you're referring to, or is there another behaviour?


  12. Hi @kyle-io, a screenshot of your layer stack and also your 32-bit preview panel might be useful here, but generally speaking the steps outlined in my video you posted should work as long as you're using ICC Display Transform and not OCIO. The OCIO transform in Photo is non-destructive and does not affect the colour values in your exported document, so if your goal is to convert/export to a non-linear format (8/16-bit) then you need to be using ICC Display Transform.

    I've got a slightly different approach that you can try which allows you to apply the Filmic looks:

    • Configure OCIO with the Blender configuration (e.g. extract it from the application folder)
    • On the 32-bit Preview Panel, set Display Transform to ICC Display Transform
    • Add an OCIO adjustment, going from Linear (scene linear) to Filmic sRGB
    • Add a LUT adjustment and browse to where you put the Blender OCIO configuration files. Look in the filmic folder and you'll have several spi1d LUTs that correspond to the looks, e.g. Medium contrast, Very High contrast etc.
    • The values go from lower to higher contrast, so filmic_to_0-35_1-30.spi1d is very low contrast and filmic_to_1.20_1-00.spi1d is very high contrast. For Medium contrast, which is the default in Blender, pick filmic_to_0-70_1-03.spi1d

    That should hopefully get you much closer to the Filmic view transform and whichever look you're using. One way to check everything before exporting is to quickly flatten (Document>Flatten) and then convert to 16-bit or 8-bit (Document>Convert Format/ICC Profile). This will switch from linear to non-linear compositing, and will show you what your document will look like when exported to a non-linear format like PNG or JPEG. There shouldn't be any tonal changes when doing this—if there are, you may need to check your 32-bit preview settings and make sure you're not using OCIO Display Transform.

    Hope that helps!


  13. Hi @Fritz_H, the issue is not related to a 10-bit panel, a wide gamut display, or anything like that. I simply think it's down to how your colour preferences and workspace colour settings are configured.

    I appreciate you want to work in ROMM RGB for more intense colours, but because you have changed the default colour profile (rather than changing it on a per-image basis whenever you develop a RAW file), you must take into consideration how this affects all imagery you bring into Affinity Photo.

    Normally, with panorama stitching, the colour profile will be inferred from the images you provide during the stitching process. For example, if you are stitching JPEGs tagged with Adobe RGB, the final panorama document will be in Adobe RGB. This is true, even if your default RGB Colour Profile in Preferences>Colour is different.

    However, there are circumstances when your RGB Colour Profile choice will be used: if the images do not contain any colour profile tagging (you will typically receive a message in the top right explaining your image has been assigned a colour space), or if you have explicitly enabled Convert opened files to working space in Preferences>Colour.

    The only way I could reproduce your issue with panorama stitching is by going to Preferences>Colour and enabling the Convert opened files to working space option—could you check your own preferences? If that option is enabled, disable it, then see if your colours are correct. Note that it is disabled by default, so at some point I suspect you may have manually enabled it?

    Hopefully that will solve your issue. However, do be aware that I'm unsure as to whether your Xiaomi smartphone JPEGs are intended to be in sRGB or a wider colour space—similar to iPhones and how their images are tagged with Display P3. If you open a single JPEG image, what does the colour profile entry read in the top left?


  14. Hi @desdemoli, from reading various discussions and threads about it, ridged multifractal noise appears to be perlin noise modified by an absolute function and inverted. I make no guarantee as to whether this works, but I believe you can do both using a live Procedural Texture filter. Add a new equation field and enable the RGB channels for it, then try:

    1-abs(perlinsc(rx/200,ry/200,7,0.6))*0.5

    Here's a screenshot of what it looks like:

    abs_01.jpg

     

    However, there's this quote from a StackOverflow thread:

    Quote

    That assumes that the perlin noise generator generates the values in range <-1,1> (or another range centered on 0). ~50% of the pixels are expected to have value greater than 0 and ~50% are expected to have value less than 0.

    The ABS causes sharp ridges to be created where the 0 is, since all the bilinear/trilinear interpolation would make sure that there used to be a smooth slope passing through the 0 value.

     

    For perlin noise with values in the range of -1 to 1, you would want to use a harmonic perlin noise function, so the equation would need to be adapted like so:

    1-abs(perlinhsc(rx/200,ry/200,7,0.6))*0.4

    And another screenshot:

    abs_02.jpg

     

    You will want to experiment with the multiplication value at the end—I lowered it to 0.4 for the harmonic perlin noise example.

    It's also possible to adapt the comprehensive Perlin Noise preset that ships with the app (not Simple Noise, but the Perlin Noise preset) like so:

     

    var v=vec2(rx,ry)*cells/w; tocui(1-abs(perlinhsc(v,oct,pers))*cont)+br

     

    You can then use the parameters such as Octaves, Persistence, Cell count etc whilst having the ridged multifractal perlin noise—at least that's what I think would happen.
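    If you'd like to see why 1-abs() creates those ridges without opening Photo, here's a toy 1D sketch in Python—a simple value-noise generator stands in for the app-specific perlinsc/perlinhsc functions:

```python
import math, random

random.seed(7)
lattice = [random.uniform(-1, 1) for _ in range(64)]  # noise values centred on 0

def smooth_noise(x):
    """Toy 1D value noise: cosine-interpolate between lattice points."""
    i = int(math.floor(x)) % 64
    t = x - math.floor(x)
    t = (1 - math.cos(t * math.pi)) / 2  # smooth ease between samples
    return lattice[i] * (1 - t) + lattice[(i + 1) % 64] * t

def ridged(x):
    """abs() folds the signal at zero and 1-... inverts it, so every
    zero crossing becomes a sharp, bright ridge."""
    return 1 - abs(smooth_noise(x))
```

    Because the noise is centred on zero, about half the signal crosses zero, and each crossing becomes a crease—exactly the mechanism the StackOverflow quote describes.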

    Hope that helps!


  15. Hi @creamelectricart, I've previously tested ACES 1.0.3 with no issues, and managed to get 1.0.2 working here. I would suggest copying the OCIO configuration file and supporting files to a general folder rather than Application Support—there might be a sandboxing/permissions issue here. If you copy the files to Documents for example, then point the configuration and support directory options to them, does it load successfully?

    Hope that helps!


  16. On 3/31/2020 at 1:37 PM, fochardo said:

    I would like to know if it is possible, as happens in Adobe Camera Raw, to be able to define the points per inch (DPI) of a RAW file before starting to process it at a higher resolution than the RAW originally has. Thank you.

    Hi @fochardo, I'm not sure that what you're asking is relevant to RAW development—PPI (or DPI, as we refer to it in the Affinity apps) has no bearing on the pixel resolution here. It does have meaning regarding the relationship between physical measurements and the pixel resolution, but this does not factor into RAW development.

    Are you asking whether it is possible to "resample" the RAW file before developing it? If so, this is not possible (I'm not sure why you would do this either). Affinity Photo processes RAW files at their original given pixel resolution—DPI/PPI is irrelevant here, so please don't be concerned about losing quality or resolution.

    Affinity Photo will arbitrarily use the DPI value specified in the RAW metadata (often tagged as XResolution and YResolution)—e.g. in Sony ARW files this tends to be 350, whereas I've just opened a Canon CR3 which is set to 72. Again, though, this DPI value does not affect the pixel resolution of the developed RAW file—you are always developing at the original (and highest) resolution.

    Hope that helps!


  17. No problem, definitely give 32-bit a try! I use DeepSkyStacker—not sure if APP's export settings are similar but you'll want to use 32-bit float/rational, not integer.

    A couple of other useful things I've found:

    You can use a live Clarity filter (Layers>New Live Filter Layer>Sharpen>Clarity) to bring out structure in nebula objects etc. Add a live Clarity, then invert it (Layer>Invert) and you can paint the effect back into specific areas. If you duplicate the main Background pixel layer, you can apply the destructive version (Filters>Sharpen>Clarity), which has the benefit of being more aggressive since it doesn't have to render in real-time.

    Synthetic flat frame creation: I've had to use this, not for lens vignetting as I shoot flats, but for light pollution and skyglow that creeps in when you do extreme tonal stretching. Unfortunately, Median filtering is too computationally expensive in 32-bit and cannot be used, so what I do is:

    • Create a merged layer of my work so far (Layer>Merge Visible), copy it, then go to File>New from Clipboard.
    • With this new document, I'll convert it to 16-bit (Document>Convert Format / ICC Profile), run Dust & Scratches to get rid of all the stars, then use the Inpainting brush to remove any traces of deep sky objects.
    • Finally, I'll convert back to 32-bit, run Gaussian Blur at its max radius of 100px, then copy and paste the layer back into my 32-bit document.
    • Once the layer is pasted in, set its blend mode to Subtract. The sky will be way too dark: to counteract this, add a Levels adjustment layer and clip it into your pasted layer (drag it over the layer's text/label). Now raise the black level (input, not output) until you're happy with the result.
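    The subtract-and-levels step can be sketched numerically—here a smooth gradient stands in for the synthetic flat, with invented toy values rather than a real frame:

```python
# Toy 1-D "stacked frame": skyglow gradient plus a few bright stars (linear values).
n = 256
skyglow = [0.05 + 0.10 * i / (n - 1) for i in range(n)]
frame = [s + (0.8 if i % 32 == 0 else 0.0) for i, s in enumerate(skyglow)]

# Synthetic flat: in practice Dust & Scratches + inpainting + Gaussian Blur;
# here the smooth gradient itself stands in for that result.
flat = list(skyglow)

# Levels clipped into the pasted flat layer: raising the black input level
# darkens the flat, so the Subtract blend removes less and the sky brightens.
black = 0.04
flat_leveled = [max((f - black) / (1.0 - black), 0.0) for f in flat]

# Subtract blend mode: bottom layer minus top layer.
corrected = [f - fl for f, fl in zip(frame, flat_leveled)]
```

    The background gradient flattens out while the stars survive untouched—raise or lower the black level to taste.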

    Hope that helps!


  18. Hi @schmety, hope you're well. Good old Astrobackyard, been following Trevor since the start of this year as I've gotten into astrophotography!

    I believe you can approximate the Photoshop workflow by using Select>Select Sampled Colour. Click on the middle of a star, then adjust Tolerance to select the rest (you might have to go up to 100%). Also, you may want to try switching the colour model from RGB to Intensity, as stars will typically be the brightest parts of the image.

    Once you've made the selection by clicking Apply, go to Select>Grow/Shrink. Grow the selection by what is appropriate (I ended up with 16px for my image) and check the Circular option for nice round selection marquees around the stars.

    Finally, go to Edit>Inpaint and the stars will be removed. This might take a couple of minutes to complete if you're working in 32-bit—which you should be! Not sure if you're aware but Affinity Photo can use practically all of its tools, filters, adjustments etc in 32-bit, so there's no need to tone stretch then flatten and convert to 16-bit. The only time I've needed to use 16-bit is when creating a synthetic flat frame to combat gradients/skyglow etc.

    You can also use Select>Tonal Range>Highlights to make the initial selection, but you get less flexibility as there's no Tolerance option. Also, if you're in 32-bit Linear, what constitutes highlight detail will differ from 16-bit Nonlinear, so you may not find all the stars are successfully selected.

    Hope that helps!


  19. On 3/13/2020 at 12:03 AM, tobecg said:

    My workflow with this rendering was using the OCIO with source color: ACES — ACEScg, output — sRGB, and the LUT is used to convert the image from linear to ICC sRGB, so I can assign the sRGB IEC61966-2.1 (Linear) ICC Profile and get a correct export. But as I said, I'm new to this, so it's totally possible that I'm also making a mistake here. So far, the final export looked like it should. 🤔

     

    Thanks for all the help!

    Hi again @tobecg, a couple of suggestions based on what you've said here:

    When you say using the OCIO with source colour, are you referring to the OCIO adjustment layer (and not the OCIO transform on the 32-bit preview panel)? If so, one thing you should be aware of is that EXR documents are converted to scene linear upon import. Since EXR doesn't have the concept of tagged colour spaces, you can specify an input colour space by affixing the document file name with a valid colour space name from the configuration.

    For example, if you are using the ACES OCIO configuration and you know that your linear colour values in the EXR file are in ACES colour space, you can rename your file to "filename aces.exr". When you import it into Affinity Photo, you'll get a toast in the top right saying it has converted from that ACES colour space to scene linear—the colour values in your document should now look correct without having to do that initial OCIO adjustment layer transform.

    However, if it's too late for that, you could also achieve the same transform by adding your OCIO adjustment layer and on Source Colour Space choosing "ACES - ACES2065-1". Leave the Destination Colour Space as "role_scene_linear" since that's what you want to be converting to.

    You could then theoretically add another OCIO adjustment layer going from "role_scene_linear" to "out_srgb", which I think is what you were referring to in your post? That would give your document a gamma-corrected nonlinear appearance.
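    For anyone curious about the maths, a transform like "out_srgb" essentially applies the standard sRGB transfer function (gamma encode) to the linear values. Here's a minimal, illustrative Python sketch of that function; this is just the IEC 61966-2-1 formula, not Affinity Photo's actual implementation:

```python
import math

def linear_to_srgb(c):
    """Encode a scene-linear value with the sRGB transfer function (IEC 61966-2-1)."""
    if c <= 0.0031308:
        # Linear segment near black
        return 12.92 * c
    # Power segment for the rest of the range
    return 1.055 * math.pow(c, 1 / 2.4) - 0.055

# Middle grey (0.18 linear) ends up at roughly 0.46 once gamma encoded
for v in (0.0, 0.18, 1.0):
    print(f"{v:.2f} -> {linear_to_srgb(v):.4f}")
```

    This is why a linear render looks dark until a nonlinear view transform is applied: the encode lifts most of the tonal range towards the top of the 0-1 scale.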

    A further complication is that in Affinity Photo EXR documents are (by default) given a linear sRGB colour profile, since Affinity Photo needs something to colour manage with. If you are working in Unmanaged (linear) or OCIO transform modes, this colour profile is completely arbitrary. It does, however, come into play if you use ICC Display Transform, since the colour values will be bounded to sRGB space during the document-to-screen profile conversion. Colour values outside sRGB are still there, since you're working in unbounded floats, but I believe you won't see them as long as your document profile is linear sRGB.

    Hope that helps!

    [Edit] Forgot to add—I don't think this will affect you since you've mentioned there's no need to export back to EXR, but you can also append a colour space name to the export file name and Affinity Photo will convert colour values from scene linear to that colour space during export (e.g. "export filename acescg.exr"). Useful if you're taking your EXR into other software and need it to be in a certain colour space.


  20. Hi @tobecg, I've had a look at the files you provided to try and puzzle out what's happening. I can't replicate your exact setup, as I'm not sure which OCIO configuration you're using. Perhaps it's a filmic transform from a Blender configuration?

    It looks like there are two separate issues working in tandem, both related to groups and how group nodes render. You've got a mask on your beauty pass group, and also a separate group of adjustment layers. This is what was throwing me, because with your workflow both of these are contributing to the rendering issue.

    There are a couple of quick workarounds you can try whilst we investigate this further: the first is to simply drag your OCIO, Lut, Gain group and clip it into the Beauty group—so drag and offer it to the text (not the thumbnail) of the Beauty group. In terms of node positioning, the mask in the Beauty group will now render above the adjustments, so all should be OK there.

    Alternatively, Ctrl/CMD-select both the OCIO, Lut, Gain and Beauty groups and group them together, then drag the mask out from the Beauty group and offer it to the thumbnail of your newly-created master group. This will basically achieve the same thing as the above approach, but just means you can have child groups and stay more organised.

    It should look like this:

    Screenshot 2020-03-12 at 11.30.06.jpg

    I've tried reproducing the issue with my own renders, and I can see a distinct difference in alpha rendering when the mask is rendering on a group as opposed to a pixel layer. I have, however, been unable to reproduce the pixellation you're experiencing. I'm not familiar with Octane—is it very expensive in render time to create a straight alpha pass? I can't help but wonder about the object colour pass and whether it's intended for use with alpha masking, because it seems to contain erroneous pixel values. For example, if you open the original EXR document and push the exposure of the object colour pass, you will see this:

    Screenshot 2020-03-12 at 11.28.08.jpg

    Whereas if we look at the same Exposure adjustment on a genuine straight alpha pass we will see:

    Screenshot 2020-03-12 at 11.29.18.jpg

     

    I'm certainly not trying to absolve Affinity Photo of responsibility here 😉. There's definitely an issue with group nodes that needs looking into, and thank you for bringing it to our attention. Would it be worth trying a straight alpha pass on your end, however, to see if the issue can be mitigated that way?

    Also, I noticed with your document file that you've chosen Unmanaged in the 32-bit preview panel, so you're seeing scene-referred linear values rather than managed display-referred values. It's worth asking what your intended delivery is: are you planning to export to nonlinear 8-bit/16-bit delivery formats? If so, you have to use ICC Display Transform, otherwise you'll get a nasty shock when you look at your exported images. ICC Display Transform exists so you can accurately preview what a linear 32-bit document will look like when exported to a nonlinear, display-referred delivery format.

    This causes an issue if you try to use certain OCIO transforms. Case in point: Blender's Filmic transform, which is designed to go from linear to nonlinear gamma. Since ICC Display Transform also applies a nonlinear gamma correction, you effectively double up the transform, which makes the resulting image look too bright and washed out. An easy solution is to add a live Procedural Texture filter (Layers>New Live Filter Layer>Colour>Procedural Texture). Click the plus icon three times to add red, green and blue channel entries, then raise each channel value to the power of 2.2 (converting the values back to linear) like so:

    Screenshot 2020-03-12 at 12.41.11.jpg

    Given that most OCIO transforms (that I know of) are bounded and so will clip values outside 0-1, you will probably want to put this Procedural Texture filter underneath your OCIO adjustment layer.
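    Under the hood, that Procedural Texture setup is nothing more than a per-channel power function. A quick Python sketch of the idea (purely illustrative, not Affinity Photo code):

```python
import math

def gamma_to_linear(rgb, gamma=2.2):
    """Undo an approximate 2.2 gamma encode by raising each channel to that power."""
    return tuple(math.pow(max(c, 0.0), gamma) for c in rgb)

pixel = (0.5, 0.25, 1.0)       # gamma-encoded values in the 0-1 range
print(gamma_to_linear(pixel))  # the mid value 0.5 drops to roughly 0.218
```

    Values at 0 and 1 are unchanged, while everything in between is pushed down, which is what cancels out the doubled-up gamma.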

    If you're using Affinity Photo as an intermediary in a compositing pipeline (and so will be exporting back to EXR), you'll want to use the OCIO transform option instead of ICC Display Transform.

    Hope the above helps!


  21. Out of curiosity, I extracted the embedded JPEG from the DNG file. It's a low resolution copy but looks like this:

    embedded_jpeg.jpg

    I realise everything is subjective, but I don't find this result particularly pleasant, and some light editing in Affinity Photo can absolutely achieve a much better image.

    The camera tech in the Pixel 3A clearly has some shortcomings, namely a severely narrow dynamic range, which appears to be compensated for with underexposure and heavy in-camera image processing. It looks like some degree of tone mapping is applied to compress the tonal range, but unfortunately it can't mitigate the ugly clipped dark tones, which are especially noticeable in the foliage on the left-hand side of the image.

    Affinity Photo's RAW engine doesn't apply any kind of "adaptive" tone curve or initial processing, which can lead users to believe it has shortcomings in this department when they are presented with a dark, underexposed image to begin with. The Pixel 3A has intentionally underexposed the image and then attempted to correct for the underexposure with in-camera processing. However:

    On 3/3/2020 at 4:01 PM, Chimes said:

    What loading on to affinity has done is to make my picture so awful that no amount of edits could improve it.  It render your editor as unusable.

     

    On 3/3/2020 at 4:01 PM, Chimes said:

    Fix it or die, Affinity.  I do not expect to have to come on here because your app destroys the photo before I've even so much as begun.  It's a lousy update.

    Ignoring the "fix it or die" comment, which is a little inflammatory, I can reassure you that the app isn't "destroying" the photo. RAW files are processed in linear, unbounded floats, which means that no pixel colour values are clipped or compromised during the development process. Yes, the DNG is initially quite dark. With processing, however, you can not only achieve the result you see with the in-camera JPEG, but frankly far surpass it.

    Editing is mostly subjective, a matter of taste, so everyone's opinion on how to process this image will differ. I had a quick go at editing: I removed the default tone curve, used Curves for my own tone curve, then developed the image. I then used some adjustment and filter layers in the main Photo Persona to create a fairly balanced edit. My main goal was simply to even out the tonal range of the scene and produce a pleasing, natural-looking result. I also straightened/rotated it, as the original orientation was a little crooked. I've attached it below:

    processed_raw.jpg

     

    The reality is that Affinity Photo may not be for you (and there's nothing wrong with that). If you want the RAW development to match the exact result you get with the in-camera processed image, that cannot be guaranteed. Aside from an adapted tone curve, there may be additional in-camera processing (e.g. structure enhancement, tone compression, noise reduction, sharpening) that is introduced to take what is technically a very underwhelming initial result and make it more palatable. You can of course apply all of these techniques and more in Affinity Photo, but that requires engaging with the software and finding a workflow that suits you.

     

    On 3/4/2020 at 5:48 PM, Chimes said:

    What difference would it make which camera I use? 

    There is actually an argument to be made here. I've mentioned it briefly above, but for how far smartphone photography has come in the last few years, phones are still technically compromised devices: tiny image sensors trying to cram in greater pixel density in the name of the megapixel race, coupled with small optics. Any linear RAW data from a smartphone camera is going to be quite underwhelming, and there has been a lot of focus recently on AI enhancement/machine learning to overcome these physics-based limitations. Generally speaking, photographers shooting on larger-sensor devices like typical DSLRs/compact system cameras have far fewer complaints about Affinity Photo's RAW processing. There are fewer issues with dark images because the camera metering generally exposes for a balanced scene, rather than underexposing to compensate for limited dynamic range. The other factor is that photographers generally want manual control over their camera settings, so they are more likely to determine their own exposure, and this is reflected accurately when you open the RAW file in Affinity Photo.

    The main sticking point is, as you mention, that Affinity Photo doesn't automatically apply some "smart" initial processing, which would have to vary between each RAW format, and indeed each camera. There is perhaps some work that could be done here, but crucially not in a way that compromises the workflows of existing photographers who are happy with the current RAW handling. Hopefully that's understandable.

    Thanks for reading!


  22. Hi all, to accompany the 1.8 updates for the Affinity apps, I'm going to be releasing one tutorial video a day on YouTube—all videos are already available on the website here: https://affinity.serif.com/tutorials/photo/desktop

    I'll be updating this thread for every new video released. Hope you find them useful! Here's the first:


  23. Hi @grunnvms, as @Murfee mentioned above you can choose a colour profile on export under the More options dialog.

    I'm not so sure about converting to a Fuji paper profile on export, however—my understanding is that you should use that during the print process (via the print dialog) for print colour management. E.g. on Mac, you have the choice of using printer colour management or ColorSync. With the latter, you can specify a colour profile, and that's where you would use the paper profile.

    You can always preview the effect of the paper profile by using a Soft Proof adjustment layer, but don't forget to hide/disable it before exporting your image.

    Finally, I would recommend against using 32-bit unless you have a specific need for it. By all means use ROMM RGB to avoid clipping colour values (and then export to sRGB for web), but 32-bit is a linear compositing format as opposed to nonlinear. This means adjustments, filters and tools that perform tonal modifications will behave differently. Because 32-bit is unbounded, you can also run into issues when using certain adjustments or blend modes that do not clamp properly (although we're working on making this more user friendly). Improper clamping may result in NaNs (not a number) rather than colour values, or in negative values, both of which may cause issues when converting to a bounded format on export. Unless you work predominantly with HDR imagery, 3D renders or astrophotography, I would advise staying away from 32-bit; 16-bit will give you more than enough precision for 99% of image editing cases.
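    To illustrate why clamping matters, here's a small conceptual Python sketch (not Affinity Photo's pipeline): unbounded float maths can produce NaNs and out-of-range values, and a bounded export format needs them dealt with first:

```python
import math

def safe_clamp(c):
    """Clamp an unbounded float colour value into 0-1, mapping NaN to 0."""
    if math.isnan(c):
        return 0.0
    return min(max(c, 0.0), 1.0)

# inf - inf is one way an unbounded compositing operation can produce NaN;
# negative and greater-than-1 values are routine in scene-linear data.
values = [math.inf - math.inf, -0.1, 1.7, 0.5]
print([safe_clamp(v) for v in values])  # → [0.0, 0.0, 1.0, 0.5]
```

    The point is simply that NaNs and negatives have no sensible representation in a bounded 8-bit/16-bit format, so something has to resolve them before (or during) export.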

    This article on Affinity Spotlight might be worth a read if you're trying to figure out colour profiles and colour management: https://affinityspotlight.com/article/display-colour-management-in-the-affinity-apps/

    Hope the above helps!
