Everything posted by James Ritson

  1. Hi again, I think the export error is because of your layer stack. Check out the help topics on OpenEXR at https://affinity.help — particularly this one: https://affinity.help/photo/English.lproj/pages/HDR/openexr.html

Generally, all of your pixel layers should have an .RGB or .RGBA suffix (for example "Layer02.RGB", or "Layer02.RGBA" if it has a distinct alpha channel). If you don't specify layers as RGB pixel data they will be discarded on export. The reason you have to do this is that Affinity Photo has lots of different layer types: pixel layers, image layers, curves, fills, groups, etc. The OpenEXR spec is much narrower, so you have to dictate what type of channel information each layer becomes when exporting to EXR. If you don't specify the channel type, Photo will ignore or discard the layer in question during export.

If you have a pixel layer with a mask inside it, you must use the .RGBA suffix if you want to export the alpha channel separately—if you just use .RGB, the alpha channel will be baked into the pixel layer and will be non-editable. Separate colour channel information can also be exported, so you could suffix layers with .R, .G, .B, .A etc. and they will be stored as individual channels in the EXR document. You don't appear to be working with any spatial layers like .Z or .XYZ, so you can probably ignore those.

Embedded documents and placed images should be correctly rasterised as long as you give them an .RGB or .RGBA suffix. Groups you will have to rasterise, even if you give them an .RGB suffix, otherwise they won't export—right click the group and choose Rasterise to do this (make a duplicated backup beforehand and hide it, though).

I'm actually struggling to reproduce the error message now—if you try the naming conventions listed above and are still getting the error message, please could I send you a private Dropbox link to upload the document so I can have a look? It would only be used internally and then deleted once the issue is addressed. Thanks, and hope the above helps, James
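To make the naming convention concrete, here's a hypothetical layer stack (the layer names are invented for illustration) showing how each layer would be treated on export:

    Background.RGB     -> exported as R, G and B channels
    Subject.RGBA       -> exported as R, G, B plus a separate, editable A channel
    Highlights.R       -> exported as a single channel
    Curves adjustment  -> no suffix, so ignored/discarded on export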
  2. Hi @spoon48, I've seen this before and can't quite remember how I fixed it—would it be possible to see a screenshot of your entire layer stack, and also your export settings (the expanded settings on the More dialog)? Thanks! James
  3. Hi @extremenewby, on Affinity Photo's Levels adjustment dialog you'll want to use the Gamma slider as the equivalent of Photoshop's mid-point slider. If you have any other questions about editing astrophotography, please ask—I've been working with various deep sky imagery lately and adapting techniques to an equivalent approach in Affinity Photo. Hope that helps!
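For reference, the conventional Levels gamma behaviour (which is what both sliders expose) remaps each normalised tone t to t^(1/g): a gamma of 2.0 lifts a mid-shadow value of 0.25 to 0.25^(1/2) = 0.5, brightening the midtones without moving black or white.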
  4. Hi @marisview, you should be able to select the mask layer, then use Layer>Merge Down (Ctrl+E or CMD+E), which will "rasterise" or flatten the mask into the alpha channel of the parent layer—does this work for you? This also works if you have the mask layer above the target layer. Note that this only applies to Pixel layers—Image layers (if you drag/place an image into your document) are immutable and cannot be merged. You must rasterise them first (Layer>Rasterise), but doing so will actually achieve the same effect as Merge Down, so no additional steps are required! Hope that helps.
  5. Hi @SeanZ, here's a quick breakdown of the pixel formats and what happens:

As you've observed, when you open a RAW file in Affinity Photo it's processed in a working pixel format of 32-bit float, which is unbounded. This prevents values outside the 0-1 range from being clipped and discarded, and allows for highlight recovery. Colour operations are processed in ROMM RGB (ProPhoto), which helps colour fidelity even if the intended output colour space is sRGB. You are essentially working at the highest quality that is reasonable.

When you click Develop, by default the pixel format is converted to 16-bit integer, which is bounded. Any values outside the 0-1 range will be clipped and rounded, so you should ensure you are happy with the tones before you proceed (e.g. using highlight recovery, changing the black point etc.). The colour space is also converted to whichever option you have chosen—by default this is sRGB, but you can change it by clicking the Profiles option on the right hand side and choosing another output profile like ROMM RGB or Adobe RGB.

I say by default, because you can change the output format to 32-bit HDR on the Develop assistant (the tuxedo/suit icon on the top toolbar). Be very mindful, however, that 32-bit in Affinity Photo is a linear compositing format as opposed to nonlinear. Adjustments will behave differently (typically they will seem more sensitive) and blend modes may produce unexpected results. I would avoid working in 32-bit unless you either want to author HDR/EDR content—this is not the same as HDR merging and tone mapping—or you need the precision for more esoteric genres like deep sky astrophotography. A lot of customers think working in 32-bit is going to offer them the best quality possible. Whilst this is technically true, there are many caveats and I would seriously advise against it unless you have a specific requirement for the format.

To answer your questions:

Technically, yes, but as I mentioned above: unless you have a very specific requirement to work in 32-bit, any loss in quality will be negligible. 16-bit nonlinear precision is more than enough for 99% of image editing use cases. Here's an example of my workflow: I will often choose a wider colour profile like ROMM RGB, then make my image look as flat as possible in the Develop Persona, usually by removing the tone curve and bringing in the highlights slightly if they are clipping. I'll then develop to 16-bit and do all of my tonal adjustments in the main Photo Persona. I have yet to notice any loss in quality.

The functionality is more or less the same, but the Develop Persona has more intuitive sliders. In fact, in demos I will frequently explain that if you want to do some simple destructive edits to an image, you can simply go into that persona and use a few sliders rather than have to build up a layer stack. One key difference is the white balance slider: when developing a RAW file, it has access to the initial white balance metadata and the slider is measured in Kelvin. Once an image is developed, however, this slider becomes percentage based and takes on an arbitrary scale.

Whatever works best for you, I think. My approach, which I explained above, is pretty foolproof, but you can do as little or as much work during the initial development as you feel is appropriate, e.g. you might want to add luminance noise reduction during the development stage, perform defringing, change white balance etc.

Just be aware that if you perform certain steps like noise reduction during the initial development, you can't undo them later. With noise reduction, I prefer to add a live Denoise layer in the Photo Persona, then mask out the areas I want to keep as detailed as possible. Again, though, it's down to personal choice. Hope that helps!
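If you're curious what "linear versus nonlinear" means numerically, here is a minimal Python sketch of the published sRGB transfer function that a nonlinear 8-bit/16-bit format bakes in (this illustrates the standard maths, not Affinity's internal code):

    def srgb_encode(c):
        # Standard sRGB transfer: linear-light value (0-1) to nonlinear encoding
        if c <= 0.0031308:
            return 12.92 * c
        return 1.055 * c ** (1 / 2.4) - 0.055

    # Linear mid-grey (~0.18) encodes to roughly 0.46, which is why a linear
    # 32-bit document looks dark if viewed without a nonlinear transform
    print(srgb_encode(0.18))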
  6. Hi @ijustwantthiss**ttowork, OK, so my suspicion is that the blade.icc profile is deficient—out of curiosity, is this the Razer Blade laptop? Can I ask where you got the blade.icc profile from—did you download it from a website or did it come preinstalled with the laptop?

So if you switch the colour profile to the sRGB device profile, everything looks fine inside Affinity Photo? No issues, and when you export to JPEG/TIFF etc. with an sRGB document profile everything looks good in an external image viewer? This is expected, as Affinity Photo needs to restart to colour manage with the newly chosen profile if you change it on the fly.

Definitely don't do this! The reason it looks OK if you use blade.icc as your document profile is that you're matching the document profile with the display profile, negating colour management entirely—the document numbers are being sent to the display with no alteration. It might look OK for you, but it won't for anyone else. Colour managed applications are supposed to use the display profile to translate the colour values in the document as they're sent to the display. This is especially important for wide colour profiles beyond the scope of sRGB.

The thing that seriously isn't working has to be the blade.icc profile—not sure where you got it from, but it's defective as regards compatibility with colour management. We've seen this a lot with various monitors: Windows Update seems to push specific display profiles that just don't work with colour management solutions at all. In the article that was linked above you'll see I invited people to search for "whites are yellow" in relation to apps like Photoshop, because it's not just Affinity Photo that is affected.

Have you got access to a colorimeter, e.g. i1Display Pro, Spyder, ColorMunki etc.? If so, I would download either the manufacturer's profiling software or DisplayCAL and use that to profile your own monitor—any profile created by the software will be valid and work correctly with Affinity Photo. If you can't profile the display yourself, the best solution is simply to use the sRGB display profile rather than this factory-calibrated blade profile—when you say factory calibrated, that instantly makes me very skeptical. Again, this all points to the blade profile being incompatible with colour management.

Not many applications are actually colour managed—to be honest, I've lost track of whether Windows Photos/Photo Viewer/whatever it is in Windows 10 is colour managed or not. I think web browsers by and large should be colour managed by now, but there's no guarantee there. The fact that things look different in Affinity Photo is a clear sign that it's performing colour management based on your active display profile, but unfortunately if that display profile is incompatible then everything is going to look whacked. As I mentioned above, unless you can create your own accurate and compatible profile with a measuring device, I think your only solution here is to use the sRGB display profile.

From your screen grabs, you haven't touched Affinity Photo's default document colour profiles (they're left on sRGB), which is good—just avoid using blade.icc as your document profile altogether and stick with standardised device profiles like sRGB, Adobe RGB, ProPhoto/ROMM RGB etc. If you do use a wide profile like Adobe RGB, don't forget to convert to sRGB on export if you're going to view the image in other applications that aren't colour managed—the article explains how to achieve that. Hope all the above helps!

[Edit] From a bit of searching, I found someone had posted a Razer Blade 2019 profile they had created with DisplayCAL here: https://drive.google.com/file/d/1l07D8CtFjXYVsDpeTAyLBEkZpELNomYo/view (from this Reddit discussion: https://www.reddit.com/r/razer/comments/ase96x/razer_blade_15_2019_color_profile/) I definitely wouldn't recommend using it for professional/production work (I'd seriously advise getting a colorimeter and creating your own profile to the standards you require), but it's worth installing and switching to just to see if it helps. It could go either way: if that person's display is wildly different to yours it will still look terrible, but it could also produce a much better result than the blade.icc profile you're currently using.
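For anyone wondering what "translating the colour values" actually involves, here is a toy numpy sketch of the matrix stage of display colour management: document values are converted to the profile connection space (XYZ), then through the display profile's matrix. The sRGB matrix is the published one; the display matrix here is invented purely for illustration (a real ICC profile supplies its own matrix plus tone curves):

    import numpy as np

    # Published linear sRGB -> XYZ (D65) matrix
    SRGB_TO_XYZ = np.array([
        [0.4124, 0.3576, 0.1805],
        [0.2126, 0.7152, 0.0722],
        [0.0193, 0.1192, 0.9505],
    ])

    # Hypothetical display profile matrix (a real one comes from the ICC profile)
    DISPLAY_TO_XYZ = np.array([
        [0.4300, 0.3400, 0.1800],
        [0.2200, 0.7100, 0.0700],
        [0.0200, 0.1200, 0.9400],
    ])

    def document_to_display(rgb_linear):
        xyz = SRGB_TO_XYZ @ rgb_linear               # document -> connection space
        return np.linalg.solve(DISPLAY_TO_XYZ, xyz)  # connection space -> display

    print(document_to_display(np.array([0.5, 0.5, 0.5])))

A broken display profile derails exactly this translation step, which is why setting the document profile to match the display profile simply sends the numbers straight through.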
  7. Hi again @wlickteig, this sounds like you had OpenColorIO configured on the old MacBook (or you changed the 32-bit view to Unmanaged), whereas on your new MacBook it's likely defaulting to ICC Display Transform, especially if you don't have OpenColorIO configured. In this case you don't actually need to add the nonlinear-to-linear layer—just try exporting without it. The reason I developed the nonlinear-to-linear macro was for users who were trying to match the view transform from 3D software like Blender, e.g. when using the Filmic view transform. You can emulate Filmic by using an OCIO adjustment layer and going from Linear to Filmic sRGB, but because you have the added nonlinear transform when you convert to 8-bit or 16-bit (this is what ICC Display Transform emulates), you need to add that final correction. If you're using ICC Display Transform and are happy with the results you're seeing on screen, just export without adding the correction. The easiest way to check which view transform you're using is to go to View>Studio>32-bit Preview and see which option is checked. ICC Display Transform will show you what your image will look like with a nonlinear colour profile (e.g. in 8-bit or 16-bit), whereas Unmanaged will use linear light. If configured, you can use an OpenColorIO device/view transform as well.
  8. Hi @wlickteig, are you using OpenColorIO, or have you accessed the 32-bit Preview panel (from View>Studio) and set the view to Unmanaged (linear light)? If you switch the view to ICC Display Transform you will see what your document looks like with a nonlinear gamma transform, which is what will be used when it is converted to 16-bit or 8-bit. Thankfully there's a simple way to emulate the linear view: add a live Procedural Texture filter with three channels and do a power transform of 2.2 for each one—please see this thread for an explanation. Also, there's no real need to copy to clipboard and paste as a new image if all you want to do is export—just add the procedural texture layer at the top, then export to TIFF/JPEG etc.; all the bit depth and colour profile conversion will be taken care of during export. There is also a macro that will automate this process in my workflow bundle, which might be useful to you. Hope that helps!
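To illustrate the maths of that Procedural Texture step, here is a small numpy sketch (it shows the transform itself, not Affinity's expression syntax):

    import numpy as np

    def emulate_linear_view(rgb, power=2.2):
        # Per-channel power transform of 2.2, cancelling out the roughly-1/2.2
        # encoding that the ICC display transform applies on top
        return np.power(np.clip(rgb, 0.0, 1.0), power)

    # A gamma-encoded mid-grey of ~0.46 maps back down to ~0.18 linear light
    print(emulate_linear_view(np.array([0.46, 0.46, 0.46])))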
  9. Hi @smadell, hopefully this will give you a clearer answer! What upgraded GPU option are you considering over the standard one, and is it the 21" or the 27" model you're after? Looking at the specs, for the 21" the cheapest model has Intel Iris Plus graphics, whereas you can upgrade to a Radeon 555X or 560X. With the 27" model the cheapest option is a 570X, and the most expensive upgrade gets you a 580X.

Any of the upgraded GPU options will give you an appreciable boost in performance with pretty much all operations in Photo: the vast majority of raster operations are accelerated, with a few exceptions—for example, blend ranges will currently fall back to software (CPU). Vector layers/objects are also not accelerated. The biggest advantage I've noticed is being able to stack multiple live filters without significant slowdown: I typically use unsharp mask, clarity, noise reduction, gaussian blur, motion blur and procedural texture filters with compositions and photographs, and being able to use compute is incredibly helpful as it keeps the editing process smooth and quick. Export times are also drastically reduced when you have live filters in your document: in software (CPU), as soon as you start stacking several live filters the export time can easily exceed a minute, whereas with compute on the GPU this is reduced to no more than several seconds.

However, the biggest limiting factor in my experience has been VRAM, and you will need to scale your requirements (and expectations) in accordance with a) the screen resolution, b) the pixel resolutions you typically work with and c) the bit depth you typically work in. To give you a rough idea, 4GB of VRAM is just about sufficient to initially develop a 24 megapixel RAW file and then work in 16-bit per channel precision on a 5K display (5120x2880). If you move down to 4K (3840x2160), 4GB becomes a much more viable option. This is somewhat subjective, but I would say forget about 2GB VRAM or lower if those are your baseline requirements—you simply won't have a good editing experience, as the VRAM will easily max out and swap memory will be used, incurring a huge performance penalty. Ironically, if you can't stretch budget-wise to a GPU with 4GB or even 8GB of VRAM, the Intel Iris Plus graphics option may provide a better experience, since it dynamically allocates its VRAM from main memory and can therefore grow to accommodate larger memory requirements. From early Metal compute testing I often found that disabling my MacBook's discrete GPU (with 2GB VRAM) and only using the Intel integrated graphics would alleviate the memory bottleneck. I believe the memory management has improved since then, but if you're on a budget it is an option to consider.

However, if you're looking at the more expensive iMac models, I think you should weigh up your requirements and what content you work with in Photo. Here are a few scenarios I can think of (see the rough arithmetic after this list):

- 4K resolution, light editing (development of RAW files, then adding some adjustment layers and live filter layers): you could get away with a 2GB VRAM GPU.
- 5K resolution, light editing: definitely go with a 4GB VRAM GPU.
- 4K resolution, moderate editing (development of RAW files, lots of adjustment layers and live filter layers, some compositing work with multiple layers): go with a 4GB VRAM GPU.
- 5K resolution, moderate editing: 4GB VRAM GPU minimum.
- 4K/5K resolution, heavy editing (working with compositions that have many image/pixel layers, clipped adjustments, live filter layers): absolutely consider an 8GB VRAM GPU.
- 4K/5K resolution, 32-bit/16-bit compositing (e.g. 3D render work, using render passes, editing compositions in 16-bit): 8GB VRAM GPU minimum.

However, if budget allows, do also consider the possibility of an external GPU: this might even work out cheaper than having to choose an upgraded iMac model. In the office I have a MacBook with a 560X GPU that has 4GB VRAM—this is sufficient for demoing/tutorials at 4K or the MacBook panel's resolution (3360x2100), but I work with an external monitor at 5K, and for that I use an eGPU enclosure with a Vega 64 that has 8GB VRAM. The additional compute power is incredibly useful, but it's mainly the larger pool of VRAM that helps out here. You don't have to use a Vega—I believe the new Navi cards are supported, so you could look at a 5500 XT with 8GB VRAM, which is a reasonably cheap option (although you would still have to get the GPU enclosure...). As you mentioned, your timeframe is 6 months, so it might be worth waiting for Apple to refresh the iMac lineup, as they will hopefully switch to Navi-based cards like they have with the new MacBook range. No doubt the cheapest option will have 4GB VRAM, but models with 8GB should also be available.

Apologies for the wall of text, but hopefully that gives you some more information to work with!
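As a back-of-envelope illustration of why VRAM disappears so quickly (rough Python arithmetic that ignores compression, tiling and driver overhead):

    def layer_mib(width, height, channels=4, bytes_per_channel=2):
        # Memory for one uncompressed 16-bit-per-channel RGBA surface
        return width * height * channels * bytes_per_channel / 1024 ** 2

    print(layer_mib(6000, 4000))   # ~183 MiB for one 24MP 16-bit layer
    print(layer_mib(5120, 2880))   # ~113 MiB for a single 5K surface

Stack a handful of pixel layers, the intermediate surfaces that live filters need, plus the display surfaces themselves, and a 4GB card fills up quickly.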
  10. Hi Sean, the best advice is not to touch the default colour settings in Preferences: leave the RGB/RGB32 colour profiles set to sRGB. RAW files are not mapped to sRGB/Adobe RGB etc.—the camera setting is used for the in-camera JPEG and may also be used to dictate the initial output profile in RAW development software. After demosaicing, the colours are translated from the camera's colour space to a working space, then mapped to the output colour space, which you get to choose. By default this is sRGB. If you wish to use a wider profile, check the Profiles option in the Develop Persona and choose an option from the dropdown. If you want to edit in ProPhoto, you can pick ROMM RGB, which is the equivalent that ships with the Affinity apps.

As mentioned above, the easiest option is to keep everything as sRGB. If you choose to work in a wider profile like Adobe RGB or ROMM RGB and you export to JPEG/TIFF etc., you will generally want to convert to sRGB during export, because other apps/web browsers/image hosts may not be colour space aware or colour managed. To do this, click More on the export dialog, and from the ICC profile dropdown choose sRGB IEC61966-2.1. There is a good argument for using a wider colour space if you tend to print your work, but you also need to be able to see these colours on your screen—do you know what coverage your monitor has of Adobe RGB/ProPhoto RGB, i.e. have you profiled it? If not, the safest tactic is simply to stick to sRGB. Hope that helps!
  11. Hi @GarryP, the new tutorials are very much focused on this approach—are you referring to them or the legacy set? For example, the new set (https://forum.affinity.serif.com/index.php?/topic/87161-official-affinity-photo-desktop-tutorials/) has a Selective Colour tutorial amongst other tutorials that look at specific filters and adjustments—Curves, Gradient Map, Levels, Shadows/Highlights, Displace, Denoise, Radial Blur, Clarity, Channel Mixer, White Balance, Black & White, Zoom Blur, Procedural Texture, etc. With the new tutorials there's less of a focus on multiple techniques in one video, although sometimes this approach may be required. The videos also need to be kept fairly to the point, because we now get them transcribed and translated into 8 different languages (which is particularly expensive)—there's also the issue of engagement, where many viewers don't make it through lengthy videos, and we have to take that into consideration as well. I would love to be able to produce more videos in-between major releases, but time is quite limited, so it will tend to be during or just after updates to the app that you'll find more tutorials being made available. Hope the above helps, and thanks for your feedback!
  12. Hi kat, check out the hintline (bottom of the user interface), it will list the modifiers/shortcuts you can use with whichever tool you have selected. In the case of the Paint Mixer Brush, you can use L to load the brush and C to clean. Hope that helps!
  13. Hi @Anila, are you sure they're actually blurry, or is it more likely that you're just seeing them without any sharpening applied? Affinity Photo does minimal enhancement when developing RAW files. You can either add sharpening during the development stage (Details panel>Detail Refinement) or once you've developed your RAW image and moved to the Photo Persona (Layer>New Live Filter Layer>Unsharp Mask). I understand the concern might be that you're losing detail—don't worry, that isn't the case; the image would look soft in most RAW development applications if you removed the sharpening. However, if your images are actually really soft (e.g. sharpening doesn't solve the issue), it would be very useful if you could provide a sample RAW file to test with. A private Dropbox link can be provided if you don't want the file to be seen publicly. Hope that helps!
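For the curious, unsharp masking is conceptually simple: sharpened = original + amount x (original - blurred). Here is a minimal numpy/scipy sketch of that general technique (not Affinity's implementation):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(image, radius=2.0, amount=0.8):
        # Subtract a blurred copy to isolate fine detail, then add the
        # detail back on top, scaled by the amount
        blurred = gaussian_filter(image, sigma=radius)
        return np.clip(image + amount * (image - blurred), 0.0, 1.0)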
  14. Hi @1drey and @Jeremy See, please check out this post I've made in another thread, as I'd love to get some feedback about whether these macros can solve your issue. Alternatively, here's a download link: http://jamesritson.co.uk/downloads/macros/jr_360.zip And a copy/paste of the post:

On macOS, you can drag-drop them straight into Affinity Photo and it will automatically import them and open the Library panel. On Windows, you'll need to open the Library panel manually (View>Studio>Library), then click the top right icon and choose Import Macros. There are four macros:

- Tone Map SDR (seam aware)
- Local Contrast (seam aware)
- Clarity (seam aware)
- Inpaint alpha (transparent) areas

I've tested on a variety of imagery from HDRI Haven, HDR Labs, some customer files and my own 360 images. By and large, the three seamless macros will work very well. The only problematic one may be Clarity, in which case you'll end up with a seam that runs through 1/4 of the edge rather than all the way around, so it's much easier to retouch. I have found that Clarity in particular may also expose any existing stitching errors that usually wouldn't be obvious without heavy pixel modification, so bear that in mind as well. Hope the above helps!
  15. Hi @Kronpano, I'd be interested to know if these macros solve any issues for you: http://jamesritson.co.uk/downloads/macros/jr_360.zip

On macOS, you can drag-drop them straight into Affinity Photo and it will automatically import them and open the Library panel. On Windows, you'll need to open the Library panel manually (View>Studio>Library), then click the top right icon and choose Import Macros. There are four macros:

- Tone Map SDR (seam aware)
- Local Contrast (seam aware)
- Clarity (seam aware)
- Inpaint alpha (transparent) areas

I've tested on a variety of imagery from HDRI Haven, HDR Labs, some customer files and my own 360 images. By and large, the three seamless macros will work very well. The only problematic one may be Clarity, in which case you'll end up with a seam that runs through 1/4 of the edge rather than all the way around, so it's much easier to retouch. I have found that Clarity in particular may also expose any existing stitching errors that usually wouldn't be obvious without heavy pixel modification, so bear that in mind as well. Hope the above helps!
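For anyone wondering how a filter can be made "seam aware" on an equirectangular 360 image, one common trick (not necessarily what these macros do internally) is to run the filter twice, once with the image rolled by half its width, and crossfade so each pass is used where it is furthest from its own seam. A minimal numpy sketch:

    import numpy as np

    def filter_seam_aware(image, filter_fn):
        # Pass 1: filter normally (seam artefacts at the wrap edges).
        # Pass 2: roll the image by half its width so the wrap seam sits
        # mid-frame, filter, roll back (artefacts now at the centre).
        h, w = image.shape[:2]
        half = w // 2
        normal = filter_fn(image)
        rolled = np.roll(filter_fn(np.roll(image, half, axis=1)), -half, axis=1)
        # Crossfade: 0 at the centre (trust the normal pass), ~1 at the
        # edges (trust the rolled pass)
        weight = np.abs(np.arange(w) - (w - 1) / 2) / (w / 2)
        weight = weight.reshape((1, w) + (1,) * (image.ndim - 2))
        return normal * (1 - weight) + rolled * weight

Usage would be something like filter_seam_aware(img, lambda x: gaussian_filter(x, 3)) for any filter that treats image edges as hard boundaries.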
  16. Hi Scott, not an intentional plug but I wrote a procedural texture filter that acts as a green screen keyer—it's a lot quicker than creating manual selections and you can adjust the matte spill, antialiasing and fringing (green saturation). It's in my Workflow Bundle pack (https://forum.affinity.serif.com/index.php?/topic/100491-jr-workflow-bundle-shortcuts-macros-hdr-tools-brushes/) but I've attached it to this post instead. It'd be great to see if it works well for your imagery (I've only tested it on stock imagery with distinct green/blue backgrounds). Basically, you just run the macro on whichever layer (usually Background, the image layer) and then double click the Green Screen Key layer to access the controls. I remember I did a quick tweet with a video clip that shows it in action as well: Hope that helps! JR - Matting & Keying.afmacros
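As a rough idea of the maths behind a green screen keyer, here is a heavily simplified numpy sketch of the general technique; the actual procedural texture filter is more involved, with proper controls for matte spill, antialiasing and fringing:

    import numpy as np

    def simple_green_key(rgb):
        # rgb: float array (h, w, 3) in 0-1. Alpha is driven by how strongly
        # green dominates the other channels; 4.0 is an arbitrary hardness
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        alpha = np.clip(1.0 - 4.0 * (g - np.maximum(r, b)), 0.0, 1.0)
        # Crude spill suppression: clamp green down to the other channels' max
        despilled = rgb.copy()
        despilled[..., 1] = np.minimum(g, np.maximum(r, b))
        return despilled, alpha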
  17. I agree with you, as controversial an opinion as that may be. There appear to have been several threads about this lately—the main thing that needs to be said is that Affinity Photo's histogram does not represent luminance/luminosity. What you're seeing is the overlap/addition of the RGB channels. That's why it was changed from white: users were mistaking it for luminosity. If you want a better representation of luminosity, check out the Intensity Waveform on the Scope panel (View>Studio>Scope). That gives you an IRE readout and an abstract representation of your image, which is much more useful for seeing where the tones in your image sit. You can also utilise an RGB Parade for a more accurate idea of where your colour channels may be clipping.
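For reference, "luminosity" in the scopes sense is a weighted sum of the channels: Rec. 709 luma is Y = 0.2126 R + 0.7152 G + 0.0722 B. The histogram, by contrast, counts each channel's values independently and overlaps them. A pure blue of (0, 0, 1) sits at the far right of the blue channel's histogram yet only reaches about 7% luminosity, which is why the two readings can disagree so strongly.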
  18. Hi @AverytheFurry, this looks like a colour management issue. The apps you're viewing the image in either won't be colour managed (it looks like Photos isn't at all) or might be colour managed to a specific colour space. Since it's for video streaming/recording, OBS might be assuming Rec.601 or Rec.709 colour primaries (or assuming sRGB and converting to Rec.601/Rec.709). As @GarryP mentioned, could you check what colour profile your document is using? To do this, make sure you have the View Tool selected (H on the keyboard) and look at the context toolbar readout—you'll have the pixel resolution followed by the colour profile. To ensure colours look as consistent as possible with apps that aren't colour managed, you're best off working in sRGB. You can either convert your document colour profile whilst working on it (Document>Convert Format / ICC Profile) or during export. To do it at export time, click the More dialog option and set ICC Profile to sRGB IEC61966-2.1. If OBS is your intended destination, you might also want to try setting your document colour profile to Rec.709 (HD) or Rec.601 (SD) to see if that makes the result in OBS look consistent with what you're seeing in Affinity Photo. In the OBS Settings dialog, the Advanced category lets you set the Colour Space—I believe it defaults to 601, but you can change this to 709. That said, from your screen grabs it looks like the result in OBS is very similar or identical to what you're seeing in Affinity Photo—which apps are you having issues with? [Edit] If you have a spare 10 minutes, this colour management article on Spotlight may be of use (specifically the part near the end which covers document colour profiles): https://affinityspotlight.com/article/display-colour-management-in-the-affinity-apps/
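For context on the 601/709 difference: the two standards use different luma weightings (Rec. 601: Y = 0.299 R + 0.587 G + 0.114 B; Rec. 709: Y = 0.2126 R + 0.7152 G + 0.0722 B) as well as slightly different primaries, so if OBS decodes with one assumption while the source was encoded with the other, saturated colours will shift visibly.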
  19. Just to point something out—colour decontamination is actually applied to the previews (Overlay, Black/White and Transparent) and carries across when you output as a New layer or New layer with mask. You should always use one of these two options for cutouts/foreground extraction. Outputting to a selection or mask will forego colour decontamination because they don't alter the pixel content. Could I ask what your source file for this document was? I'm noticing some blocking and compression artefacting especially around the edges of the subject, suggesting the original image was quite noisy and was then compressed—not making excuses, it simply appears that Affinity Photo's selection refinement doesn't cope too well with compression artefacting. Hopefully the support team can take a more detailed look at this in the coming week!
  20. I agree that it's not telegraphed very well, but essentially colour decontamination is linked to the issue @carl123 mentioned, as well as the part of your post I've quoted above about using output to mask. Basically:

- Output to Selection/Output to Mask is for when you intend to mask an adjustment or filter layer, or perhaps create mask bounds for brush work. Because the pixel layer is not modified, no colour decontamination can be applied, so you only get the refined selection.
- Output to New layer/New layer with mask is for when you want to cut content out or isolate it from its background. Because the pixel content is modified, colour decontamination can be performed.

The confusion may arise because the preview modes (Overlay, Transparent, Black/White) apply the colour decontamination procedure, whereas if you output to an active selection or mask there is of course no way to apply it, because it requires modifying the pixel content. If masking is all you need, you are better off outputting to a selection or mask, as you mentioned. The artefacts are a result of discarding the background colour contribution and using only the foreground colours over the matted areas—the intention is for these always to be hidden by the mask (or discarded if you just output to a new layer).

Should you choose to output as a new layer with mask, you also have other options for modifying the mask:

- Add a Curves or Levels adjustment and drag it over the thumbnail of the mask (this will place it beneath the mask). Set the channel target to Alpha, and you can then control the matte blending.
- Right click the mask layer and choose Refine Mask. Uncheck Matte edges (since the masking is already matted and you don't want to further matte the decontaminated areas) and set the preview mode to Transparent. You can then use Smooth, Feather and Ramp to further adjust the mask.
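The underlying model here is the standard compositing (matting) equation: each observed edge pixel C is assumed to be a mix C = alpha * F + (1 - alpha) * B of foreground colour F and background colour B. Colour decontamination estimates F and writes it back over the matted pixels, which is exactly why it has to modify pixel content, and why those recovered foreground colours look odd wherever the mask doesn't hide them.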
  21. This isn't broken—the additional pixels you see have been treated for colour decontamination to optimise edge detail when the subject is cut out/isolated. As @walt.farrell mentioned above, you would not ordinarily see this because the mask is applied. If you were to take the destructive approach and only output as a New Layer you would never see these pixels, as they would simply be discarded.
  22. Hi @Kutsche66, the difference you're seeing is expected regardless of whether you're in 16-bit or 32-bit. Here's a quick explanation: when developing a RAW file, Photo knows the initial white balance (white point) and is able to change this, which in turn offsets all the relative colour values. When entering the Develop Persona from a pixel layer, or when using the White Balance adjustment layer, the initial white balance is no longer known, plus the image has already been mapped to a colour space and relative white point. Therefore, rather than presenting a Kelvin scale, you instead have percentages—here, the adjustment is simply performing an offset based on an arbitrary white point, and so the result will look different (this includes the usage of the colour picker). That said, the scaling of the percentage version is not set in stone: it could be modified to match the scale of the Kelvin version more closely. It would, however, have implications for existing documents that use the White Balance adjustment layer, so it would require care if it were to be modified. There's an additional complexity with your image in that it requires a very dramatic white balance shift, e.g. if you use the white balance picker on the white coral the Kelvin value shifts to around 9000K. The adjustment version with its percentage slider can't go this far, so you won't be able to get the two to match. Hope that answers your question! I'll investigate the white balance scaling and see if it's possible to make some improvements here.
  23. Hi Dave, you've got a blend range set on the Curves adjustment (linear 100% black to 0% white)—this is probably what's causing it, unless I'm missing something. Resetting the blend range, then doing as you described (top left to clip the entire image), behaves as expected. Hope that helps!
  24. Hey @SpartaPhoto, from my understanding you want to do the following:

- Blend the lighter doorway area from layer-1 into the composite of layer-2
- Blend through only the luminance and not the colour information

If this is correct, there are a couple of pointers:

- Setting the blend mode on the mask (e.g. Luminosity) will not do anything
- It looks like you have the layers the other way round in Affinity Photo when compared to your ON1 Photo Raw screen grabs (and presumably Photoshop as well?)

To replicate the result you're getting in ON1 Photo Raw, just do the following:

- Put layer-1 above layer-2
- Set layer-1's Blend Mode to Luminosity (on the pixel layer, not the mask)
- Add a layer mask and invert it (Layer>Invert)
- Use the Paint Brush Tool to paint in over the doorway

I've attached a quick video to demonstrate (see attached at bottom of post). Hopefully this is what you're trying to achieve? Note there is still a little bit of orange colour bleed in the top left of the doorway, but this is consistent with the result from ON1 Photo Raw. You could always add a quick HSL adjustment layer, desaturate, invert (Layer>Invert again) and paint back in over that area if it concerns you. Hope that helps! luminosity_masking_trimmed.mp4
  25. Hi @travisrogers, yes, managed it! (Albeit with some very minor differences when zoomed in and doing a quick A/B comparison.)

When opened in Affinity Photo, the reference TIFF is being colour managed from sRGB to the display profile, so for this challenge I'm assuming this result is the correct one. I'm also using ICC Display Transform and not OCIO Display Transform with the 32-bit EXR. You can append the colour space to the file name (e.g. "filename aces") and Photo will convert from that colour space to scene linear, but in practice with the file provided it made little to no difference.

The next step is to add an OCIO adjustment layer and go from scene linear to out_srgb (the pointer/friendly name for this is "Output - sRGB"). Finally, to match the look of the 16-bit TIFF, we need to add a simple gamma transform—this is because ICC Display Transform uses the display profile's nonlinear transform to ensure parity with the results you'll get when you export to a nonlinear image format like JPEG. A Levels adjustment isn't flexible enough here, so the easiest way to achieve this is to add a live Procedural Texture filter and do a power transform of 2.2 for each colour channel. I've attached a screen grab below to illustrate. Note that you can also get this step as a macro in my workflow bundle (plus some other HDR authoring macros).

And that should be it! I've attached a side-by-side comparison. This is the best match I've been able to achieve so far.
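For anyone wanting to sanity-check the scene linear to out_srgb conversion outside Photo, the same transform can be run through OpenColorIO's Python bindings. A minimal sketch, assuming OCIO v2; the config path and colour space names here are examples and depend on the ACES config you're using:

    import PyOpenColorIO as OCIO

    # Load the same OCIO config Photo is pointed at (path is an example)
    config = OCIO.Config.CreateFromFile("aces_1.2/config.ocio")
    processor = config.getProcessor("scene_linear", "out_srgb")
    cpu = processor.getDefaultCPUProcessor()

    # Transform a scene-linear mid-grey pixel to the output colour space
    print(cpu.applyRGB([0.18, 0.18, 0.18]))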