
James Ritson

Staff
Everything posted by James Ritson

  1. Hi @Anila, are you sure they're actually blurry, or is it more likely that you're just seeing them without any sharpening applied? Affinity Photo does minimal enhancement when developing RAW files. You can either add sharpening during the development stage (Details panel>Detail Refinement) or once you've developed your RAW image and moved to the Photo Persona (Layer>New Live Filter Layer>Unsharp Mask). I understand a concern might be that you're losing detail—don't worry, this isn't the case: the image would look soft in most RAW development applications if you removed the sharpening. However, if your images are actually really soft (e.g. sharpening doesn't solve the issue) it would be very useful if you could provide a sample RAW file to test with. A private Dropbox link can be provided if you don't want the file to be seen publicly. Hope that helps!
  2. Hi @1drey and @Jeremy See, please check out this post I've made in another thread as I'd love to get some feedback about whether these macros can solve your issue. Alternatively, here's a download link: http://jamesritson.co.uk/downloads/macros/jr_360.zip And a copy/paste of the post: On macOS, you can drag-drop them straight into Affinity Photo and it will automatically import them and open the Library panel. On Windows, you'll need to open the Library panel manually (View>Studio>Library), then click the top right icon and choose Import Macros. There are four macros:
     - Tone Map SDR (seam aware)
     - Local Contrast (seam aware)
     - Clarity (seam aware)
     - Inpaint alpha (transparent) areas
     I've tested on a variety of imagery from HDRI Haven, HDR Labs, some customer files and my own 360 images. By and large, the three seam-aware macros will work very well. The only problematic one may be Clarity, in which case you'll end up with a seam that runs through 1/4 of the edge rather than all the way around, so it's much easier to retouch. I have found that Clarity in particular may also expose any existing stitching errors that usually wouldn't be obvious without heavy pixel modification, so bear that in mind as well. Hope the above helps!
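For anyone curious how seam-aware processing on equirectangular 360 images can work in principle: one common trick is to horizontally offset the image with wrap-around before filtering, which moves the left/right seam away from the edge. This is an illustrative sketch of that idea only, not the actual implementation of the macros above:

```python
def wrap_offset(row, shift):
    """Horizontally offset one row of an equirectangular image with
    wrap-around. Shifting by half the width moves the edge seam to
    the centre of the frame, where a filter can process it normally;
    shifting back afterwards restores the original framing."""
    shift %= len(row)
    if shift == 0:
        return row[:]
    return row[-shift:] + row[:-shift]

# Shift a toy 6-pixel row by half its width and back again.
row = [10, 20, 30, 40, 50, 60]
centred = wrap_offset(row, len(row) // 2)
restored = wrap_offset(centred, -(len(row) // 2))
```

Applying a filter to `centred` and then undoing the shift is what keeps the wrap-around edge free of a visible seam.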
  3. Hi Scott, not an intentional plug but I wrote a procedural texture filter that acts as a green screen keyer—it's a lot quicker than creating manual selections and you can adjust the matte spill, antialiasing and fringing (green saturation). It's in my Workflow Bundle pack (https://forum.affinity.serif.com/index.php?/topic/100491-jr-workflow-bundle-shortcuts-macros-hdr-tools-brushes/) but I've attached it to this post instead. It'd be great to see if it works well for your imagery (I've only tested it on stock imagery with distinct green/blue backgrounds). Basically, you just run the macro on whichever layer (usually Background, the image layer) and then double click the Green Screen Key layer to access the controls. I remember I did a quick tweet with a video clip that shows it in action as well: Hope that helps! JR - Matting & Keying.afmacros
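The procedural texture filter itself is in the attached .afmacros file, but the basic principle of a green screen keyer can be sketched simply: the more the green channel dominates the other two, the more transparent the pixel becomes. This is an illustrative keyer only, not the logic of the attached macro; the `threshold` parameter is a made-up control for the sketch:

```python
def green_key_alpha(r, g, b, threshold=0.1):
    """Return an alpha value for one pixel of a simple chroma key.
    Channels are floats in [0, 1]. Pixels where green strongly
    dominates red and blue become transparent (alpha 0); non-green
    pixels stay fully opaque (alpha 1)."""
    dominance = g - max(r, b)
    if dominance <= 0:
        return 1.0  # not green-dominant: keep the pixel
    return max(0.0, 1.0 - dominance / threshold)
```

A real keyer (like the one in the macro) adds controls for spill, antialiasing and fringing on top of a matte like this.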
  4. I agree with you, as controversial an opinion as that may be. There appear to have been several threads about this lately—the main thing that needs to be said is that Affinity Photo's histogram does not represent luminance/luminosity. What you're seeing is the overlap/addition of the RGB channels. That's why it was changed from white: users were mistaking it for luminosity. If you want a better representation of luminosity, check out the Intensity Waveform on the Scope panel (View>Studio>Scope). That gives you an IRE readout and an abstract representation of your image, which is much more useful for seeing where the tones in your image are. You can also utilise an RGB Parade for a more accurate idea of where your colour channels may be clipping.
  5. Hi @AverytheFurry, looks like a colour management issue. The apps you're viewing the image in either won't be colour managed (looks like Photos isn't at all) or might be colour managed to a specific colour space. Since it's for video streaming/recording, OBS might be assuming Rec.601 or Rec.709 colour primaries (or assuming sRGB and converting to Rec.601/Rec.709). As @GarryP mentioned, could you check what colour profile your document is using? To do this, make sure you have the View Tool selected (H on the keyboard) and look at the context toolbar readout. You'll have the pixel resolution followed by the colour profile. To ensure colours look as consistent as possible with apps that aren't colour managed, you're best off working in sRGB. You can either convert your document colour profile whilst working on it (Document>Convert Format / ICC Profile) or during export. To do it at export time, click the More dialog option and set ICC Profile to sRGB IEC61966-2.1. If OBS is your intended destination, you might also want to try setting your document colour profile to Rec.709 (HD) or Rec.601 (SD) to see if that makes the result in OBS look consistent with what you're seeing in Affinity Photo. In the OBS Settings dialog, you've got the Advanced category where you can set the Colour Space. I believe it defaults to 601 but you can change this to 709. That said, from your screen grabs it looks like the result in OBS is very similar or the same as what you're seeing in Affinity Photo—which apps are you having issues with? [Edit] If you have a spare 10 minutes this colour management article on Spotlight may be of use (specifically the part near the end, which covers document colour profiles): https://affinityspotlight.com/article/display-colour-management-in-the-affinity-apps/
  6. Just to point something out—colour decontamination is actually applied to the previews (Overlay, Black/White and Transparent) and carries across when you output as a New layer or New layer with mask. You should always use one of these two options for cutouts/foreground extraction. Outputting to a selection or mask will forego colour decontamination because they don't alter the pixel content. Could I ask what your source file for this document was? I'm noticing some blocking and compression artefacting especially around the edges of the subject, suggesting the original image was quite noisy and was then compressed—not making excuses, it simply appears that Affinity Photo's selection refinement doesn't cope too well with compression artefacting. Hopefully the support team can take a more detailed look at this in the coming week!
  7. I agree that it's not telegraphed very well, but essentially colour decontamination is linked to the issue @carl123 has mentioned, as well as the part of your post I've quoted above about using output to mask. Basically:
     - Output to Selection/Output to Mask is for when you intend to mask an adjustment or filter layer, or perhaps create mask bounds for brush work. Because the pixel layer is not modified, no colour decontamination can be applied, so you only get the refined selection.
     - Output to New layer/New layer with mask is for when you want to cut content out or isolate it from its background. Because the pixel content is modified, colour decontamination can be performed.
     The confusion may arise because the preview modes (Overlay, Transparent, Black/White) apply the colour decontamination procedure, whereas if you output to an active selection or mask there is of course no way to apply this, because it requires modifying the pixel content. If you don't want the pixel content modified, you are better off outputting to a selection or mask as you mentioned. The artefacts are a result of discarding the background colour contribution and using only the foreground colours over the matted areas—the intention is for these to always be hidden by the mask (or discarded if you just output to a new layer). Should you choose to output as a new layer with mask, you also have other options to modify the mask:
     - Add a Curves or Levels adjustment and drag it over the thumbnail of the mask (this will place it beneath the mask). Set the channel target to Alpha. You can now control the matte blending.
     - Right click the mask layer and choose Refine Mask. Uncheck Matte edges (since the masking is already matted and you don't want to further matte the decontaminated areas) and set the preview mode to Transparent. You can then use Smooth, Feather and Ramp to further adjust the mask.
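To see why a Curves adjustment targeting the Alpha channel controls the matte blending, here's a rough stand-in using a simple gamma curve. The function and its `gamma` parameter are illustrative only, not Affinity Photo's Curves implementation:

```python
def adjust_matte(alpha_values, gamma=0.8):
    """Illustrative stand-in for a Curves adjustment targeting the
    Alpha channel of a mask. Values are floats in [0, 1].
    gamma < 1 strengthens the matte (pushes partial alpha toward
    opaque); gamma > 1 thins it. Fully opaque and fully transparent
    pixels are unaffected either way."""
    return [a ** gamma for a in alpha_values]
```

Only the semi-transparent matte edge moves, which is exactly the region where the decontaminated pixels blend with the background.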
  8. This isn't broken—the additional pixels you see have been treated for colour decontamination to optimise edge detail when the subject is cut out/isolated. As @walt.farrell mentioned above, you would not ordinarily see this because the mask is applied. If you were to take the destructive approach and only output as a New Layer you would never see these pixels, as they would simply be discarded.
  9. Hi @Kutsche66, the difference you're seeing is expected regardless of whether you're in 16-bit or 32-bit, here's a quick explanation: When developing a RAW file, Photo knows the initial white balance (white point) and is able to change this which in turn offsets all the relative colour values. When entering the Develop Persona from a pixel layer or using the White Balance adjustment layer, the initial white balance is no longer known, plus the image has already been mapped to a colour space and relative white point. Therefore, rather than presenting a Kelvin scale, you instead have percentages—here, the adjustment is simply performing an offset based on an arbitrary white point, and so the result will look different (this includes the usage of the colour picker). That said, the scaling of the percentage version is not set in stone: it could be modified to more closely match the scale of the Kelvin version. It would however have an implication on existing documents that use the White Balance adjustment layer, so would require care if it was to be modified. There's an additional complexity with your image in that it requires a very dramatic white balance shift, e.g. if you use the white balance picker on the white coral the Kelvin value shifts to around 9000K. The adjustment version with its percentage slider can't go this far, so you won't be able to get the two to match. Hope that answers your question! I'll investigate the white balance scaling and see if it's possible to make some improvements here.
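To illustrate the difference described above: the percentage-based adjustment is essentially a relative per-channel gain around an arbitrary white point, with no knowledge of the original Kelvin value. This is a loose illustrative model, not Affinity Photo's actual White Balance maths:

```python
def white_balance_offset(rgb, temp_pct, tint_pct):
    """Apply a relative white balance offset to one RGB pixel
    (floats in [0, 1]). temp_pct > 0 warms the image (boost red,
    cut blue); tint_pct > 0 shifts toward magenta (boost red/blue,
    cut green). A hypothetical model for illustration only."""
    r, g, b = rgb
    t = temp_pct / 100.0
    n = tint_pct / 100.0
    r = r * (1.0 + t) * (1.0 + n)
    g = g * (1.0 - n)
    b = b * (1.0 - t) * (1.0 + n)
    return (min(r, 1.0), min(g, 1.0), min(b, 1.0))
```

Because the gains are bounded by the slider range, an extreme shift (like the ~9000K correction mentioned above) simply isn't reachable, whereas a RAW developer working from the known white point can go that far.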
  10. Hi Dave, you've got a blend range set on the Curves adjustment (linear 100% black to 0% white), this is probably what's causing it unless I'm missing something. Resetting the blend range then doing as you described (top left to clip the entire image) behaves as expected. Hope that helps!
  11. Hey @SpartaPhoto, from my understanding you want to do the following:
     - Blend the lighter doorway area from layer-1 into the composite of layer-2
     - Blend through only the luminance and not the colour information
     If this is correct, there are a couple of pointers:
     - Setting the blend mode on the mask (e.g. Luminosity) will not do anything
     - It looks like you have the layers the other way round in Affinity Photo when compared to your ON1 Photo Raw screen grabs (and presumably Photoshop as well?)
     To replicate the result you're getting in ON1 Photo Raw, just do the following:
     - Put layer-1 above layer-2
     - Set layer-1's Blend Mode to Luminosity (the pixel layer, not the mask)
     - Add a layer mask and invert it (Layer>Invert)
     - Use the Paint Brush Tool to paint in over the doorway
     I've attached a quick video to demonstrate (see attached at bottom of post). Hopefully this is what you're trying to achieve? Note there is still a little bit of orange colour bleed in the top left of the doorway, but this is consistent with the result from ON1 Photo Raw. You could always add a quick HSL adjustment layer, desaturate, invert (Layer>Invert again) and paint back in over that area if it concerns you. Hope that helps! luminosity_masking_trimmed.mp4
  12. Hi @travisrogers, yes, managed it! (Albeit with some very minor differences when zoomed in and doing a quick A/B comparison.) When opened in Affinity Photo, the reference TIFF is being colour managed from sRGB to the display profile, so for this challenge I'm assuming this result is the correct one. I'm also using ICC Display Transform and not OCIO Display Transform with the 32-bit EXR. You can append the colour space to the file name (e.g. "filename aces") and Photo will convert from that colour space to scene linear, but in practice with the file provided it made little to no difference. The next step is to add an OCIO adjustment layer and go from scene linear to out_srgb (the pointer/friendly name for this is "Output - sRGB"). Finally, to match the look of the 16-bit TIFF, we need to add a simple gamma transform—this is because ICC Display Transform uses the display profile's non-linear transform to ensure parity with the results you'll get when you export to a non-linear image format like JPEG. A Levels adjustment isn't flexible enough here, so the easiest way to achieve this is to add a live Procedural Texture filter and do a power transform of 2.2 for each colour channel. I've attached a screen grab below to illustrate. And that should be it! I've attached a side-by-side comparison. This is the best match I've been able to achieve so far.
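For reference, the per-channel power transform of 2.2 described above is mathematically trivial; a Procedural Texture expression would do the equivalent of this sketch (assumed equivalence, channels as floats):

```python
def power_transform(rgb, exponent=2.2):
    """Per-channel power transform. Raising each channel to 2.2
    cancels the extra non-linear encoding introduced by the OCIO
    output transform, so the ICC display transform can apply its
    own non-linear curve without double-encoding the image."""
    return tuple(max(0.0, c) ** exponent for c in rgb)
```

Black and white are unchanged by the transform; only the mid-tones are pulled down, which is what removes the washed-out look.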
  13. Hi @johncena, there are a couple of things to break down in this reply: Firstly, regarding your OCIO configuration not working, which configuration are you using? If you've just copied the Blender OCIO configuration from the Blender directory, maybe try putting it somewhere that doesn't require elevated privileges (e.g. Documents folder or somewhere other than Program Files). Regarding the actual filmic transform, there's a little caveat you need to be aware of with Blender's OCIO configuration. When the configuration is successfully loaded and you open an EXR file, the 32-bit Preview Panel will actually use "None" as the initial device transform, which gives you an unmanaged linear light view. If you intend to edit your document and export back to EXR, you'll want to switch this over to sRGB and choose Filmic as the view transform. This whole process is non-destructive and only applied to the view, though—the colour values in your document are not altered. If Affinity Photo is your final destination and you intend to export to a non-linear format (e.g. JPEG/TIFF/PNG), I'd recommend switching over to ICC Display Transform instead, which will apply a non-linear view transform based on the document's colour profile. This will then ensure parity with how the exported image will look. At this point, however, everything will look too bright and washed out compared to Blender's view. You'll need to add an OCIO adjustment layer going from scene linear (usually just "Linear") to Filmic sRGB. Then you'll need to add a second OCIO adjustment layer going from sRGB to Linear. A bit strange, I know, but essentially your document is still in a linear colour space—there's just a non-linear view transform being applied when it's presented to screen. If you're unsure, I did a tutorial on this here: Hope the above helps!
  14. Hi @Arifin, are you sure? Look at the Colour panel where you have the RGB sliders, they appear incorrect as well. Is there any chance you could screenshot the Colour tab under System Preferences>Displays and attach it here? (Shift+CMD+4 then tap Spacebar and click on the window to screenshot it). Affinity Photo will be colour managing based on whatever display profile you have active... are there any other details you could provide, e.g. what monitor type you're using?
  15. Hi Dziga, the option you're looking for is under Preferences>Colour; near the bottom you have several OpenEXR options. Enable "Associate OpenEXR alpha channels" to premultiply the alpha channels into the RGB pixel layers. Hope that helps! Also, there's no need to use the flood select tool. As @firstdefence mentioned above, you can also right click any layer, including the greyscale alpha layer, and choose Rasterise to Mask to convert it to a usable mask.
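For anyone unfamiliar with associated (premultiplied) alpha: it simply means the RGB values are multiplied by the alpha value, which is what the preference above does per channel. A minimal sketch:

```python
def associate_alpha(rgba):
    """Premultiply ('associate') the alpha channel into the RGB
    values, as the 'Associate OpenEXR alpha channels' preference
    does. Channels are floats in [0, 1]; fully transparent pixels
    end up black in the colour channels."""
    r, g, b, a = rgba
    return (r * a, g * a, b * a, a)
```

Unassociated (straight) alpha leaves RGB untouched and stores coverage separately, which is why EXR files can look wrong until this option is enabled.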
  16. Hi @bvz, apologies as I've also been out of office. Looks like you have messaging turned off (or I'm blind and can't find the option to PM you) so I've sent a file request to your email address, hope that's OK. Look forward to seeing the config and trying to puzzle this one out.
  17. Hi, are you viewing on two different monitors? The screen grab on the left is significantly sharper as if it were on a high DPI monitor (look at the text), whereas the screen grab on the right looks much softer, as if it were captured on a standard monitor...
  18. Hey @bvz, I've just had a quick test with OCIO on Windows and it appears to be working fine with multiple OCIO configurations (Blender, Nuke, ACES etc). My first suggestion would have been to check that your document was in 32-bit, but you've already covered that, so we should then move onto what your configuration actually contains. Affinity Photo exposes display/view transforms in the 32-bit preview panel—however, these do have to be explicitly defined as view transforms and not just general colour space transforms. Even if you don't have any view transforms, you can still apply the colour space transforms by using an OCIO Adjustment layer (under Layer>New Adjustment Layer). Do your colour space transforms show up in this list? If so, you can likely choose Linear (or whatever role is defined as scene linear) as the Source, and whichever colour space you wish to transform to as the Destination. Hope that helps—if you do have view transforms defined in the configuration and it looks like Affinity Photo isn't picking them up, are you able to attach the OCIO configuration file and relevant directories? (If privacy is required we can send you an internal file sharing link)
  19. Hi all, thanks for your feedback (it is being read!) in this thread. We're rounding off this new set of tutorials with these three: Cropping Straightening Images Procedural Texture: Tone Mapping HDR to SDR More tutorials will of course be on the way in the future—thanks again and hope you find them useful. As per usual, the first post has been updated too.
  20. R16 is a single channel format that contains red channel information—it's typically used for height maps in landscape creation software, game engines etc. Affinity Photo doesn't have any direct R16 export capabilities. However, you could try the following @davide445 :
     - Flatten your document if you have layer work (Document>Flatten).
     - On the Channels panel, scroll down and right click Pixel Green, then choose Clear. Do the same with Pixel Blue. Note that Pixel will be whatever your layer is named (so it might be Background if you haven't flattened your document). Ignore the Composite channels at the top as they won't offer the option to clear the channel data.
     - You should now be left with just red channel information. Go to File>Export and choose the PNG format.
     - Click the More button and find Pixel format. Choose Greyscale 16-bit from the dropdown and then export.
     Your mileage may vary with this—some people have reported success importing these 16-bit greyscale PNGs into the software they're using. Alternatively, if you don't specifically need the greyscale bitmap and just wanted to export an RGB image with only the red channel data, ignore the PNG export steps and just use whatever format you wish. Hope that helps!
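The steps above amount to "keep only the red channel and widen it to 16-bit greyscale". As a rough sketch of that mapping (illustrative only, not what Affinity Photo does internally):

```python
def red_to_grey16(pixels):
    """Map the red channel of 8-bit RGB pixels to 16-bit greyscale
    values (0-65535), mirroring the 'clear green/blue, export as
    Greyscale 16-bit' steps. 257 == 65535 / 255, so pure white maps
    to pure white and pure black to pure black."""
    return [r * 257 for (r, g, b) in pixels]
```

Note that an 8-bit source only gives 256 distinct height levels; for genuinely smooth height maps you'd want a 16-bit document from the start.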
  21. Hey again, these new videos have been trickling out on YouTube over the past few days, so I've updated the first post with them: Pixel vs Image layers Using Matte ID render passes for masking Lock children (Masking) As always, hope you find them useful!
  22. Hey all, I've been on a roll with some new Photo tutorials this week—stay tuned daily for new videos, but I've updated the first post with three of the new ones so far: Pen tool Zoom blur Selective colour Thanks, James
  23. Hi 0Kami, I think in this instance even grey would have been misleading. A possible alternative approach which we've discussed is to represent the additive secondary colours (yellow, cyan and magenta) where the channels overlap rather than the singular blue. Regarding the waveform, it's not too dissimilar so don't be put off! Look at it vertically rather than horizontally: the bottom represents 0 IRE (pure black) and the top represents 100 IRE (pure white). You can easily see how brightness is distributed horizontally across your document/image and whether it's clipped. Thanks for your comment on the videos. It's been busy recently and production of new videos has taken a back seat, but hopefully that's going to change soon!
  24. The reason it was changed from white to blue, I believe, is because users were mistaking it for a representation of luminosity—it’s not. It’s describing the addition of the three separate red, green and blue histograms (or the overlap, if you like). If it was a luminosity histogram, the result would often look very different to the RGB histogram since it’s calculated from a weighted average of each pixel's RGB data with more precedence given to the green channel. As an alternative for the time being, I could suggest trying the intensity waveform found on the Scope panel—this gives you a plot of the luminance values independent of any colour information within the document/image.
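The weighted average mentioned above can be made concrete with the Rec. 709 luma coefficients, where green carries by far the most weight (inputs assumed to be channel values in [0, 1]):

```python
def luminance(r, g, b):
    """Rec. 709 luma weights: green contributes most to perceived
    brightness, red less, blue least. This is why a luminosity
    histogram often looks very different from the overlapped RGB
    histogram, which weights all three channels equally."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
```

Pure green reads as far brighter than pure blue here, even though both would contribute identical spikes to an RGB histogram.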
  25. Unless I'm missing something, the reason write-back to a TIFF/PSD is so slow is that you're using multiple live filters stacked on top of one another. Live filters are expensive (slow) to render at export time on the CPU, and they have to be rasterised when writing to a format that doesn't support them. Affinity Photo supports layered TIFF/PSD write-back and will of course write its layer data into the files, but no other software supports its implementation of live filter layers, therefore it has to create a full resolution composite that can be shown and also edited in this other software. With the native .afphoto format it doesn't have to do this; it only needs to write a small resolution thumbnail for file browsing. Looking at your screen grab, you have live Clarity, Shadows and Highlights and Noise Reduction layers—Clarity and Noise Reduction are almost certainly slowing down the export time significantly because you'll likely be CPU-limited. I don't believe your GeForce GPU is supported for Affinity's Metal compute hardware acceleration, as this would reduce the save/export time dramatically—likely no more than 5 seconds on a moderate AMD GPU found in the recent MacBook Pro models (2016 and newer). As you've discovered, doing a Merge Visible operation also takes a similar amount of time, as it's essentially doing the same thing—merging a composite layer that has the filter effects baked in. If you go to File>Export and try exporting to any format, do you experience the same export time? If round-tripping is an essential part of your workflow, it's not the most elegant solution but I would recommend trying the old school non-destructive approach to filters: duplicate your image layer and apply the destructive filter to it. It's not ideal, but short of upgrading to a more recent Mac system that supports Metal compute, I'm not sure if there's an immediate solution.
Live filters are great for non-destructive workflows, but are hugely taxing on the CPU with large resolution documents. I've tested and confirmed my above explanation on a 24-megapixel 16-bit TIFF file with just live Noise Reduction, Shadows & Highlights and Clarity filters. On a Core i9 (mobile) CPU, writing back as a layered TIFF takes about a minute. It appears to hang (beach ball) for 10 seconds or so, then continues with the export. Whilst this is quicker than your 4-5 minute export time, the CPU architecture is more mature and powerful, so would explain the difference here (plus I didn't have additional adjustment layers and other layer work). If you're able to, however, there is one thing to check based on the GPU you've listed in your specs. Could you go to Preferences>Performance (under the Affinity Photo menu, top left) and see what is listed under the Hardware Acceleration checkbox option? Is it greyed out, or is your GPU listed with the checkbox enabled? I'm fairly sure it won't be supported, but if it is, that GPU only has 1GB VRAM, which will be insufficient for 16-bit large resolution document work and may actually be causing a performance bottleneck. In this case, I would try disabling Metal compute and seeing if things improve. Hope the above helps—apologies as it looks like the round-tripping workflow you're wanting to use is subject to limitations. These can be worked around, but it's not ideal.