
kirkt

Members
  • Posts

    440
  • Joined

  • Last visited

Reputation Activity

  1. Thanks
    kirkt got a reaction from Ulysses in Feature Request - Alpha Channels (AGAIN!)   
    @Gregory Chalenko
    A couple of thoughts in the meanwhile - 1) You can reset the view from individual channels to the composite by clicking on the clockwise circular icon with the arrowhead in the upper right of the channels view.
    2) You can replicate your mask construction process in AP but it works a little differently, partially because channels and masks work differently in AP compared to PS.
    a) Starting with your source image, make a duplicate of the image layer upon which you want to base your mask (CMD-J) and make this layer active.  We will call this the "LayerForMask" in the layer stack.
    b) Inspect your channels to see which one (or a combination of more than one, in overlay mode as you demonstrate in your YouTube clip) you want to use as the basis for your mask.  
    c) If you want to combine channels in Overlay mode, for example: with the "LayerForMask" as the active layer, go to Filters > Apply Image... and choose "Use Current Layer As Source."  Set the Blend Mode to "Overlay."  Finally, select the "Equations" check box.  Let's say, in this example, you want to combine the Green and Blue channels in Overlay mode like you do in PS using Calculations.  Here you will set "DG = SB" (Destination Green equals Source Blue) and "DB = SG" - you are basically switching the two channels and combining the result in Overlay mode.  This will give you a high-contrast result in the G channel that you can use as the basis for the mask.
    d) Click on the Composite Green channel in the Channels panel of the resulting image that now occupies the LayerForMask layer - this will display the grayscale result of the operation you just performed, and you can inspect the result to see if it is satisfactory to use as a mask.  This is because the top layer in the stack is the result of the Apply Image process (therefore, the Composite layer is the top layer and you can view its channels).  Also take a look at the Blue channel.  In this example, suppose you want to use the resulting Green Channel as the basis for your mask.
    e) Below the Composite layer channels in the Channels panel will be the LayerForMask channels listed.  In this case we want to use the Green channel for our mask, so Right-Click on the Green channel for this layer and select "Create Grayscale Layer" - this will create a grayscale copy of the Green channel at the top of the layer stack.  This is a pixel layer that you can edit with all of the tools like dodge and burn, etc. to construct and refine your mask.  We will call this "WorkingMaskLayer."
    f) Once you have perfected your mask on the WorkingMaskLayer, this pixel layer can stay in the layer stack for further editing if you want, or stored as a spare channel, etc.  In any case, in the Channels panel, Right-Click on any of the channels in the WorkingMaskLayer and select "Create Mask Layer" - this will create a new Mask Layer out of the grayscale image from WorkingMaskLayer and you can drag the new mask layer onto the layer to which you want to apply the mask.
    A Mask Layer is a special kind of layer in AP - it is similar to the layer mask layer that is attached to a layer in PS, except it is a separable element that you can move up and down the layer stack and nest with other layers.  You can edit and paint on a mask layer as well, if you prefer to refine your mask that way - you can view the mask itself (instead of its effect on the layer stack) by OPT-Clicking (ALT-Clicking) on it, just like in PS.
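    Numerically, the equation step in (c) can be sketched like this - a minimal NumPy illustration (not Affinity's actual code), assuming the standard Overlay blend formula and channel values normalized to [0, 1]:

```python
import numpy as np

def overlay(base, blend):
    # Standard Overlay blend: multiply in the shadows, screen in the highlights.
    return np.where(base < 0.5,
                    2.0 * base * blend,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - blend))

# img: H x W x 3 float array in [0, 1], channels R, G, B.
rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))

g, b = img[..., 1], img[..., 2]

# Apply Image with "Use Current Layer As Source", Blend Mode "Overlay",
# and equations DG = SB, DB = SG: each destination channel is blended
# in Overlay mode with the *swapped* source channel.
new_g = overlay(g, b)   # Destination Green overlaid with Source Blue
new_b = overlay(b, g)   # Destination Blue overlaid with Source Green

mask_basis = new_g      # the high-contrast channel used as the mask basis
```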
    Kirk
  2. Thanks
    kirkt got a reaction from keena in Feature Request - Alpha Channels (AGAIN!)   
    @Gregory Chalenko
    If you want to create a grayscale image from the Spare Channel there are a couple of ways, one of which may be better than the other for your application:
    1) Make a new pixel layer and fill it with White - we will call this new layer "BlankLayer."
    2a) Right-click on the Spare Channel that you have stored and select "Load to BlankLayer Red" - repeat but select "Load to BlankLayer Green" and "Load to BlankLayer Blue."  Now you have a grayscale pixel layer that is a copy of the Spare Channel.  You can make a Macro that will do the sequence of steps.
    or
    2b) Right-click on the Spare Channel and select "Load to BlankLayer Alpha."  The Spare Channel will be transferred to the BlankLayer Alpha channel.  In the Channels panel list, right-click on the BlankLayer Alpha channel and select "Create Grayscale Layer."  In this case, the BlankLayer is a temporary layer that you use to hold the Alpha channel so you can make the grayscale image from it.
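    Conceptually, option 2a just copies a single channel into all three RGB channels of the white layer. A hypothetical NumPy sketch of what that sequence produces (the names are illustrative, not Affinity API):

```python
import numpy as np

rng = np.random.default_rng(1)
spare = rng.random((4, 4))        # the stored Spare Channel, values in [0, 1]

blank = np.ones((4, 4, 3))        # "BlankLayer": a new pixel layer filled with white
blank[..., 0] = spare             # "Load to BlankLayer Red"
blank[..., 1] = spare             # "Load to BlankLayer Green"
blank[..., 2] = spare             # "Load to BlankLayer Blue"
# blank is now a grayscale pixel-layer copy of the Spare Channel.
```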
    I suggest that you right-click on all of the channel thumbnails in the Channels panel and see what options each one has - there is a lot going on there, but it is sort of hidden until you realize that the options exist!  Also, even though these work-arounds require additional button presses and steps, I think that all of the steps are able to be recorded in a macro, so you could automate the process by recording a Macro.
     
    Kirk
  3. Thanks
    kirkt got a reaction from Ulysses in Feature Request - Alpha Channels (AGAIN!)   
    You're welcome.  It has taken me a while to get my brain around how channels in AP work and redo a lot of muscle memory from using PS for decades.  I still use both applications, and I understand how you feel!
    kirk
  4. Like
    kirkt reacted to Gregory Chalenko in Feature Request - Alpha Channels (AGAIN!)   
    This is a really neat workaround, I haven't thought about duplicating the image first.
    Creating a grayscale layer from the context menu is also a great function, I haven't noticed it before. I guess, the reason why it's not quite obvious, is that it's available only for the current layer channels, while creating of a spare channel is present in every channel's menu.
    Thank you Kirk!
  5. Thanks
    kirkt got a reaction from keena in Feature Request - Alpha Channels (AGAIN!)   
    @Gregory Chalenko
  6. Like
    kirkt reacted to timvandevelde in Lens Profile for Sony FE 200-600 mm F5.6-6.3 G OSS (SEL200600G)   
    @kirkt Thanks for pointing that out and providing the links!
    Distortion is indeed minimal, but I overlaid a JPEG (in-camera distortion correction) on a raw image yesterday and the difference was more noticeable than I thought it would be.
  7. Thanks
    kirkt got a reaction from Roland Rick in Can I extract masks from adjustment layers?   
    @Roland Rick
    Nothing happens when you offer the levels adjustment to the recolor adjustment because it makes no sense to adjust the levels of the recolor adjustment.
    All adjustment layers have a built-in mask, as you know; however, you do not need to use that built-in mask to dictate where your adjustment gets applied selectively.  You can add a mask to the adjustment layer that behaves just like a regular old mask on a pixel layer and sits in the adjustment layer as a child of the adjustment layer.  In your case, let's say you want to change the color of the insect's eyes with a Recolor adjustment - apply the Recolor Adjustment layer in the layer stack, tweak the settings and then - INSTEAD of painting on the built-in layer mask, ADD a mask to the Recolor adjustment layer just like you would with a pixel layer mask.  That mask should appear as a child of the Recolor Adjustment.  Cool.  Even easier, OPTION (ALT) - Click on the add mask button to add a mask that is all black so you can hide the adjustment by default and then just paint with white to reveal the Recolor adjustment on the eyes.  
    OK, so far so good.  Now, you decide you want to add some extra adjustment to the same exact region - for example, your Levels adjustment, again, just to the eyes.  Add the Levels adjustment to the stack and make the tweaks to the eyes.  Of course, this affects everything in the stack below the Levels adjustment.  Select the Levels Adjustment and the Recolor adjustment and make a new Group out of those two layers.  Because the mask you created for the eyes is a regular mask and not the built-in mask, you can just drag the mask out of the Recolor layer and drop it onto the Group icon.  Now it masks the entire group.  It does not matter if you offer it to the group folder thumbnail (a clipping mask) or to the Group text (a regular mask).  It will act the same.
    If you do not want to group the adjustments and use a single mask applied to the group folder, you can just select the mask in the Recolor layer and copy it (CMD-C) and paste it (CMD-V) and then drag that new copy of the mask to the Levels layer, if you want each adjustment to have its own copy of the mask applied.  In PS, you would just hold down OPT and drag a copy of an existing mask on an adjustment layer to a new adjustment layer - AP does not offer this shortcut as far as I know.  Bottom line - add a mask to an adjustment layer, instead of using the built-in mask, and then you don't need to mess with creating spare channels and targeting the alpha channel of each adjustment layer's built-in mask.
    Even with the added mask applied to the adjustment layer, you can paint on the built-in mask to add to the selective application of the adjustment.  This is similar in PS to applying an adjustment layer, adding a mask, painting on the mask and then adding just that single layer to its own group so you can add another mask to the group. 
    This was a very wordy description for a very easy and quick process, but it gets to the heart of the issue, which is that Affinity Photo's masking and layer structure behaves differently than Photoshop's, and reprogramming muscle memory from PS to AP takes some effort - I'm still trying to undo decades of PS thinking and wrap my head around AP's different process.
    Have fun!
    Kirk
    Here are some tutorials that will help:
     
     
  8. Like
    kirkt reacted to Andy Somerfield in Affinity Photo Customer Beta (1.8.0.163)   
    Status: Beta
    Purpose: Features, Improvements, Fixes
    Requirements: Purchased Affinity Photo
    Mac App Store: Not submitted
    Download ZIP: Download
    Auto-update: Not available.
    EDIT: Link fixed (was pointing to an old DMG, not the new ZIP). Apologies.
     
    Hello,
    We are pleased to announce the immediate availability of the first beta of Affinity Photo 1.8 for macOS.
    Photo 1.8 is a significant change to the currently shipping 1.7 version, so, as ever, I would strongly urge users to avoid the 1.8 beta for critical work. This is a pre-release build of 1.8 - we will release more builds before 1.8 ships. Things will be broken in this build - please use it to explore the new features and not for real work.
    If this is your first time using a customer beta of an Affinity app, it’s worth noting that the beta will install as a separate app - alongside your store version. They will not interfere with each other at all and you can continue to use the store version for critical work without worry.
    It’s also worth noting that we aren’t done yet - we are still working through reported bugs from previous releases and hope to fix as many as possible before the final release of 1.8. We also plan to add more features prior to the 1.8 release.
    With all that said, we hope that you will enjoy the new capabilities introduced in this release and we look forward to any and all feedback you give to us.
    Many thanks (as always),
    Affinity Photo Team  
     
    Changes This Build
    - Overhauled the “File -> New” dialog - user templates are now supported.
    - Implemented unified toolbar for appropriately modern macOS versions.
    - Updated PANTONE palettes to include new Solid and Bridge sets.
    - Renamed EXIF panel to Metadata and added support for user-editable fields (including IPTC).
    - Added ability to read metadata from an XMP sidecar file (enable in Preferences).
    - Added native CR3 support to the SerifLabs RAW engine.
    - Added a new lens profile selector to Develop.
    - Reimplemented the HSL filter's HSV option.
    - Improved quality & file size of results when exporting JPEG.
    - Switching to a mask node will now automatically switch to the grey colour slider.
    - Improved lens correction of RAW files coming from lenses with fixed focal length.
    - Improved unsharp mask “threshold” slider.
    - Improved selection refinement performance & quality.
    - Improved the noise reduction filter result when applied to JPEGs.
    - Improved precision of Gaussian algorithm (to reduce banding).
    - Improved reporting of file load errors (could previously report file corruption incorrectly).
    - Improved performance of operations with large selections.
    - Improved memory use with alternate futures and when replacing image layers.
    - Improvements to Photoshop plugin support, notably for DxO plugins (but also for many others).
    - Fixed Styles to show styles that would normally be invisible.
    - Fixed crash when loading some corrupt JPEGs (valid image data, but corrupt following data).
    - Fixed deletion of brushes / styles / etc. not reducing disk usage correctly.
    - Fixed PSD export of hidden layers.
    - Fixed angle cursors when the view has been rotated.
    - Fixed “Create Palette” from CMYK images resulting in RGB colours.
    - Fixed Pixel Selection appears incorrectly when started near page edge.
    - Fixed rasterising a fill layer goes wrong after changing document size.
    - Fixed Marquee Selection not constraining to a square if you drag across the right diagonal.
    - Fixed mis-identification of some Tamron lenses mounted to Canon bodies.
    - Fixed failure to load some PSB files (and failing to report errors).
    - Fixed missing EXIF data for CRWs.
    - Fixed crash importing PSD files containing embedded colour profiles with unicode characters in their names.
    - Assorted other small fixes and improvements.
    - Localisation improvements.
    To be notified about all future Mac beta updates, please follow this beta notification thread 
    To be notified when this update comes out of beta and is fully released to all Affinity Photo customers, please follow this release notification thread
  10. Like
    kirkt got a reaction from Jowday in Straight-forward Image Duplicate   
    I second this request.   The Photoshop equivalent is Image > Duplicate and is useful for myriad tasks, especially when you want to perform an action or a color mode change for a particular purpose and then introduce the result back into the master working document without having to Save As... and create another file in the OS (for example, you would like to create a mask from the L* channel of your current RGB document - dup the current RGB document, change the dup's mode to L*a*b*, and Apply Image > L* channel to the RGB doc's active layer as a mask).  The duplicated version is simply a document that exists in memory and is usually something that is temporary or can be saved after the fact if that is its intended purpose (for example, if you create an action that duplicates the master document, flattens it, reduces the size and performs output sharpening and color space change for output).  This seems like a no-brainer and I am repeatedly surprised after major updates that AP has not implemented this.
     
    Kirk
  11. Like
    kirkt got a reaction from IPv6 in Straight-forward Image Duplicate   
    There are usually a bunch of different ways to do things, I agree.  Apply Image must be partially broken in Lab equation mode on the Mac in V1.7.2.  I can use Lab equations (DL = SL, Da = 0.5, Db = 0.5 - if you do all SL, you get a color image that reflects the mix of the a and b channels) to generate the equivalent of the L* channel.  Let's say I want to apply a Curves adjustment through an inverted L* mask, a typical task to boost shadows or target shadow tones for color correction, etc.  So I have a layer stack that has the working image (background layer) and a curves adjustment layer above it.  Recall that adjustment layers have their own inherent mask built in.  So, I select the Curves adjustment layer to make it active (the target of the Apply Image command):
    1) Open the Apply Image dialog (Filter > Apply Image)
    2) Drag the background layer onto the dialog box to make it the source of the Apply Image operation
    3) Use Equations - DL = SL, Da = 0.5, Db = 0.5, DA = SA
    And apply image.
    The result gets applied to the mask of the Curves adjustment, but the resulting image is distorted and scaled up and is not the L* channel of the background. (see attached screenshot).  To make this work properly, I have to create an empty pixel layer and generate a pixel-based representation of the L* channel - i.e., new pixel layer, fill it with white, then do the apply image operation to generate the L* channel on this pixel layer (the second screenshot is what the L* channel should look like).  Then I have to rasterize it to a mask, then drag it to the curves layer.  That is a lot of steps for a simple operation.
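    The Lab equations above amount to keeping the lightness channel and neutralizing a and b - a minimal NumPy sketch (assuming, as in the equation UI, channels normalized to [0, 1] with a and b centered at 0.5):

```python
import numpy as np

rng = np.random.default_rng(2)
lab = rng.random((4, 4, 3))       # stand-in Lab pixel data

dest = np.empty_like(lab)
dest[..., 0] = lab[..., 0]        # DL = SL : keep lightness
dest[..., 1] = 0.5                # Da = 0.5 : neutral a (no green/magenta)
dest[..., 2] = 0.5                # Db = 0.5 : neutral b (no blue/yellow)

# Inverting the resulting L* gives a mask that targets the shadows:
shadow_mask = 1.0 - dest[..., 0]
```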
    The use of the Levels filter with your settings at the top of the stack will also generate the desired L* output, but then you have to stamp the layer stack, rasterize the resulting pixel layer and then apply it as a mask.  It is a nifty way of doing things though.
    I prefer the Duplicate Image method because I can work in Lab mode with Lab tools to choke the mask, etc. and then simply apply the L* channel to the working master image (to an adjustment layer or as a pixel-based layer, etc.) when I am finished in the duplicated image.  I can also leave the duplicate image open and tweak the operations and reapply the result to refine or edit the result for use in the original.
    Kirk
     


  12. Like
    kirkt reacted to WorthIt in Straight-forward Image Duplicate   
    Hi folks,
    I did a search for this and couldn't find anything directly related, so,.. I hope it's not been covered though, I'm sorry if it has.
    I'm almost at the end of my trial on A.P and was wondering why there's no (apparent) direct way to duplicate what it is you're working on - by that I mean a straightforward Image > Duplicate so you can quickly have 2 copies of what it is you want, side by side (tabs).
    I know it's possible to select, copy then create 'New From Clipboard', but that's somewhat slow and inefficient, no? In Photoshop IIRC, it's possible to create an exact and ready-to-use copy of what it is you wanted to copy, but this seems not possible in A.P.
    Can this be implemented in a future update?
  13. Like
    kirkt reacted to Murfee in Affinity can't handle shadows...   
    Thanks...start to finish about 5 mins, that includes a rough & ready paint job on the mask. I tried to do a macro for you, just leaving the mask painting to do...but the macro doesn't allow saving from the tone mapping persona, hopefully this will be improved soon.
    As much as I love working in Affinity Photo I do not like the develop persona as it is. I only use it as a method for opening my raw files, I work in 32 bit from raw, these tend to open darker than I like, I just increase the exposure a bit, occasionally I alter the white balance...however this will depend on the image. If my highlights are blowing then I might pull them down but only enough to control the blown out parts. I then select the colour profile either ProPhoto or ROMM. Then hit develop. I can now get serious about editing, using a combination of adjustment layers that I can go back and alter if needed.
    I would really like the team to make improvements to the develop persona, the shadows & highlights adjustments could do with a user adjustable range control, currently they seem to be operating too far into the mids, this leaves a flat appearance (however they are a lot better than they were), blackpoint & contrast cause tone clipping a bit too quickly. The settings that have been used also need to be fixed at the point the develop button is clicked, the image can then be returned to from the photo persona, users can pick up exactly where they left off.
    Presets could also do with a bit of attention, at the moment the basic panel needs to keep all of the settings, it keeps some, white balance needs the tick applying and profiles are completely ignored. There is also a need for all settings in all tabs to be saved as a master preset, this will then enable users to use the exact settings for multiple files, this can sort of be achieved at the moment but they have to be done in each tab...which is a bit of a pain, especially if one tab gets forgotten.
    These improvements would go a long way to enabling users to do more work in the develop persona, meaning fewer adjustment layers are needed in the photo persona...this will help keep file sizes down 
    These suggestions have been made before in various posts and I am sure the Devs are aware, however it doesn't hurt to give a gentle reminder now and again, particularly in the beta thread where things are being worked on 
  14. Like
    kirkt got a reaction from IPv6 in Affinity Photo Customer Beta (1.7.2.146)   
    Hey now!
    Here is a link to a ZIP with a bunch of icons and art, including a "Beta" splash screen.  Put them in the Resources folder in the .app package if you want to give it a try.  It includes the green "beta" icon for the Photo persona, dimmed and active, light and dark interface, as well as Dock icons, and app icons.
    https://www.dropbox.com/s/d0xpstb9fmdajcx/APPICONS Test.zip?dl=0
     
    kirk
     

  15. Haha
    kirkt got a reaction from Patrick Connor in Affinity Photo Customer Beta (1.7.2.146)   
    If you start to rummage around in the Resources folder, you can probably figure out which PNG files can be swapped to help you identify the Beta App's photo persona icon, again to help you from getting confused visually once you are in the app itself...
    kirk
    Also - MODS - if what I am doing is violating terms of the beta usage, or is just morally reprehensible in general, please let me know and I will delete these posts.  Just trying to be helpful based on the previous posts.
    kirk

  16. Like
    kirkt reacted to Ulysses in Any ideas   
    Can we back up for a moment? 
    Although iOS — the iPad in particular — is waiting for upcoming improvements in connectivity with external drives, this isn't really a huge limitation because there are so many means to get data from storage devices and networks via the cloud. For example, your WD My Cloud drive has an app enabling you to share data to all of your devices, including your iPad.
    Getting at this data is not the problem here. As @DM1 mentioned earlier in this thread the issue is this: "iPad version of AP only gets to use the Apple supplied RAW decoder, not Affinity's own version."
    My guess is that the Fuji S5000 was simply overlooked by Apple either due to its age (camera announced in late 2003), or its lack of popularity compared with other models.
    As I mentioned, unfortunately even after converting this file to DNG via the Adobe DNG Converter software I was still unable to open it in Affinity Photo for iPad. I was surprised by this as usually this straightens things out. Must be something odd about either this camera model or with the specific raw file you provided.
    EDIT 01: Oops... just now seeing @kirkt's reply and his successful use of Adobe DNG Converter to convert the RAF to DNG. I'll give that a try to see if I can verify his findings.
    EDIT 02: Confirmed. Adobe DNG successfully creates a DNG that can be used by Affinity Photo for iPad. But in my case, I was able to create a custom compatibility for DNG 1.4, while also checking the box for Linear (demosaiced). That last bit seems to be the key. 
  17. Like
    kirkt got a reaction from Dan C in Publisher/Studiolink - Mixing apps bought on Apple App Store and through Affinity   
    I apologize - it looks like this topic has been covered, with the answer being that I need to update the App Store versions to the latest version (1.7.1) of Photo and Designer, which are available now.  The Mods can delete this topic if they would like.
    Kirk
  18. Like
    kirkt reacted to Patrick Connor in Publisher/Studiolink - Mixing apps bought on Apple App Store and through Affinity   
    @BloiseA
    Welcome to the Serif Affinity forums  
    Please check that you have installed the (recently released) 1.7.1 updates of Affinity Photo and Affinity Designer from the Mac App Store and run each of them at least once after updating.
  19. Like
    kirkt got a reaction from Kasper-V in Publisher/Studiolink - Mixing apps bought on Apple App Store and through Affinity   
    NICE!  Glad everything got sorted out.
    kirk
  20. Like
    kirkt reacted to Kasper-V in Publisher/Studiolink - Mixing apps bought on Apple App Store and through Affinity   
    I have now, Aammppaa, and it worked: all the apps talk to each other now. Thanks.
  21. Like
    kirkt reacted to Aammppaa in Publisher/Studiolink - Mixing apps bought on Apple App Store and through Affinity   
    Make sure you have opened and closed a document in both Photo and Designer - that has helped some people.
    Other solutions near the bottom of this thread…
     
  22. Like
    kirkt got a reaction from Andy Somerfield in DxO Nic collection   
    Just a heads up - I am currently beta testing the Nik Collection and I am happy to report that the current RC has fixed the Updating Fonts issue, at least in my very brief test today using AP beta 1.7.0.128.  Yay!
     
    Kirk
  23. Thanks
    kirkt got a reaction from Jock Thomson in Grain Effect using Equation Filter   
    Some images:
     
    Working Dialog (it's a little bit busy at the moment):
     

  24. Like
    kirkt got a reaction from MEB in AP- Soft Proof - Gamut Warning with false color overlay   
    A workflow that ultimately results in a print can benefit greatly from a color managed environment and soft-proofing for print.  In pretty much every pixel editor and raw conversion application that provides soft-proofing, the gamut warning overlay is simply a single color that indicates, presumably, that those indicated pixels are out of gamut.  Unfortunately, the gamut warning does not indicate how far out of gamut (in terms of dE, for example) so the warning is not very useful.  It would be nice if the Soft Proof adjustment layer had a gamut warning that, when enabled, completely replaced the current working image with a false color map of the image where pixels in gamut, and up to a user-defined acceptable dE threshold (like dE=1 or 2), were green; pixels within a second user-defined dE interval were yellow (dE = 2-4) and pixels outside a third user-defined dE were red (dE > 4), or something similar.  Because soft-proofing in AP is an adjustment layer, the user could pull down the opacity of this false-color map and overlay it on the original image to identify the areas on the image that fall within the most out-of-gamut regions and implement local adjustments to those areas without making global changes to areas where the issue is not such a problem.  Because the rendering intent can be selected in the adjustment layer drop-down, the effect of rendering intent would be more informative by examining how the most problematic areas out of gamut get mapped toward, or into, the output gamut, instead of simply relying on the current gamut warning which is a go/no-go indicator.  
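    A sketch of the proposed false-color map (hypothetical thresholds; the per-pixel dE values would come from comparing the working image against its soft-proofed version):

```python
import numpy as np

rng = np.random.default_rng(3)
de = rng.random((4, 4)) * 6.0        # stand-in per-pixel delta-E values

t1, t2 = 2.0, 4.0                    # user-defined dE thresholds
falsecolor = np.zeros(de.shape + (3,))
falsecolor[de <= t1] = (0.0, 1.0, 0.0)                # in/near gamut: green
falsecolor[(de > t1) & (de <= t2)] = (1.0, 1.0, 0.0)  # moderately out: yellow
falsecolor[de > t2] = (1.0, 0.0, 0.0)                 # badly out of gamut: red

# Blending falsecolor over the image at reduced opacity would localize the
# worst out-of-gamut regions for targeted local adjustments.
```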
     
    This approach would be really helpful in isolating specific problem areas of an image so that the user could make more informed and targeted decisions about transforming and correcting an image in a large working space, such as ProPhoto, to a smaller output space for a particular printer/paper combination.
     
    The only utility where I have seen this feature is in ColorThink, which is an application specifically intended to explore color profiles and compare images against profiles for purposes such as evaluating gamut and gamut mapping.
     
    Thank you for your consideration!
     
    Kirk Thibault
    Berwyn, PA
  25. Like
    kirkt got a reaction from aitchgee in [AP] Film Emulation looks using Inferred LUTs in AP   
    Hi folks,
     
    There are several different ways to achieve looks in your image editing workflow, and one useful tool is the LUT (Look Up Table).  Affinity Photo has an adjustment layer called "LUT" that lets the user include a LUT in the layer stack, giving non-destructive control over the image's look through a color LUT.
     
    The typical use of the LUT adjustment layer is to load a pre-made LUT (in various forms, like .cube, etc.) and season to taste.  The AP LUT adjustment layer also provides an interesting second method, the "Infer LUT" mode.
     
    As the name suggests, the Infer LUT mode asks the user to load two images: the first is sometimes referred to as the "Identity" image - the unmodified reference image; the second is the modified image, typically the Identity image put through some sort of processing to arrive at the final, transformed look.
     
    One handy tool for storing inferred LUTs is the HALD LUT identity image - here is a brief explanation of the HALD:
     
    http://www.quelsolaar.com/technology/clut.html
     
    It is essentially a compact image composed of color patterns that can be visualized as a 3D color matrix.  It is usually stored in a lossless file format, like PNG.  When you apply edits to the identity HALD image, the colors in the image change accordingly, and the edits are "recorded."  When you compare the original Identity HALD image to the edited one, the color transform can be inferred and, voila, you have a LUT!  It is efficient and compact, so you get good resolution of the LUT transform in a small package.
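    To make the structure concrete, here is a small numpy sketch (hypothetical, not from AP or the linked page) that generates an identity HALD image following the usual convention - red varies fastest, blue slowest, so a level-8 HALD is a 512x512 image with 64 steps per channel:

    ```python
    import numpy as np

    def identity_hald(level=8):
        """Generate an identity HALD CLUT image (level 8 -> 512x512, 64 steps/channel)."""
        cube = level * level            # samples per channel
        side = level ** 3               # image side length in pixels
        vals = np.rint(np.linspace(0, 255, cube)).astype(np.uint8)
        i = np.arange(side * side)
        r = i % cube                    # red varies fastest
        g = (i // cube) % cube
        b = i // (cube * cube)          # blue varies slowest
        return np.stack([vals[r], vals[g], vals[b]], axis=-1).reshape(side, side, 3)
    ```

    Save the result losslessly (PNG), run it through your edit chain, and the edited copy captures the transform.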
     
    AP can use the HALD identity image and a transformed version of that image to infer a LUT.  Why is this cool?  Because you can use any piece of software you like to create the look you want on any working image.  Once you have the look, you insert the HALD image into the same workflow, apply the exact same transform, and save the resulting, edited HALD image.  You can build a library of the Identity HALD plus all of your edited HALDs, then load the Identity and the edited HALD with the look you want into the AP LUT adjustment layer and POW! Your look is applied via the Infer LUT mode.
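    Under the hood, applying an inferred LUT amounts to looking each pixel up in the edited HALD.  As a rough illustration (a hypothetical numpy sketch using simple nearest-neighbour lookup - AP's actual implementation surely interpolates):

    ```python
    import numpy as np

    def apply_hald(img, hald, level=8):
        """Apply a HALD CLUT image to an 8-bit RGB image via nearest-neighbour lookup."""
        cube = level * level   # samples per channel (64 for level 8)
        # quantize each channel to its nearest grid index in the HALD cube
        q = np.rint(img.astype(np.float64) / 255.0 * (cube - 1)).astype(np.int64)
        # flatten the 3D cube coordinate (r fastest, b slowest) into a pixel index
        idx = q[..., 2] * cube * cube + q[..., 1] * cube + q[..., 0]
        return hald.reshape(-1, 3)[idx]
    ```

    Applying the unedited identity HALD this way returns the image unchanged (at grid points), which is exactly why comparing identity and edited HALDs recovers the transform.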
     
    This is a nifty way to export all of your presets from something like Lightroom - where you paid $$$ for the VSCO film pack presets, for example - into usable HALD LUTs.
     
    Because it's fun, I've included a link here:
     
    http://blog.patdavid.net/2015/03/film-emulation-in-rawtherapee.html
     
    It describes film emulation looks using this same technique, along with a bunch of freely downloadable HALD LUTs for those films.  The context is the film emulation module of RawTherapee, but the functionality is identical to AP's Infer LUT.
     
    When you download the film emulation HALDs from the above link, read the README.txt file for more details.  The collection includes the identity HALD and a bunch of HALDs that capture various film looks.  The looks are all made in 8-bit sRGB, but no one is going to analyze your look transform if you use it in a 16-bit document or a slightly larger color space, so have at it!
     
    Remember, you can do this with any look you create, in any application, as long as you can repeat the procedure (for pixel-based operations) or insert the identity HALD into your non-destructive layer stack.
     
    Note that LUTs do not capture non-tonal/color operations like chromatic aberration correction, etc.  And with respect to film emulation, this covers tonal edits only - it does not emulate grain.
     
    Have fun!  This is a great feature to have in AP, but maybe not as celebrated as it could be!
     
    kirk