
kirkt

Members
  • Posts

    440
  • Joined

  • Last visited

Reputation Activity

  1. Like
    kirkt reacted to SCarini in Affinity Photo Apply image from green to gold by color LAB   
    Thank you Kirk!
I soon noticed from your attachment that the result is different in AP than in PS
    I will follow your instructions, then I will also try with Soft Light instead of Overlay
    I will be back with my results.
  2. Thanks
    kirkt got a reaction from NotMyFault in Affinity Photo Apply image from green to gold by color LAB   
    @SCarini - Apply Image works a little differently in AP compared to PS.  For what you want to do, you will need to use Equations in the Apply Image interface.
    1) The base image (the green image) we will call "Source" (S).  Duplicate the Source (CMD+J) and rename that duplicate "Destination" (D).
    2) Set the blend mode of the Destination layer to Overlay.
3) Make the Destination layer the active layer (target it by clicking on it in the Layers panel).
    4) Choose Filters > Apply Image... Here is where you will apply the b channel in the Source layer to the L, a and b channels of the Destination layer.  
    4a) Drag the Source layer from the Layers panel onto the Apply Image dialog - this tells the Apply Image dialog that you want to use the Source layer as the SOURCE (S) for the operation.
    4b) Check the box next to the Equations header toward the bottom of the Apply Image dialog - this tells the Apply Image operation that you will be performing operations on the Destination channels by using equations.  In Equation lingo, S denotes the source channel and D denotes the destination channel.  In the Equation Color Space dropdown menu, choose Lab.
    4c) Input the following equations:
    DL = Sb
    Da = Sb
    Db = Sb
    DA = SA (the alpha channel does not matter here).
    What this is telling the Apply Image operation to do is put the Source b channel into the Destination L channel (DL = Sb), put the Source b channel into the Destination a channel (Da = Sb) and put the Source b channel into the Destination b channel (which is the same thing).
    4d) It appears that when you use Equations in the Apply Image dialog, the blend mode specified in the Apply Image dialog is not relevant.  Therefore, once you apply the Apply Image operation, the resulting layer will have to have its blend mode changed to Overlay - you already did this in Step 2 above, so the result you see after the Apply Image operation should be what you expected.  Repeat this exercise without changing the blend mode in Step 2 and see what I mean - the Apply Image result will look wildly different, but all you have to do is change the blend mode of the Destination layer to Overlay, and all is well.
    Have fun!
    If you ever want to perform an Apply Image operation where you want to change only one channel, just enter an equation for the other channels (the untouched channels) as:
    D(channel) = D(channel)
    That is, the Destination channel (the result) is equal to the Destination channel.
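The channel-copy equations above can be sketched in NumPy — a toy model of what the Apply Image operation does, not AP's actual implementation; the pixel value here is a made-up example:

```python
import numpy as np

# Toy Lab image: shape (H, W, 3) with channels L, a, b.
source = np.array([[[60.0, -40.0, 35.0]]])  # a single greenish example pixel

# The equations DL = Sb, Da = Sb, Db = Sb copy the Source b channel
# into every Destination channel.
dest = np.empty_like(source)
dest[..., 0] = source[..., 2]  # DL = Sb
dest[..., 1] = source[..., 2]  # Da = Sb
dest[..., 2] = source[..., 2]  # Db = Sb

# Leaving a channel untouched is the identity equation, e.g. Da = Da:
# dest[..., 1] = dest[..., 1]
```

Every channel of `dest` now holds the original b value, which is what the Overlay-blended layer then modulates.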
    Kirk

  3. Like
    kirkt got a reaction from SCarini in Affinity Photo Apply image from green to gold by color LAB   
    What is really convenient about AP is that your image does not need to be in Lab mode to do this - you can leave it in RGB mode and just specify Lab equations in the Apply Image dialog.  All of the computations and conversions are done on the fly without needing to convert back and forth from RGB to Lab back to RGB.
    Kirk
  4. Like
    kirkt got a reaction from SCarini in Affinity Photo Apply image from green to gold by color LAB   
    You will also probably notice that the result is slightly different in AP than in PS - I think AP handles Lab slightly differently than PS, so the colors are shifted slightly.
    Kirk
  6. Like
    kirkt reacted to walt.farrell in 3rd party Panels   
    No.
  7. Like
    kirkt got a reaction from Andy05 in Applying an ICC profile when Developing a RAW image   
    There is some crosstalk in the discussion here about profiles.  Firstly, the profile referred to during raw conversion is a profile that is meant to translate raw camera color into a specific color rendition - this profile is typically made by shooting a color reference target and then running that DNG file through software to output a DCP (Lightroom, ACR or similar software) or ICC (Capture One, etc.) file for use during raw conversion.  AP does not permit the user to specify such a profile for raw conversion at this time - it uses the internal LibRaw profile to generate the raw file as far as I know (Mac users can also switch to the Apple raw engine).
    In AP's Develop persona there is a dropdown list at the bottom of the Basic palette called "Profiles."  This dropdown list permits the user to select the ICC profile for the color space into which the raw file will be converted as an RGB image leaving the Develop persona.  This is similar to the selection of the color space in the Lightroom/ACR output dialog or any other raw converter's output controls.  This does not affect the look of the raw conversion, it specifies the color space into which the raw file is converted and it tags the output file with that color space so a color-aware application can display the RGB numbers in the file correctly, according to that color space.  If you do not specify a profile here, AP will convert the raw file into AP's working color space, as specified in the Preferences.
    If you have custom log or cine camera profiles that are supposed to be used at raw conversion time, you will not be able to use them in AP because AP's raw conversion interface does not support custom/user-specified camera profiles. 
    Once the file has been converted into an RGB file and opened in the Photo persona, it exists in the specified color space (from the selection of the Profile in the Develop persona) or in the default working color space (if no Profile was specified in the Develop persona).  Here you can CONVERT to a different color space via an ICC profile, or ASSIGN a new ICC profile to the existing image data.  When you CONVERT, the RGB numbers in the current color space get changed to preserve the original appearance of the image in the new color space; when you ASSIGN, the original RGB numbers are preserved and the colors change according to their new interpretation in the new, assigned color space.
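The CONVERT vs ASSIGN distinction can be illustrated with a toy model in which a "profile" is nothing but a gamma value — this is a sketch of the semantics only, not real ICC math:

```python
# Toy "color spaces" defined only by their gamma.
def encode(linear, gamma):   # linear light -> encoded value
    return linear ** (1.0 / gamma)

def decode(value, gamma):    # encoded value -> linear light
    return value ** gamma

space_a = 2.2   # gamma of the current space (hypothetical)
space_b = 1.8   # gamma of the new space (hypothetical)

v = 0.5  # an encoded pixel value in space_a

# CONVERT: change the number so the appearance (linear light) is preserved.
converted = encode(decode(v, space_a), space_b)

# ASSIGN: keep the number, reinterpret it in space_b; appearance changes.
assigned = v
appearance_before = decode(v, space_a)
appearance_after = decode(assigned, space_b)
```

After CONVERT, decoding `converted` in space_b gives back the original linear light; after ASSIGN, the same stored number decodes to a different linear light, which is why assigned colors shift.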
     
In terms of printing, usually one edits in a working color space that contains a large enough gamut to comfortably work with the gamut of the output device.  At print time, the print driver will do the conversion into the printer's ICC profile color space according to your instructions - the printer interface also permits the user to select the rendering intent of the conversion so that the printer driver knows how to handle out-of-gamut colors.  You do not need to assign a printer profile to the image in AP.  You can, however, use the printer ICC profile to soft-proof the image in AP prior to printing.  This is performed with the Soft Proof adjustment layer, a non-destructive operation that lets you visualize the printed output by having AP simulate its appearance for the selected profile (usually a printer+paper combination).  Of course, soft-proofing is a simulation of a reflective medium (ink on paper) on a transmissive device (your display), but once you print enough, you can usually anticipate how the print will appear on paper compared to the soft proof.
    To recap:  convert your raw image into a color space that is large enough for your editing workflow (let's say ProPhoto RGB).  Do your edits.  CONVERT the image, if necessary, into the output color space (let's say sRGB for web display) or print the image and let the print driver do the conversion using the printer+paper ICC profile that you specify.  
    One workaround for color rendition following raw conversion is to use a LUT that you create that takes the default raw conversion output from AP (using AP's camera profile) and alters it like a custom profile would.  You need software that will permit you to make such a LUT from a color target, for example (3D LUT Creator) - then you can convert your raw image into your working color space with AP's default rendering and apply the custom LUT to get the color rendition you really want after the default raw conversion.
    Another workaround, especially for using LUTs etc. that require a log input file, is to convert your raw file into a 32bit linear file from the Develop persona.  Then you can use OCIO transforms to take the linear output (a raw file is just linear data) and transform it into the correct log format for your cine looks.
    Have fun!
    Kirk
     
     
  8. Like
    kirkt got a reaction from affinityfan in Your Affinity 2020 wishlist   
    Node-based UI and workflow for AP.  It would be so good.
    Kirk
  9. Like
    kirkt got a reaction from ThatMikeGuy in Node-based UI for AP. Please?   
While I do not profess to know how AP is structured under-the-hood (bonnet!), it seems like a lot of the tools are implemented in a real-time, live way that suggests they would work in a node-based workflow - for example, like the node editors in Blender or DaVinci Resolve.  If this is the case, it would be an incredibly terrific feature if the user could select between the current "traditional" interface and workflow for AP, or a node-based interface.  I would love to be able to create an image-processing pipeline with a network of nodes, with preview renders along the way to see each stage of the workflow and variations of the node chain.  It would be terrific if node groups could be saved as "presets" that became single nodes themselves, which could be expanded and the contents exposed for tweaking and customization.  
Please consider this approach, if it is possible.  Rendering low-res preview proxies during node assembly would hopefully be a lot less taxing on the interface than the current full-res rendering of Live Filters, which tends to get laggy when there are even a modest number of layers in the stack.  You could save full, non-destructive workflows as a pre-built node chain, you could have a single node chain branch into multiple variants, and have a batch node that feeds an entire directory of images into the node chain for processing.  Maybe even macro nodes, etc.  It would be so much more flexible and serve to further differentiate AP from PS.
    The output of the node-based workflow could be fed into the "traditional" photo persona (a Photo persona node) for local, destructive edits, painting on masks, etc.
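The basic idea can be sketched in a few lines: a node wraps an operation, and a chain of connected nodes is just function composition. All names here are hypothetical, purely to illustrate the concept:

```python
# Minimal sketch of a node-based image pipeline.  A real implementation
# would carry pixel buffers and cache preview renders per node; here an
# "image" is just a number.
class Node:
    def __init__(self, name, op):
        self.name = name
        self.op = op          # callable: image -> image
        self.inputs = []

    def connect(self, upstream):
        self.inputs.append(upstream)
        return self

    def render(self, image):
        # Pull-based evaluation: render upstream nodes first.
        for node in self.inputs:
            image = node.render(image)
        return self.op(image)

exposure = Node("exposure", lambda px: px * 2.0)
gamma = Node("gamma", lambda px: px ** 0.5)
out = Node("output", lambda px: px).connect(gamma.connect(exposure))
```

Saving a node group as a preset would amount to serializing such a subgraph and re-wrapping it as a single `Node`.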
    One can dream....  LOL
    Thanks for pushing the boundaries with your applications.
    Kirk
     
  10. Like
    kirkt got a reaction from IPv6 in Affinity Photo Customer Beta (1.9.2.228)   
    v 1.9.2.228 - macOS Big Sur, v11.2.2, 16in MacBook Pro.
A minor interface bug/fault - in the L*a*b* color picker interface, the a* color slider shows a gradient from cyan to magenta, instead of green to magenta.  Using the macOS Digital Color Meter, the color values on the negative end of the a* and b* sliders are essentially identical, i.e., cyanish, which is ok for the b* slider, but not for the a* slider.
    kirk
    EDIT: This fault is also present in the current App Store version, v 1.9.1.
     

  11. Like
    kirkt reacted to tpaull in problems opening tiff files in Affinity Photo with sufficient resolution   
    OK, thanks. It is true that opening the gel file instead of the tiff export gives a better match to the original.  It also helps to change the color profile in AP first to 16-bit grey; this allows for similar color as the original also. I think this problem is reasonably solved now.
  12. Like
    kirkt got a reaction from Alfred in Affinity Photo angled lines have stepping/ jaggies on edges?   
    Thanks for the tip.  Probably not worth it here, but I will remember that for future files that may require more of their integrity preserved.
    kirk
  13. Like
    kirkt reacted to lepr in Help! Advice needed on creating LUTs   
    This might save you some work: 
     
  14. Like
    kirkt got a reaction from Carajp in Help! Advice needed on creating LUTs   
You can also spend hours going down the rabbit hole of film sims by using Infer LUT in AP with the help of the RawTherapee film sim collection.  The collection is based on a HALD CLUT image (a synthetic image covering a wide range of colors, used as a reference) and the altered HALD images with each film sim applied to them.  You use them in AP by applying the Infer LUT operation just as you have done - the reference HALD CLUT image is loaded first and then the altered HALD film sim image is loaded.  AP infers the LUT and applies the film sim for that HALD image.
    https://rawpedia.rawtherapee.com/Film_Simulation
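For a sense of what a HALD-style reference contains, here is a sketch that enumerates an identity CLUT as a flat table of RGB samples (ignoring the 2D image layout a real HALD file uses) — the step count is an arbitrary example:

```python
import numpy as np

# Identity CLUT: every displayable color sampled on a regular grid.
# A film sim applied to this image changes the samples, and Infer LUT
# recovers the transform by comparing original to altered samples.
def identity_clut(steps):
    v = np.linspace(0.0, 1.0, steps)
    r, g, b = np.meshgrid(v, v, v, indexing="ij")
    return np.stack([r, g, b], axis=-1).reshape(-1, 3)

clut = identity_clut(4)   # 4 steps per axis -> 64 RGB samples
```

The difference between this table and its film-simmed counterpart is exactly the 3D LUT that Infer LUT reconstructs.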
    Have fun!
    Kirk 
  15. Like
    kirkt got a reaction from Carajp in Help! Advice needed on creating LUTs   
    Here is an online resource for altering LUTs, including changing their resolution:
    http://cameramanben.github.io/LUTCalc/LUTCalc/index.html
    Have fun!
    Kirk
  16. Like
    kirkt got a reaction from Carajp in Help! Advice needed on creating LUTs   
    You may be exporting your LUTs from AP at a very high resolution, which might be necessary if the LUT does some really wild mapping, but is usually not necessary for many milder applications.  In the Export LUT dialog, turn the LUT resolution ("Quality") down to something reasonable - try a smallish resolution first to see if it is sufficient to make your images look the way you want without causing banding or other artifacts.  If you need to increase the resolution to address artifacts, do it in reasonable increments until you see the artifacts go away.
    For example, I made a LUT in AP, based on a white balance and a curves adjustment, and exported it at 17x17x17 and again at 34x34x34.  The 17x LUT was about 135kB, the 34x LUT was about 1.1 MB, or about 2^3 or 8 times larger, as you would expect.  Think of the LUT as a cube of color - if you double each edge of the cube, its volume increases eight-fold (two cubed).
Also, remember that LUTs are not color space aware, so if you are using, for example, an sRGB document or working color space in AP as the basis for making the LUT, the transform contained in the LUT will only really be accurate (give you what you expect) for transforming images that are in the color space for which the LUT was constructed (i.e., sRGB).
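The size arithmetic above is just the cube law — a 3D LUT with n points per axis stores n³ entries, so doubling the edge grows the table eight-fold:

```python
# A 3D LUT samples the color cube at n points per axis.
def lut_entries(n):
    return n ** 3

# Doubling the resolution (17 -> 34) multiplies the entry count by 2**3.
ratio = lut_entries(34) / lut_entries(17)
```

This matches the observed file sizes: roughly 8x larger when going from 17x17x17 to 34x34x34.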
     
    Kirk

  17. Thanks
    kirkt got a reaction from InigoRotaetxe in Cant export ACES 1.2 OCIO what I see in 32-bit preview   
@d3d13 - What are you trying to accomplish specifically within AP?  Toward the end of your original post you mention trying to save your AP work to ACES [ACEScg presumably] but then you mention that you are trying to export to JPEG or PNG.  Are you trying to export a gamma-encoded version of the AP document to an 8-bit JPEG or PNG in ACES or in sRGB?
    See this video -
Take a deep breath and watch the entire thing before pointing me to the original post about not wanting to use Blender Filmic.  Even though the video is about using Blender Filmic LUTs, it spells out exactly how to generate correct output for a gamma-encoded format like JPEG or TIFF.
    Here is the TL/DR version:
    1) When you open a 32bit file into AP and have an OCIO configuration enabled in AP, you need to select "ICC Display Transform" in the 32bit Preview panel.  This is critical to getting your gamma-encoded, exported JPEG or TIFF to look correct.
    2) Once that is sorted out, you can use your OCIO adjustment layers to do whatever it is you are trying to do - remember that you are now manually overriding color management to a certain extent.  For example, transform your linear ACES data into sRGB to do texture work.
    3) Make your edits.
    4) Use an OCIO adjustment to bring the file from the transformed sRGB state back to linear.
    5) Export to a JPEG, or whatever you need to do.
The key to getting the "correct" low bit depth, gamma-encoded output is enabling the ICC Display Transform in the 32bit Preview panel, and keeping track of your OCIO transforms to make sure your data are transformed correctly for output.  The attached screenshot depicts a 32bit EXR opened in AP with the above workflow, and the exported JPEG of the composite 32bit document.  I used a Curves and a Levels adjustment AFTER the OCIO transform from linear ACES to sRGB (gamma-encoded data) and BEFORE the transform from sRGB back to linear ACES to manually tone map the bright, unbounded lighting of the ceiling lights back into the overall exposure range for the rest of the image.
    As Walt noted, there will be some differences between the two files and how they are displayed, because of the differences in bit depth and the preview accuracy (like the shadow tones in the 32bit file displayed on the left side of the screenshot).  But that is minimal compared to the differences when ICC Display Transform is not used throughout the process.
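The round trip in steps 1-5 can be sketched conceptually in NumPy, standing in a plain 2.2 gamma for the real OCIO sRGB/ACES transforms (the actual math differs — this only shows the shape of the workflow):

```python
import numpy as np

# Stand-ins for the OCIO adjustment layers described above.
def to_display(linear):          # linear -> display-referred (gamma encode)
    return np.clip(linear, 0, None) ** (1 / 2.2)

def to_linear(encoded):          # display-referred -> linear (gamma decode)
    return encoded ** 2.2

scene = np.array([0.05, 0.18, 2.5])   # unbounded linear scene values
encoded = to_display(scene)           # step 2: transform to sRGB-like space
edited = np.clip(encoded, 0.0, 1.0)   # step 3: e.g. tone-map the bright light
back = to_linear(edited)              # step 4: return to linear before export
```

Values inside the display range (like the 0.18 middle grey) survive the round trip unchanged; the unbounded highlight is brought back into range, which is the manual tone mapping described above.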
     
    Have fun!
    Kirk
     
     

  18. Like
    kirkt got a reaction from o4tuna in Affinity Photo: changing tabs into into layers   
    You can also export the images from Exposure as TIFFs, for example, and in AP choose "File > New Stack..." and then select all of the rendered TIFFs you want to composite.  They will be opened in a single AP document as individual layers in a stack.  If you do not want to work with them in a stack, you can just drag them out of the stack and they will become plain old layers.
    Kirk
  20. Like
    kirkt reacted to lacerto in AP – issues selecting or creating a hard pixel edge   
    (...)
  21. Like
    kirkt reacted to v_kyr in Affinity Photo Customer Beta (1.9.1.219)   
    According to your crash report it seems to crash at a state, where for a HTTP connection, it's trying to check whether a readable stream has data that can be read without blocking ...
    ... so all in all during an HTTP protocol connection.
  22. Like
    kirkt reacted to Patrick Connor in Affinity Photo Customer Beta (1.9.1.219)   
    The Publisher macOS beta 1.9.1.952 includes a fix for crashes on macOS 10.14 and earlier, so this makes sense. I think you will be able to run the next Photo beta, sorry for the inconvenience.
  23. Like
    kirkt reacted to AffinityAppMan in Creating tiles using Affine, especially of Leaves   
That’s a good tip. I am going to try these steps and see if they work with my workflow. It would be nice if the pattern layer would accommodate the affine feature automatically. That would make it more automatic, like in other software. I can see this pattern layer feature getting better as updates are made. It’s a step in the right direction.
  24. Like
    kirkt got a reaction from PaulEC in Creating tiles using Affine, especially of Leaves   
    The new pattern layer does not create a tileable pattern (using the affine hack) but it permits the user to use that tileable pattern non-destructively on its own dedicated layer, instead of creating a fill layer and using the texture on the fill layer.  I posted it here because it is a new feature that folks who make and use tileable patterns may want to try.
    If you need to make your pattern tileable (seamless), you can record a macro that will do the affine steps, do all the cloning that you need to blend the seams, then use a second macro to reset the affine transform with your new, seamless tile.  Once you have the seamless tile on its own pattern layer, you can duplicate that layer and transform it multiple times to add scale, offset and rotation variation to the texture you are creating and you can paint in the effect of an individual variant using regular old masking tools.  
    If you feel that using another piece of software to make your tile is worth the extra effort and adds value to your work, that might work for others as well.
    kirk
  25. Like
    kirkt reacted to Ron P. in Where are the plugins   
    Hi @8675309 (Jenny), Welcome to the forums,
    What app are you using, Photo, Designer, or Publisher? As far as I'm aware, none of the Affinity apps come with any plugins. Do you have like the Nik collection? I use the Google free version. If you have them, in Preferences>Photoshop Plugins, click on the Add button. Navigate to the folder where they're located on your system and select the folder. Close and restart the app, (photo in my case). Once restarted, they will show in the Filters>Plugins menu.