
kirkt

Members
  • Posts: 440
  • Joined
  • Last visited

Reputation Activity

  1. Like
    kirkt got a reaction from MEB in HDRI Neutralization method   
    @cgiout - How much control do you have over the scene you are photographing, and how long do you have to acquire all of the source images that you ultimately use to make your HDR composite?  The reason I ask is that, when you use a full 32-bit-per-channel workflow, you maintain the physical integrity of the lighting in your scene, making it easy to manipulate that lighting after capture.  However, to give yourself the most flexibility, you want to sample the scene one light at a time and then add all of the lighting together in post.  That is, let's say your scene has 3 lights in it.  Ideally, you would shoot the entire scene with each light illuminated individually, with all of the other lights off.  In post, you combine the HDR exposure sequence for each light into its own HDR file (32 bits per channel), bring each HDR file into a working document, and add the light sources together in your 32-bit-per-channel working environment.  In this scenario, you can add an exposure adjustment layer and a color filter adjustment layer clipped to each light layer and use these controls to change the intensity and color of that light's contribution to the scene.  This gives you the power to recolor each light and adjust its contribution to the scene as you see fit.  Not only can you neutralize the color temperature of each light, if that is what you want to accomplish, but you can add any color filter, completely relighting the scene with some look or mood.
    Essentially, you stack the light layers and set the blend mode of each one that is above the background layer to "Add" - because you are working in a 32 bit per channel document, the light will add linearly, just as it does in "real life."  
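    For anyone who wants to see the arithmetic behind that layer stack, here is a minimal sketch of the same additive mix done outside the app, assuming each light's merged HDR is already loaded as a linear float32 numpy array (the loading step, the array names and the gel values are placeholders I made up for illustration):
    import numpy as np

    def mix_lights(lights, exposures, gels):
        # lights    : list of (H, W, 3) float32 arrays, linear light, one per lamp
        # exposures : exposure offsets in stops (0.0 = as captured)
        # gels      : (r, g, b) multipliers acting like physical color filters
        scene = np.zeros_like(lights[0])
        for img, ev, gel in zip(lights, exposures, gels):
            # An exposure change of ev stops is a gain of 2**ev in linear light;
            # the gel scales each channel, just like the color filter layer.
            scene += img * (2.0 ** ev) * np.asarray(gel, dtype=np.float32)
        return scene  # still linear 32-bit; tone map or export to EXR afterwards

    # e.g. pull one lamp down half a stop and cool it slightly:
    # scene = mix_lights([lamp_a, lamp_b, lamp_c],
    #                    exposures=[0.0, -0.5, 0.0],
    #                    gels=[(1.0, 1.0, 1.0), (0.8, 0.9, 1.0), (1.0, 1.0, 1.0)])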
    Attached are a few images of the process based on an example I wrote up several years ago, but it is no different now (the example is in Photoshop, but it is the same in AP).
    The first three images show the three light sources in the scene, each one illuminating the scene without the other two.  An HDR sequence was shot for each light.  A color checker card is included near the light source that is being imaged.  The color checker can also be cloned out or the image sequence can be shot again without the color checker.
    Next, the layer stack that is constructed to mix the lighting - note that each image of a light has a color filter ("Gel") and an Exposure control to modulate the properties of the light.  It is like having a graphic equalizer for the scene!  Also note the Master Exposure control at the top of the stack, giving you control over the overall intensity of the scene (you could add a master color filter layer too).
    The next image demonstrates how a local white balance for one of the lamps is accomplished to bring its CCT (correlated color temperature) into line with the other lamps in the scene.  In this scene, two of the lamps were LEDs with a daylight CCT, and one lamp was a tungsten filament light bulb with a much warmer CCT.  I balanced the warmer lamp to bring its color into line with the other LED lamps by adjusting the warmer lamp's color filter layer.
    Finally, the rendered results for a "literal" tone mapping of the scene, and then a moody, funky relighting of the scene using the exposure and color filter layers for each image.  Note that the scene is rendered "correctly" even when you make large and extreme changes to the lighting, because you spent the time to capture each light's contribution to the scene (for example, the mixing of colors and reflections within the scene).  You can also add ambient lighting to the scene by acquiring a separate HDR sequence taken in that ambient lighting condition (daylight from outside, for example) and mixing that into the scene as well.  You just need to keep your tripod setup locked down within the scene and wait for the ambient lighting conditions you want.  For example, set up your tripod and camera and shoot the ambient scene during the time of day you want (or several different times of day), and then shoot the individual lamps in the scene at night, when there is no ambient light in the scene.
    This process takes a lot of time to sort out and acquire the image sequences, but it gives you an incredible amount of data to work with when compiling your HDR image.  It sounds like you are also acquiring spherical panoramic HDRs for image-based lighting - the process is no different, but it will take time and diligent management of the workflow.  You can mix your scene in 32 bits per channel and then export a 32-bit-per-channel flattened EXR to use for your CGI rendering.
     
    Have fun!
    Kirk





  2. Thanks
    kirkt got a reaction from Minus44 in Feature Request - Alpha Channels (AGAIN!)   
    You're welcome.  It has taken me a while to get my brain around how channels in AP work and redo a lot of muscle memory from using PS for decades.  I still use both applications, and I understand how you feel!
    kirk
  3. Like
    kirkt got a reaction from larsh in Color question   
    As @anon2 states, the colors measure the same, if one takes the screenshot image and places color sampler points on both of the images.  This is how color management should work.  If you intend to print the images that you lay out (with Blurb, etc.) then soft-proofing and checking the gamut of your images with an ICC profile for the print device is also a tool you can use to ensure that your printed output resembles your images on-screen within the limits of the print device.
    You might want to unify your viewing conditions across your applications (make the background color of the workspace the same) and see if this alleviates the change in perception of the image brightness when switching between applications.
    Kirk
  4. Like
    kirkt reacted to masevein in E-Paper Floyd-Steinberg 7 colors   
    Wow, thank you all! I tried ImageMagick and it worked great at 80% dither - thank you, Kirk! I can’t believe that GIMP has this option and not Affinity Photo.
  5. Like
    kirkt reacted to Emile Spaanbroek in Soft Proofing not working with custom ICC profile   
    Thanks for the URL of the printer test images. I downloaded them, and when I leave the AdobeRGB profile assigned and add a soft proof layer with my printer profile, I do indeed get about 40% OOG. But when I assign the profile created by the X-Rite ColorChecker camera calibration software, all of a sudden the soft proof layer seems to be inactive (or all colors are in gamut, which I doubt).
     
    When I do the same in GIMP I see that many colors are still OOG; in Affinity I see none. So, the problem only appears when assigning color profiles from the X-Rite ColorChecker camera calibration software to a document in Affinity. Then the soft proof layer seems inactive and I cannot soft proof for prints at all.
  6. Like
    kirkt reacted to lacerto in E-Paper Floyd-Steinberg 7 colors   
    (...)
  7. Like
    kirkt got a reaction from Paul Mc in E-Paper Floyd-Steinberg 7 colors   
    The seven RGB colors in the color table are:
    (R,G,B)
    0,0,0
    0,0,255
    255,0,0
    0,255,0
    255,128,0
    255,255,0
    255,255,255
     
    You can perform Floyd-Steinberg dithering in ImageMagick using the following command:
    convert original_image.png -dither FloydSteinberg -remap colortable.gif dithered_image.gif
    where "original_image.png" is the full-color image you want to dither, "dithered_image.gif" is the dithered output of the operation, and "colortable.gif" is a GIF image that contains 7 patches of color with the colors listed above - it instructs the algorithm to reduce the original image to a dithered one using only those colors.  I have attached a GIF (4 px wide by 28 px tall) of these 7 colors for your use.
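    If you would rather build the color-table image yourself than download the attachment, a few lines of Python with Pillow will do it (Pillow is my assumption here - the ImageMagick command above does not care how the GIF was created):
    from PIL import Image

    # The seven display colors listed above.
    PALETTE = [
        (0, 0, 0), (0, 0, 255), (255, 0, 0), (0, 255, 0),
        (255, 128, 0), (255, 255, 0), (255, 255, 255),
    ]

    # Build a 7x1 palette-mode image whose pixels are the indices 0..6,
    # so the GIF stores exactly these colors and nothing else.
    img = Image.new("P", (len(PALETTE), 1))
    img.putpalette([c for rgb in PALETTE for c in rgb])
    img.putdata(list(range(len(PALETTE))))
    img.save("colortable.gif")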
    I will say that the results appear more "dithered" than the same results in Photoshop with the n-color ACT color table in the link you provided.  Simply reducing the original image to 7 colors does not perform dithering, and dithering is an inherent part of the process - it maintains the overall appearance of image brightness in the output.  Even if you try to "Posterize" the image to 7 levels, there will be more than 7 colors in the resulting posterized output.
    See more here:
    https://legacy.imagemagick.org/Usage/quantize/#dither
    in the section entitled "Dithering using pre-defined color maps."
     
    Kirk


  8. Like
    kirkt reacted to thecompu in Struggling to get this images at a smaller file size   
    Hey all!
    This is perfect information. I was able to make it work with the information each of you provided. Thanks so much!!
    Cheers,
    m
  9. Thanks
    kirkt got a reaction from thecompu in Struggling to get this images at a smaller file size   
    At full res (3000x3000 pixels) and 50% JPEG export compression in AP, the resulting file size is 436kB.  The image looks pretty much identical to the original when opened side-by-side.  You could also try running the original through some noise reduction or median filtering to smooth larger areas of tone, so that the compression algorithm is not trying to preserve small, noise-driven variations in these large, uniform areas.  If you apply the Dust and Scratches filter at a radius of about 3-4px and compress with JPEG at 50%, the resulting file size is 276kB.
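    If you want to batch this outside AP, the same idea can be scripted - the sketch below uses Pillow's median filter as a stand-in for Dust and Scratches (the file names are placeholders, and Pillow's quality scale will not match AP's percentage exactly):
    from PIL import Image, ImageFilter

    img = Image.open("original.png").convert("RGB")

    # Smooth small, noisy variations in large areas of tone so the JPEG
    # encoder does not waste bits trying to preserve them.
    smoothed = img.filter(ImageFilter.MedianFilter(size=5))

    # Export at a moderate quality setting, roughly comparable to 50% in AP.
    smoothed.save("smaller.jpg", quality=50, optimize=True)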
    Kirk
  10. Thanks
    kirkt got a reaction from Mark Oehlschlager in Affinity Photo Customer Beta (1.9.0.199)   
    @R C-R The Spare Channel is blurred, but it is not doing anything in the document until you use it for something.  For example - after you blur it and make the background layer active, right-click on the Spare Channel and select "Load to Background <select any channel>" - this will load the now blurred Spare Channel into the Background layer's channel of your choosing, where you will see the effect.  You can also make a new pixel layer, for example, and load the Spare Channel into the R, G and B channels of the new pixel layer.  Now you have a grayscale layer that you can rasterize to a mask.  You can also make a new Curves adjustment layer, for example, and load the Spare Channel into the Alpha channel of the Curves adjustment layer, masking the Curves layer.
    kirk
  11. Thanks
    kirkt got a reaction from Mark Oehlschlager in Affinity Photo Customer Beta (1.9.0.199)   
    Right now, it appears that you can apply destructive Filters to the spare channel (the pixel operations in the Filters menu) - because it is not a layer, you cannot apply adjustment layers or live filter layers to the Spare Channel.  Unlike PS (Image > Adjustments ...), AP does not have a destructive Levels or Curves adjustment that one could use in such a case, i.e., when the narrator in the above video instructs the viewer to "... open the Levels window with CTRL-L...."
    Also - remember that once you create the mask that will make the graphic look weathered (the mask from the high-contrast channel of the rusty background image), you can adjust that mask with a Levels adjustment layer.  That is, you use Create Spare Channel to extract the high-contrast channel from the rusty image*.  You use whatever technique you like to make that into a pixel layer and rasterize it to a mask.  Then apply that mask to the graphic you want to make look like it is scratched, etc.  Then apply a Levels adjustment to the mask and select the Alpha channel from the dropdown menu in the Levels dialog window.  In other words, you are adjusting the contrast of the mask once it is in place, and you are doing it non-destructively as well.
     
    Kirk
     
    *You can create a pixel layer from channels in an active layer using Apply Image... and Equations.  Let's say you wanted to make a pixel layer out of the red channel in the rusty image.  Make a duplicate of the rusty image layer and make it the active layer.  Select "Use current layer as source", enable Equations (RGB) and use the equations: DR = SR, DG = SR, DB = SR.  This will result in a grayscale image that replaces the duplicate rusty image - it will be a grayscale representation of the red channel because all three "Destination" color channels have been set to the original "Source" red channel.  In the equations, "D" represents Destination and "S" represents Source.
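    For readers who think in code, the channel copy those equations describe is easy to express - a quick numpy sketch (numpy is my own choice of illustration, not part of AP):
    import numpy as np

    def red_channel_as_gray(img):
        # img is an (H, W, 3) RGB array.  DR = SR, DG = SR, DB = SR simply
        # copies the source red channel into all three destination channels,
        # producing a grayscale rendering of the red channel.
        red = img[..., 0]
        return np.stack([red, red, red], axis=-1)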
  12. Like
    kirkt reacted to Old Bruce in Affinity Photo Customer Beta (1.9.0.199)   
    Not sure if this is what you want - we have to apply the edited spare channel to something. You have a Pixel layer and choose the Red channel to make a Spare Channel. Click on the thumbnail of the Spare Channel, blur it and apply the blur. Click on the Pixel layer to select it, then right-click on the Spare Channel and choose Apply Spare Channel to Pixel Red.

    This is after editing the (obscured by the dropdown menu) Spare Channel.
  13. Like
    kirkt reacted to R C-R in Affinity Photo Customer Beta (1.9.0.199)   
    Thanks @kirkt & @Old Bruce for the help with this.
    For some reason none of the load options were doing anything until I applied the 'IT Crowd fix' (quitting & relaunching the AP beta). Now it works as I assume it should. 👍
  14. Thanks
    kirkt got a reaction from R C-R in Affinity Photo Customer Beta (1.9.0.199)   
    @R C-R The Spare Channel is blurred, but it is not doing anything in the document until you use it for something.  For example - after you blur it and make the background layer active, right-click on the Spare Channel and select "Load to Background <select any channel>" - this will load the now blurred Spare Channel into the Background layer's channel of your choosing, where you will see the effect.  You can also make a new pixel layer, for example, and load the Spare Channel into the R, G and B channels of the new pixel layer.  Now you have a grayscale layer that you can rasterize to a mask.  You can also make a new Curves adjustment layer, for example, and load the Spare Channel into the Alpha channel of the Curves adjustment layer, masking the Curves layer.
    kirk
  15. Like
    kirkt reacted to NY32 in Problem using a macro to scale a photo   
    I'm sorry - I just found the "Change DPI" macro created by John. I downloaded it and it works; that's exactly what I need.
    And OK, now I can use a batch.
    Thank you, John and Kirk - this was a big problem for me (the other was "macro with export", but I know that is not possible yet) and now I'm fine.
    Really, thank you to both of you!
  16. Like
    kirkt got a reaction from srg in Unsharp mask and line of bright pixels   
    Has this object that has the bright outline after USM been cut out of an image and placed against a new background?  The bright pixels come from the contrast at the edge of the object being accentuated (that is what USM does).  If you cut the object out of another image (one that had a bright background, for example) and there are some stray pixels at the edge that are much different from the new background, that difference will be enhanced by the USM operation.  If this is the case, then you need to refine the edge of the cut-out before comping onto the new background and applying USM.  A tool like Filters > Color > Remove White Matte might help get rid of that edge of pixels before compositing the object onto the new background.
    Otherwise, you can apply USM as a live filter and mask the adjustment to affect just the object - the live filter has a mask by default that is all white.  Invert that mask (to all black) and then paint white onto the mask to reveal the USM effect just where you want it (the object).
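    For reference, the edge emphasis described above is just the textbook unsharp-mask operation - a rough sketch, not AP's exact implementation (scipy is assumed purely for the blur):
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(img, radius=2.0, amount=1.0):
        # img: (H, W, 3) float array in [0, 1].  USM adds back the difference
        # between the image and a blurred copy, so any abrupt edge - including
        # stray bright pixels left over from a cut-out - gets pushed brighter.
        blurred = gaussian_filter(img, sigma=(radius, radius, 0))
        return np.clip(img + amount * (img - blurred), 0.0, 1.0)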
    Kirk
  17. Like
    kirkt got a reaction from angelhdz12 in [MISSING FEATURE] Flip Vertical/Horizontal Icon Affinity Photo   
    Both versions of "Flip" (Document and Arrange) are available for assignment to keyboard shortcuts.  One less mouse movement and button press.
    Kirk

  18. Like
    kirkt got a reaction from Chris B in Affinity Photo Customer Beta (1.9.0.196)   
    Ok - makes sense.  Thanks!
    kirk
  19. Thanks
    kirkt got a reaction from Chris B in Affinity Photo Customer Beta (1.9.0.196)   
    You all have been busy!
    beta v 1.9.0.196 - the menu entry for the CPU/GPU benchmark is labeled "Support...", as is the actual link to the support web page.  The first (top) menu entry is the benchmark.
    kirk
     

  20. Like
    kirkt reacted to Andy Somerfield in Affinity Photo Customer Beta (1.9.0.196)   
    Status: Beta
    Purpose: Features, Improvements, Fixes
    Requirements: Purchased Affinity Photo
    Mac App Store: Not submitted
    Download ZIP: Download
    Auto-update: Available
     
    Hello,
    We are pleased to announce the immediate availability of the second build of Affinity Photo 1.9.0 for macOS.
    If this is your first time using a customer beta of an Affinity app, it’s worth noting that the beta will install as a separate app - alongside your store version. They will not interfere with each other at all and you can continue to use the store version for critical work without worry.
    This beta is significantly different from the 1.8.4 version available for purchase - we strongly recommend that you do not use this beta for real work as data could be lost and the files you save are not guaranteed to open in previous / future versions of Affinity Photo.
    Furthermore, massive behind the scenes work has been done to enable GPU acceleration for the Windows version of Affinity Photo in 1.9. In theory this should have no consequences for macOS / iOS users - but it’s likely that a couple of things will have become broken along the way. We are giving ourselves a longer beta period with 1.9 in order to find and fix those things.
    This also means that the full complement of new features is not available yet in this 1.9 beta - more will be added over the coming weeks.
    Thanks again for your continued support!
     
    Affinity Photo Team  
     
    Changes Since 1.9.0.195
     
    - Text on a path is now available in Photo.
    - More filters now work on masks / adjustments / spare channels (Add Noise, Perlin Noise, etc.).
    - Improved Metal rendering performance (over 195).
    - Improved experience when editing a spare channel (view artefacts, histogram and navigator).
    - Setting a blend mode on a sub-brush will now work properly - paving the way for import of PSD dual-brushes in a future build.
    - Attempted to fix startup crash on machines with no compatible Metal GPU.
    - Fixed localisation issues affecting non-UK users.
    - Fixed issues when attempting to add layers to a spare channel (!).
     
    Cumulative Changes Since 1.8.4

    - Improved “Serif Labs” RAW engine.
    - You can now single click a spare channel in the Channels panel to edit it like a layer.
    - The Curves adjustment now has numeric field controls for precise positioning.
    - Studio presets are now available.
    - A benchmarking tool has been added in the help menu. It will only become available when no documents are open.
    - Blend modes now work on “alpha only” layers (masks, adjustments, live filters, etc.).
    - Added a new “Divide” blend mode.
    - Added “Create from centre” for the elliptical marquee tool.
    - Allowed snapping to the bounds of the pixel selection.
    - Added ability to create brushes from the current pixel selection with one click.
     
    To be notified about all future macOS beta updates, please follow this notification thread 
    To be notified when this Photo update comes out of beta and is fully released to all Affinity Photo on macOS customers, please follow this thread
  21. Like
    kirkt got a reaction from fde101 in Affinity Photo Customer Beta (1.9.0.196)   
    You all have been busy!
    beta v 1.9.0.196 - the menu entry for the CPU/GPU benchmark is labeled "Support...", as is the actual link to the support web page.  The first (top) menu entry is the benchmark.
    kirk
     

  22. Thanks
    kirkt got a reaction from Dan C in am not able to open CR3 in affinity Mac   
    I just opened a Canon RP .CR3 file (a sample from imaging-resource.com) in v 1.8.4 on my MacBook Pro, using both the Apple Core Image Raw library and the Serif raw library.  Running OS 10.15.6.
    Kirk
  23. Like
    kirkt reacted to MEB in How do I paste into layer mask?   
    Hi Lorox, Haitch,
    If you want to create a luminosity mask from a pixel layer, press cmd+alt (macOS) and click the pixel image thumbnail in the Layers panel, then press the Mask Layer button at the bottom of the Layers panel. The same applies if you want to create a mask from an existing selection - just click the Mask Layer button at the bottom of the Layers panel (you can also press the Refine button in the context toolbar, with one of the selection tools selected, if you need to refine the selection first). To control/edit a mask using a Curves or Levels adjustment, nest it to the mask layer (by dragging it over the thumbnail of the mask layer), then change the Channel to Alpha in the respective Curves/Levels adjustment dialog.
  24. Like
    kirkt got a reaction from Alfred in How do I paste into layer mask?   
    @Haitch
    You can edit masks as if they were pixel layers, but it takes a couple of extra steps, unfortunately.  That's how life works sometimes.
    When you create a mask, the Channels panel will display the thumbnail of the Mask Alpha if you select the mask in the Layers panel.  Just right-click on the Mask Alpha channel and select "Create Grayscale Layer" from the contextual menu that pops up.  A new grayscale pixel layer will be created in the Layers panel that you can adjust with all of the image editing tools, adjustment layers, etc. that can be applied to any pixel layer.  Once you are satisfied with your adjustments to the pixel layer that you want to use as a mask, right-click on it in the Layers panel, select "Rasterize to Mask", and then nest that converted mask layer into the layer that you intend to mask (you will have to delete the previous mask that was nested in that layer).  If you need to adjust the mask further, just repeat the process.
    So, in your example where you would like to use the background image as a starting point for a mask - 
    0) Create an adjustment layer above the background - make it something obvious, like an HSL layer with settings that make the effect obvious when applied to the background (like Saturation Shift set to -100) - hide the layer for now (turn it off so its effect is not visible).  This layer is just a test layer so that you can apply your mask to it and see the results as you edit the mask.
    1) Duplicate the background layer.
    2) Right-click on the duplicate layer and select "Rasterize to Mask" - nest the newly created mask into the HSL layer and make the HSL layer visible to see its effect, modulated by the luminosity mask created from the background image.  In this example, if you set the HSL Shift adjustment layer Saturation Shift to -100, then areas in the original image that are shadows will retain their original saturation (i.e., the mask is black and the HSL adjustment has no effect) and highlight areas will be completely desaturated (the mask is white and the HSL adjustment has 100% effect).
    3) At this point, if all you need to do is paint on the mask with a regular paint brush, then Opt-click (Mac) or Alt-click (PC) on the Mask layer and the grayscale mask will appear and be available for painting.  I have found that the brush only paints in "Normal" blend mode (that is, you cannot do things like define edges using Overlay blend mode with the paint brush while painting on a mask).
    4) If you need to make adjustments, for example adding contrast or choking the mask with a Levels adjustment, then, with the Mask layer selected in the Layers panel, go to the Channels panel, right-click on the Mask Alpha channel and select "Create Grayscale Layer" - this will create a new grayscale pixel layer of the mask that you can edit with all of the AP pixel editing tools and adjustment layers.
    5) When you have finished your editing, right-click on the grayscale pixel layer that you want to transform into a mask and select "Rasterize to Mask" - this step will rasterize all of the adjustments (adjustment layers, etc.) that you used to edit the pixel layer and convert the pixel layer to a mask layer.  Then drag the new mask layer and nest it into the HSL layer, and delete the previous mask.
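    To put numbers on what step 2 describes, this is the modulation a luminosity mask performs - a conceptual numpy sketch only, not how AP implements it internally:
    import numpy as np

    def apply_with_luminosity_mask(original, adjusted):
        # original, adjusted: (H, W, 3) float arrays in [0, 1].
        # Using the image's own luminance as the mask: dark areas (mask near 0)
        # keep the original, bright areas (mask near 1) take the adjustment.
        mask = original @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luminance
        return mask[..., None] * adjusted + (1.0 - mask[..., None]) * original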
    Repeat this cycle as needed.  One thing to be aware of - adjustment layers have their own mask already built into the adjustment layer.  If, in the above example, you click on the HSL Shift adjustment layer, you will see a channel called HSL Shift Adjustment Layer.  This alpha layer is different from the mask alpha layer that you added to the HSL layer.  You can right-click on this one too and make it a grayscale layer if you choose, giving you two masks to manipulate without having to use the PS hack of creating a group out of a single layer to add an additional mask.
    The process is a little kludgey compared to Photoshop's masking approach, but it really does not take that much more effort once you understand how to access and work with masks in AP.  As a side benefit, it preserves the previous mask (in the Mask Alpha that you used to create the new grayscale layer that you are currently editing) in case you need to start over from the previous mask.
    AP is different than PS and there are probably reasons why the designers implemented masking this way - maybe you will find that AP's method for masking will create new opportunities for your workflow.  I hope this explanation is helpful.
    Kirk
  25. Like
    kirkt got a reaction from Alfred in Refining a colour change on a car   
    You might get a better result converting your black and white image to Lab and then painting the brown body color of the car onto the a and b channels directly (they will be shades of gray in the a and b channels, but will combine to make the brown color in the composite).  You can make a color patch in the document with the correct brown color and then, when you view the a channel for example, sample the patch with the dropper tool to get the grayscale tone present in that brown and paint it into the body area of the car.  To make the job easier, create a mask on the brown body-paint layer that isolates the painting to just the bodywork, so that you do not have to be too careful with your painting.  You can even just fill the a and b channels of the layer with the correct gray tone and then apply the mask.
    The idea is that the detail in the black and white image is pushed into the L channel, and the color is then applied to the a and b channels to get the composite look you want.
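    For anyone curious about the channel arithmetic outside the app, here is a rough scikit-image sketch of the same idea (the file names, the mask and the brown value are all placeholders, and it assumes the source is saved as an RGB file):
    import numpy as np
    from skimage import color, io

    bw = io.imread("car_bw.png")[..., :3]                   # B&W source, stored as RGB
    mask = io.imread("body_mask.png", as_gray=True) > 0.5   # white = bodywork to recolor

    lab = color.rgb2lab(bw)          # L keeps all of the detail from the B&W image
    brown = color.rgb2lab(np.array([[[0.45, 0.30, 0.18]]]))[0, 0]

    # Fill a and b inside the mask with the target color's a/b values;
    # L is untouched, so shading and detail are preserved.
    lab[..., 1][mask] = brown[1]
    lab[..., 2][mask] = brown[2]

    rgb = np.clip(color.lab2rgb(lab), 0.0, 1.0)
    io.imsave("car_brown.png", (rgb * 255).astype(np.uint8))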
    Kirk