Everything posted by kirkt

  1. To bring up the 32-bit color chooser, double click on the circular color patch in the chooser to bring up the dialog in the image I posted above. Kirk
  2. I responded in the thread you linked to in your above post. Kirk
  3. To experiment with this feature of the Curves adjustment, try the following: make a new 32-bit document. This gives you an unbounded (not constrained between 0 and 1) working document. Now use the rectangle tool, or whatever, to make boxes, and use the color chooser to give the boxes various exposures. I have made an example for you; you can download the .afphoto file here: https://www.dropbox.com/s/8l13orpca6xr8x1/Unbounded.afphoto?dl=0

The example file has a large rectangle at 0 EV, four rectangles below 0 EV (-1, -2, -4, -8) and four above 0 EV (+1, +2, +4, +8). In 32-bit color, you can express these values as "intensity" (like log photographic exposure, or stops) or as floating-point (FP) values. Normally, a bounded working document has all values between 0 (black) and 1 (white); an unbounded document can have any intensity values, presumably even negative ones. The 32-bit document is linear, so doubling the FP value doubles the intensity, equivalent to an increase of one photographic stop, or EV. The Color Chooser in AP takes a special form in a 32-bit document, permitting you to choose "color" (RGB values) and intensity (in stops); you can also dial in both using FP values. The relationship between EV and FP values is: FP value = 2^(EV).

To visualize values beyond the display range, you can use the 32-bit preview tool (Studio > 32bit Preview). Slide the Exposure slider to adjust the range of values displayed in your 32-bit document. You will see that the current document shows you the rectangles in the range between 0 and 1; dial in negative exposure and the upper +EV rectangles will appear. In the document I provided, there is a Curves adjustment layer at the top of the layer stack. You can use the min/max input values to specify the range of FP values that form the lower and upper bounds of your Curves adjustment.

Try setting the max input to 0.25 and pulling the white-point node down to the horizontal axis (0 output). Notice that only the squares with an FP value <= 0.25 are affected, and so on. This way you have a true 32-bit Curves adjustment in which you control the range of input values that get affected. Totally cool!
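The EV-to-FP relationship above is easy to verify numerically. Here is a minimal sketch (plain Python, function names are mine) that reproduces the FP values of the rectangles in the example file:

```python
import math

def ev_to_fp(ev):
    """Convert photographic stops (EV) to a linear floating-point value."""
    return 2.0 ** ev

def fp_to_ev(fp):
    """Convert a linear floating-point value back to stops."""
    return math.log2(fp)

# The rectangles in the example document, from -8 EV to +8 EV:
for ev in (-8, -4, -2, -1, 0, 1, 2, 4, 8):
    print(f"{ev:+d} EV -> FP {ev_to_fp(ev)}")  # e.g. +1 EV -> FP 2.0
```

So the -8 EV square sits at FP 0.00390625 and the +8 EV square at FP 256.0, which is exactly why you need the 32-bit preview tool (or the Curves min/max inputs) to reach values outside the displayed 0 to 1 range.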
  4. Yes, you can scale the effect in the equation with a constant - you could even make one of the three parameters (a, b or c) the scaling factor and expose that also, allowing a user driven range for the slider max! I sort of forgot about this thread I was so busy last week. Glad I stumbled back into it! kirk
  5. A 16 and 32-bit color readout option in the info panel would be terrific and work hand-in-hand with the unbounded input min and max feature in the Curves adjustment. kirk
  6. I'm glad you are able to edit it! Yes, this implementation of Macros will make for some interesting extensibility of AP. kirk
  7. No problem. I probably should have removed those things, but this way the user, at least at this stage in the development, is cued to recall the default value, in case they forget and want to go back to it. Can you edit the macro? If so, you can click on the gear icon next to each control I exposed to bring up the rigging controls. Click on the eye to un-expose the control, then click on it again to re-expose it; this should give you the opportunity to rename the control that appears in the dialog. This way you can customize the control names. Have fun! kirk
  8. Note that I have found that the default values for the Gaussian Blur live filter change automatically with the size of the image when you first run the macro. That is, if you load a large (say, full-res dSLR) image, the blur radius is larger than if you load a smaller-resolution image. You can always change the blur radius during the macro dialog or after the fact, as the GB layers are live filter layers. Also, when working on an image, I would suggest working at the final export size. Play around with getting the look you want, then change the document size to see how that affects the grain. You can experiment with stamping the stack prior to resizing; this creates a pixel layer with the look burned in, and the grain will get resized when the image size changes. You can also leave everything live, resize, and then adjust the GB layers to re-establish the look at the smaller size, then export. I found this approach worked better. kirk
  9. A couple of renderings. Fine/sharp grain:
  10. Some images: Working Dialog (it's a little bit busy at the moment):
  11. Here is another, similar approach using Distort > Equations to generate noise and then applying it. I broke the grain into Shadow-Midtone grain and Highlight grain to control the look of the grain application; this is governed by the user-selected blend modes. I cannot figure out how to select and group multiple layers while recording a macro, so the resulting layer stack is not as tidy as it could be. The macro:

1) Applies a B+W conversion adjustment layer. This is editable after the macro runs, so you can adjust the color contributions to taste.
2) Stamps the B+W conversion and applies Equations to make a grain layer. I have exposed the parameter "Grain Control" to give the user access to the random seed in the noise function.
3) Duplicates the noise layer so that the Shadow-Mid and Highlight grain have the same noise pattern.
4) Applies a Gaussian Blur to each noise layer to control the sharpness/fineness or softness/coarseness of the grain. These are live filter layers, so they are editable after the macro runs.
5) Adds a Curves layer with a gentle contrast "S" curve to re-establish contrast; this is editable after the macro runs.
6) Adds a Levels adjustment layer for the user to fine-tune the black point, white point and gamma of the resulting image.

In general, applying the grain layers will affect the tonal values of the underlying black-and-white conversion, so adjustment of tone and contrast will be required after the macro runs. I suggest first accepting the default values and playing with the blend modes to see how they affect the rendering of the grain layers. I would also suggest grouping the two grain layers and turning their opacity down. This can be done in the dialog while the macro is running, but you must adjust each layer individually; adjust each to taste in the dialog, and then you can group the two layers and turn the effect down to taste by adjusting the group opacity.

The black-and-white conversion in the macro is simply the default conversion with all color sliders set to their zero position. This conversion should probably be adjusted after the macro runs to get the best result. Thanks to paolo for encouraging me to try to understand the Equations interface with this interesting application. Please feel free to download the macro, edit it, etc. I am still learning all of this, so the implementation is weak and the equations are not very sophisticated. kirk Dropbox link to macro file: https://db.tt/pNOXLuvXmy
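The layer math the macro performs can be sketched outside AP as well. This is only a rough stand-in (NumPy, a simple additive grain instead of AP's blend modes, and a 3x3 box blur instead of the Gaussian Blur live filter), not the macro itself:

```python
import numpy as np

rng = np.random.default_rng(42)          # the "Grain Control" random seed

# Hypothetical 8x8 RGB working image with values in [0, 1]
img = rng.random((8, 8, 3))

# 1) Black & white conversion (simple luminance weights as a stand-in)
gray = img @ np.array([0.299, 0.587, 0.114])

# 2) Noise layer for the grain, centered on zero
noise = rng.uniform(-0.5, 0.5, gray.shape)

# 3) Soften the grain: a 3x3 box blur standing in for the Gaussian Blur
#    live filter (a larger blur radius gives coarser, softer grain)
pad = np.pad(noise, 1, mode="edge")
blurred = sum(pad[i:i + 8, j:j + 8] for i in range(3) for j in range(3)) / 9.0

# 4) Apply the grain at reduced opacity (in AP the chosen blend mode
#    replaces this simple additive blend), then re-clip to [0, 1]
opacity = 0.3
grained = np.clip(gray + opacity * blurred, 0.0, 1.0)
```

Turning the group opacity down in AP corresponds to lowering `opacity` here, which is why the grain effect scales smoothly to taste.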
  12. If possible, it would be terrific if the "Gamut Check" feature in the Soft Proof adjustment layer provided a false-color overlay/map that indicated how far out of gamut (OOG) the pixels are, say in deltaE, instead of a gray overlay that does not indicate how far OOG each flagged pixel actually is. Even better would be a preference to map the false-color overlay to different dE values; that way, the user could specify the dE levels discriminated in the OOG overlay. For example: green for dE < 2, yellow for dE between 2 and 6, red for dE > 6, or something like that. This would be useful in assessing OOG colors for print, for example, and what adjustments would be necessary to bring the current document into gamut for the intended output space. Sort of like the highlight and shadow clip warnings in a typical raw converter. I can envision making a saturation adjustment or a Lab curves adjustment and watching the OOG indicator map change from lots of red and yellow to a little bit of yellow, and calling that adjustment sufficient for the conversion into the destination color space. ColorThink (http://www.chromix.com/colorthink/pro/pro_worksheet?-session=SessID:47E61C221d6c420F6BuomWB1C0FC) provides this kind of gamut map, but it is an expensive professional color tool that requires a lot of additional knowledge to use, and it is not embedded in an image-processing application. The attached example shows a particular blue in AdobeRGB and how it would map to an Epson printer profile, with the "gamut map" showing how far OOG some of the blue areas are. Thanks! Kirk Thibault Berwyn, PA
  13. OpenEXR and HDR (Radiance) are the two 32bit file formats available for export/save in AP. kirk
  14. I sort of figured this was covered before, but a search for LUT and Infer LUT came back with nothing. Sorry about that. Thanks for the tip! kirk
  15. Hi folks, There are several different ways to achieve looks in your image-editing workflow, and one useful one is the LUT (Look Up Table). Affinity Photo has an adjustment layer called "LUT" which permits the user to include a LUT in their layer stack, giving non-destructive control over the image look through a color LUT. The typical use of the LUT adjustment layer is to load a pre-made LUT (in various formats, like .cube, etc.) and season to taste. The AP LUT adjustment layer also provides an interesting second method, the "Infer LUT" mode. As the name suggests, Infer LUT asks the user to load two images: the first is sometimes referred to as the "Identity" image, the unmodified reference image; the second is the modified image, typically the Identity image put through some sort of processing to achieve the final, transformed look.

One handy tool for storing inferred LUTs is the HALD LUT identity image; here is a brief explanation of the HALD: http://www.quelsolaar.com/technology/clut.html It is essentially a compact image composed of color patterns that can be visualized as a 3D color matrix, and it is usually stored in a lossless file format, like PNG. When you apply edits to the identity HALD image, the colors in the image change accordingly, and the edits are "recorded." When you compare the original identity HALD image to the edited HALD image, the color transforms can be inferred and, voila, you have a LUT! It is efficient and compact, so you can get good resolution of the LUT transform in a small package. AP can use the HALD identity image and a transformed version of that image to infer a LUT.

Why is this cool? Because you can use any piece of software you like to create a look on a working image (any image); once you have the look you want, you insert the HALD image into the workflow, apply the exact same transform, and save the resulting, edited HALD image. You can build a library of the identity HALD plus all of your edited HALD images, then load the identity and the edited HALD image with the look you want into the AP LUT adjustment layer and POW! You can apply your look via the Infer LUT mode. This is a nifty way to export all of your presets from something like Lightroom, where you paid $$$ for the VSCO film pack presets, for example, into a usable HALD LUT.

Because it's fun, I've included a link here: http://blog.patdavid.net/2015/03/film-emulation-in-rawtherapee.html which describes film-emulation looks using this same technique, along with a bunch of freely downloadable HALD LUTs for those films. The context is the film-emulation module of RawTherapee, but the functionality is identical to AP's Infer LUT. When you download the film-emulation HALDs from the above link, read the README.txt file for more details. The collection includes the identity HALD and a bunch of HALDs that capture various film looks. The looks are all made in 8-bit sRGB, but no one is going to analyze your look transform if you use it in a 16-bit document or a slightly larger color space, so have at it!

Remember, you can do this with any look you create, in any application, as long as you can repeat the procedure (for pixel-based operations) or insert the identity HALD into your non-destructive layer stack. Note that LUTs do not capture non-tonal/color operations like chromatic aberration correction; with respect to film emulation, this captures tonal edits and does not emulate grain. Have fun! This is a great feature to have in AP, but maybe not as celebrated as it could be! kirk
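If you want to generate an identity image yourself rather than download one, the HALD layout described at the quelsolaar link is straightforward to produce. A sketch in Python/NumPy, assuming the common level-8 HALD (a 512x512 image with 64 steps per channel):

```python
import numpy as np

def hald_identity(level=8):
    """Generate a HALD identity image as a (size, size, 3) uint8 array.

    For a given level L, each channel gets L*L steps and the image is
    L**3 pixels square; level 8 gives the familiar 512x512 identity.
    Pixels enumerate the full RGB cube: red varies fastest, then green,
    then blue.
    """
    steps = level * level            # values per channel (64 for level 8)
    size = level ** 3                # image edge length (512 for level 8)
    idx = np.arange(size * size)
    r = idx % steps
    g = (idx // steps) % steps
    b = idx // (steps * steps)
    rgb = np.stack([r, g, b], axis=-1) * 255.0 / (steps - 1)
    return rgb.round().astype(np.uint8).reshape(size, size, 3)

hald = hald_identity()
# Save losslessly (e.g. with Pillow: Image.fromarray(hald).save("hald.png")),
# run your edits over it, and feed both versions to Infer LUT.
```

The top-left pixel is pure black and the bottom-right pixel is pure white, with every 64-step RGB combination in between, which is exactly what lets the LUT be inferred from the edited copy.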
  16. Another thread has, coincidentally, recently been started that includes requests for the Apply Image functionality, as well as access to all channels and, of course, automation. https://forum.affinity.serif.com/index.php?/topic/25530-color-correction-prepress-missing-tools/ Thanks, kirk
  17. Please also see this recent "hot" thread regarding the same request for Apply Image functionality: https://forum.affinity.serif.com/index.php?/topic/10847-photoshop-apply-image-functionality-needed/ thank you, kirk
  18. I would also like to see both of these important features implemented in AP, please! These are essential image manipulation tools for use in selections, masks and channel-based operations. The current "Apply Image" command is of little use compared to the "traditional" Apply Image command referenced by Davide. Thank you for your tireless development of AP. kirk
  19. Maybe an explanation of how Apply Image works (or will work) like Photoshop's would be in order. At the moment Apply Image exists, but you cannot apply a channel or layer from the existing image to a layer or channel in the same image, or in another open image of the same dimensions. You have to load an external image to apply it to the current image, and there is no access to the channels of the external image either. That makes it pointless for most uses of masking and image processing in the PS methodology of using Apply Image. There is no "Calculations" dialog either to attempt to replicate the Apply Image operation in PS. kirk
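For readers unfamiliar with the PS behavior being requested: at its core, Apply Image is per-channel blend math between two same-sized images. A minimal NumPy sketch (the function name and the two supported modes are illustrative, not any application's API):

```python
import numpy as np

def apply_channel(target, source, src_ch, dst_ch, mode="multiply", opacity=1.0):
    """Blend source[..., src_ch] into target[..., dst_ch].

    Images are float arrays with values in [0, 1] and identical shapes,
    mimicking the PS requirement that both documents share dimensions.
    """
    s = source[..., src_ch]
    t = target[..., dst_ch]
    if mode == "multiply":
        blended = t * s
    elif mode == "screen":
        blended = 1.0 - (1.0 - t) * (1.0 - s)
    else:
        raise ValueError(f"unsupported mode: {mode}")
    out = target.copy()
    out[..., dst_ch] = t + opacity * (blended - t)   # opacity mixes old/new
    return out

# Multiply the green channel of one image into the red channel of another:
a = np.full((4, 4, 3), 0.5)
b = np.full((4, 4, 3), 0.5)
result = apply_channel(a, b, src_ch=1, dst_ch=0)
```

Being able to pick any source channel (including from the same document) is what makes the PS version so useful for building masks; the current AP command exposes none of these choices.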
  20. You need to tell AP which folders to search for plug-ins and for the plug-in support files. This is done in the Preferences > Photoshop Plug-ins tab. Not all plug-ins will function even if they are detected. Search the forum for the several threads on this topic. kirk
  21. I can understand your frustration, but you need to work with AP to solve your issue, as it is specific to your setup and there is not enough information to reproduce your problem. File a bug ticket and work with the developers to address it. I would also advise that you not work in ProPhotoRGB in 8-bit color (use 16-bit). I have attempted to reproduce your issue and cannot on my MacBook Pro; I do not run it with a second display. I can produce an 8-bit ProPhotoRGB image from a raw file in the Develop persona, save it as an 8-bit ProPhotoRGB TIFF and open it in AP (left image in the attached shot), PS (bottom right) and PhotoLine (top right), and the images look, for all intents and purposes, identical. I am going to guess that your issue is a ColorSync issue, maybe involving your second display, perhaps due to a larger display gamut: Adobe/Photoshop's internal color management handles your images, and AP/ColorSync does not. There may be a slight change in the shadow tones when I view my above image on my MacPro/Eizo combination, but it is not nearly as drastic as your example. That is my best guess. Good luck. kirk
  22. But, once you open your grayscale image, you have to make it a COLOR image (Document > Color Format RGB (16bit)) THEN you can assign a color profile. I was not sure you were in color mode. kirk