
Everything posted by kirkt

  1. To check the soft-proofing adjustment layer and how it responds to your printer profile, make sure you use a test image that is known to contain out-of-gamut (OOG) colors. The test images available at http://www.northlight-images.co.uk/printer-test-images/ give you a variety of options for examining documented OOG colors and how AP's soft-proofing responds to them. This way, you are not adding another unknown (the scanner profile assignment) into the mix, and you are using a known reference that has OOG colors. The Datacolor test image, for example, is about 40% OOG across the entire image for your HP printer profile when using a soft-proof adjustment layer. I tested this on the Mac version of AP, v1.8.4. Kirk
  2. The seven RGB colors in the color table are (R,G,B):

     0,0,0
     0,0,255
     255,0,0
     0,255,0
     255,128,0
     255,255,0
     255,255,255

     You can perform Floyd-Steinberg dithering in ImageMagick using the following command:

     convert original_image.png -dither FloydSteinberg -remap colortable.gif dithered_image.gif

     where "original_image.png" is the full-color image you want to dither, "dithered_image.gif" is the dithered output of the operation, and "colortable.gif" is a GIF image that contains 7 patches of the colors listed above - it instructs the algorithm to reduce the original image to a dithered one using only those colors. I have attached a GIF (4 px wide by 28 px tall) of these 7 colors for your use. I will say that the results appear more "dithered" than the same results in Photoshop with the n-color ACT color table in the link you provided. Note that simply reducing the original image to 7 colors will not perform dithering, which is an inherent part of the process that maintains the overall appearance of image brightness in the output. Even if you "Posterize" the image to 7 levels, there will be more than 7 colors in the resulting posterized output. See more here: https://legacy.imagemagick.org/Usage/quantize/#dither in the section entitled "Dithering using pre-defined color maps." Kirk
  3. At full res (3000x3000 pixels) and 50% JPEG export compression in AP the resulting file size is 436kB. The image looks pretty much identical to the original when opened side-by-side. You could also try running the original through some noise reduction or median filtering to smooth larger areas of tone so that the compression algorithm is not trying to preserve the small variations in these large uniform areas because of noise. If you apply the Dust and Scratches filter at a radius of about 3-4px and compress with JPEG at 50%, the resulting file size is 276kB. Kirk
  4. Depending upon the size and bit depth of the images, and how many you want to stack to get your final output, you can imagine that the file size of the resulting .aphoto file that includes all of the source images would be unmanageable for many systems running AP. An alternative might be one in which the user puts all of the source files in a folder, AP loads those source images into a stack, and then produces XML instructions that record the alignment and segmentation of the sources. The XML instructions could be included in the .aphoto file, or as a sidecar - either way, the source images would be LINKED to the .aphoto file, not EMBEDDED. If you needed to go back and make edits, you could invoke the "Edit stack..." button that does not yet exist, and the XML instructions would reload the original source images, alignment, and segmentation/depth map for you to tweak accordingly. This would essentially make the workflow non-destructive to the source images. The same thing could be implemented in the HDR Merge and Panorama modules as well, effectively creating a semi-Smart-Object-like workflow without creating an enormous .aphoto file. This approach would require keeping the .aphoto file and its XML instructions linked to the source files. If the link is broken (maybe you moved the source files somewhere else), a prompt to locate the source files could pop up when loading the .aphoto file, for example. You could also have an option to EMBED the source images, if you do not mind the potentially large file size. Kirk
  5. The macros that John provided to change pixel dimensions employ the Equations filter cleverly, combined with selecting the empty space in the resulting image canvas and clipping it away. It is a kludge, but the Equations filter permits the macro to record the actual equation, not the resulting value. Well done! Glad you got it all working. Kirk
  6. You can stack the Macros in the Batch Job dialog - for example, if you want to convert a set of images of different long edges to a max long edge of 1280px and set the resolution to 300 DPI and save as JPEG, you can set up a Batch Job where you add the files you want to resize, set the destination folder and then apply the two actions from John's set. This way you can plug in any combination of macros to get the final size and resolution you want. There's nothing stopping you from applying this to an individual image also, from the macro Library. See screenshot. Kirk
  7. When you resize a document, there are two options - 1) scaling and 2) resampling - and it is important to understand the distinction. The "DPI" value is simply a metadata tag that is used in situations where the image will take on actual physical dimensions, like printing. When scaling, you alter this tag to be whatever you want (300 DPI, 72 DPI, whatever) WITHOUT CHANGING THE PIXEL DIMENSIONS OF THE IMAGE. In contrast, resampling actually changes the pixel dimensions of the image to match the new values you enter into the W and H fields in the Resize dialog. You can perform both scaling and resampling simultaneously, and AP will increase or reduce the number of pixels accordingly by creating (upsampling) or combining (downsampling) pixels to achieve the final pixel dimensions at the specified DPI.

     In PS, if you want to record an action that will change the dimensions of any image you feed it by a specified percentage, you can do this by changing the units in the Preferences to Percent prior to recording the action. Then, every time you run the action, the image will be reduced by the given percentage with its aspect ratio maintained. This is what AP appears to be lacking: AP seems to record the absolute pixel values of the W and H fields when recording the macro, instead of the relative expression that generated those values. That is, AP permits the user to enter expressions into the W and H boxes in the Resize Document dialog (like "50%" or "<original_dimension>*0.5"), but it does not record the expression itself in the macro - it records the resulting value. Maybe I am missing something, but this appears to be how AP currently works. Kirk
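The scaling-versus-resampling distinction is just arithmetic, and can be sketched in a few lines (the helper names here are illustrative, not AP's API):

```python
def scale_only(width_px, height_px, new_dpi):
    """Scaling: the DPI tag changes, the pixel dimensions do not.
    Returns (width_px, height_px, print_width_in, print_height_in)."""
    return width_px, height_px, width_px / new_dpi, height_px / new_dpi

def resample(print_w_in, print_h_in, dpi):
    """Resampling: pixel dimensions are recomputed to hit a target
    print size at the given DPI, so pixels are created or combined."""
    return round(print_w_in * dpi), round(print_h_in * dpi)
```

For example, retagging a 3000x2000 px image from 300 DPI to 150 DPI leaves it at 3000x2000 px but doubles the print width to 20 in, while resampling a 10x8 in print at 300 DPI yields a 3000x2400 px image.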
  8. Has this object that has the bright outline after USM been cut out of an image and placed against a new background? The bright pixels come from the contrast at the edge of the object being accentuated (that is what USM does). If you cut the object out of another image (one that had a bright background, for example) and there are some stray pixels at the edge that are much different than the new background, that difference will be enhanced by the USM operation. If this is the case, then you need to refine the edge of the cutout before comping it onto the new background and applying USM. A tool like Filters > Color > Remove White Matte might help get rid of that edge of pixels before compositing the object onto the new background. Otherwise, you can apply USM as a live filter and mask the adjustment to affect just the object - the live filter has a mask by default that is all white. Invert that mask (to all black) and then paint white onto the mask to reveal the USM effect just where you want it (the object). Kirk
  9. Have you verified that the lens model contained in the EXIF data of the raw file itself is actually correct? It appears that it is not. Manually selecting a lens correction profile will not change the EXIF data for the lens. Is the firmware for the camera and lens up to date? You can use ExifTool to change the lens data in the raw file manually to the correct lens, and then see if AP will detect it and automatically apply the correct lens profile. Kirk
  10. You can also ask your print shop if they have an ICC profile of their printer-paper combination that you can use to soft proof your image in AP. kirk
  11. Also - when working with a large sequence of images (more than 4 or 5, for example) it is a lot easier to test the waters by working first with reduced-size JPEGs of the source images to test how the stitch will go before trying to jam all of those 16bit TIFFs down the stitcher's throat. PTGui permits the use of templates to do just this thing. Use smaller proxy images to set up the stitch, make a template from this set up, and then swap in the full-res 16bit TIFFs and let the template render the result from those large files without having to use the large files to solve the stitch. Kirk
  12. It is not entirely clear to me how AP handles some issues that all panoramic image stitching apps need to handle, if the scene demands it. For example, when shooting a set of images that one ultimately wants to stitch into a single panoramic image, the easier case is one in which the scene is far away from the camera, like a landscape. Even with a relatively wide field of view, the image maps to a small portion of a spherical projection, and there is little parallax to deal with because all of the objects in the scene are essentially at the same depth relative to the camera (far away). Even if you shoot this sequence hand-held and rotate (not about the no-parallax point) and translate the camera, you still get a pretty decent result. Even in this situation, there may be an issue with the EXIF data in your source images that is preventing AP from reading the orientation of the image; I wonder if this might affect the result. Once you start introducing parallax and angles of view that take up significant portions of a spherical projection - for example, in the living room of a house - all bets are off, even with good image overlap. I use PTGui to deal with challenging panoramic sequences, and shoot with a panoramic tripod head that ensures the optical system rotates (pans and tilts) about the no-parallax point to remove parallax from the image sequence. This permits full spherical panoramic images to be made with little intervention from the user (automatic control point generation and alignment is successful). If you are trying to shoot a wide or tall scene that can be covered in a few images, you might also have good luck with a perspective control lens or adapter that permits shift movements of the lens. Hugin is a free application built on the same underlying tools as PTGui - maybe Hugin might make working with difficult scenes easier. Kirk
  13. You can create your mask, rasterize it to a mask and then apply an adjustment layer like a curve to the mask to choke it, etc. In the attached images, the image of the bike is layered on top of a red fill layer. The mask applied to the bike image is a radial gradient with a feathered fall-off. The curves adjustment layer applied to the mask creates a harder edge to the fall off by adding extreme contrast to the mask - NOTE: to make the adjustment layer operate on the mask, you nest it with the mask layer but you must also choose the curves layer to operate on the ALPHA from the curves dialog drop down menu (yellow arrow). Kirk
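The "choke" that the curves layer performs on the mask's alpha can be written down directly. Here is a sketch using a simple hard-contrast remap, where the low/high thresholds stand in for the extreme endpoints of the curve (illustrative values, not anything AP exposes by name):

```python
def choke(alpha, low=0.4, high=0.6):
    """Apply an extreme-contrast 'curve' to a mask alpha value in [0, 1]:
    values at or below `low` go to 0, values at or above `high` go to 1,
    with a steep linear ramp in between. On a feathered radial gradient
    this turns the soft fall-off into a hard edge."""
    if alpha <= low:
        return 0.0
    if alpha >= high:
        return 1.0
    return (alpha - low) / (high - low)
```

Pushing `low` and `high` toward each other makes the transition harder; pushing them apart softens it again.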
  14. You can also work in Lab color mode, where color and luminance are separated by the color model itself. Kirk
  15. Check your settings in the Develop Assistant - there are choices there for applying a tone curve or taking no action, as well as applying an exposure bias. Perhaps you have the Tone Curve option set to "Take no action." Kirk
  16. I downloaded an X-T4 raw file from imaging resource: https://www.imaging-resource.com/PRODS/fuji-x-t4/XT4hSLI00160NR1.RAF.HTM and opened it in AP v1.8.4 on my Mac, running OS 10.15.6 using both Apple Core Raw and Serif Raw. No problems. Working in v1.8.4 with my Fujifilm cameras (X100V and X-Pro3) - no issues with their RAF files either. Are these raw files that you have opened previously in AP or another raw converter? Trying to rule out a bad card reader or similar corruption of the raw file itself. Kirk
  17. What problem is being caused by your OCIO config file? It is pretty much used only for the OCIO adjustment layer and the display preview in the 32bit Preview panel. kirk
  18. I just opened a Canon RP .CR3 file (a sample from imaging-resource.com) in v1.8.4 on my MacBook Pro, running OS 10.15.6, using both the Apple Core Image Raw library and the Serif raw library. No problems with either. Kirk
  19. You can add an adjustment layer (like Curves) and set its blend mode to Screen. You do not need to do anything to the curve. Invert the layer mask (just select the layer and press CMD-I or CTRL-I) to hide the screen effect, and then use a white brush with low flow and 100% opacity to gently brush in the screened layer. You can also clone the catchlight from the right eye onto the left eye. Kirk
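Why an untouched curve in Screen mode lightens: Screen combines two normalized channel values as 1 - (1 - a)(1 - b), so the result is never darker than either input. A minimal sketch:

```python
def screen(base, blend):
    """Screen blend for normalized channel values in [0, 1]:
    result = 1 - (1 - base) * (1 - blend).
    The result is always >= max(base, blend), so screening a layer
    with itself (an identity curve in Screen mode) brightens it."""
    return 1.0 - (1.0 - base) * (1.0 - blend)
```

For example, a 0.5 mid-tone screened with itself comes out at 0.75, while pure black leaves the other input unchanged and pure white always yields white - which is why brushing white into the inverted mask "gently" reveals brightening only where you paint.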
  20. I think we both agree that AP's curve dialog could use some additional tools, although the fact that the user can change the input range is pretty great, especially when editing 32 bit per channel, HDR data. Numerical input would be a great addition. I was also trying to kludge together the max/min reversed curve adjustment layer with a procedural texture live filter to invert the output of the curve, but even when the document is in Lab mode, the live filter version of the procedural texture only permits equations in RGB, including presets made in Lab. Boo! Kirk
  22. @KentS When you open a new curves layer, try inverting the input min and max values - that is, enter "1" for the minimum value and "0" for the maximum value. See if the correction curves in his book translate one-to-one with this setup. I don't have Professional Photoshop in front of me right now to test it. It may not be enough, though, as you probably need to invert the output axis too. Kirk
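To see what reversing the input range does, note that remapping the input x from [in_min, in_max] onto [0, 1] with in_min = 1 and in_max = 0 feeds the curve 1 - x, i.e., it flips the horizontal axis. A sketch of that remapping (illustrative, not AP's internals):

```python
def apply_curve(x, curve, in_min=0.0, in_max=1.0):
    """Evaluate `curve` after remapping x from [in_min, in_max] to [0, 1].
    With in_min=1 and in_max=0 the input axis is reversed, so the
    curve effectively sees 1 - x."""
    t = (x - in_min) / (in_max - in_min)
    return curve(t)
```

With an identity curve and the reversed range, an input of 0.25 evaluates the curve at 0.75 - the horizontal flip. Matching a book written for the other axis convention would likely also need the output flipped, as noted above.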
  23. In looking through the lensfun database, the lens is not characterized yet. You can go to the Lensfun homepage and read up on how you can request support for a lens and help contribute to the database. Once the database is updated, you can manually add the new data for your lens to AP for your install of the app. Kirk