Everything posted by kirkt

  1. Of course my reply was predictable, but it looks like you got a response. I'm not sure why you think AP should have all of the tools and performance that Zerene or Helicon Focus do - it is a general-purpose pixel editing application. But hey, that's one of the things this forum is for - requesting features. Try posting your plea on the Adobe forum and see what you get. Also consider that, regardless of the tools, the Windows version of AP is only months old - enhancing the features and speed of focus stacking may not be a top priority on the Windows development roadmap. In the interim, consider using Zerene or Helicon Focus; they are both top notch. You can also use enblend/enfuse (if you do not like the command line, there are GUIs for it). Photoshop's focus stacking tools are generally not good. My standard and expectation of this forum is derived from experience with it and the software. While I do not know the developers or the forum members personally, I have found them to be helpful, and I know that they are not a legion of help desk folks awaiting pleas for help 24 hours a day, but a small(er) operation trying to maintain an unrealistically high level of innovation while incorporating thousands of user suggestions, feedback and bug reports. I am impressed beyond expectation at what they have managed to accomplish in such a short period of time, at what are likely unprecedented levels of growth and demand from users. kirk
  2. An example: I opened a Canon 5D IV raw file in the AP (v 1.5.2) Develop persona, hit the "Develop" button and then did a Save As... to a ".afphoto" file. The raw file was converted to a 16-bit file in the Develop stage. The resulting afphoto file was ~250MB for a 6744x4502 pixel RGB image. If you then export the image as a JPEG, full resolution, 80% quality, the resulting file is 2.4MB. This is what you should be sending to the lab for printing, unless they tell you otherwise. JPEG is an 8-bit, flattened format, so you would likely save the afphoto file with all of your layers as the "master" file that you can go back to and edit, re-render to different output sizes and formats, etc., whereas the JPEG is your deliverable format. For comparison, Photoshop opens the raw image, and a Save As... to a .psd file yields a 180MB file that is 6720x4480 pixels. Saving to a JPEG with level 9 of 12 quality (about 80% of 12) yields a 4.7MB file. Each application is doing whatever it does, and specifying a given compression quality in each one's JPEG engine yields different results.

What is sort of fascinating, and maybe someone else can try this to verify that it is not some fluke, is what happens when you start to add layers to the file. Here is the test sequence I followed. Consider opening the raw file to the background layer as "State A." Note that your file sizes may vary depending upon the pixel dimensions and the nature of the image itself, in terms of compression efficiency.

State B: add a Curves adjustment layer to A, without applying any data to the inherent mask and without adjusting the curve (leave it as a straight 45-degree line). Save As...

State C: stamp the layer stack (CMD+OPT+SHIFT+E or, what AP calls "Merge Visible") in B. Save As...

State D: create 3 more Curves adjustment layers on top of the previous stack in C. The Curves have no mask and no adjustments made to the curve. Save As...
State E: stamp the layer stack in D. Save As...

None of the curves in the above exercise actually had adjustments made to them, so the stamped pixel layers are identical to the background - more on this below. For each state, I closed the previous file, reopened StateA.afphoto, and redid all of the steps, so that the history would not accumulate, etc. Here are the resulting file sizes for each state:

State   File Size (MB)
A       249.9
B       249.9
C       393*
D       287.7
E       487

* - I redid this operation a second time and got a file size of 287.6MB - this is what it probably should be. I redid it a third time and got 449MB. WTF?

Obviously adjustment layers cost nothing in terms of file size if there is no bitmap mask associated with the layer. Add a mask and the file size increases with the number of masked layers. What is strange is the behavior of the stack and the Save As... file size when going from State B to State C to State D. Adding three Curves layers onto State C (State D) dramatically decreased file size compared to C - *perhaps*. I do not get it. The fact that replicating State C three times, each from scratch starting with a freshly opened copy of State A, produced three different results is strange. See the note below.

It would appear that AP is trying to optimize file size by looking at the stack and the differences between pixel layers, checking whether it can highly compress a pixel layer that is a duplicate of another one, or something along those lines. However, whatever algorithm is being used is not very consistent in its application. It seems like adding the Curves layers (State D) forced the algorithm to compress the stack properly, to get a 287MB file size. What is also interesting is the varying behavior in States B through D if you make changes between the intervening states.
If you leave the Curves adjustment layers at their default straight line (no adjustment) and stamp the stack at the previously described points, you will have a layer stack comprised of curves with no adjustments and stamped pixel layers that are identical to the background, because the curves do nothing to alter them. If, however, you make adjustments to the curves, so that the stamped pixel layers differ from the background, then the file size changes dramatically. For example, retrace the process above, but apply some arbitrary shape (an "S" curve, for example) to each Curves adjustment layer - call this branch of the experiment the "+" branch. Then:

State   File Size (MB)
A       249.9
B       249.9
B+      249.9
C       393*
C+      447.5
D       287.7
D+      447.6
E       487.2
E+      656

So, AP is looking at the state of the layer stack and compressing it dynamically based on how different the pixel layers are. There are probably all sorts of things going on dynamically to compress the file and save space where the operation can get away with it. However, the fact that State C differs over three identical attempts to replicate it is bizarre, and may ultimately uncover some function of the saving algorithm that leads to bloated file sizes, perhaps unnecessarily. RAM and hard drive space are cheap, but a predictable and repeatable save algorithm is important too. kirk

NOTE - It seems like some of the files from the multiple attempts to replicate State C, versus State D, produced stamped pixel layers that differ from the background layer, even with a curve that should have done nothing to the image. If I put the stamped pixel layer in "Difference" blend mode (comparing it to the background), there is a noticeable difference between what should be identical pixel layers. This is likely the problem with my experiment, not the save algorithm.
This is still an incredibly bad problem, because it makes the file size skyrocket when the layers are even slightly (inadvertently, erroneously) different. So, an error in the stack rendering process causes large file sizes because the save algorithm sees the stamped layer as different from the background layer, even though they should be identical. Tsk tsk. Worse, it is unpredictable and not repeatable. Maybe the OP should add a blank Curves adjustment layer (or a few) to the top of his layer stack and see if it decreases his file size. Ha!
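The duplicate-layer compression hypothesis in the experiment above can be illustrated outside AP: a general-purpose compressor shrinks a stack containing two identical "layers" far more than one containing two genuinely different layers, which is exactly why a slightly-off stamped layer would balloon the file. A minimal sketch of the idea (generic zlib compression, not AP's actual save algorithm):

```python
import os
import zlib

# Two simulated 16 KB "pixel layers" of incompressible noise-like data.
layer_a = os.urandom(16 * 1024)
layer_b = os.urandom(16 * 1024)

# A stack whose stamped layer exactly duplicates the background
# (what States C and E should be)...
identical_stack = layer_a + layer_a
# ...versus a stack whose stamped layer really differs (the "+" branch).
different_stack = layer_a + layer_b

size_identical = len(zlib.compress(identical_stack))
size_different = len(zlib.compress(different_stack))

# The duplicate compresses to roughly the size of one layer; the pair of
# different layers compresses to roughly the size of both.
print(size_identical < size_different)   # True
```

The 16 KB layers keep the duplicate within zlib's 32 KB match window; the point is only that an exact duplicate is nearly free while an almost-duplicate costs full price.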
  3. I'm sorry that no developer has personally acknowledged and validated your frustration, but it may be time for you to move on if you truly believe what you are stating here. kirk
  4. There are several HDR applications that implement batch processing. This is what you want. Photomatix, HDR Expose 3, Aurora HDR, etc. all do this - what you want will probably come down to the flexibility of the batch processor's setup options. Many of them will automatically segment your image sets based on EXIF data, but they also may give you manual control to specify X number of images per image set. In this regard, Photomatix's batch processor is the most flexible. Typically the batch processor will also give you the option to specify your output, including a full 32bit HDR file (in .hdr or .exr, for example) as well as a 16 or 8 bit tone mapped render based on a preset that you specify. These applications typically have trial periods, so give them a shot and see which works best for you. kirk
  5. It seems like what you are trying to accomplish is to drop a color measurement target onto the image at some critical location and measure its L value. Then, with the L value measured, you want to make an adjustment (say, using Curves) to change that L value. In the video to which you linked, the author does this by using the Color Sampler Tool in PS to read the L value of a particular area of the image - he then selects this area on a Curve adjustment using the Targeted Adjustment Tool in the Curves dialog to place a point on the curve corresponding to the L value of the selected area. The author then manipulates the curve to change the L value of the area to the desired value, makes a mask to hide that adjustment layer and then selectively applies the adjustment using white paint on the layer mask to reveal the adjustment where he paints it in. This is essentially dodging and burning, setting the "exposure" adjustment with the curve and then locally and selectively applying that adjustment through the mask. You can do all of this in Affinity Photo. You can use the tools in the Info panel to make and monitor the L measurements and adjustments. You can use the Picker mode of the Curves adjustment layer to target specific areas in the image and drop a corresponding point onto the curve. You can even work in Lab in a Curves adjustment layer without changing the document's color mode to Lab. If you really need the Zone values tool, that does not exist in AP - just write down the L values that correspond to the Zones for the three color spaces that the video's author demonstrates - no need to use the panel at all, just look at your notes. 
Here are some Affinity tutorials that might help: Curves Picker: https://vimeo.com/154293467 Multiple Color Formats: https://vimeo.com/149265352 Here is the tutorial page (200+ videos) to browse: https://forum.affinity.serif.com/index.php?/topic/10119-official-affinity-photo-desktop-video-tutorials-200/

The Info panel is a little quirky at first - the three icons represent:

top - the color mode you want the readout to display (RGB, Lab, etc.) - click on it to select the readout mode;
middle - click this icon if you want the readout to display the color values under your cursor;
bottom - click this icon to place a sampler and read out the color under the sampler.

BE AWARE! - the sampler will be placed at the top left corner of the document - you can then drag the sampler to the desired location on the image. The top left corner is where every new sampler is created, unlike PS, where the sampler gets placed at the location you click. To create more samplers, or delete existing ones, use the small flyout menu in the upper right corner of the Info panel. Like PS, there are two readouts you can configure independently. kirk
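For reference, the L value the Info panel reads out for a neutral gray can be computed by hand with the standard sRGB-to-CIELAB formulas; here is a minimal sketch (D65 white, standard CIE constants; the function names are my own):

```python
def srgb_to_linear(c: float) -> float:
    """Undo the sRGB transfer curve (c in 0..1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def lightness_from_gray(c: float) -> float:
    """CIELAB L* for a neutral sRGB gray (R=G=B=c)."""
    y = srgb_to_linear(c)   # for a neutral, relative luminance Y equals this
    # CIE L* formula with the standard epsilon/kappa constants
    f = y ** (1 / 3) if y > 216 / 24389 else (24389 / 27 * y + 16) / 116
    return 116 * f - 16

print(round(lightness_from_gray(0.5), 1))   # 50% sRGB gray -> about L* 53.4
print(round(lightness_from_gray(1.0), 1))   # white -> 100.0
```

This is why 50% sRGB gray does not read L 50: the sRGB encoding and the L* curve are different nonlinearities.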
  6. A couple of things: 1) Sort of related, at least as an FYI - OCIO has been implemented as a Photoshop plug-in: http://fnordware.blogspot.com/2017/02/opencolorio-for-photoshop.html in case you want to apply a transform to a layer in PS (maybe to compare the results). It is a destructive transform though, unlike AP's implementation as an adjustment layer. 2) If you keep the OCIO adjustment layer at the top of the layer stack and turn the layers below it on and off, you can see the effect that the persistent OCIO layer has on the particular layer below it that you are visualizing. However, AP does not have layer comps like PS does, so if you want to export a particular state of the layer stack, you need to export to a file (JPEG, TIFF, etc.) to burn the OCIO adjustment layer into the stack at its current state. Otherwise, you could create a "Macro" (AP's version of an action) to create an OCIO adjustment layer at any point in the stack you choose. Then you could group the OCIO adjustment with the particular layer and turn the whole group on and off, giving each layer that needs the OCIO transform its own instance of it (at least avoiding the chance of a layer that does not need it getting it mistakenly). Unfortunately, AP does not permit grouping in a Macro, so you would have to group manually. Bummer for sure. I suppose you could create a Macro that does the following: 0) select the layer you want to affect (i.e., transform with OCIO); 1) run a Macro that does the following: a) create an OCIO layer above the current layer; b) select the preset transform that you want; c) select the OCIO layer and the layer you want to transform and merge the two. End. Repeat. This will burn the OCIO transform into the selected layer. This way you can apply the transform (destructively) on a layer-by-layer basis, skipping the layers that do not need it. Maybe? Kirk
  7. Does your rendering application tag the linear output file you generate with a particular color profile? If not, then AP has no way of knowing how to interpret the color in your image, so it interprets it in the working color space (which is probably not linear). You can use an OCIO adjustment layer to tell AP how to display the image (non-destructively) so that you can work on linear data while displaying it in some other gamma encoding. The OCIO config sets for vfx and animation are here: http://opencolorio.org/configurations/index.html and will probably give you the transforms you are looking for. The ACES 1.0.3 config set is particularly useful. If your image is 32bit, you can use the same transforms in the 32-bit Preview panel. Unfortunately, you cannot set up AP to use a linear color space as its working color space for files lower than 32 bits per channel, and you cannot assign a linear profile to a lower bit depth (16 or 8 bit) file. If your images are 32bit then you are good to go - just set up the color preferences properly and apply the OCIO transform you want for display in the 32-bit Preview panel. You can also use a LUT if you have one that does the specific transform you need, again as an adjustment layer. There are a few tutorial videos that cover processing of 3D rendered images in AP - check the video tutorials list, especially the HDR, OpenEXR and Color Management (OCIO) sections. Just a note that assigning a linear profile to the image will only change its appearance; it will not convert the numbers to a gamma-encoded form. That is, the data remain linear, but assigning the linear profile helps with display. This is one important role that OCIO plays in AP as an adjustment layer. The same goes for the 32-bit Preview panel. Both approaches help visualize linear data in a gamma-encoded manner while maintaining the linear nature of the data for compositing, etc., where data need to add linearly to preserve light physics.
It sounds like you CONVERT your linear files to sRGB in PS, which will change the data to sRGB gamma encoded numbers - are you sure this is what you want? The data are no longer linear after conversion. If that is what you want to do, you can use OCIO as an adjustment layer to do the transform and then flatten the result and assign sRGB as the color space to the document. Anyway, I hope this helps. kirk
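The assign-versus-convert distinction above can be made concrete with a couple of lines: converting re-encodes the stored numbers, while assigning leaves them untouched and only changes how they are labeled for display. A toy sketch, with a plain gamma 2.2 encode standing in for the full sRGB curve (illustrative only, not AP's internal pipeline):

```python
linear_value = 0.214                 # a mid-gray intensity in linear light

# CONVERT: the stored number changes to its gamma-encoded equivalent;
# the data are no longer linear.
converted = linear_value ** (1 / 2.2)

# ASSIGN: the stored number is untouched; only the attached profile
# (metadata) changes, so the app merely displays the same number differently.
assigned = linear_value

print(round(converted, 3))   # roughly 0.5 -> re-encoded
print(assigned)              # 0.214      -> data remain linear
```

So if downstream compositing needs linear math, CONVERT is the wrong move and ASSIGN (or an OCIO/LUT display transform) is the right one.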
  8. Fortunately, there are plenty of raw converters out there besides LR/ACR. You may want to try Iridient Digital's X-Transformer if you are having trouble finding a good Fuji demosaic/raw converter. http://www.iridientdigital.com/products/xtransformer.html If you use a Mac, give their full-fledged raw converter a try on your Fuji files: http://www.iridientdigital.com/products/iridientdeveloper.html kirk
  9. Have you tried loading your equirectangular projection image (where the image is seamless, with the width equal to two times the height) and then selecting "Layer > Live Projection > Equirectangular Projection"? Tutorial videos are available in the "Live Projections" section of the tutorial videos. kirk
  10. Curves are "zoomable" - you simply adjust the "Input Minimum" and "Input Maximum" to select the bounds of your curves adjustment. These values can be less than 0 and greater than 1, for unbounded 32bit images. The display of 32bit color values is not explicitly available (in an "Info" panel, for example) - however, you can open the color picker and use the color picker tool (the eyedropper) to click on a point in the image, and the color picker panel will display the intensity in the R, G and B channels in floating point format, similar to the 32bit color panel in Photoshop. kirk
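The Input Minimum/Maximum "zoom" described above amounts to a linear remap of the (possibly unbounded) pixel value into the 0..1 span of the curve editor; a rough sketch of that remap (my own formulation, not AP's source):

```python
def curve_zoom(x: float, in_min: float, in_max: float) -> float:
    """Map pixel value x into the 0..1 range shown by the zoomed curve editor."""
    return (x - in_min) / (in_max - in_min)

# Zooming the curve onto the unbounded highlight range 1.0..8.0:
print(curve_zoom(4.5, 1.0, 8.0))   # 0.5 -> this HDR value sits mid-curve
```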
  11. This is already available in v1.5.1. Go to Preferences -> User Interface -> Monochromatic iconography. You must restart for this change to take effect. kirk
  12. That is not constructive. Perhaps suggest some features or current limitations that you have observed, and what the devs could do to fulfill your needs. kirk
  13. I understand and appreciate the distinction you are making, but all of that can be done in 3DLC. Lattice is a nice tool as well and is useful for manipulating transforms and creating new ones based on color models and tone curves - I especially like the ability to write expressions to define transforms. 3DLC permits manipulations of LUTs (although not with the expression-driven Lattice-like approach) as well as its ability to edit color and tone with its unique set of tools. In this sense, I think it offers more than simply saving image edits in a LUT format. AP already will permit much of the Lattice-like LUT transform (input to output) in the form of OCIO support in an adjustment layer. It would appear (and I could very well be wrong) that the OP is looking for the color correction tools in 3DLC, not an explicit LUT editor like Lattice. kirk
  14. 3DLUT Creator creates 3D LUTs based on all of the tools and corrections within the application. Have you ever used it? I'm not sure I understand your characterization of it here. Care to elaborate? kirk
  15. 3D LUT Creator is a very powerful color editing and LUT generating tool. It is not simply a color selection and manipulation tool. Watch the collection of videos on YouTube detailing the various functions and features of 3DLC and I think you will find that trying to incorporate its functionality into AP is not possible. Give the application a trial too - you might find it is worth the investment. kirk
  16. I'm not sure if this solution is exactly what you are looking for, but you can use LUTs based on the HALD identity image to simulate various color blind modes, similar to the soft proofing in PS. You need to grab the HALD identity image and the modified HALDs that simulate various color blind modes - they can be found here: http://www.daltonize.org/search/label/ColorLookupTable then use AP's LUT adjustment layer and use the "infer LUT" mode - it will ask you to load two images. The first image should be the identity image (the reference image) and the second image should be the modified HALD that simulates the color blind mode. This LUT will display your image similar to the soft-proof mode in PS. You can use the color picker and the info panel in AP to see how the colors remap - if you apply the LUT to a standard color wheel, where you know the color you are looking for, maybe you can use the LUTs to remap the reference colors and figure out how to find the appropriate new color that works. There is also this set of LUTs: https://github.com/nelas/color-blind-luts which purport to use transforms for color blind friendly colors for presentations, etc. This may also help guide your color choices. Good luck, kirk
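If you want to sanity-check the downloaded identity image, a HALD identity can also be generated directly. Below is a sketch that builds a tiny level-2 HALD (a 4x4x4 LUT laid out as an 8x8 image) and writes it as a plain-text PPM; the level/size conventions follow the common HALD CLUT definition, so verify against the files from daltonize.org before relying on it:

```python
def hald_identity(level: int = 2):
    """Return (side, pixels) for a HALD identity CLUT of the given level."""
    cube = level * level           # LUT steps per channel (4 for level 2)
    side = level ** 3              # image is side x side pixels (8 for level 2)
    scale = 255 / (cube - 1)
    pixels = []
    for i in range(side * side):   # red varies fastest, then green, then blue
        r, g, b = i % cube, (i // cube) % cube, i // (cube * cube)
        pixels.append((round(r * scale), round(g * scale), round(b * scale)))
    return side, pixels

side, pixels = hald_identity()
with open("hald_identity.ppm", "w") as f:    # plain-text PPM, stdlib only
    f.write(f"P3\n{side} {side}\n255\n")
    f.write("\n".join(f"{r} {g} {b}" for r, g, b in pixels))

print(pixels[0], pixels[-1])   # black first pixel, white last pixel
```

Applying a color-blind simulation to this image and feeding the pair to "infer LUT" is exactly the reference/modified workflow described above.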
  17. This might also be helpful: https://cameramanben.github.io/LUTCalc/LUTCalc/index.html kirk
  18. What is the working color space that your AP (beta) is currently using (set in Preferences > Color Profiles)? I ask because it appears that AP will (convert?) to this color space when the Document mode is changed from 32bit to 16bit - i.e., it does not preserve the 32bit color space (embedded in your file) and make the necessary gamma conversion; it changes to the specified working space you set up in the Preferences. If that is the case, then you should specify (in your example) AdobeRGB as the RGB working space and AdobeRGB (linear) as the 32bit color space. Then everything will be in alignment for your workflow in AdobeRGB. I made the LUTs in Lattice, specifying AdobeRGB as the color space and a linear to gamma 2.2 transform. The LUTs I linked for download are 32x32x32 (about 1.3MB for each LUT file). Here: https://db.tt/cVBVIIDWRi is a link to a 64x64x64 LUT (about 10MB) - you can use this one and compare its rendering to the lower-res LUT to see if your display can render any noticeable difference between the two. You may also be able to do the display transform using the OCIO adjustment layer and the appropriate OCIO configuration. See: http://opencolorio.org/downloads.html This might give you more flexibility in how your display transform is constructed (to avoid clipping, for example, that might occur using a LUT). Good luck. kirk
  19. For what it is worth, I cannot replicate this in v 1.5. I can make a 32bit AdobeRGB (linear) image in Photoshop, save it as a TIFF, bring it into AP and change the document mode to 16bit, and it will just convert the image to AdobeRGB (normal AdobeRGB, gamma 2.2). You can try to work around the AP 16bit linear limitation by using LUTs. Load the linear 16bit image that you have into AP - it will assign the working color profile to the image. So far, assuming that your working color space in AP is the same as the color space of your image file, that does not really matter, other than that your image will look dark and not display with the gamma transformation that the linear profile would instruct AP to perform for display. The color numbers are still linear because the working profile was assigned. Add a LUT adjustment layer and load an AdobeRGB linear to AdobeRGB gamma 2.2 LUT. This will transform the image for display, but once you are finished editing you can turn the LUT off (delete it from the layer stack) to save your work in linear numbers (flatten your work without the LUT if you are saving to a format that does not support layers). Then export the result as a 16bit TIFF - in the TIFF export dialog hit the "More" button at the bottom and uncheck the "Embed ICC Profile" option. Now you will export an untagged image that has linear AdobeRGB numbers without the normal gamma 2.2 profile embedded. When you open the image in another color managed application, ASSIGN AdobeRGB (linear) and you are back on track. Here are Dropbox links for a couple of LUTs that may help: AdobeRGB linear to gamma 2.2: https://www.dropbox.com/s/sf09acotl1x1hda/AbobeLin-g22.cube?dl=0 AdobeRGB gamma 2.2 to linear: https://www.dropbox.com/s/gzbefiacn6qtujw/AdobeRGBg22-lin.cube?dl=0 Hopefully this kludge will help until proper linear profile support is included in AP. kirk
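A linear-to-gamma-2.2 LUT like the ones linked above can also be built by hand. Here is a sketch that writes a small 1D .cube file applying a pure gamma 2.2 encode (which approximates, but is not necessarily identical to, the Lattice-made LUTs; the filename is my own):

```python
SIZE = 1024                              # 1D LUT resolution

lines = ['TITLE "linear to gamma 2.2"', f"LUT_1D_SIZE {SIZE}"]
for i in range(SIZE):
    v = (i / (SIZE - 1)) ** (1 / 2.2)    # linear input -> gamma-encoded output
    lines.append(f"{v:.6f} {v:.6f} {v:.6f}")

with open("lin_to_g22.cube", "w") as f:  # standard .cube text layout
    f.write("\n".join(lines) + "\n")
```

Swapping the exponent to 2.2 gives the reverse (gamma 2.2 to linear) LUT.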
  20. If you want to convert a colored background to white - convert the document to grayscale and adjust the white point in the levels to make the light gray background white. kirk
  21. Maybe try frequency separation to isolate the small details like textures on the paper, spots and blemishes, etc. You need to use pretty extreme values, with Feature Protection enabled. Then you can do your inpainting to remove creases and folds, etc., and a Levels adjustment to set the overall black and white points. Use Lab mode in the Levels dialog and adjust the black and white points of the Lightness levels. You can adjust the opacity of the high-frequency layer to restore some of the texture, or turn it off completely to remove all of the texture. Have fun! kirk
  22. Also note that the Chain Bridge and Marble Hall image sets contain data that was artificially created in Adobe Camera Raw from source raw files - that is, the exposure for a source raw was adjusted in ACR to make a new, artificial exposure at a higher or lower EV level. I do not think this is advisable, especially if you use ACR's PV2012, where all sorts of automagic hanky-panky goes on without your ability to control it - mostly in the highlights. This can lead to all sorts of problems in areas where the data are not what the merger and tone mapper expect (based on the EXIF data, or an estimation of the exposure and gamma curve for each source image). Use your own source raw files and try using the full set of Tone Mapping persona tools. That said, there are better tone mappers out there, mostly in dedicated HDR applications. However, AP is somewhat unique in that most of its editing tools are 32bit compatible, so you can manually tone map your 32bit data in AP where other image editors might not be able to handle 32bit operations. kirk
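The problem with synthetic exposures can be seen with a little arithmetic: pushing a single capture +2 EV in software essentially multiplies the linear values by 2^EV and clips, so no real highlight information is created. A toy sketch (illustrative only; ACR's PV2012 additionally applies hidden tone mapping on top of this):

```python
def fake_exposure(pixels, ev):
    """Simulate a synthetic exposure: scale linear values and clip at 1.0."""
    return [min(p * 2 ** ev, 1.0) for p in pixels]

scene = [0.1, 0.4, 0.9]              # linear values from one real capture
pushed = fake_exposure(scene, 2)     # a fake "+2 EV" frame

print(pushed)   # the two brighter values both clip to the same 1.0
```

A real +2 EV exposure would have recorded genuinely distinct highlight values where the synthetic frame just saturates, which is why the merger gets confused.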
  23. Try using all of the tone mapping tools, especially the amount of compression, the exposure and most importantly for what you are complaining about, Curves. The Curve dialog will help you move specific tonal regions around. It would be nice if, in the Tone Mapping persona, there was a 32bit color readout to help identify where the unbounded pixel values are on the tone curve so it is easier to isolate them on the Curves adjustment. I find that the black point and contrast sliders in the Tone Mapping persona are way too sensitive and crush the shadow values with very little adjustment away from zero. Another approach is to do your merge and DO NOT automatically go to the Tone Mapping persona. This way you can make adjustments to the 32bit data (with Curves or any other 32bit tool) and manually compress and remap the values before committing them to the Tone Mapping persona. kirk
  24. It is the place to read FP color values too. I'd like the FP values to be displayed under the cursor or with color targets in the info panel. Kirk