Everything posted by kirkt

  1. Having read through the entire thread, maybe the folks requesting "support" for a CC card could be more specific about what, exactly, they mean. There appear to be at least two different requests in the thread: 1) support for making, or at least implementing previously made, custom DCPs or ICCs, in a manner similar to how Lightroom/ACR and similar applications support custom camera profiles for raw image conversion; 2) support for live color correction based on importing an RGB (i.e., not raw) image of a CC card and placing a grid over the card patches, with AP automagically performing color correction based on known reference values (or custom values provided by the user) of the card, like DaVinci Resolve does. Obviously these are two very different things, and there are solutions outside of AP that will perform these tasks with results that can be brought into AP for further editing. So, which is meant by "support"? Make and import/use DCPs or ICCs during raw conversion based on a raw image of a CC card? Produce a live LUT based on an RGB image of a CC card and be able to apply/export that LUT for batch processing? Both?
As it currently stands, the Develop persona is a weak point in the grand scheme of the application, so one is probably better served by converting raw files in another application that supports DCP/ICC profiles, batch processing, etc. In terms of live color correction based on a reference RGB image of a CC or similar card, this would seem like a great addition to the LUT adjustment layer: a button on the layer dialog that would let you load a reference CC image, specify that card from an AP database of cards (for example the CC Passport), place a grid on the reference image and hit "OK."
The result would be a LUT in the layer stack with the custom adjustment applied via the layer. It would be great if you could also export that LUT for future use, batch processing, etc. directly upon construction of the LUT, as opposed to the existing Export LUT... option (to avoid extra steps). It would also be useful if the user could provide a .CIE file for custom reference targets and add that file, along with the grid configuration for the custom card, to the database of reference cards. kirk
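To make the second request concrete: the core computation behind card-based correction can be as simple as a least-squares fit between the sampled patch colors and the card's known reference values. Here is a minimal sketch in Python/NumPy; the function names are my own, and a plain 3x3 matrix is a deliberate simplification of the 3D LUT a real implementation would likely build:

```python
import numpy as np

def solve_ccm(measured, reference):
    """Least-squares 3x3 color correction matrix that maps the
    measured patch RGBs onto the card's reference patch RGBs."""
    measured = np.asarray(measured, dtype=float)    # (N, 3) sampled patches
    reference = np.asarray(reference, dtype=float)  # (N, 3) known values
    ccm, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return ccm                                      # (3, 3)

def apply_ccm(image, ccm):
    """Apply the matrix to an (H, W, 3) image and clip to 0..1."""
    return np.clip(image @ ccm, 0.0, 1.0)
```

With 24 patches (a ColorChecker layout) the system is overdetermined, which is exactly what you want: the fit averages out sampling noise across the card.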
  2. @Roland Rick Nothing happens when you offer the Levels adjustment to the Recolor adjustment because it makes no sense to adjust the levels of the Recolor adjustment. All adjustment layers have a built-in mask, as you know; however, you do not need to use that built-in mask to dictate where your adjustment gets applied selectively. You can add a mask to the adjustment layer that behaves just like a regular old mask on a pixel layer and sits in the adjustment layer as a child of the adjustment layer.
In your case, let's say you want to change the color of the insect's eyes with a Recolor adjustment: apply the Recolor Adjustment layer in the layer stack, tweak the settings and then, INSTEAD of painting on the built-in layer mask, ADD a mask to the Recolor adjustment layer just like you would with a pixel layer mask. That mask should appear as a child of the Recolor Adjustment. Cool. Even easier: OPTION (ALT)-click on the add-mask button to add a mask that is all black, so the adjustment is hidden by default and you just paint with white to reveal the Recolor adjustment on the eyes.
OK, so far so good. Now you decide you want to add some extra adjustment to the exact same region - for example, your Levels adjustment, again just to the eyes. Add the Levels adjustment to the stack and make the tweaks to the eyes. Of course, this affects everything in the stack below the Levels adjustment. Select the Levels adjustment and the Recolor adjustment and make a new Group out of those two layers. Because the mask you created for the eyes is a regular mask and not the built-in mask, you can just drag the mask out of the Recolor layer and drop it onto the Group icon. Now it masks the entire group. It does not matter if you offer it to the group folder thumbnail (a clipping mask) or to the Group text (a regular mask); it will act the same.
If you do not want to group the adjustments under a single mask applied to the group folder, you can instead select the mask in the Recolor layer, copy it (CMD-C), paste it (CMD-V) and then drag the new copy to the Levels layer, so that each adjustment has its own copy of the mask. In PS you would just hold down OPT and drag a copy of an existing mask from one adjustment layer to another - AP does not offer this shortcut as far as I know.
Bottom line: add a mask to an adjustment layer instead of using the built-in mask, and you don't need to mess with creating spare channels and targeting the alpha channel of each adjustment layer's built-in mask. Even with the added mask applied, you can still paint on the built-in mask to add to the selective application of the adjustment. In PS, this is similar to applying an adjustment layer, adding a mask, painting on the mask and then putting that single layer in its own group so you can add another mask to the group.
This was a very wordy description of a very easy and quick process, but it gets to the heart of the issue: Affinity Photo's masking and layer structure behaves differently than Photoshop's, and reprogramming muscle memory from PS to AP takes some effort - I'm still trying to undo decades of PS thinking and wrap my head around AP's different process. Have fun! Kirk
Here are some tutorials that will help:
  3. Node-based UI and workflow for AP. It would be so good. Kirk
  4. While I do not profess to know how AP is structured under the hood (bonnet!), it seems like a lot of the tools are implemented in a real-time, live way that suggests they would work in a node-based workflow - like the node editors in Blender or DaVinci Resolve. If that is the case, it would be a terrific feature if the user could choose between the current "traditional" interface and workflow for AP and a node-based interface. I would love to be able to create an image-processing pipeline with a network of nodes, with preview renders along the way to see each stage of the workflow and variations of the node chain. It would be terrific if node groups could be saved as "presets" that became single nodes themselves, which could be expanded and their contents exposed for tweaking and customization. Please consider this approach, if it is possible. Rendering low-res preview proxies during node assembly would hopefully be a lot less taxing on the interface than the current full-res rendering of Live Filters, which tends to get laggy when there is even a modest number of layers in the stack. You could save full, non-destructive workflows as pre-built node chains, branch a single node chain into multiple variants, and have a batch node that feeds an entire directory of images into the chain for processing. Maybe even macro nodes, etc. It would be so much more flexible and would further differentiate AP from PS. The output of the node-based workflow could be fed into the "traditional" Photo persona (a Photo persona node) for local, destructive edits, painting on masks, etc. One can dream.... LOL Thanks for pushing the boundaries with your applications. Kirk
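For what it's worth, the node idea above fits in a few lines of pseudarchitecture. This is a hypothetical toy model (all class and method names invented for illustration, nothing to do with AP's actual internals): each node caches a cheap low-res proxy when it runs, and a node group is itself a node, which is what would make saved "preset" groups possible.

```python
import numpy as np

class Node:
    """One processing step; caches a low-res preview proxy on each run."""
    def __init__(self, name, op):
        self.name, self.op = name, op
        self.preview = None

    def run(self, image, preview_scale=4):
        out = self.op(image)
        # keep a downsampled proxy for the UI instead of a full-res render
        self.preview = out[::preview_scale, ::preview_scale]
        return out

class NodeGroup(Node):
    """A saved chain of nodes that behaves as a single 'preset' node."""
    def __init__(self, name, nodes):
        super().__init__(name, self._run_chain)
        self.nodes = nodes

    def _run_chain(self, image):
        for node in self.nodes:
            image = node.run(image)
        return image
```

A chain like `NodeGroup("grade", [Node("exposure", lambda im: im * 1.5), Node("clip", lambda im: np.clip(im, 0.0, 1.0))])` could then be dropped into a larger graph, expanded, or re-saved, which is the preset behavior described above.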
  5. There are usually a bunch of different ways to do things, I agree. Apply Image must be partially broken in Lab equation mode on the Mac in v1.7.2. I can use Lab equations (DL = SL, Da = 0.5, Db = 0.5; if you use the source for all three channels, you get a color image that reflects the mix of the a and b channels) to generate the equivalent of the L* channel. Let's say I want to apply a Curves adjustment through an inverted L* mask, a typical task to boost shadows or target shadow tones for color correction, etc. So I have a layer stack with the working image (background layer) and a Curves adjustment layer above it. Recall that adjustment layers have their own inherent mask built in. So, I select the Curves adjustment layer to make it active (the target of the Apply Image command):
1) Open the Apply Image dialog (Filter > Apply Image)
2) Drag the background layer onto the dialog box to make it the source of the Apply Image operation
3) Use Equations - DL = SL, Da = 0.5, Db = 0.5, DA = SA - and apply.
The result gets applied to the mask of the Curves adjustment, but the resulting image is distorted and scaled up, and is not the L* channel of the background (see attached screenshot). To make this work properly, I have to create an empty pixel layer and generate a pixel-based representation of the L* channel - i.e., add a new pixel layer, fill it with white, then do the Apply Image operation to generate the L* channel on this pixel layer (the second screenshot shows what the L* channel should look like). Then I have to rasterize it to a mask and drag it to the Curves layer. That is a lot of steps for a simple operation.
The use of the Levels filter with your settings at the top of the stack will also generate the desired L* output, but then you have to stamp the layer stack, rasterize the resulting pixel layer and then apply it as a mask. It is a nifty way of doing things, though. I prefer the Duplicate Image method because I can work in Lab mode with Lab tools to choke the mask, etc., and then simply apply the L* channel to the working master image (to an adjustment layer or as a pixel-based layer, etc.) when I am finished in the duplicated image. I can also leave the duplicate image open, tweak the operations and reapply the result to refine or edit it for use in the original. Kirk
  6. I second this request. The Photoshop equivalent is Image > Duplicate and is useful for myriad tasks, especially when you want to perform an action or a color mode change for a particular purpose and then introduce the result back into the master working document without having to Save As... and create another file in the OS (for example, you would like to create a mask from the L* channel of your current RGB document: dup the current RGB document, change the dup's mode to L*a*b*, and Apply Image the L* channel to the RGB doc's active layer as a mask). The duplicated version is simply a document that exists in memory and is usually something temporary, or it can be saved after the fact if that is its intended purpose (for example, if you create an action that duplicates the master document, flattens it, reduces the size and performs output sharpening and a color space change for output). This seems like a no-brainer, and I am repeatedly surprised after major updates that AP has not implemented this. Kirk
  7. You can export 16bit TIFFs of each channel from Raw Digger, using the option to fill in the missing pixels with zeros (black). Then you can open each of the four channel TIFFs (R, G1, B, G2) and convert them to 32bit (they are linear) and stack them in linear dodge (add) mode - for each of the four layers, you can clip a recolor layer to add the CFA color that corresponds to that layer. This will effectively give you a UniWB image - you can clip an exposure adjustment to the R and B layers and adjust the exposure of R and B to white balance the image. kirk
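The stacking math in the post above is simple addition plus per-channel gains. A NumPy sketch, assuming each exported channel TIFF has been loaded as a linear float plane in 0..1 with zeros at the missing photosites (function name and gain parameters are mine; the linear dodge blend is the additive step described above):

```python
import numpy as np

def stack_cfa_planes(r, g1, b, g2, r_gain=1.0, b_gain=1.0):
    """Linear dodge (add) the four zero-filled CFA planes into one RGB
    image, with exposure gains on R and B to white balance the result.
    Each input is an (H, W) plane, zeros where that color was not sampled."""
    rgb = np.zeros(r.shape + (3,), dtype=float)
    rgb[..., 0] = r * r_gain
    rgb[..., 1] = g1 + g2        # the two green planes add together
    rgb[..., 2] = b * b_gain
    return np.clip(rgb, 0.0, 1.0)
```

Because the planes are zero-filled, adding them never double-counts a photosite, which is why the linear dodge (add) blend mode reconstructs the mosaic correctly.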
  8. You will have to use Adobe DNG converter on a PC or Mac desktop - there is no iPad version. However, you can batch convert RAFs from your older S5000 and then make them available to your iPad through a number of different transfer/storage paths, including iCloud, Dropbox, a WiFi accessible drive, Airdrop, etc. Once you convert your files to the appropriate form of DNG and upload them to your preferred storage/iPad accessible location, you are good to go. Easy peasy. Astropad is simply an application that permits you to use your iPad as a second display, a feature that the upcoming MacOS will have built into it. It does not permit you to transfer files to the iPad, etc. kirk
  9. I was able to convert the raw file in Adobe DNG Converter using some custom settings: Change Preferences > Custom (Compatibility) > Backward Version DNG 1.1, Linear (demosaiced). Play around with the custom compatibility settings to see what works best for your workflow. In this example the raw data were demosaiced, so the file has been rotated appropriately. Attached is a screenshot of your raw file, converted to DNG, opened in the Develop persona. Here is a Dropbox link to the converted DNG: https://www.dropbox.com/s/0rfzunzpx8jxk6i/DSCF2600.dng?dl=0 kirk
  10. Ok - the camera uses the Fujifilm SuperCCD, which has its quirks. I was able to use a Mac desktop application called LumaRiver HDR (meant to blend raw exposures into a single HDR DNG, TIFF or EXR). This application must use a more robust technical implementation of the DNG API, because I was able to open your raw file in it and then export a DNG from it that I was able to open in AP on my iPad. Here is a link to the DNG: https://www.dropbox.com/s/rfdzu8z3fi9ty1r/testDNG.dng?dl=0 and attached is a screenshot of the DNG opened in the Develop persona in AP on the iPad (Import from Cloud, loading the file from my Dropbox). The image is rotated 45 degrees because of the orientation of the photo sites on the sensor of the SuperCCD. kirk
  11. The Fujifilm Finepix S5000 is a pretty old camera. There is a raw converter for MacOS and iOS called Photo Raw - here is the camera support webpage: https://mcguffogco.freshdesk.com/support/solutions/articles/8000063657-supported-cameras the list of supported cameras includes the S5000; however, I cannot open your file with Photo Raw. Perhaps the file itself has an issue that is causing applications that would otherwise support it to fail to recognize the file format or open the image. Do you have other raw files shot with this camera available to test, or, even better, do you still have the camera and can you update it to the latest firmware and shoot some test images to try? kirk
  12. @Rob Chisholm Bummer. Obviously it would be nice to have AP work properly on the iPad, but in the interim you might want to try using Raw Power as a raw converter for iPad. It is made by the former lead of the Apple Aperture application and has a similar control UI. kirk
  13. I had the same issue with my new iPad and my x-h1. Restart your iPad (shut it down and turn it back on). Then try again. Worked for me. I found this solution in another similar thread in the forum. The mod was puzzled as to why it worked, but it worked. Who knows what the iOS is doing to adapt to the user in the background. Kirk
  14. Try: 0) Open the App Store and make sure that you do not have updates for AP or AD pending. 1) Restart your Mac. 2) Launch Publisher, Designer and Photo so they are all running. Then, in Publisher, drag an image onto the page (create a new document if you have not already), select the image and click on the Photo persona button in the upper left corner of the Publisher window. You should get the Photo tools to edit your image as if you are in Affinity Photo. The ability to dip in and out of the various personas all within a single application is next level stuff. Nice work Serif folks. Kirk
  15. I apologize - it looks like this topic has been covered, with the answer being that I need to update the App Store versions to the latest version (1.7.1) of Photo and Designer, which are available now. The Mods can delete this topic if they would like. Kirk
  16. I purchased Photo and Designer a while ago through the Apple App Store and purchased Publisher through the Affinity webstore. I just was able to download Publisher (congratulations Affinity!) and have found that Studiolink does not "see" my installs of Photo or Designer. Do the App Store and Affinity Store versions of these applications play nice through Studiolink, or will I have to purchase Photo and Designer again through Affinity to get the Studiolink functionality? Thanks, Kirk
  17. The attached images are from an admittedly extreme example, but it was actually this image set that revealed the problem. The raw converter I used has a half-res or full-res setting and one of the images was inadvertently rendered at half-res, while the others were rendered at full-res. kirk
  18. When performing a merge to HDR, the resulting merge completes successfully (or gives the impression that it has) even when the source images are not all equally sized. When the clone source panel opens for manual deghosting, etc., you can examine the resulting processing that AP has performed on each image to equalize the exposure - for the smaller images, the right and lower edges of the undersized images are repeated to the borders of the large image size, making these source images useless for cloning, etc. Also, the undersized image appears as if it has been scaled up to the full size of the other images but then only the top left quarter of the image has been properly used, with the rest of the image being the repeated streaks of pixels - either that or one of the other images has been substituted into the undersized image's slot but, again, improperly rendered. In any case, it makes me wonder how the actual merge completed successfully (was the undersized image ignored completely?). It seems like there should be some error checking going on prior to attempting the merge that would prevent this from occurring - "WARNING: Source images are not all equally sized." Or something to that effect. Kirk
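The error check suggested above is cheap to implement: compare source dimensions before any merge work begins. A hypothetical guard along these lines (the function name is mine):

```python
def check_merge_sources(images):
    """Raise before merging if the source images are not all equally
    sized. `images` is a sequence of arrays with (H, W, ...) shapes."""
    sizes = {im.shape[:2] for im in images}
    if len(sizes) > 1:
        raise ValueError(
            f"Source images are not all equally sized: {sorted(sizes)}")
```

Failing fast here would have flagged the half-res frame in the example above instead of silently producing streaked, unusable clone sources.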
  19. The user could also make selections on the Soft Proof false color map of particular colored pixels (red, for example) and then use that selection to make a mask for an adjustment layer, such as hue-sat, to isolate just the areas on the original OOG image where an adjustment might be necessary. kirk
  20. A workflow that ultimately results in a print can benefit greatly from a color managed environment and soft-proofing for print. In pretty much every pixel editor and raw conversion application that provides soft-proofing, the gamut warning overlay is simply a single color that indicates, presumably, that those indicated pixels are out of gamut. Unfortunately, the gamut warning does not indicate how far out of gamut (in terms of dE, for example) so the warning is not very useful. It would be nice if the Soft Proof adjustment layer had a gamut warning that, when enabled, completely replaced the current working image with a false color map of the image where pixels in gamut, and up to a user-defined acceptable dE threshold (like dE=1 or 2), were green; pixels within a second user-defined dE interval were yellow (dE = 2-4) and pixels outside a third user-defined dE were red (dE > 4), or something similar. Because soft-proofing in AP is an adjustment layer, the user could pull down the opacity of this false-color map and overlay it on the original image to identify the areas on the image that fall within the most out-of-gamut regions and implement local adjustments to those areas without making global changes to areas where the issue is not such a problem. Because the rendering intent can be selected in the adjustment layer drop-down, the effect of rendering intent would be more informative by examining how the most problematic areas out of gamut get mapped toward, or into, the output gamut, instead of simply relying on the current gamut warning which is a go/no-go indicator. This approach would be really helpful in isolating specific problem areas of an image so that the user could make more informed and targeted decisions about transforming and correcting an image in a large working space, such as ProPhoto, to a smaller output space for a particular printer/paper combination. 
The only utility where I have seen this feature is ColorThink, an application specifically intended to explore color profiles and compare images against profiles for purposes such as evaluating gamut and gamut mapping. Thank you for your consideration! Kirk Thibault Berwyn, PA
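The proposed false-color map is straightforward to prototype. A NumPy sketch, using the simple CIE76 (Euclidean) delta E as a stand-in for whatever metric Serif might choose, with the thresholds from the request above as defaults (function name and color encoding are mine):

```python
import numpy as np

GREEN, YELLOW, RED = (0.0, 1.0, 0.0), (1.0, 1.0, 0.0), (1.0, 0.0, 0.0)

def gamut_false_color(lab_src, lab_proofed, t1=2.0, t2=4.0):
    """False-color map of out-of-gamut severity between a working-space
    image and its soft-proofed (gamut-mapped) version, both (H, W, 3) Lab.
    dE <= t1 -> green, t1 < dE <= t2 -> yellow, dE > t2 -> red.
    Uses CIE76 delta E (Euclidean distance in Lab)."""
    de = np.linalg.norm(lab_src - lab_proofed, axis=-1)
    out = np.empty(de.shape + (3,), dtype=float)
    out[de <= t1] = GREEN
    out[(de > t1) & (de <= t2)] = YELLOW
    out[de > t2] = RED
    return out
```

As described above, lowering the opacity of such a map over the original image would localize the worst offenders, and recomputing it per rendering intent would show how each intent maps the problem areas toward the output gamut.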