
irascible · Members · 21 posts

  1. Photo 1.6.4.104 on Windows. Here's what happens:
     • Open Photo, then open a CR2 (to get access to the Develop Assistant).
     • In the Develop Assistant, set RAW Output Format = RGB (32 bit HDR); leave everything else on "Take no action". Cancel the develop.
     • Restart Photo (to make sure the Develop Assistant change takes).
     • Open the CR2, check Profiles and set it to ROMM, then develop. Result: image converted to RGBA/16, ROMM RGB: ISO 22028-2:2013.
     • Close the file (don't save changes), open the CR2 again, check Profiles, set it to ROMM, and develop. Result: image converted to RGBA/32 (HDR), sRGB IEC 61966-2.1 (Linear).
     So not only does the RAW get converted to RGBA/16 ROMM RGB on the first pass, after that it gets stuck converting to RGBA/32 sRGB?
  2. Thanks. My research indicates that ROMM is basically ProPhoto RGB; its gamut covers almost every reflected color humans can see, with the only possible weakness being specular highlights. Oddly, when I use the Develop Persona, if I choose the ROMM profile, even with the Develop Assistant set to RGB (32 bit HDR), Photo assigns the RGBA/16 color model (and ROMM profile) to the image. Why 16 instead of 32?
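The claim that ROMM/ProPhoto comfortably contains sRGB can be checked numerically. Below is a minimal sketch using the commonly published sRGB-to-XYZ, Bradford D65-to-D50, and XYZ(D50)-to-ROMM matrix approximations (the exact ICC profile math Photo uses may differ slightly): it converts each sRGB primary into ROMM coordinates and checks that every component lands in [0, 1].

```python
# Sketch: confirm the sRGB primaries land inside the ROMM/ProPhoto gamut.
# Matrices are the commonly published approximations; the Bradford matrix
# adapts from the sRGB white point (D65) to the ROMM white point (D50).

SRGB_TO_XYZ_D65 = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]
BRADFORD_D65_TO_D50 = [
    [1.0478112, 0.0228866, -0.0501270],
    [0.0295424, 1.0099416, -0.0092345],
    [-0.0086440, 0.0150436, 0.7521316],
]
XYZ_D50_TO_ROMM = [
    [1.3460, -0.2556, -0.0511],
    [-0.5446, 1.5082, 0.0205],
    [0.0000, 0.0000, 1.2123],
]

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

for name, rgb in [("red", (1, 0, 0)), ("green", (0, 1, 0)), ("blue", (0, 0, 1))]:
    xyz_d65 = mat_vec(SRGB_TO_XYZ_D65, rgb)
    xyz_d50 = mat_vec(BRADFORD_D65_TO_D50, xyz_d65)
    romm = mat_vec(XYZ_D50_TO_ROMM, xyz_d50)
    in_gamut = all(0.0 <= c <= 1.0 for c in romm)
    print(name, [round(c, 4) for c in romm], "in ROMM gamut:", in_gamut)
```

All three primaries come out with components well inside [0, 1], which is consistent with ROMM's gamut being much larger than sRGB's.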
  3. Either, but I must take back what I wrote: "However, I notice when converting an RGB image to LAB in Photo, the photo does change slightly." I cannot replicate this. I thought this had occurred in the Photo Persona with an RGBA/8 sRGB JPG, but I confused repainting flicker with a color change. I do notice a dramatic color change when developing from RAW with the Develop Assistant configured to apply a tone curve, which I expect. However, without the tone curve, I see that even with RGB (32 bit HDR), Photo assigned the sRGB color space to the image. If I understand correctly, even with RGB (32 bit HDR), Photo converted the RAW sensor data only into colors inside the sRGB gamut. When I convert to LAB, these colors will be translated through the profile connection space to the CIELAB D50 color space without noticeable change. However, any RAW sensor data that could have been translated into colors outside the sRGB gamut, but inside the LAB gamut, will have been lost (shifted into the sRGB gamut). Correct? I notice the Develop Persona has the ability to assign a profile to the image when it is converted from RAW; for example, I could assign Adobe RGB instead of sRGB. To translate a RAW image into a LAB color model and color space without any loss of color information, do I simply need to find a profile that represents the CIELAB D50 color space in the RGB color model, have Photo develop the RAW and assign that profile in the RGB color model, and then convert the document to the LAB color model?
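The point above that in-gamut colors pass through the profile connection space to CIELAB without noticeable change can be illustrated with a small round-trip sketch. This is a simplified model, not Photo's actual pipeline: it keeps the D65 white point throughout, whereas an ICC profile connection space would chromatically adapt to D50 first, and the matrices are the usual published approximations.

```python
# Sketch: an in-gamut sRGB color survives a round trip through CIELAB
# essentially unchanged, illustrating why converting to LAB by itself
# does not clip or shift colors. D65 white is used throughout for
# simplicity (an ICC PCS would adapt to D50).

def srgb_decode(c):  # gamma-encoded [0,1] -> linear
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_encode(c):  # linear -> gamma-encoded
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

M = [[0.4124, 0.3576, 0.1805],
     [0.2126, 0.7152, 0.0722],
     [0.0193, 0.1192, 0.9505]]
M_INV = [[3.2406, -1.5372, -0.4986],
         [-0.9689, 1.8758, 0.0415],
         [0.0557, -0.2040, 1.0570]]
WHITE = (0.95047, 1.0, 1.08883)  # D65 reference white

def f(t):  # CIELAB forward nonlinearity
    return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

def f_inv(t):  # CIELAB inverse nonlinearity
    return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)

def srgb_to_lab(rgb):
    lin = [srgb_decode(c) for c in rgb]
    xyz = [sum(M[r][c] * lin[c] for c in range(3)) for r in range(3)]
    fx, fy, fz = (f(xyz[i] / WHITE[i]) for i in range(3))
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def lab_to_srgb(lab):
    L, a, b = lab
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    xyz = [f_inv(v) * w for v, w in zip((fx, fy, fz), WHITE)]
    lin = [sum(M_INV[r][c] * xyz[c] for c in range(3)) for r in range(3)]
    return tuple(srgb_encode(c) for c in lin)

rgb = (0.2, 0.5, 0.8)
back = lab_to_srgb(srgb_to_lab(rgb))
assert all(abs(x - y) < 1e-3 for x, y in zip(rgb, back))
print("round trip preserved the color within matrix rounding error")
```

The tiny residual error comes only from the 4-decimal matrix approximations, not from the LAB model itself.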
  4. I fully understand everything you wrote here on the advantages of LAB for editing except this part: "The difference in RAW -> LAB versus a RAW -> RGB -> LAB is working and having access to the whole sensor data (RAW) here versus an already converted/stripped down data image format." In my mind this shouldn't make a difference, because after we go RAW -> RGB, we can then pass through the profile connection space to get to LAB, and although the individual pixel values would be different, the original color and brightness of each pixel should be the same, and displayable by any application that knows how to render LAB back through the profile connection space to monitor RGB. However, I notice when converting an RGB image to LAB in Photo, the photo does change slightly. Perhaps this goes to the fuzziness of RGB being a color model and LAB being both a color model and a color space? Are you saying that when you go RAW -> RGB, some arbitrary RGB color space (e.g. sRGB or Adobe RGB) is going to be assigned? Since the LAB color space is larger, if you had gone RAW -> LAB, you would have colors falling into the entire LAB space from the RAW sensor data. Admittedly, some of these would be out of gamut for any monitor, and definitely outside sRGB and Adobe RGB, but you could still edit them, even if you couldn't see them, and ultimately decide how to force them into a displayable color space after editing?
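The loss described above (colors outside the intermediate RGB gamut being shifted into it during RAW development) can be demonstrated with a short sketch. The XYZ value here is illustrative, not real camera data, and the naive per-channel clip stands in for whatever gamut mapping a real converter actually applies.

```python
# Sketch: a saturated green, expressed in CIE XYZ, falls outside the
# sRGB gamut; clipping it into sRGB and converting back shows how much
# color information a RAW -> sRGB intermediate step can discard.

XYZ_TO_SRGB = [[3.2406, -1.5372, -0.4986],
               [-0.9689, 1.8758, 0.0415],
               [0.0557, -0.2040, 1.0570]]
SRGB_TO_XYZ = [[0.4124, 0.3576, 0.1805],
               [0.2126, 0.7152, 0.0722],
               [0.0193, 0.1192, 0.9505]]

def mat_vec(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

xyz = [0.15, 0.60, 0.10]                        # a very saturated green
lin = mat_vec(XYZ_TO_SRGB, xyz)
print("linear sRGB:", [round(c, 3) for c in lin])        # [-0.486, 0.984, -0.008]
# Negative R and B components mean the color is outside the sRGB gamut.

clipped = [min(max(c, 0.0), 1.0) for c in lin]  # naive per-channel gamut clip
xyz_back = mat_vec(SRGB_TO_XYZ, clipped)
print("XYZ after clip:", [round(c, 3) for c in xyz_back])  # [0.352, 0.704, 0.117]
# The clipped color is far from the original XYZ: that difference is the
# information lost by forcing the color through sRGB on the way to LAB.
```

Going RAW -> LAB directly (or through a wide intermediate space like ROMM) avoids this clip, since such colors are still representable in LAB even though no monitor can show them.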
  5. Was your issue with the recognition of strokes or the creation and tweaking of brush settings?
  6. Thanks for the example about recovering the over-exposed shots. What was it about RawTherapee that made those recoverable versus, say, doing RAW development in Affinity and then switching to the LAB space? I think I am missing a key piece of understanding about the relative advantages of a RAW -> LAB versus a RAW -> RGB -> LAB conversion, and also, perhaps separately from color space/model, why one application in the class of sophisticated RAW development applications (Photo, RawTherapee, etc.) would develop RAW better than another, unless, I suppose, one of the applications is better attuned to the RAW format of the particular camera manufacturer you use?
  7. Thanks. So you use tools independent of Photo for RAW conversion? Does that imply that Photo doesn't have RAW-to-LAB conversion, or that it is not as full-featured in RAW conversion (yet?) as, for example, RawTherapee, at least in terms of LAB?
  8. Is it possible to develop RAW directly into LAB? Any tips/cautions about doing this?
  9. Anyone else notice pen (Wacom Cintiq) response improvement between 1.6.2 and 1.6.4? Before 1.6.4, Designer would not recognize small, quick pen strokes, or would create small jagged lines. It's better now, especially in the Pixel Persona. The Vector Persona's pen response is still not as good as Mischief's, but it's a good improvement.