
AP's equivalent to PS "smart objects"...with RAW image changes after edits in pixel persona


Recommended Posts

Hi all,

I've been trying to research Affinity Photo's equivalent behavior to smart objects in PS.

Let's say I have a couple of RAW images, in this case, DNG specifically.

I bring them in, and in the RAW persona I do a couple of adjustments, then bring them into the pixel persona.

I'm thinking of something similar to how I'd bring them into PS as smart objects, which I can double-click to go back and edit in Adobe Camera RAW... even after I've applied smart filters to them, etc.

So, I'm bringing these into AP in a similar way, doing a little composite work, and I notice on one of the images that I need to bring up the shadows. If I go back into the RAW persona and make an adjustment, will it be reflected back in the pixel persona?

 

I want to keep as MUCH of the RAW workflow as possible. I know there are limitations at some point with some adjustments... but can you do something similar to what I'm describing in AP?

 

If the example I described, with two images and the unknowns of what I'd be compositing, makes the question too complicated for simple answers, let's change it to one image coming in RAW: doing a bit of work on it, then adjusting the RAW part again and having it reflected in the pixel persona.

 

Am I being clear about what I'm attempting to do?

 

Thank you in advance,

cayenne


13 minutes ago, cayenne said:

So, I'm bringing these into AP in a similar way, doing a little composite work, and I notice on one of the images that I need to bring up the shadows. If I go back into the RAW persona and make an adjustment, will it be reflected back in the pixel persona?

You can select that image's pixel layer, and then select the Develop Persona, and you will then have access to that pixel data in the Develop Persona. But you do not have access to the original RAW data, as you did when you first brought the RAW image into the Develop Persona. Photo has transformed the RAW data into an internal format, and removed it from your document, leaving you with only the developed data.

You can still use the Develop Persona's tools with that data when you go back into the Develop Persona, but because they are not working with RAW data some will work differently.

 

-- Walt
Designer, Photo, and Publisher V1 and V2 at latest retail and beta releases
PC:
    Desktop:  Windows 11 Pro, version 23H2, 64GB memory, AMD Ryzen 9 5900 12-Core @ 3.00 GHz, NVIDIA GeForce RTX 3090 

    Laptop:  Windows 11 Pro, version 23H2, 32GB memory, Intel Core i7-10750H @ 2.60GHz, Intel UHD Graphics Comet Lake GT2 and NVIDIA GeForce RTX 3070 Laptop GPU.
iPad:  iPad Pro M1, 12.9": iPadOS 17.4.1, Apple Pencil 2, Magic Keyboard 
Mac:  2023 M2 MacBook Air 15", 16GB memory, macOS Sonoma 14.4.1


AFAIK APh doesn't preserve or track RAW image processing states, i.e. something other RAW processors offer via sidecar handling. So I doubt it would handle RAW files in that PS-smart-object-like fashion at all here. Further, I doubt it can embed RAW files the way it can embed Affinity-format files.

☛ Affinity Designer 1.10.8 ◆ Affinity Photo 1.10.8 ◆ Affinity Publisher 1.10.8 ◆ OSX El Capitan
☛ Affinity V2.3 apps ◆ MacOS Sonoma 14.2 ◆ iPad OS 17.2


17 minutes ago, walt.farrell said:

You can select that image's pixel layer, and then select the Develop Persona, and you will then have access to that pixel data in the Develop Persona. But you do not have access to the original RAW data, as you did when you first brought the RAW image into the Develop Persona. Photo has transformed the RAW data into an internal format, and removed it from your document, leaving you with only the developed data.

You can still use the Develop Persona's tools with that data when you go back into the Develop Persona, but because they are not working with RAW data some will work differently.

 

So, you're saying that if I go back to the Develop persona... I'll no longer have the wide gamut and leeway to make edits to the same extent I had with the RAW image as it comes in?

Thank you for the reply!!

 

C


17 minutes ago, cayenne said:

So, you're saying that if I go back to the Develop persona... I'll no longer have the wide gamut and leeway to make edits to the same extent I had with the RAW image as it comes in?

Thank you for the reply!!

 

C

Yes, one might infer that, but I don't know if that's true, or to what extent it's true. So I intentionally did not say that :)

If I understand correctly, the Developed image is (at a minimum) 16-bit RGB, and typical cameras produce RAW images with 14-bit data or less.

If that's true, I'm not sure why the Developed image would lose anything that was present in the original, depending on the color profile you work with. If you work in the Adobe RGB color profile, you shouldn't lose any color information, and I think you'd need a high-end camera that produced 17 bits or more before you'd really lose anything.

(I'm quite open to corrections from those with more expertise in this area than I have.)
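The bit-depth point above can be sketched numerically: every 14-bit sensor value fits in a 16-bit channel with room to spare, so the container itself loses nothing. A minimal sketch (the bit-replication scaling here is a common convention for widening integer samples, not necessarily what Photo does internally):

```python
def to_16bit(v14):
    # widen a 14-bit value (0..16383) to 16-bit range (0..65535)
    # by bit replication: shift left 2 and copy the top 2 bits down
    return (v14 << 2) | (v14 >> 12)

def back_to_14bit(v16):
    # recover the original 14-bit value
    return v16 >> 2

# every possible 14-bit value survives the round trip unchanged
assert all(back_to_14bit(to_16bit(v)) == v for v in range(1 << 14))
print("all 16384 14-bit values round-trip losslessly in a 16-bit channel")
```

The losses Walt alludes to, if any, would come from the develop-time tone mapping and color-space conversion, not from the 16-bit container.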

 

-- Walt


1 hour ago, walt.farrell said:

Yes, one might infer that, but I don't know if that's true, or to what extent it's true. So I intentionally did not say that :)

If I understand correctly, the Developed image is (at a minimum) 16-bit RGB, and typical cameras produce RAW images with 14-bit data or less.

If that's true, I'm not sure why the Developed image would lose anything that was present in the original, depending on the color profile you work with. If you work in the Adobe RGB color profile, you shouldn't lose any color information, and I think you'd need a high-end camera that produced 17 bits or more before you'd really lose anything.

(I'm quite open to corrections from those with more expertise in this area than I have.)

 

Well, these images are from a Fuji GFX100... and I have it set to the highest image quality, so it is pretty high, being a medium-format 100MP image. (On a similar workflow to this question: the original images are about 208MB each; the focus-stacked sets of images, using Helicon Focus's RAW-to-DNG workflow, result in images of about 297-332MB each.)

 

Again, THANK you very much for your input!!

If anyone else out there knows more, please chime in.

:)

 

cayenne


Maybe some interesting information about RAW.

https://affinityspotlight.com/article/raw-actually/

https://affinityspotlight.com/article/whats-new-with-raw-in-affinity-photo-17/

P.S. If maximum output quality is required when developing RAW data, then why not use 32 bits?

Affinity Store (MSI/EXE): Affinity Suite (ADe, APh, APu) 2.4.0.2301
Dell OptiPlex 7060, i5-8500 3.00 GHz, 16 GB, Intel UHD Graphics 630, Dell P2417H 1920 x 1080, Windows 11 Pro, Version 23H2, Build 22631.3155.
Dell Latitude E5570, i5-6440HQ 2.60 GHz, 8 GB, Intel HD Graphics 530, 1920 x 1080, Windows 11 Pro, Version 23H2, Build 22631.3155.
Intel NUC5PGYH, Pentium N3700 2.40 GHz, 8 GB, Intel HD Graphics, EIZO EV2456 1920 x 1200, Windows 10 Pro, Version 21H1, Build 19043.2130.


4 hours ago, Pšenda said:

P.S. If maximum output quality is required when developing RAW data, then why not use 32 bits?

 

-- Walt


9 hours ago, Pšenda said:

Maybe some interesting information about RAW.

https://affinityspotlight.com/article/raw-actually/

https://affinityspotlight.com/article/whats-new-with-raw-in-affinity-photo-17/

P.S. If maximum output quality is required when developing RAW data, then why not use 32 bits?

Thank you for your reply.

But it isn't quite so much the bit depth (which I do like; for anything but HDR, 16-bit is the usual high standard). I'm looking to keep the workflow as close to RAW as possible even with some pixel manipulations (compositing, filters, etc.). I'm looking for answers on whether AP can come as close as PS can with smart objects: after some or many pixel manipulations, if you need to do RAW-level edits to the file, can AP handle those and update the pixel manipulations to reflect them as required?

I'm dropping Adobe PS and want to figure out if I can do some rather important things in AP that I can do in PS.

 

I realize there are limitations on how far this can go, but I'm wanting to know if, and how, I can get the same level in AP as I can in PS.

Thanks!!

:)

 

cayenne


Short answer: you cannot emulate a smart object in AP with a raw file. That being said, a gamma-encoded 16-bit RGB image usually has enough dynamic range (and "gamut," if you choose a large working space like ProPhoto RGB) to get the job done. You can still use the Develop persona to tweak the gamma-encoded RGB image (a TIFF, for example) that is rendered from a raw file, if you prefer the Develop persona interface and controls. Another alternative would be to render your raw file to a 32-bit RGB file and then use OCIO adjustment layers to transform the linear 32-bit image data into something that gives you as much latitude as the raw image (i.e., working in linear or log with ACES color, etc.). Because the OCIO adjustment layer is nondestructive, you can always go back and change the transform to something else, and the linear 32-bit RGB file preserves your ability to make exposure and channel adjustments as if you were operating on the raw data. You can make these adjustments on the layer stack prior to the OCIO transform (i.e., on the linear data) and let the OCIO transform convert the linear data to a gamma-encoded or log-encoded state nondestructively, higher up in the stack, for further editing.

This workflow can be onerous if you are not familiar with moving in and out of a traditional color-managed workflow: with OCIO, you do the color management explicitly with the transforms you use, and then, when you are ready to commit the edits, you assign the appropriate output color profile. Also, you really want to keep your raw data "raw": that is, do not necessarily try to perform highlight reconstruction, etc. Again, this kludge of a workflow is not really practical, and I think you will be better off just doing a gentle conversion of the raw data to 16-bit in a reasonable RGB working color space (maybe bring your white and black points not quite to the edges of the histogram, and keep contrast low and saturation conservative during the conversion, adding all of that once the conversion is done).
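The linear-32-bit idea above can be illustrated with a toy sketch: edits such as exposure operate on the scene-linear float data, while the display/gamma encode is a swappable function applied on top, analogous to a nondestructive OCIO view transform. The simple power-law gamma and the helper names here are illustrative assumptions, not Affinity's or OCIO's actual API:

```python
def adjust_exposure(linear, stops):
    # photographic exposure: +1 stop doubles the linear light values
    return [v * (2 ** stops) for v in linear]

def encode_gamma(linear, gamma=2.2):
    # simple power-law "view transform" from linear to display encoding;
    # swapping this function out later is the nondestructive part
    return [max(0.0, v) ** (1.0 / gamma) for v in linear]

scene_linear = [0.05, 0.18, 0.50, 1.00]        # unclamped, raw-like floats
brighter = adjust_exposure(scene_linear, 1.0)  # the edit happens on linear data
display = encode_gamma(brighter)               # encode only for viewing/output
```

Because `scene_linear` is never overwritten, you can re-run the chain with a different exposure or a different encode at any time, which is the latitude Kirk describes.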

Kirk


17 hours ago, kirkt said:

Short answer: you cannot emulate a smart object in AP with a raw file. That being said, a gamma-encoded 16-bit RGB image usually has enough dynamic range (and "gamut," if you choose a large working space like ProPhoto RGB) to get the job done. You can still use the Develop persona to tweak the gamma-encoded RGB image (a TIFF, for example) that is rendered from a raw file, if you prefer the Develop persona interface and controls. Another alternative would be to render your raw file to a 32-bit RGB file and then use OCIO adjustment layers to transform the linear 32-bit image data into something that gives you as much latitude as the raw image (i.e., working in linear or log with ACES color, etc.). Because the OCIO adjustment layer is nondestructive, you can always go back and change the transform to something else, and the linear 32-bit RGB file preserves your ability to make exposure and channel adjustments as if you were operating on the raw data. You can make these adjustments on the layer stack prior to the OCIO transform (i.e., on the linear data) and let the OCIO transform convert the linear data to a gamma-encoded or log-encoded state nondestructively, higher up in the stack, for further editing.

This workflow can be onerous if you are not familiar with moving in and out of a traditional color-managed workflow: with OCIO, you do the color management explicitly with the transforms you use, and then, when you are ready to commit the edits, you assign the appropriate output color profile. Also, you really want to keep your raw data "raw": that is, do not necessarily try to perform highlight reconstruction, etc. Again, this kludge of a workflow is not really practical, and I think you will be better off just doing a gentle conversion of the raw data to 16-bit in a reasonable RGB working color space (maybe bring your white and black points not quite to the edges of the histogram, and keep contrast low and saturation conservative during the conversion, adding all of that once the conversion is done).

Kirk

Wow... definitely some advanced terms and workflows I'll need to research and try to understand.

But in simpler terms:

If I bring in a RAW image to AP.

I may do a tweak or two in Develop persona.

I come into pixel persona....

I do some work there, and decide, you know... I need to bring the midtones up more....

In PS, I could double-click the image layer and it throws me back into RAW, and I can do that; I close RAW and come back, and the image is updated in my layer and you see the change propagate.

Of course this is with limitations, etc., but in general I think you can see my question: can I do that similarly in AP?

At this point, it appears the answer is NO.

On a slight tangent from that... in PS with a smart object, if while working with it you transform it bigger or smaller and back and forth, you don't lose image quality.

With Affinity Photo, is this also a capability... or do I lose resolution if, while in the pixel layer, I'm transforming larger/smaller while trying to decide which size works best, etc.?
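To make the tangent concrete: repeated destructive resampling of a pixel layer loses detail because downscaling discards samples that upscaling cannot recover, which is exactly what a smart object avoids by re-rendering from the original each time. A toy sketch, with nearest-neighbor resampling on a 1-D row standing in for real image scaling:

```python
def resample(pixels, new_len):
    # nearest-neighbor resample of a 1-D pixel row to a new length
    old_len = len(pixels)
    return [pixels[min(old_len - 1, i * old_len // new_len)]
            for i in range(new_len)]

row = list(range(8))        # original detail: 0..7
small = resample(row, 4)    # destructive downscale: half the samples are gone
back = resample(small, 8)   # upscaling cannot restore the discarded samples

print(row)    # [0, 1, 2, 3, 4, 5, 6, 7]
print(back)   # duplicated values where unique detail used to be
```

A nondestructive transform instead keeps the original `row` untouched and recomputes the resample from it at whatever the current size is, so scaling back and forth never compounds the loss.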

 

I'm generally wanting to know how Affinity Photo deals with these issues in general, given that PS has you put things into a "smart object," while what I read about Affinity Photo is that it takes care of this behind the scenes. I'm trying to gauge the abilities and shortcomings so that I know how best to work within AP.

 

I thank everyone for their contributions!! This is how I learn... and I hope this thread can be beneficial to others looking for similar answers.

 

C


6 minutes ago, cayenne said:

I come into pixel persona....

I do some work there, and decide, you know... I need to bring the midtones up more....

In PS, I could double-click the image layer and it throws me back into RAW, and I can do that; I close RAW and come back, and the image is updated in my layer and you see the change propagate.

You can do that in Photo, but once back in the Develop Persona you are no longer working with the original RAW data, but with the pixel layer you've been manipulating and any changes you made to it.

-- Walt

