feuerfloh

"afphoto"-Files get HUGE


Hi Ablichter,

 

Just saying that, as my example image shows, a lossy 8-bit image can sometimes produce a nice result (IMHO) with imagination and skill; often a huge raw file gives nothing extra! ;-)

 

cheers, Paul

Ah, okay.

Aside from the fact that I believe we can't really tell which source(s) this collage was made from, if it was made from a photo at all, one certainly doesn't need a camera costing more than $500, or to shoot RAW, to ruin 'photos' this way. The same goes for most of the HDR results I've seen around for years. Seriously.


regards,

Ablichter


The original question here was about .afphoto files becoming much larger than the original image file. For example, starting with a 20MB to 30MB raw image file yields .afphoto project files ranging from 150MB to more than 400MB.

 

A moderator also mentioned on 20 November that this was a bug to be fixed in the next beta. As I mentioned earlier, it is still present in the 1.5.0.42 beta.

 

I would very much like to hear from Affinity Photo designers whether this large file size issue is still considered a bug that is scheduled for a fix.



My post was to explain that I have the same problem when I start with a RAW file, "develop" it and save it as an .afphoto file. It gets abnormally huge, but I don't have this problem when I edit a JPG file and save it as an .afphoto file.



I don't know why someone would do it, but converting that JPG to 16 bit increases its size roughly sixfold, from 4.26MB to 25MB.

 

PS: opening a JPEG and merely converting it to sRGB (even when it is already in sRGB) quadruples its size.


regards,

Ablichter


I don't see why you would convert a file to 16 bits when it's already down to 8 bits; there's not much of a gain. Once you have made all the adjustments from the RAW file, there shouldn't be much work left to do in JPG. None of my clients ever asked me whether I worked with 16- or 8-bit files anyway.


I just hit this: opened a *.CR2 file of roughly 8700 x 5800 pixels, original size 53 MB.

Converted to B&W during development, adjusted sharpness, and added one pixel layer covering about 1/20th of the image; the rest is transparent (a small fix).

The saved .afphoto file is > 700 MB.

 

Affinity 1.5.1 on Mac, macOS 10.12.2.

 

This is crazy. Now every time I want to save so I can continue later, I am wasting 0.7 GB?! I really hope you can fix this.

 

 

Apart from that - thanks for Affinity :)) . You guys rock. (had to be said).


FYI, a CR2 file is 14bpp (or 10bpp, depending on the camera). This means one pixel takes only 14 bits, which "compresses" the RGB data a bit (a gross simplification, I know).

 

When the RAW is processed, it is converted to 3x8-bit, 3x16-bit, or even 3x32-bit data:

RAW/DNG 14 bits/pixel        -> 88.1 MB

BMP RGB 3x8 bit/pixel        -> 151 MB
TIFF CMYK 4x8 bit/pixel      -> 201 MB
OPENEXR RGBA 4x16 bit/pixel  -> 403 MB

See: http://toolstud.io/photo/megapixel.php?width=8688&height=5792&compare=video&calculate=uncompressed#calculate
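The sizes quoted above follow directly from width x height x bits per pixel. A quick Python check (using decimal megabytes, as toolstud.io does):

```python
# Uncompressed size of an 8688 x 5792 image at various bit depths.
# Sizes in decimal megabytes (1 MB = 1,000,000 bytes).
W, H = 8688, 5792
pixels = W * H  # 50,320,896 pixels

formats = {
    "RAW/DNG, 14 bits/pixel":   14,
    "RGB, 3 x 8 bits/pixel":    3 * 8,
    "CMYK, 4 x 8 bits/pixel":   4 * 8,
    "RGBA, 4 x 16 bits/pixel":  4 * 16,
}

for name, bits_per_pixel in formats.items():
    mb = pixels * bits_per_pixel / 8 / 1_000_000
    print(f"{name:26s} {mb:6.1f} MB")
```

This reproduces the 88.1 / 151 / 201 / ~403 MB figures above (to rounding).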


"I'm a lumberjack and I'm OK, I sleep all night, and I work all day..."

Amateur camera user, Adobe avoider. Still missing: HLS improvements, Raw development with sidecars, Gradient list in gradient layer/tool. Do want: Liquefy as Layer.



 

To put this into perspective, let's first remember that the .afphoto file isn't an image file (or an image format). It's a project file, that contains everything you do in a Photo editing session for a specific image file. Its whole purpose is to allow you to save your editing work when you exit Photo and return to where you left off when you restart it. It contains your edits in a non-destructive form, so you can go back in time and change your mind about individual edits, etc. That potentially adds up to a lot of information to save.

 

Let's revisit some numbers from your specific example.

Your image is approx. 8,700 x 5,800 pixels. This gives us a total of 50,460,000 pixels (about 50 megapixels).

 

When you develop this in Photo, you automatically get a background pixel layer. This layer will be 16 bits/channel (unless you chose the 32 bits/channel HDR option in the develop assistant). 16 bits is of course 2 bytes. Your pixel layer is RGB, so there are 3 channels (Red, Green, Blue). Therefore, your pixel layer will be approx. 50 (megapixels) x 3 (channels) x 2 (bytes) = 300 megabytes.

 

You added a pixel layer. Not knowing the inner workings of Photo (or the .afphoto format), I'm guessing this pixel layer is also 300 megabytes, by the same logic as above. In fact, it's probably more, as the "transparent" pixels you refer to are likely still there in the pixel layer, just marked as transparent in some way, and that marking also takes up some memory. After all, you can always come back and change this layer to broaden or narrow the transparent selection, so the pixels still need to be there in the pixel layer for you to be able to do that. So now we have 600+ megabytes.

 

Now we add in the information recorded about the other non-destructive edits you've made, like sharpening (recorded so you can later manipulate them again, view them in history, etc.). Some more megabytes. Finally, it's possible that Photo also records some information from your original raw file and/or the Develop process (though that's just a wild guess). If so, still more megabytes.

 

Note that some of the calculations above are estimates, and there are some assumptions about how Photo stores info in your .afphoto project file. Nonetheless, it's probably a fairly reasonable guess as to how in your case you could end up with a 700 megabyte project file. Not sure there's much that can be done to get around this if you want to maintain project files. ;)
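The arithmetic in the estimate above can be checked in a few lines of Python. This is only a sketch: uncompressed 16-bit RGB storage per layer is my assumption about the .afphoto format, not a documented fact.

```python
# Back-of-the-envelope check: one uncompressed 16-bit RGB pixel layer
# for an ~8700 x 5800 pixel image, then two such layers.
width, height = 8700, 5800
channels = 3           # R, G, B
bytes_per_channel = 2  # 16 bits/channel

layer_bytes = width * height * channels * bytes_per_channel
print(f"one pixel layer:  {layer_bytes / 1e6:.0f} MB")      # ~303 MB
print(f"two pixel layers: {2 * layer_bytes / 1e6:.0f} MB")  # ~606 MB
```

Add edit history and any retained develop data on top of that, and a 700 MB project file is plausible.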


Len

--------------------

Over the hill, and enjoying the glide.


I posted an item about this in the customer release forum at the link below:

 

https://forum.affinity.serif.com/index.php?/topic/31890-ap-afphoto-file-size-large/

 

Since the conversation seems to be continuing here, I'll add some thoughts.

 

I certainly realize the .afphoto file is a project file. So is the .XMP file Adobe Lightroom generates. The .XMP file allows completely nondestructive editing, reversibility of edits, and snapshots. But the two apps have rather different design approaches. The XMP file contains just instructions, references the original image file (which is never touched), and is usually smaller than 10KB, almost always smaller than 100KB. As Len pointed out, the .afphoto file contains a version of the original image file and is much larger.

 

So the image overhead in AP can be 5 to 10 times the size of the original file whereas in Lightroom the overhead is usually a small fraction of the size of the original image.

 

I very much want to find a way to make AP my standard editor, perhaps complemented by something like Photo Mechanic for cataloging, keywords, etc. But starting with a 30MB D7200 raw file and adding an .afphoto project file ranging from 150MB to 300MB (actual numbers I've seen) is simply not practical.

 

I come back to my original question: Is it possible to dramatically reduce the size of the .afphoto project file? Perhaps by "re-using" the original image file as does Lightroom? If so I would very much like to know a timeframe. If not, I'd very much like to know that as well since most regrettably it means I will only be able to make limited use of AP and will have to continue using Adobe Lightroom.

 

Thanks,

Pete



Hi Pete, I think you've rather answered your own question. Photo is a pixel editor, while LR is a parametric editor (as are most raw converters, like DxO Optics Pro, which I use).

 

The advantage of parametric editing, as you pointed out, is that the software only stores editing instructions. The disadvantage, to many, is that you have no control over the sequence in which those instructions are executed in order to render an output image. The instruction processing pipeline is baked into the software. The sequence in which you actually apply your edits is ignored. Plus, you simply can't achieve the same fine level of control you can with a pixel editor.

 

If the sequence of edits is important to you, or you want to change them (e.g. repositioning layers), it's hard to see how a parametric editor could enable you to do that. That's when you need a layer based pixel editor.

 

It boils down to what kind of editing you need to do, and how much control you need, or are prepared to sacrifice.


Len

--------------------

Over the hill, and enjoying the glide.


Hey there,

 

I am also using Lightroom. I think raw files are a great thing if you want to recover highlights or shadows, and the parametric approach with its small size overhead is great too. But I use LR6 instead of CC to avoid monthly fees. The only things I use Affinity Photo for are, as a last step, inpainting (because it is not possible in LR) and maybe some adjustment layers / creative work (text, combining photos, ...). The inpainting algorithm in Affinity Photo is really great. I compared it with the one in PS Elements, and the PS Elements algorithm really sucks.

 

For my part, I decided not to import RAW files into Photo, because it was slow the last time I tried, and the Lightroom RAW development process has had much more time to mature. So my workflow is:

 

1) Import RAW photos in LR into one catalog with one root folder; don't think twice about your file structure, because you have your import presets.

2) Work as long as possible in LR; create your virtual copies, make your adjustments.

3) If some functionality is missing at the end of the process, generate a 16-bit TIFF (Edit In > Affinity Photo).

4) The file auto-opens in Photo; work, then save in Photo => changes to the TIFF are visible in LR automatically. Don't touch the TIFF in LR anymore, except to view or export it.

 

What I did not know, and was happy to see, is that the TIFF preserves the layer information from Affinity Photo. So the TIFF can be reopened next time and the layers are still there.

 

The only problem is when the underlying RAW/LR parameters change; then there is no way I know of to automatically synchronize the derived TIFFs. This is a market gap for a Lightroom plugin or the like. It would be a really cool feature if the Affinity Photo history could be exported as a file and reapplied inside another TIFF, without replacing the base data in the background layer.

 

I think Affinity Photo will never be, and is not intended to be, a replacement for LR, in the same way that PS will never replace LR. For me, LR6 + AFP is the best combination you can get for the price of 150€.


Hi Len,

 

interesting thoughts. 

 


 

I always thought that a requirement for "real" non-destructive work (i.e. not just saving a series of snapshots) is the parametric approach. So, as you've comprehensively laid out, every single step (i.e. parameter change) gets logged and processed on the fly. In my view, having a fixed order for the instruction sequence is not an inherent "feature" of that approach. AFAIR, in LR the fixed sequence is the result of optimizations with respect to quality and performance, and hence is not exposed to the user simply by a product-management decision.

 

As for AP, and leaving all destructive procedures aside, aren't all adjustment layers in the Photo Persona nothing but parametric containers, which can be freely re-sequenced?

In my over-simplified view, the crucial entity is the base layer on which all those Photo Persona adjustments operate. Specifically, the question is why one needs to explicitly hit the "Develop" button to continue working in the Photo Persona. Of course, all signs indicate that this step indeed creates the pixel base layer for further processing, which then gets co-stored in the .afphoto file.

 

But what if, in theory, all settings in the Develop Persona were changed into (invisible?) parametric "adjustment" layers building up their stack on the referenced raw file? The base pixel layer would be created on the fly, in memory, each time a .afphoto file is generated and re-opened. As in LR, the performance-critical sequence of those development instructions could be hardwired as well. This way, the files could become smaller.

 

I know reality is tremendously more complicated (performance!), and lots of internal dependencies unknown to me are likely to conflict with this in practice. But my point is that AP already has all the major building blocks in use.
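The storage trade-off behind this idea can be sketched in a few lines. Everything here is hypothetical illustration: the class names, fields, and size formulas are mine, not how Affinity Photo is actually implemented.

```python
# Illustration only: the storage difference between baking a rasterized
# base layer into the project file and storing parametric develop
# instructions that reference the untouched raw file. All names here are
# hypothetical; this is NOT how Affinity Photo actually works.
from dataclasses import dataclass, field

@dataclass
class ParametricProject:
    """Stores only a raw-file reference plus develop instructions."""
    raw_path: str
    develop_params: dict = field(default_factory=dict)

    def size_bytes(self) -> int:
        # A few bytes per parameter; the raw file itself is never copied.
        return len(self.raw_path) + 32 * len(self.develop_params)

@dataclass
class RasterizedProject:
    """Bakes a full 16-bit RGB base layer into the project file."""
    width: int
    height: int

    def size_bytes(self) -> int:
        return self.width * self.height * 3 * 2  # 3 channels x 2 bytes each

p = ParametricProject("IMG_0001.CR2", {"exposure": 0.3, "white_balance": 5200})
r = RasterizedProject(8700, 5800)
print(p.size_bytes())  # tens of bytes: instructions only
print(r.size_bytes())  # ~300 million bytes: the full base layer
```

The parametric project stays tiny at the cost of re-rendering the base layer from the raw file on every open, which is exactly the performance trade-off discussed above.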

 

 

affinian


I think that if you do not have a limited set of non-destructive commands, and the user can change the pipeline execution order or influence its length (layers), it is really hard to realize the parametric approach, because you cannot assume or optimize anything, and performance will collapse when the pipeline gets too long. I think I have read somewhere that people tried exactly that and failed.

 

Here is a nice response to a user who asked the Adobe guys why there is no content-aware fill (inpainting) in Lightroom:

 

"No, Lightroom does not have content-aware fill. I think that everyone would like to see that, but it's not that easy. The biggest problem is that any changes to the image will affect content-aware filled areas, so it would require a lot of recalculations that will slow down Lightroom. Photoshop does not have content-aware fill for smart objects either, possibly for the same reason."

