
Filesize: please work on that...



Hi Serif,

I know this has been asked multiple times, and your answer is usually something about "optimization for speed", etc.

Sorry, but blowing up a file (2970 x 3688) to more than 10 times its original size (RAW!) is ridiculous...

[Screenshot: file size comparison of the RAW file and the Affinity document]

...especially since I only added 3 adjustment layers and one Live Filter (only 2 of them using a mask).

[Screenshot: the layer stack with the adjustment layers and the Live Filter]

 

Please: fix that.
Thanks.

Fritz


A RAW file only stores one color channel.

When the RAW data is developed to RGB it will (uncompressed) take up three times the amount of space, assuming equivalent bit depth.

If the results of the adjustment layers are cached for better performance, you then have 3 channels * 4 layers = 12 times the size of the RAW data, which is almost exactly what you are seeing.
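A quick back-of-the-envelope sketch of that estimate in Python (the uncompressed caches at the RAW's bit depth and the layer count are my assumptions, just to show the arithmetic):

```python
# Rough estimate only - assumes the caches are stored uncompressed at the
# same bit depth as the RAW data, with no alpha channel or metadata overhead.
raw_mb = 25.7                 # size of the NRW file on disk

channels_per_layer = 3        # R, G, B after development (the RAW stores one)
cached_layers = 4             # the "3 channels * 4 layers" assumption above

estimated_mb = raw_mb * channels_per_layer * cached_layers
print(f"~{estimated_mb:.0f} MB")   # ~308 MB, in the ballpark of the observed 313 MB
```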

Compression of the pixel layers in memory wouldn't make sense for a photo editor, and for performance reasons I can see why on-disk compression might be limited in the Affinity file format (at least for pixel layers). I'm not sure whether they do any, but considering that most "modern" RAW files are at least somewhat compressed, I would estimate that you are probably already a bit ahead.


@fde101 

I am not sure about the "one color channel" thing, but:
- Nikon says: "(NRW)...Images are saved as 12-bit RAW uncompressed data ..."
- Wikipedia says: "Many raw file formats, including (...)  NEF NRW (Nikon)  (...) are based on the TIFF file format."
- Wikipedia also has a list of lossless compression formats for raster graphics (https://en.wikipedia.org/wiki/Lossless_compression#Raster_graphics)

Conclusions:
- it is possible to reduce the filesize without loss of quality.
- The uncompressed, TIFF-based RAW file is more space-efficient than the Affinity file; even if you count every layer as a full image file: 5 x 25.7 MB = 128.5 MB << 313 MB

Regarding your calculation "3 channels * 4 layers": 
2 of these layers do not have a mask, so there is no pixel data, just a formula to manipulate color parameters.

I think Serif should offer a compressed file format option.

The few seconds it may take longer to open a compressed Affinity file are negligible compared to the hours spent editing the same file.
Also think about the time saved when transferring smaller files (e.g. backups).

kind regards
Fritz


23 minutes ago, Fritz_H said:

I am not sure about the "one color channel" thing

Most image sensors are monochrome.  There is a color filter array (CFA) in front of the sensor, most commonly in a Bayer pattern, which makes each photosite (pixel) sensitive to a single color by only allowing that color of light to reach that photosite.  Thus for each pixel in the captured RAW image, there is only one color.

Developing the RAW file involves interpolating, in some hopefully intelligent way, the color data from nearby pixels to produce the missing colors at each site. This results in three color channels instead of the original one, but only the original color data is stored in the RAW file.
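For anyone curious what that interpolation can look like, here is a deliberately simplified bilinear demosaic sketch in Python. The RGGB layout and the normalised-convolution approach are illustrative choices, not what any particular raw developer actually uses:

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(bayer):
    """Toy bilinear demosaic of an RGGB Bayer mosaic (illustration only)."""
    h, w = bayer.shape
    masks = np.zeros((h, w, 3))

    # RGGB layout: which single colour each photosite actually measured.
    masks[0::2, 0::2, 0] = 1.0   # red sites
    masks[0::2, 1::2, 1] = 1.0   # green sites, even rows
    masks[1::2, 0::2, 1] = 1.0   # green sites, odd rows
    masks[1::2, 1::2, 2] = 1.0   # blue sites

    kernel = np.array([[1.0, 2.0, 1.0],
                       [2.0, 4.0, 2.0],
                       [1.0, 2.0, 1.0]])
    rgb = np.empty((h, w, 3))
    for c in range(3):
        # Weighted average of the known samples of this colour near each site.
        num = convolve2d(bayer * masks[:, :, c], kernel, mode="same")
        den = convolve2d(masks[:, :, c], kernel, mode="same")
        rgb[:, :, c] = num / den
    return rgb

# One measured channel in, three interpolated channels out: the developed
# image holds roughly three times as much data as the mosaic it came from.
mosaic = np.random.randint(0, 4096, size=(8, 8)).astype(float)  # fake 12-bit samples
print(mosaic.nbytes, demosaic_bilinear(mosaic).nbytes)          # 512 vs 1536 bytes
```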

 

23 minutes ago, Fritz_H said:

"(NRW)...Images are saved as 12-bit RAW uncompressed data ..."

Yes, this is common.  If you develop to 8 bits per color channel, the final result will be 24 bits of color data from those 12 bits, though in many cases an alpha channel is added making it 32.  This causes some loss of detail as you are going from 12 bits of data to only 8 - either the highlights will clip or the shadows will be crushed, unless you squeeze the two ends together to get a low-contrast look, destroying some details in the middle.  Alternatively, you could develop to 16 bits per color channel, resulting in 48 bits of color data, or 64 bits if an alpha channel is present.
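One small numeric illustration of the detail loss when squeezing 12 bits into 8 (the sample values are made up, and a real development pipeline would also apply tone curves, which this ignores):

```python
import numpy as np

# Hypothetical 12-bit sensor values (0..4095).
samples_12bit = np.array([0, 1, 15, 16, 2047, 4080, 4095], dtype=np.uint16)

# Developing to 8 bits per channel: 4096 tonal levels squeezed into 256,
# so groups of 16 neighbouring tones collapse into a single value.
as_8bit = (samples_12bit >> 4).astype(np.uint8)
print(as_8bit)    # [  0   0   0   1 127 255 255] - 0, 1 and 15 are now identical

# Developing to 16 bits per channel: every original level stays distinct,
# at the cost of twice as many bytes per channel.
as_16bit = samples_12bit << 4
print(as_16bit)   # [    0    16   240   256 32752 65280 65520]
```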

 

23 minutes ago, Fritz_H said:

it is possible to reduce the filesize without loss of quality.

In most cases yes, if compression is used, but how much you can do so will vary from image to image.  There are lossless compression options for TIFF and PNG, but JPEG for example is lossy (throws away data).  The catch is that compression/decompression can take time inside the computer, and if the software is optimizing for performance, the time taken to compress/decompress the data might be a tradeoff that they opted not to make.  In some cases the time it takes to read data from the disk can actually be longer than the time it takes to decompress it so compression can sometimes speed things up, but that is not always the case - there are a lot of variables based on how it is being used.
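That trade-off is easy to measure for yourself. Here is a rough sketch using Python's built-in zlib (DEFLATE, lossless) on two fake "pixel layers" at the 2970 x 3688 size from the first post; the exact ratios and timings will vary with the machine and the image content:

```python
import time
import zlib
import numpy as np

w, h = 2970, 3688   # image dimensions from the first post

# Two fake 8-bit RGB buffers: a smooth gradient compresses very well,
# noisy data (think high-ISO photos) barely compresses at all.
smooth = (np.arange(w * h * 3, dtype=np.uint32) % 256).astype(np.uint8).tobytes()
noisy = np.random.randint(0, 256, w * h * 3, dtype=np.uint8).tobytes()

for name, data in [("smooth", smooth), ("noisy", noisy)]:
    t0 = time.perf_counter()
    packed = zlib.compress(data, 6)       # lossless DEFLATE
    t1 = time.perf_counter()
    zlib.decompress(packed)
    t2 = time.perf_counter()
    print(f"{name}: {len(data) / 1e6:.1f} MB -> {len(packed) / 1e6:.1f} MB, "
          f"compress {t1 - t0:.2f} s, decompress {t2 - t1:.2f} s")
```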

 

23 minutes ago, Fritz_H said:

Even if you count every layer as a full image file: 5 x 25.7 MB = 128.5 MB << 313 MB

If you develop the RAW data into 24-bit RGB you are doubling the size, as it is storing 3 * 8 = 24 bits per pixel instead of 12 bits.  Even with identical compression to the original RAW (assuming there was any) you would be comparing with 51.4 MB not 25.7, so that 128.5 should be 257.  Add the alpha channel and you have another 1/3 of the size, or 342.7 MB, which is more than the 313 MB you are observing.
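The same arithmetic, spelled out (using the 25.7 MB figure from the screenshot and the same per-layer assumptions as above):

```python
raw_mb = 25.7                       # NRW file, 12 bits per pixel

per_layer_rgb  = raw_mb * 24 / 12   # 8-bit RGB:  ~51.4 MB per full pixel layer
per_layer_rgba = raw_mb * 32 / 12   # 8-bit RGBA: ~68.5 MB per full pixel layer

layers = 5
print(f"{per_layer_rgb * layers:.1f} MB, {per_layer_rgba * layers:.1f} MB")
# -> 257.0 MB without alpha, 342.7 MB with alpha (vs. the observed 313 MB)
```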

 

23 minutes ago, Fritz_H said:

2 of these layers do not have a mask, so there is no pixel data, just a formula to manipulate color parameters.

 

Adjustments take time to perform.  I did make an assumption in my analysis:

2 hours ago, fde101 said:

If the results of the adjustment layers are cached for better performance

 

I am guessing that they are not just storing the formula for the adjustment layers but the actual output of the calculations so that they don't need to perform the adjustments all the way through the layer stack each time.  You could be correct that this is not the case, however, as I did forget to take your masks into account.

 

 

EDIT: here is a link I found very quickly where someone was observing the same phenomenon with Photoshop files - a PSD file four times the size of the original RAW file. This is not exclusive to Affinity documents:

https://graphicdesign.stackexchange.com/questions/46086/difference-between-raw-file-size-and-photoshop-image-size


@fde101

Thanks a lot for taking the time for your extensive answer.

At some point I think you may have mixed up two definitions of "size": size in RAM vs. size of the file on disk.
(Like a JPEG: small file, much bigger in RAM.)
-> I don't care about the RAM usage - I just want smaller files.

Regarding storing the calculation output of adjustment layers in the file: that makes no sense.
Since the final result (= the shown image) is the sum of all adjustment layers, there is no benefit from storing the result of a particular adjustment layer if there are more adjustments on top of it.

And: if the computer is too slow to re-calculate all layers after opening the file, it's also not fast enough to work with that file.

I still think an option to save in a (lossless) compressed format is useful.

kind regards
Fritz


16 hours ago, Fritz_H said:

there is no benefit from storing the result of a particular adjustment layer if there are more adjustments on top of it

There is if you change the adjustment on top. It avoids having to re-calculate the layers underneath before applying the modified one on top.
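A minimal sketch of that idea (a hypothetical layer stack, not Affinity's actual architecture): each layer caches the composite up to and including itself, so tweaking the top adjustment only invalidates the top cache.

```python
class Layer:
    """Hypothetical adjustment layer that caches the stack's result up to itself."""
    def __init__(self, adjust):
        self.adjust = adjust          # function: image -> image
        self.cached_output = None     # composite of everything up to this layer

def render(base_image, layers):
    image = base_image
    for layer in layers:
        if layer.cached_output is None:          # only recompute stale layers
            layer.cached_output = layer.adjust(image)
        image = layer.cached_output
    return image

def change_top_adjustment(layers, new_adjust):
    # Only the topmost cache is thrown away; everything underneath is reused
    # instead of being recalculated through the whole stack.
    layers[-1].adjust = new_adjust
    layers[-1].cached_output = None
```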

 

16 hours ago, Fritz_H said:

I still think an option to save in a (lossless) compressed format is useful.

I never said it wasn't.

 

16 hours ago, Fritz_H said:

I don't care about the RAM usage - I just want smaller files.

Yes, I got that.  From the clues I've seen around the forum, I suspect the Affinity applications actively use the stored native Affinity document while it is open, swapping data into memory as needed.  That is a guess, but it would make sense for supporting very large documents, and it would also explain why the document files might be larger than in other programs: they would be designed for efficiency during active operation, which has different requirements than simple efficient storage of the data.  This would also help to explain why the programs don't play nice with "cloud storage" solutions - both the program and the cloud sync software could be trying to modify the file simultaneously, which can corrupt it.
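Purely as an illustration of that guess, "working live against the file on disk" could look something like memory-mapping it (hypothetical file name; nothing here describes Affinity's real format):

```python
import mmap

# Speculative sketch only: map the document so the OS pages data in and out
# on demand instead of loading the whole file into RAM.
with open("document.afphoto", "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as doc:        # 0 = map the entire file
        tile = doc[4096:4096 + 65536]            # touch only the bytes you need
        print(len(tile))
        # Edits written through the map land straight back in the file,
        # which is exactly why a cloud-sync agent rewriting the same file
        # mid-session could corrupt it.
```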

