
Support for 16-bit compressed TIFF files


Recommended Posts

When using the Affinity format, we obviously want to keep the ability to edit our files, so there is no need to reduce colour information until the final export for production. I'm now swamped with 150 MB photos that come from 30 MB NEF files.

[Attached screenshot: Screenshot 2022-12-14 204534.png]


Sorry, but I'm a bit confused. Your topic title is "Support 16-bit compressed TIF Files" but you seem to then describe a problem you're having with NEF files?

Can you give us some more information, to help us understand?

-- Walt
Designer, Photo, and Publisher V1 and V2 at latest retail and beta releases
PC:
    Desktop:  Windows 11 Pro 23H2, 64GB memory, AMD Ryzen 9 5900 12-Core @ 3.00 GHz, NVIDIA GeForce RTX 3090 

    Laptop:  Windows 11 Pro 23H2, 32GB memory, Intel Core i7-10750H @ 2.60GHz, Intel UHD Graphics Comet Lake GT2 and NVIDIA GeForce RTX 3070 Laptop GPU.
    Laptop 2: Windows 11 Pro 24H2,  16GB memory, Snapdragon(R) X Elite - X1E80100 - Qualcomm(R) Oryon(TM) 12 Core CPU 4.01 GHz, Qualcomm(R) Adreno(TM) X1-85 GPU
iPad:  iPad Pro M1, 12.9": iPadOS 17.7, Apple Pencil 2, Magic Keyboard 
Mac:  2023 M2 MacBook Air 15", 16GB memory, macOS Sonoma 14.7


7 minutes ago, walt.farrell said:

Sorry, but I'm a bit confused. Your topic title is "Support 16-bit compressed TIF Files" but you seem to then describe a problem you're having with NEF files?

Can you give us some more information, to help us understand?

On the screenshot you can see the .afphoto size for a NEF file of around 30 MB => 16-bit compressed.
The TIF file is an export of the .afphoto to TIFF, converted to 16-bit with compression. That is also useful when I need to do extra work, as I can't yet move all my tasks from Photoshop or DxO into Affinity; the main one is dust removal, which seems a pain in Affinity and I still haven't got it to work :)


@Crolow, welcome to the forums!

Unfortunately I am also a bit lost here.

Is the issue that you don't believe the .afphoto file is retaining all of the data from the original NEF, that the file is too large/unwieldy making it difficult for you to manage storage, that you are trying to export smaller files for distribution (in which case the use of the TIFF format may not be the best choice), or something else that I am missing here completely?


1 minute ago, fde101 said:

@Crolow, welcome to the forums!

Unfortunately I am also a bit lost here.

Is the issue that you don't believe the .afphoto file is retaining all of the data from the original NEF, that the file is too large/unwieldy making it difficult for you to manage storage, that you are trying to export smaller files for distribution (in which case the use of the TIFF format may not be the best choice), or something else that I am missing here completely?

Why is it so difficult? 16-bit images with lossless compression are smaller than 16-bit images without compression (roughly 50%). Export an Affinity Photo file to TIFF and convert it into 16-bit TIFF with compression, and you will see the difference.

I didn't say Affinity's internal bitmaps were losing pixel information, but they could be saved another way. Maybe there is an alternative to TIFF, but as TIFF is straightforward I'm not sure; maybe using another compression algorithm :)
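
A quick way to see the size difference is to write the same 16-bit image with and without lossless compression. Below is a minimal sketch, assuming the third-party Python packages numpy and tifffile are installed; the file names and the synthetic ramp image are placeholders, not anything from this thread.

```python
import os
import numpy as np
import tifffile

# A synthetic 16-bit image standing in for an exported photo. A smooth ramp
# compresses far better than a real photograph, so the exact ratio here is
# not representative; the point is only to show the size difference.
height, width = 2000, 3000
ramp = np.linspace(0, 65535, width, dtype=np.uint16)
image = np.tile(ramp, (height, 1))

tifffile.imwrite("uncompressed_16bit.tif", image)                    # no compression
tifffile.imwrite("compressed_16bit.tif", image, compression="zlib")  # lossless deflate

for name in ("uncompressed_16bit.tif", "compressed_16bit.tif"):
    print(name, round(os.path.getsize(name) / 1024 / 1024, 1), "MiB")
```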


Are you talking about the .afphoto being bigger than the original .nef file? There are many reasons for that, chief of which is that a NEF file does not contain pixel data. It contains sensor data, which is much smaller.

A .afphoto also contains other information, such as an initial snapshot, which you can delete if you don't want it.

But I'm still very confused about what you're talking about.

-- Walt


3 minutes ago, walt.farrell said:

Are you talking about the .afphoto being bigger than the original .nef file? There are many reasons for that, chief of which is that a NEF file does not contain pixel data. It contains sensor data, which is much smaller.

A .afphoto also contains other information, such as an initial snapshot, which you can delete if you don't want it.

But I'm still very confused about what you're talking about.

I know that the pixel info in a NEF / raw file is different. But I'm talking about the final output as TIFF: it is 3 times smaller than the .afphoto. If there are any tricks to reduce the size of .afphoto files, I can go with that 🙂


26 minutes ago, Crolow said:

But I'm talking about the final output as TIFF: it is 3 times smaller than the .afphoto. If there are any tricks to reduce the size of .afphoto files, I can go with that 🙂

Ok, so your concern has nothing to do with compressed TIFF, but rather with how large the .afphoto files are.  There are numerous threads discussing this.  The file format is optimized for speed of working with and editing the data, not for efficient storage size.  Using compression is likely to be counter-productive depending on how the data is actually organized and worked with, as there are trade-offs involved and Serif had to make some choices about how to design them.  You might be able to free a bit of space from any one file by doing a "Save As" to save it under a different name (then deleting the original), but if you keep working with that file, it will eventually grow large again.

Also know that enabling the "Save History with Document" option in the File menu will increase the size of the files even more, since historic copies of edited parts of the images will likely need to be retained along with the current ones.

For more details I would suggest that you do a search and review some of the commentary on the numerous other threads that have been started on this subject.


8 hours ago, fde101 said:

Ok, so your concern has nothing to do with compressed TIFF, but rather with how large the .afphoto files are.  There are numerous threads discussing this.  The file format is optimized for speed of working with and editing the data, not for efficient storage size.  Using compression is likely to be counter-productive depending on how the data is actually organized and worked with, as there are trade-offs involved and Serif had to make some choices about how to design them.  You might be able to free a bit of space from any one file by doing a "Save As" to save it under a different name (then deleting the original), but if you keep working with that file, it will eventually grow large again.

Also know that enabling the "Save History with Document" option in the File menu will increase the size of the files even more, since historic copies of edited parts of the images will likely need to be retained along with the current ones.

For more details I would suggest that you do a search and review some of the commentary on the numerous other threads that have been started on this subject.

OK, I will do some experiments :) Thanks.

But I'm not sure it is counter-productive. Loading a huge image is already slow, and I'm not sure that skipping compression is a benefit if the file ends up at 200% of the original size 🙂 At least I can't finish working on an image in 10 seconds, and if I could, I could just use the original image with an LR preset or whatever :)

 


It might help us understand your problem, and perhaps provide suggestions, if you were to provide one of your NEF files and the .afphoto file you got from it.

-- Walt


How I understood it. Please correct me if I am wrong.

.afphoto files get bigger and bigger with every save.

They do this because the app does not save the complete file (overwriting the existing file) but adds the changes as a "delta" to the "end" of the already stored file.

Advantage: Saving changes is faster because only the delta needs to be written, not the whole file.

Disadvantage: The files get bigger and loading a file takes longer.

At some point, when a threshold unknown to the user is reached, APhoto automatically decides to make a complete "overwrite" of the file when save is used. In that case the file size decreases again. Let's call this "flushing".

There was a feature request to add a user controllable "flush" button to the save dialogue. This way the user could trigger the full overwrite and reduction of file size. As there is no roadmap we don't know if this will be implemented.

When you save to a new file (save as), APhoto stores a complete new file without all the deltas -> smaller files.
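
A toy sketch of the save behaviour just described (not Serif's actual implementation, and all names and sizes below are made up): an in-place "save" appends only a delta, so the file keeps growing, while "save as" writes a fresh file containing only the live data.

```python
import os

def save_in_place(path: str, delta: bytes) -> None:
    """Fast save: append only the changed bytes; the file keeps growing."""
    with open(path, "ab") as f:
        f.write(delta)

def save_as(path: str, live_data: bytes) -> None:
    """Full save: write a fresh file containing only the current data."""
    with open(path, "wb") as f:
        f.write(live_data)

live = bytes(10_000_000)                 # 10 MB of current document data
save_as("document.toy", live)            # initial save
for _ in range(5):                       # five edit/save cycles, ~1 MB changed each
    save_in_place("document.toy", bytes(1_000_000))
save_as("document_copy.toy", live)       # "save as" drops the accumulated deltas

print("document.toy:     ", os.path.getsize("document.toy"), "bytes")       # ~15 MB
print("document_copy.toy:", os.path.getsize("document_copy.toy"), "bytes")  # ~10 MB
```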

I understand the idea behind this concept of writing files, but I am unsure whether, in times of insanely fast SSDs and M.2 drives, the speed of saving a file is still a benefit in relation to the extra space needed. But the Affinity apps are from the middle of the last decade, and at that time SSDs and M.2 drives were not the norm, so implementing a fast-save concept was beneficial then.

Regarding compression:

I don't know if the apps do any compression in the .afphoto file format. If they convert e.g. a compressed 16-bit TIFF to an uncompressed format when embedding it and save it like that, it would of course add to the file size. On top of that comes the file-management overhead that any app-specific file format has.

Edited by cgidesign
used wrong word - century replaced with decade now

3 hours ago, cgidesign said:

But the Affinity apps are from the mid of last century

These apps have NOT been around since the 1950's.  They are much newer than that.  Consider that Affinity Designer was originally a Mac exclusive, for MacOS X, and the original Macintosh computer was released in 1984.  The original MacOS X was released in 2001, and Designer originally required a newer version than that.

High prices kept SSDs from becoming common until around 2004, when prices started dropping and SSDs gradually started making inroads into common systems, laptops in particular (but now desktops also).  I believe that trend had started before the Affinity apps came about, so while hard drives were still more common than SSDs at that point, the trend toward SSDs was likely something that could be observed, and I suspect Serif accounted for it in the design of their file format.

Not everything related to performance is tied to the speed of the underlying drive.  When a document is too large to fit in memory, the application either needs to rely on the underlying virtual memory resources of the operating system (which does not know what the application is doing and cannot optimize as effectively for its usage patterns as the application itself might be able to) or it needs to implement its own scheme to control memory usage to optimize performance in a way that takes into account the way the data will be used.

In addition to the public information that @cgidesign reiterated above, the file format that Serif came up with may have been optimized to let them rapidly access arbitrary sections of data when running low on memory so they can effectively "page" data from the document file itself, which would require a rather different organization of the file than would be used if it were a simple read-once, write-once format.  I haven't taken the time to observe the related I/O activity, but I suspect that may be what was done.  In this case, they would need to carefully control how the file was written to, as overwriting the entire file (or sections of it) could throw off internal information the application may have cached relating to where things are located within the file - if they need to adjust all of that information, there could be a performance cost involved that they are hoping to avoid.

I do not have inside information related to how they chose to organize the file format, so I don't know for sure that this is what they did, but it is likely as good an "outsider" guess as any (without taking a lot of time to study it and try to figure it out), and should be reasonable enough to demonstrate why the format might benefit from the larger sizes regardless of the underlying storage media.

 

3 hours ago, cgidesign said:

when a threshold unknown to the user is reached

It happens when 33% of the file's size would be reduced, per the developers:

 

3 hours ago, cgidesign said:

loading a file takes longer.

Does it though?  Just because extra data is there doesn't mean that the application needs to read the whole thing.  Ben explained in that post that the file format keeps track of how much redundant/"dead" content is in the file, so it may be that the application can simply skip over that while reading the file?  It may also be that it doesn't read an entire large file anyway if it only needs to display some portion of it - if you have a 1200-page Publisher document but are only displaying one page at a time, with a few having visible thumbnails in the Pages panel, why read all of the data for every page until you need to show it?

If you are only accessing part of the data when opening a file, skipping the unneeded "live" content and skipping the "dead" content would be about the same?
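
To illustrate that read-side point, here is a small sketch (again, not Affinity's actual format, and the catalog contents are hypothetical): if a catalog records the offset and length of every live block, the reader seeks straight to those blocks, so dead bytes sitting between them are never read at all.

```python
def read_live_blocks(path: str, catalog: dict[str, tuple[int, int]]) -> dict[str, bytes]:
    """Read only the blocks listed in the catalog, skipping everything else."""
    blocks = {}
    with open(path, "rb") as f:
        for name, (offset, length) in catalog.items():
            f.seek(offset)            # jump over any live or dead bytes before it
            blocks[name] = f.read(length)
    return blocks

# Hypothetical catalog: only these blocks are current; any superseded ("dead")
# data elsewhere in the file is simply never visited.
catalog = {"tile_0_0": (64, 4096), "tile_0_1": (131_072, 4096), "thumbnail": (262_144, 8192)}

with open("demo.toy", "wb") as f:     # dummy file large enough for the offsets above
    f.write(bytes(300_000))
print({name: len(data) for name, data in read_live_blocks("demo.toy", catalog).items()})
```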


I still don't see the point of saving all that data in the file: you need to reapply all the changes back when you load the image into memory. So it is probably useless at that point, certainly if your file ends up 3 times bigger than the original RAW file with only one layer.

In the meantime I found a bug in DxO that seems to mess up TIFF compression a bit with the 16-bit format, so I was confused when doing some tests related to that file size.


3 hours ago, Crolow said:

you need to reapply all the changes back when you load the image into memory

Not necessarily.

Consider how zip files work: each file is stored as a header followed by some compressed data.  A catalog is stored at the very end of the file with a list of the files which are stored in the zip file and where to find them.  The format was originally designed so that you could modify a file without rewriting the whole thing: if you add a few files to an existing zip file, it can just tack them on the end and append a new catalog which points to both the old and new files.

Now consider replacing a file within a zip file with one that is larger: the new one can be added at the end (since there is not enough space for it where the old version was) and the new catalog that gets written out could point to the new version of that file, so the old one is still technically occupying space in the file, but nothing is pointing to it, so it simply gets ignored.

You could easily say that the old version of the file is wasting space, and it effectively is, but by taking that approach you don't need to rewrite the data which comes after it, which saves you the time of needing to rewrite the entire zip file (which may have some very large files stored after the one that was changed) when only one of the files within it was modified.
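
This append behaviour is easy to see with Python's standard zipfile module: opening an archive in append mode adds new members and writes an updated catalog (the "central directory") at the end, without rewriting the members already stored. The file names below are placeholders.

```python
import zipfile

with zipfile.ZipFile("example.zip", "w") as zf:        # create the initial archive
    zf.writestr("big_original.bin", bytes(5_000_000))

with zipfile.ZipFile("example.zip", "a") as zf:        # append without rewriting
    zf.writestr("small_addition.txt", "added later")

with zipfile.ZipFile("example.zip") as zf:
    print(zf.namelist())   # ['big_original.bin', 'small_addition.txt']
```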

 

Now consider that you could take image data from even a single layer of an Affinity Photo document and break it up into smaller tiles.  To pick a random size, consider that it might be 128 pixels wide by 128 pixels high.  Each tile could be stored as a separate "file" inside of a zip file format (I don't think this is what Serif has actually done; I am offering a rather simplified example of one possible mechanism that might explain some of what they *might* be doing).  In this case, for each tile whose data is modified, that one tile could be rewritten at the end of the file when saving it, and a catalog replaced or updated to point to the new tile data, leaving the old one as dead space.  When the application loads a file, it could load the catalog, and when it needed a particular tile from a particular layer, the catalog would tell it where to look.  If an unmodified tile was then taking up memory when it wasn't needed for what was being done, it could be released from memory and reloaded from the file later when it was needed again, helping to manage memory more effectively when under pressure.

If the catalog is rewritten while leaving the old one intact, this approach would have the benefit of being able to adjust a pointer somewhere within the file to point to the new catalog after it has been completely written, so that if the application crashes while saving the new data, it would still be pointing to the old catalog, and you would only lose the new data - if the data in the old locations is preserved and the old catalog is still pointing to them, then if you open the file after the save process started and failed part way through, you would still at least have a consistent copy of the old version of the image, instead of a potentially corrupted file that would result in losing the entire document.

Also, if you are saving history with the document, then you need the historic information as well, so the history information might simply point to the older versions of the tiles that were changed (for example).

 

Again, I don't know how much of this Serif is actually doing in their file format, I'm just tossing out one example of how a format like this *might* be designed, to also help explain why it might not be necessary to reread all of the "dead" data when loading the file later on.
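
In that spirit, here is a purely speculative toy of such a format - emphatically not Affinity's real design, and every name and number below is invented. Changed tiles are appended, a new catalog is appended after them, and only then is the header pointer switched to the new catalog, so a crash mid-save still leaves the old catalog (and the tiles it points to) readable; superseded tiles simply remain in the file as dead space.

```python
import json
import os
import struct

MAGIC = b"TOYPHOTO"
HEADER = struct.Struct("<8sQQ")   # magic, catalog offset, catalog length

def _append_tiles_and_catalog(f, entries: dict, changed: dict) -> tuple[int, int]:
    """Append the changed tiles plus a fresh catalog; return its (offset, length)."""
    entries = dict(entries)                      # copy, so the old catalog stays valid
    for name, data in changed.items():
        entries[name] = [f.tell(), len(data)]    # where this tile version now lives
        f.write(data)
    blob = json.dumps(entries).encode()
    offset = f.tell()
    f.write(blob)
    return offset, len(blob)

def _commit(path: str, offset: int, length: int) -> None:
    """Switch the header to the new catalog only after it is fully written.
    (A real format would fsync the appended data before this point.)"""
    with open(path, "r+b") as f:
        f.write(HEADER.pack(MAGIC, offset, length))

def _load_catalog(path: str) -> dict:
    with open(path, "rb") as f:
        _, offset, length = HEADER.unpack(f.read(HEADER.size))
        f.seek(offset)
        return json.loads(f.read(length))

def create(path: str, tiles: dict) -> None:
    with open(path, "wb") as f:
        f.write(HEADER.pack(MAGIC, 0, 0))        # placeholder header
        offset, length = _append_tiles_and_catalog(f, {}, tiles)
    _commit(path, offset, length)

def save(path: str, changed: dict) -> None:
    """In-place save: only changed tiles are written; old versions become dead space."""
    entries = _load_catalog(path)
    with open(path, "r+b") as f:
        f.seek(0, os.SEEK_END)
        offset, length = _append_tiles_and_catalog(f, entries, changed)
    _commit(path, offset, length)

def load_tile(path: str, name: str) -> bytes:
    """Seek straight to one tile; dead data elsewhere in the file is never read."""
    offset, length = _load_catalog(path)[name]
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

# Demo: create a two-tile "document", overwrite one tile, and watch the file grow
# while the untouched tile is still read back from its original location.
create("photo.toy", {"tile_0_0": bytes(1_000_000), "tile_0_1": bytes(1_000_000)})
before = os.path.getsize("photo.toy")
save("photo.toy", {"tile_0_0": bytes(1_000_000)})          # re-save one edited tile
print(os.path.getsize("photo.toy") - before, "bytes of growth (old tile is now dead)")
print(len(load_tile("photo.toy", "tile_0_1")), "bytes read from the untouched tile")
```

History saved with the document could be handled the same way: the history records would just keep pointing at the older tile versions instead of treating them as dead space.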


7 hours ago, Crolow said:

I still don't see the point of saving all that data in the file: you need to reapply all the changes back when you load the image into memory. So it is probably useless at that point, certainly if your file ends up 3 times bigger than the original RAW file with only one layer.

In the meantime I found a bug in DxO that seems to mess up TIFF compression a bit with the 16-bit format, so I was confused when doing some tests related to that file size.

I have found it useful to have undo information stored with the file, particularly if I have to save something quickly ‘as is’ and come back to it later when I may change my mind. Would what you are suggesting/requesting affect that?


4 hours ago, Paul RB said:

I have found it useful to have undo information stored with the file, particularly if I have to save something quickly ‘as is’ and come back to it later when I may change my mind. Would what you are suggesting/requesting affect that?

Well, this could be achieved with a local cache. Actually, by default this option is disabled; enabling it makes it behave globally, and you lose complete control over what ends up in your files. Doing it in the save functionality would leave you the choice to reuse the history or not. For sure the undo data would not be portable that way, but should it be? You could maybe use snapshots to do that?

