Everything posted by fde101

  1. There is if you change the adjustment on top. It avoids needing to re-calculate the layers underneath before applying the modified one on top. I never said it wasn't. Yes, I got that. From the clues I've seen around the forum, I suspect the Affinity applications actively use the stored native Affinity documents while they are open, swapping data into memory as needed. That is a guess, but it would make sense for supporting very large documents, and it would also explain why the document files might be larger than in other programs: they would be designed for efficiency during active operation, which can have different requirements than simple efficient storage of the data. This would also help to explain why the programs don't play nice with the "cloud storage" solutions: both the program and the "cloud" sync software could be trying to modify the file simultaneously, which can corrupt the files.
  2. It is unlikely that this will ever happen. INDD is an undocumented proprietary format that changes from release to release, to the point that even Adobe came up with IDML as a mechanism for carrying files between versions of InDesign. If you need those documents converted, convert them to IDML using one of the batch conversion scripts that are floating around before you ditch the Adobe software; you will then have IDML files that are compatible with a handful of other programs, soon to include Affinity Publisher.
  3. Most image sensors are monochrome. There is a color filter array (CFA) in front of the sensor, most commonly in a Bayer pattern, which makes each photosite (pixel) sensitive to a single color by only allowing that color of light to reach that photosite. Thus for each pixel in the captured RAW image, there is only one color. Developing the RAW file involves interpolating the color data from nearby pixels, in some hopefully-intelligent way, to produce the other colors at each site. This results in three color channels instead of the original one, but only the original color data is stored in the RAW file.
Yes, this is common. If you develop to 8 bits per color channel, the final result will be 24 bits of color data from those 12 bits, though in many cases an alpha channel is added, making it 32. This causes some loss of detail, as you are going from 12 bits of data to only 8: either the highlights will clip or the shadows will be crushed, unless you squeeze the two ends together to get a low-contrast look, destroying some detail in the middle. Alternatively, you could develop to 16 bits per color channel, resulting in 48 bits of color data, or 64 bits if an alpha channel is present.
In most cases yes, if compression is used, but how much it helps will vary from image to image. There are lossless compression options for TIFF and PNG, but JPEG, for example, is lossy (it throws away data). The catch is that compression/decompression takes time, and if the software is optimizing for performance, that time is a tradeoff they may have opted not to make. In some cases the time it takes to read data from the disk can actually be longer than the time it takes to decompress it, so compression can sometimes speed things up, but that is not always the case - there are a lot of variables based on how it is being used.
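The interpolation described above can be sketched with a toy bilinear demosaic (illustrative only: the RGGB layout and the `demosaic_bilinear` name are my own, and real RAW developers use far more sophisticated interpolation plus white balance, tone curves, etc.):

```python
import numpy as np

def demosaic_bilinear(mosaic):
    """Toy bilinear demosaic of an RGGB Bayer mosaic (2D array) to HxWx3 RGB."""
    h, w = mosaic.shape
    # Mark which photosites carry which color (RGGB pattern assumed).
    masks = np.zeros((h, w, 3))
    masks[0::2, 0::2, 0] = 1   # red photosites
    masks[0::2, 1::2, 1] = 1   # green photosites on red rows
    masks[1::2, 0::2, 1] = 1   # green photosites on blue rows
    masks[1::2, 1::2, 2] = 1   # blue photosites
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])

    def conv3(a):
        # 3x3 weighted sum with zero padding (plain NumPy, no SciPy needed)
        p = np.pad(a, 1)
        out = np.zeros((h, w))
        for di in range(3):
            for dj in range(3):
                out += kernel[di, dj] * p[di:di + h, dj:dj + w]
        return out

    rgb = np.zeros((h, w, 3))
    for c in range(3):
        # Normalized convolution: average only the known samples of this color.
        rgb[..., c] = conv3(mosaic * masks[..., c]) / conv3(masks[..., c])
    return rgb
```

Each output pixel's missing channels are filled in from the nearest photosites of that color - about the simplest demosaic possible, but it shows how one stored channel per pixel becomes three after development.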
If you develop the RAW data into 24-bit RGB you are doubling the size, as it is storing 3 * 8 = 24 bits per pixel instead of 12 bits. Even with compression identical to the original RAW (assuming there was any) you would be comparing with 51.4 MB, not 25.7, so that 128.5 should be 257. Add the alpha channel and you have another 1/3 of the size, or 342.7 MB, which is more than the 313 MB you are observing. Adjustments take time to perform. I did make an assumption in my analysis: I am guessing that they are not just storing the formula for the adjustment layers but the actual output of the calculations, so that they don't need to perform the adjustments all the way through the layer stack each time. You could be correct that this is not the case, however, as I did forget to take your masks into account. EDIT: here is a link I found very quickly in which someone was observing the same phenomenon with Photoshop files, where a PSD file is four times the size of the original RAW file - this is not exclusive to Affinity documents: https://graphicdesign.stackexchange.com/questions/46086/difference-between-raw-file-size-and-photoshop-image-size
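The size arithmetic above can be sanity-checked in a few lines (the 25.7 MB and 128.5 MB figures are taken from the thread; everything else is just a ratio of bits per pixel):

```python
# Ratios of developed size to RAW size, per the reasoning above.
raw_bits = 12                 # one 12-bit channel per pixel in the RAW file
rgb_bits = 3 * 8              # 8 bits per channel after developing to RGB

doubled = 25.7 * rgb_bits / raw_bits      # 24/12 = 2x the RAW size
corrected = 128.5 * rgb_bits / raw_bits   # the 128.5 figure scaled the same way
with_alpha = corrected * 4 / 3            # an 8-bit alpha channel adds another 1/3

print(round(doubled, 1))      # 51.4
print(round(corrected, 1))    # 257.0
print(round(with_alpha, 1))   # 342.7
```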
  4. A RAW file only stores one color channel. When the RAW data is developed to RGB it will (uncompressed) take up three times the amount of space, assuming equivalent bit depth. If the results of the adjustment layers are cached for better performance, you then have 3 channels * 4 layers = 12 times the size of the RAW data, which is almost exactly what you are seeing. Compression of the pixel layers in memory wouldn't make sense for a photo editor, and for performance reasons I can see why on-disk compression might be limited in the Affinity file format (at least for pixel layers) - not sure if they are doing any or not, but considering that most "modern" RAW files are at least somewhat compressed, I would estimate that you are probably already a bit ahead.
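Under the caching assumption above (which is a guess about Affinity's internals, not a documented fact), the estimate works out like this:

```python
# Estimated footprint if each adjustment layer's full RGB output is
# cached (assumed behavior, not confirmed by Serif).
channels = 3          # RGB after development vs. one RAW channel
cached_layers = 4     # layer stack depth from the example in the thread
multiplier = channels * cached_layers     # 12x the RAW data size

raw_mb = 25.7         # RAW file size from the earlier post
estimate = multiplier * raw_mb
print(multiplier, round(estimate, 1))     # 12, 308.4 - close to the 313 MB observed
```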
  5. Initial versions of these two features are present in the 1.8 beta, though the "packaging" is for images and does not include fonts yet.
  6. If Serif does choose to provide default presets for the preflight panel, I think it would be best if they stick with presets that fit well-defined standards in the print industry rather than cater to all of the oddball requirements that individual users are likely to come up with. If the presets can be imported/exported, then print shops could provide presets matching their requirements so their customers could download them and add them to Publisher if they want to. Agreed 100%.
  7. Tossing this one out there for discussion: poor contrast (luma or chroma) between overlapping vector objects, optionally accounting for various forms of colorblindness?
  8. Suggest providing a crash log to help the developers figure out where it is crashing... preferably as an attachment rather than as text pasted into the forum...
  9. What is the rationale behind this design? It obviously is not a bug if it was designed this way but it is a rather questionable design choice...
  10. They generally do not respond to feature requests, but they actually did respond to this one a few times: etc.
  11. I don't think I agree with this one at all. And do what specifically? This is the wrong type of application for these behaviors. You are asking for DAM-like software, and that doesn't exist in the Affinity lineup yet. Serif has indicated they are planning one, but it may be a while until we have any real details on what it will offer. In the meantime consider software such as Capture One, DXO PhotoLab, On1 Photo RAW, etc. Those are DAM/RAW processing applications. Affinity Photo is an editor, optimized for working on one image at a time when you need to focus in on it. Different tools entirely. If this doesn't work for you, make sure there is a thread reporting this in the bug section for your platform. It does work for me (Mac).
  12. To a point... the catch is that this would need to be set up for each file rather than something integrated into the document format itself. Maybe a "save documents with large previews" option in preferences wouldn't be too bad of a thing to offer, but I definitely wouldn't want a "full size" preview of large documents saved in every file... Meanwhile, this at least does exist.
  13. You can also use the export persona to configure a png (or other format) and set up a "live" export as you modify the native file, and keep that next to the main file if desired.
  14. If you haven't already, you might want to post about this on the Windows bug forum and not just here. You could link back to this one. It should get attention from the right people a bit faster that way.
  15. A full-size preview would also require full-size disk space and take the full-size export time whenever the document is saved, on top of the space already occupied by the file. The Affinity team has already indicated that they designed the file format to optimize the speed of file operations; this would not only contradict that purpose, but would also waste even more disk space, when there are already people complaining that the files are sometimes too large. I for one specifically do NOT want this. Larger than 500 x 500 I could possibly live with, but the full resolution of my 20+ MPix camera? No. The size of the file is not constant either, and neither is the size of the suggested png "wrapper": since a png file's size can vary even at a fixed resolution, each time the file was saved the Affinity data might need to move within the file to accommodate changes to the size of the wrapper. The files are not likely rewritten in their entirety each time the document is saved, so this is potentially a much bigger change to the file format than you seem to realize.
  16. Based on some of the hints I've seen from reading between the lines of some of the various issues and the like that have been identified related to the Affinity document file format, I don't think this would be as simple as you seem to think. The Affinity document format seems to be structured more like a database file than a traditional document structure, and that might not play nicely with a tagged format like the ones you suggest, as the files might be updated constantly while they are open, not just when you explicitly save the document.
  17. I'm sure it is a priority, just not as high a priority as some would like when compared to other things that people are also crying out for. Right now the second major release (1.8, up from the initial 1.7 release) is in beta, and they are still adding features to the betas, so in the grand scheme of things it's not as if they have let this slip for long. They won't get it all in at once, so they need to pick and choose. I think it's too soon to assume that this is particularly low on the totem pole, but I would expect that this is much easier to work around than some of the other problems people are seeing, and it may be prioritized accordingly.
  18. My vote (only pretending that I actually have one) is for the current behavior when manipulating the layers on the Layers panel to be retained, but for the image to be matched to the boundaries of the old image (adjusted by some method X for a difference in aspect ratio) when replacing one by dragging into the frame from outside the application, when pasting an image from the clipboard, or when explicitly using the "place" feature. I think that probably makes the most sense from a user perspective in most cases? Maybe have a preference to instead center the new image on the boundaries of the old one and match the placed DPI instead of the boundaries...
  19. In InDesign, the picture and the frame are most likely the same "object," as they are in QuarkXPress, so in that context the positioning of the image would be intertwined with the concept of the frame. Publisher is different in that the image and the frame are two separate objects represented as two separate layers on the Layers panel. You can drag an image layer out of its frame and the position is retained, which implies that the position of the image is not a property of the frame, but of the actual image layer. You can similarly drag an image layer into a frame from within the Layers panel. Different program, different design, different behavior. Having said all of that, it is not unreasonable to request that when replacing an image within a frame, Publisher adapt the properties of the incoming image to those of the one it is replacing within the frame (as I would expect it to do if the frame is set to scale to fit or scale to fill)... but that doesn't change the strong likelihood that the scale and position of a manually placed/sized image is a property of the image and not of the frame. They are two different layers, and each layer in the products has its own size and position; there is no reason why these would be any different.
  20. I think that depends on whether you are using a setting that automatically scales the image to fit the frame or scaling it manually. If the original image was scaled or positioned manually, that could be a different story from one that is set to scale to fit or fill the frame. If the image is manually scaled/positioned, then its scale/position belongs to the image layer, which is separate from the frame itself, so if you are replacing that layer it makes sense that the position and scale of the old layer would not carry over. It might not be the most optimal behavior, but it might not be a bug either.
  21. True. However, the file format being shared with Photo places the Affinity suite in a somewhat unique position of benefiting from having this set (for the benefit of documents that are then opened in Photo), even though it technically should make little or no difference to the operation of Publisher, except when rasterizing a layer. Agreed, as long as the IDML has such images. If importing an IDML file with no images at all, the program still needs *something* to default to. Also, depending on how images are scaled, the maximum placed or "original" DPI of any embedded images might not be the most appropriate either.