Everything posted by fde101

  1. This thread appears to be yet another duplicate of the one here:
  2. You need to make sure that both nodes are part of the same object (use one of the boolean ops to combine the objects if needed) then connect them using the pen tool.
  3. I get that you don't like 1-bit being converted to grayscale, but I was referring specifically to the "300dpi" part of it. I would have expected the 1-bit 1200dpi image layer to be converted to an 8-bit grayscale pixel layer at the dpi which is set for the document. I believe the document dpi setting would also be used for rendering the display in Photo, which would cause the 1200dpi image to "look like" a 300dpi one simply because of the way the display is rendered, even though it would still be stored as a 1200dpi image until rasterized to a pixel layer. I'll need to play with this when I get some time later on.
  4. Even if they did it would only be an approximation. High-end printing presses sometimes have color calibration setups that run in real-time against the product as it is being printed to compensate for variances in the ink that might take place over time as the printing takes place. If the standardization of the ink could be trusted at the VERY high-end of this market... why would anyone bother with that? (And yes, this does mean that the color values are adjusted from those in the provided document during printing, when the printer may not know the difference between what started as a swatch, or a photo, or whatever...)
  5. Not in the Affinity products. A swatch is no different from the colors in a placed image which is using the same color profile as the document itself. That makes it possible to match graphics objects you create within the document to the colors of the placed image, for greater consistency.
  6. If you ignore the difference in color profiles and translate the (78, 69, 66, 82) to RGB, it translates to #0A0E10 - that works out as roughly 13% gray if I did my math correctly. The fact that they look the same on the display tells me the numbers are being interpreted through different profiles. If you changed the original document to use the values from the second one and put that side by side with a screenshot of the color as it is now the difference would likely be more apparent.
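The profile-ignoring translation described above can be sketched with the naive (profile-free) CMYK-to-RGB formula; real conversions go through ICC profiles and will differ, but this reproduces the #0A0E10 figure:

```python
# Naive CMYK -> RGB conversion that ignores color profiles entirely.
# Real conversions apply ICC profiles and rendering intents, so actual
# results in a color-managed application will differ.
def cmyk_to_rgb(c, m, y, k):
    c, m, y, k = (v / 100 for v in (c, m, y, k))
    r = round(255 * (1 - c) * (1 - k))
    g = round(255 * (1 - m) * (1 - k))
    b = round(255 * (1 - y) * (1 - k))
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

print(cmyk_to_rgb(78, 69, 66, 82))  # #0A0E10
```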
  7. By the way, this is hardly the first time this has been discussed:
  8. That would be great if everyone followed the standard. I would imagine most high-end commercial printers do, but there are plenty of printers which do not fit that description, and not all of them follow the standard. If you mean the two blacks on the original screenshot, they look practically identical on the display that I am using. I would imagine there would be some difference when printed if the numbers were applied using the same color profile to the same printer (obviously), but with two different profiles in use, they are being appropriately interpreted through the different profiles and mapping to what appears to be the same display color. Do they actually look different on your display?
  9. They stay as close as possible to the same color instead, which is not the same thing.
  10. You are assuming that cyan ink is always the same cyan ink and yellow ink is always the same yellow ink. They are not. That is why profiles exist - to compensate for the differences in ink between printer ink manufacturers, the differences in colors produced by color monitors, etc. If you send the same color values to two different printers and they are using inks that do not match each other, even if they are calibrated to put the same amount of each ink on the page, the results will not look the same because the inks themselves are different. Producing different combinations of the ink (or light from a monitor) to get a matching final appearance is the entire reason the profiles exist. One company produces something like seven different black fountain pen inks and they all look different when used next to each other. They are all black, but they are different blacks. Same thing with printer ink. That said, it might be helpful to have a "pure black" option as a swatch that could be applied to prevent a specific black from being translated when the color space changes... kind of like a spot color except that it always becomes a process color upon print/export but maps directly to the appropriate black for the printer.
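The "same numbers, different inks" point above can be illustrated with a toy subtractive-mix model; the ink-strength figures are made up for illustration, and real ICC profiles model ink behavior far more precisely:

```python
# Toy model: identical CMY percentages sent to two printers whose inks
# absorb light differently produce different colors.  The ink strengths
# here are hypothetical; real profiles characterize inks spectrally.
def print_color(c, m, y, ink_strength):
    """Naive subtractive mix: each ink removes a fraction of one RGB channel."""
    r = 255 * (1 - c * ink_strength["c"])
    g = 255 * (1 - m * ink_strength["m"])
    b = 255 * (1 - y * ink_strength["y"])
    return tuple(round(v) for v in (r, g, b))

printer_a = {"c": 1.00, "m": 1.00, "y": 1.00}  # hypothetical ink set A
printer_b = {"c": 0.92, "m": 1.00, "y": 0.85}  # hypothetical ink set B

same_numbers = (0.5, 0.2, 0.1)  # identical CMY values sent to both
print(print_color(*same_numbers, printer_a))
print(print_color(*same_numbers, printer_b))
```

A profile's job, in this toy framing, would be to send *different* numbers to each printer so the two outputs match.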
  11. Then you need to make sure you always use the same profile. The numbers are interpreted relative to the profile, so if the profile is different, the numbers will not mean the same thing. To keep the color consistent when moving something to a document with a different profile, the numbers must change. This is very wrong. To ensure accurate colors on a profiled display, a color profile needs to be assigned to each device (your display in this case, typically by the OS). In order to know what to display, the colors need to already match a profile of some kind to know how to translate them for the display. This is determined by the document's profile, so that definitely does matter during the design process. The numbers you use are meaningless without a profile to interpret them against, so the standard for that interpretation needs to come from somewhere. There are many shades of black. Which one does "100k" represent? That will depend on the brand and formulation of the ink for example. To say "exactly 100k" black is meaningless without knowing the standard that this is being measured against - 100% of what black? Note that if the document profile does not match the one used for exporting or printing your document it will export/print with the same change in the colors so that they match "subjectively" between the document and the printout.
  12. This sounds quite like the "layer tags" feature I suggested a while back... I guess I wasn't the first to have this particular good idea. And why not on vector objects while we are at it?
  13. I am skeptical that this will make its way into Photo; it is more likely to be part of the future DAM solution which is said to be in the works (just like Photoshop is not Bridge or Lightroom).
  14. Wouldn't this be controlled by the document resolution setting? I would expect the image layer to rasterize to a pixel layer at the resolution specified for the document. If that is defaulting to 300 dpi, then you would get a 300 dpi pixel layer.
  15. Just to verify, are both documents configured with the same color space and profile? If they have different color profiles set up then the conversion makes sense as the colors would need to be adapted to the new profile. Otherwise, I agree that should not be happening.
  16. Hmm... if a page contains only a plain white rectangle, the same color as the paper, then is it still empty? The preflight panel in Publisher is strictly pre-export at this time and there is no indication that this is likely to change... can't say for sure, but I would not expect this. However, some of these checks might be relevant once PDF "pass-through" is supported, to validate the PDFs that are embedded within the document? I suspect "font cannot be embedded" would be a more generically appropriate check, in case the permissions in a font that was used forbid embedding it?
  17. There is if you change the adjustment on top. It avoids needing to re-calculate the ones underneath before applying the modified one on top. I never said it wasn't. Yes, I got that. From the clues I've seen around the forum, I suspect the Affinity applications are actively using the stored native Affinity documents while they are open, swapping data into memory as needed. That is a guess, but it would make sense when supporting very large documents, as well as explaining why the document files might be larger than in other programs, because they would be designed for efficiency during active operation, which can have different requirements than simple efficient storage of the data. This would also help to explain why the programs don't play nice with the "cloud storage" solutions, because both the program and the "cloud" sync software could be trying to modify the file simultaneously, which can corrupt the files.
  18. It is unlikely that this will ever happen. INDD is an undocumented proprietary format that changes from release to release to the point that even Adobe came up with IDML as a mechanism for carrying files between versions of InDesign. If you need to have those documents converted, convert them to IDML using one of the batch conversion scripts that are floating around before you ditch the Adobe software, then you will have the IDML files that are compatible with a handful of other programs, soon to include Affinity Publisher.
  19. Most image sensors are monochrome. There is a color filter array (CFA) in front of the sensor, most commonly in a Bayer pattern, which makes each photosite (pixel) sensitive to a single color by only allowing that color of light to reach that photosite. Thus for each pixel in the captured RAW image, there is only one color. Developing the RAW file involves processing this to interpolate in some hopefully-intelligent way the color data from nearby pixels to produce the other colors at each site. This results in three color channels instead of the original one, but only the original color data is stored in the RAW file. Yes, this is common. If you develop to 8 bits per color channel, the final result will be 24 bits of color data from those 12 bits, though in many cases an alpha channel is added making it 32. This causes some loss of detail as you are going from 12 bits of data to only 8 - either the highlights will clip or the shadows will be crushed, unless you squeeze the two ends together to get a low-contrast look, destroying some details in the middle. Alternatively, you could develop to 16 bits per color channel, resulting in 48 bits of color data, or 64 bits if an alpha channel is present. In most cases yes, if compression is used, but how much you can do so will vary from image to image. There are lossless compression options for TIFF and PNG, but JPEG for example is lossy (throws away data). The catch is that compression/decompression can take time inside the computer, and if the software is optimizing for performance, the time taken to compress/decompress the data might be a tradeoff that they opted not to make. In some cases the time it takes to read data from the disk can actually be longer than the time it takes to decompress it so compression can sometimes speed things up, but that is not always the case - there are a lot of variables based on how it is being used. 
If you develop the RAW data into 24-bit RGB you are doubling the size, as it is storing 3 * 8 = 24 bits per pixel instead of 12 bits. Even with identical compression to the original RAW (assuming there was any) you would be comparing with 51.4 MB not 25.7, so that 128.5 should be 257. Add the alpha channel and you have another 1/3 of the size, or 342.7 MB, which is more than the 313 MB you are observing. Adjustments take time to perform. I did make an assumption in my analysis: I am guessing that they are not just storing the formula for the adjustment layers but the actual output of the calculations so that they don't need to perform the adjustments all the way through the layer stack each time. You could be correct that this is not the case, however, as I did forget to take your masks into account. EDIT: here is a link I found very quickly in which someone was observing the same phenomenon with Photoshop files, in which a PSD file is four times the size of the original RAW file - this is not exclusive to Affinity documents: https://graphicdesign.stackexchange.com/questions/46086/difference-between-raw-file-size-and-photoshop-image-size
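The 12-bit to 8-bit reduction described above can be sketched as a plain truncation (a minimal illustration only; real RAW development applies tone curves and white balance rather than a bit shift):

```python
# Minimal illustration: truncating 12-bit sensor values (0-4095) down to
# 8 bits (0-255) merges every 16 adjacent input levels into one output
# level -- the loss of tonal detail described above.  Real RAW development
# applies tone curves instead of a plain bit shift.
def to_8bit(value_12bit):
    return value_12bit >> 4  # drop the 4 least significant bits

# Sixteen distinct 12-bit values collapse to a single 8-bit value:
levels = {to_8bit(v) for v in range(2048, 2064)}
print(levels)  # {128}
```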
  20. A RAW file only stores one color channel. When the RAW data is developed to RGB it will (uncompressed) take up three times the amount of space, assuming equivalent bit depth. If the results of the adjustment layers are cached for better performance, you then have 3 channels * 4 layers = 12 times the size of the RAW data, which is almost exactly what you are seeing. Compression of the pixel layers in memory wouldn't make sense for a photo editor, and for performance reasons I can see why on-disk compression might be limited in the Affinity file format (at least for pixel layers) - not sure if they are doing any or not, but considering that most "modern" RAW files are at least somewhat compressed, I would estimate that you are probably already a bit ahead.
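The 12x estimate above works out as a quick back-of-envelope calculation (the 25.7 MB RAW size is the figure from the thread):

```python
# Back-of-envelope check of the "3 channels * 4 layers = 12x" estimate.
# raw_mb is the RAW file size quoted in the thread (one channel, 12 bpp).
raw_mb = 25.7
rgb_mb = raw_mb * 3      # three channels at the same bit depth
cached_mb = rgb_mb * 4   # cached output for 4 adjustment layers
print(round(cached_mb, 1))  # 308.4 MB, close to the observed 313 MB
```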
  21. Initial versions of these two features are present in the 1.8 beta, though the "packaging" is for images and does not include fonts yet.
  22. If Serif does choose to provide built-in presets for the preflight panel, I think it would be best if they would stick with presets that fit well-defined standards in the print industry rather than cater to all of the oddball requirements that individual users are likely to come up with. If the presets can be imported/exported, then print shops could provide presets matching their requirements so their users could download them and add them to Publisher if they want to. Agreed 100%.
  23. Tossing this one out there for discussion: poor contrast (luma or chroma) between overlapping vector objects, optionally accounting for various forms of colorblindness?
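A luma-contrast check along these lines could start from the WCAG relative-luminance formula; here is a sketch (assuming sRGB values and ignoring overlap geometry and colorblindness simulation, which a real preflight check would also need):

```python
# Sketch of a contrast check between two colors, using the WCAG 2.x
# relative luminance and contrast ratio definitions.  A real preflight
# rule would also consider which objects actually overlap.
def relative_luminance(rgb):
    def lin(c):
        c /= 255
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    hi, lo = sorted((relative_luminance(color_a),
                     relative_luminance(color_b)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)  # ranges from 1:1 up to 21:1

print(round(contrast_ratio((255, 255, 255), (0, 0, 0)), 1))  # 21.0
```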
  24. Suggest providing a crash log to help the developers figure out where it is crashing... preferably as an attachment rather than as text pasted into the forum...