lacerto (Members · 5,779 posts)

Everything posted by lacerto

  1. If you mean that the documentation is "hidden" among the 81,960 files of the ID Plug-in SDK, so that a potential competitor gets exhausted and gives up already while extracting the roughly 3.5GB package (a task that takes about an hour), then perhaps so. But seriously, fear of competition, or of someone developing interchange support for Adobe-created formats, is not the first thing that comes to mind when visiting developer.adobe.com and seeing the sheer amount of open, free, version-targeted documentation, SDKs and tools available to anyone interested. Even with full IDML export support, true co-operation between diverse professionals already happens in the cloud, using common tools. I suppose IDML export is more of a one-off tool for providing a client with the editable document they wished to have (and many clients want one nowadays, in addition to a production PDF). If it is required, why not pick a tool that can already do it? There are several available, including ones that are not (necessarily) subscription-based (and not developed by Adobe, if that matters).
  2. If you mean a trick where Affinity Publisher is fooled into replacing images in one go by using a renamed folder, note that when you get the warning about missing images, you need to select "Yes" in the dialog. Resource Manager does not actually let you replace images, only locate the saved-but-missing ones in a different folder (which I think is what Walt does above). Replace_trick.mp4 UPDATE: So your assumption must be true: Publisher uses something beyond the mere filename to match an image.
  3. I do not think there are alternative antialiasing options (other than the one you mentioned, turning off antialiasing completely), so to simulate the PS and GIMP antialiasing method options: ...you could try to achieve something similar using e.g. sharpening and blurring live filters: EDIT: ...or using a Threshold adjustment (if no antialiasing at all is wanted, but something more usable at small sizes than what the blend option can offer):
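A Threshold adjustment simply binarizes each pixel against a cutoff, which is why it removes the gray edge pixels that antialiasing produces. A minimal sketch of the idea in Python (the 50% cutoff and the sample values are only illustrative, not taken from the apps discussed):

```python
def threshold(pixels, cutoff=128):
    """Binarize 8-bit grayscale values: everything at or above the
    cutoff becomes white (255), everything below becomes black (0)."""
    return [255 if p >= cutoff else 0 for p in pixels]

# An antialiased edge is a ramp of intermediate gray values...
edge = [0, 32, 96, 160, 224, 255]
# ...which collapses to a hard black/white transition:
print(threshold(edge))  # [0, 0, 0, 255, 255, 255]
```

This is also why the result is usable at small sizes: there are no partially covered pixels left to smear the shape.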
  4. Yes, it is the one tagged as having the sRGB2014 color profile. The transparency values of the downsampled and non-downsampled images are the same, but the one with the sRGB2014 profile has significantly and systematically brighter RGB values. I just rebuilt the file in Publisher 1.10.6.1665 and exported directly to PDF (Digital - high quality), and the issue does not happen there: ...so it is clear that this is an error introduced in version 2. BTW, I use Adobe Acrobat Pro 2020 (perpetual license) on Windows as a prepress tool, and on Mac I have Packzview (it exists also for Windows), which is free (though the license admittedly only permits professional use) and more limited, yet very useful for getting print-pertinent information from PDFs.
  5. Checked this now on macOS (Sonoma 14.1.2, native M1, Publisher 2.3.0), and it is no different there: a) Non-downsampled: b) Downsampled: So it appears that the inadvertently applied sRGB2014 would explain this, and that the issue gets automatically fixed when downsampling is applied and sRGB2.1 is used? I say mysteriously because Resource Manager states this: My guess is that this is partly an old issue related to Affinity apps not supporting indexed images: perhaps the original sketch and/or the paper background were paletted images originally, got converted to RGB images when placed, and were assigned the app's default working RGB profile, sRGB2.1. For some reason sRGB2014 gets involved on export, which I think is a later issue (somewhat similar to an issue with a renamed ISO Coated v2 CMYK profile)... I am not sure if color management within Affinity apps is getting any better, but perhaps it is a good sign that it gets different 🙂
  6. I had another look at this, and there is something odd going on with the handling of the PNG transparency of the top image that I cannot figure out. The transparency values behave erratically if the image is just passed through (e.g. exported using "PDF (Digital - high quality)"): but work fine if the image is downsampled (e.g. exported using "PDF (Digital - low quality)"): In both cases the transparent foreground PNG is handled as a 24-bit RGB image containing the RGB values plus a device gray image containing the transparencies, but the non-downsampled image has sRGB2014 as its assigned color profile, which I do not get, because the embedded image has the standard sRGB2.1 color profile, as does the externalized linked image. If I create the main image (sketch paper and the foreground image with transparency) in InDesign and export to an "interactive" PDF without resampling, I get the following: ...so basically the same as in Publisher, but with an indexed (paletted) RGB image and a device gray image carrying the transparencies, with the ICC based on standard sRGB2.1. Somehow the error appears to be related to the use of sRGB2014. I do have that profile installed on the system, and I thought this might be a Windows-only issue, but it appears that you are on macOS and experience the same issue. I need to examine this on my Mac to see if it is any different there...
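For context, the "RGB image plus device gray image" structure described above is how PDF stores soft transparency: the color channels go into one image XObject and the alpha channel into a separate grayscale soft mask (SMask). A minimal sketch of that split, using plain RGBA tuples (the sample pixels are made up):

```python
def split_rgba(pixels):
    """Split RGBA pixels into a 24-bit RGB image plus an 8-bit
    grayscale soft mask, the way a PDF writer stores transparency."""
    rgb = [(r, g, b) for r, g, b, a in pixels]
    smask = [a for r, g, b, a in pixels]
    return rgb, smask

# A fully opaque red pixel and a half-transparent white one:
rgb, smask = split_rgba([(255, 0, 0, 255), (255, 255, 255, 128)])
print(rgb)    # [(255, 0, 0), (255, 255, 255)]
print(smask)  # [255, 128]
```

Note that the color profile is attached to the RGB part only, which is why a wrong profile (sRGB2014 vs. sRGB2.1) changes the RGB values while leaving the transparency values untouched.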
  7. Ok. Since Affinity apps cannot automatically flatten transparencies without converting them to CMYK, I would probably switch the color mode of the document to RGB, move the background image from the master page behind the top image on the first page, group the two and rasterize the group to have the transparencies flattened, and then export to RGB. You could also rasterize the top image in the current CMYK color mode to convert it to the CMYK color space (= document color space) and then export directly to RGB (e.g. using the PDF - Digital preset), so that the transparency values produce the visual appearance that you see before exporting. There is something I do not understand in the source Publisher file, since if I rebuild it in Photo in the sRGB color space (with the embedded RGB page background and main image, and the text and vector object group on top), I get the expected RGB exports without merging the top and bottom images and without any image conversions. Perhaps the file was somehow corrupted by the adjustment layers with gray values applied to the top group?
  8. It is not so surprising given that much vector graphics is still created in AI, and Illustrator EPS retains much of the editability of the document (even if basically only in AI, and to some extent in certain 3rd-party apps), while also allowing pass-through similarly to PDFs (though not in Affinity apps, which always interpret EPS, convert fonts to curves, and lose spot colors and overprint attributes). Illustrator EPS is a better option for e.g. stock vector graphics providers than the complexity (e.g. mixed color spaces) and mediocre editability of the PDF format (unless native AI is preserved), or than providing multiple native file formats (which basically requires multiple source files and partial recreation of designs). EPS can still be useful for avoiding the complexities of color management, and sometimes because of its compactness (e.g. for transferring precisely placeable equations). But it is of course a legacy file format that is losing support everywhere. I suppose native AI with PDF streams would be the modern alternative, but if universal editability is hoped for, non-Adobe users will be equally disappointed and confused as with non-editable, partially editable, or incorrectly interpreted EPS files: much of the expected information will be unavailable, and much of the color information wrongly interpreted. And which .AI format, then (considering those who still use Illustrator)?
  9. You have mixed color space (RGB and CMYK) images placed in a CMYK document with the default US Web Coated target profile. All images are however relatively low-res, so it is unclear whether CMYK output is really wanted, but let us assume that it is. The PDF created is also a mixed color space file, and it retains the transparency of the topmost image, so the color values have not been resolved. How they will be resolved when the image is ripped (and the transparencies flattened) at print time depends on which transparency blending color space the flattening happens in. Affinity apps do not let the user specify this explicitly, but there are ways to resolve the problem so that you more or less get what you see and can reasonably expect. One part of the problem is that whenever you use the CMYK document color mode, everything in the document is shown as if soft-proofed to simulate the target CMYK profile, and this view is not necessarily realistic as regards the final output. One way to resolve this is to explicitly export to the CMYK color space and additionally use the "Convert image color spaces" option: This still leaves the file somewhat ambiguous, leaving the transparency blending color space in RGB mode, but the result on paper would already be pretty much what you expect based on what you see on the display: SketchTest_master_forced_cmyk.pdf The safest choice is to flatten transparencies, either by merging the involved raster images before exporting, or at export time using either PDF/X-1a or PDF/X-3 based exports, which both do the job. PDF/X-1a makes the exported file DeviceCMYK, while PDF/X-3 can optionally leave non-flattened images in the RGB color space.
SketchTest_master_pdfx1.pdf SketchTest_master_non_forced_pdfx3.pdf Without a PDF preflight tool like Adobe Acrobat Pro, IMO it is safest to go with PDF/X-1a. There are some serious flaws [within Affinity apps] with this production method whenever placed PDFs are involved, or when vector-based transparencies need to be flattened, but neither applies to your job, so forcing flattening and going DeviceCMYK is a safe choice.
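To illustrate why resolving color values before print matters: even the simplest, profile-free RGB-to-CMYK conversion is lossy and depends on the conversion path chosen. The sketch below uses the well-known naive formula only; a real rip converts through ICC profiles (such as US Web Coated), so actual values will differ:

```python
def rgb_to_cmyk(r, g, b):
    """Naive device conversion of 8-bit RGB to CMYK fractions (0..1).
    No ICC profile involved; a real rip uses the target profile."""
    rf, gf, bf = r / 255, g / 255, b / 255
    k = 1 - max(rf, gf, bf)
    if k == 1:
        return 0.0, 0.0, 0.0, 1.0  # pure black
    c = (1 - rf - k) / (1 - k)
    m = (1 - gf - k) / (1 - k)
    y = (1 - bf - k) / (1 - k)
    return c, m, y, k

print(rgb_to_cmyk(255, 0, 0))      # (0.0, 1.0, 1.0, 0.0)
print(rgb_to_cmyk(128, 128, 128))  # mid gray resolves to K only
```

An ICC-based conversion would instead put ink in all four channels for that red, and limit total ink coverage, which is exactly the kind of difference that makes unresolved transparency in a mixed color space file a gamble at rip time.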
  10. That could perhaps also be an indication of an underlying flattening process going on. Anyway, opening your recreation of the file showed that version 1 does not have the F/X bug that causes export-time rasterization of everything beyond the text layer. The resulting PDF is however still one that causes drawing / out-of-memory errors in Adobe Acrobat Pro (so that the raster layers consisting of cables and connectors are not rendered at all). As with the original version, the file can still be opened without issues in Affinity Designer and also in Illustrator CS6. Maybe it is my laptop with just 16GB RAM, or maybe it is an issue with Adobe Acrobat Pro 2020, but I would not be comfortable sending this kind of file to a printer. Btw, opening the 1.7.3 reconstruction in 2.3.0 causes the same issues, which kind of proves that it is not a question of file structure or file corruption, but an error in code / PDF handling. What actually triggers the F/X error (it is not universal, at least with simpler designs, as I tested) remains a mystery. Things learned along this thread suggest (to me) that even if effects, adjustment layers, and generally non-destructive design are great to have at design time, just rasterizing everything at export time is the safest choice -- at least with designs this large and complex. As noted, the partial rasterizations (flattening of effects and adjustments) that I did to simplify the design did not help at all in reducing the file size -- quite the contrary. A non-flattened, mixed color space, ICC-based PDF/X-4 would be a challenge even without bugs and production challenges (with RAM and rendering), or issues with huge file sizes (which exporting to PDF/X-4 should in principle help to avoid).
E.g., I noticed that when exporting the original design (with the F/X on the top text layer turned off), the rotated separate cables on top of the globe (possibly because of the Levels adjustment applied to them) caused transparency flattening in the PDF/X-4 export (but not in the simplified, merged image in your reconstruction). This is possibly universal, and would imply that adjustments and F/Xs cannot be handled selectively without flattening the whole area (down to the background layer) defined by the layer they have been applied to. One does not want to experience any such anomalies in production, even if in this case the anomaly had the beneficial result of keeping the file size reasonable. UPDATE: Afterwards I realized that the out-of-memory and drawing errors that I experienced with Adobe Acrobat Pro when opening these files are probably related to the perpetual-license Pro 2020 still being available only as a 32-bit version [at least on Windows], so it has technological limits on the maximum raster image sizes it can render.
  11. I have now tested this also in AI: I transported there the bunch of a couple of dozen images with miscellaneous effects and mixed color spaces shown above, and exported the lot to PDF/X-4 from AI, getting a technically "valid" (verified) and equally complex file -- even if "only" 55MB in size -- but one that causes drawing errors in Adobe Acrobat Pro. So I think the correct analysis is that the shadow F/X applied to the top text object does cause inadvertent flattening (rasterization) of everything below. So yes, you were right: this layer explains what happens and is the trigger of this behavior. I have noticed this kind of behavior earlier when placing Affinity documents within Affinity documents, but does this really happen universally when applying F/X effects?
  12. Yes: It is interesting that just hiding the top text layer causes this (exporting with same settings to PDF/X-4): This seems more a PDF processing issue than any kind of corruption in document structure.
  13. I do not know, and I am not sure it is actually by design. Early in development, upsampling apparently happened always (or at least more generally) when exporting to PDF. I think this may be related to transparency flattening, which happens with PDF/X-1a and PDF/X-3, since within Affinity apps that is basically always done by rasterizing at the document resolution. Having images involved in transparency rasterization makes flattening complex, so rasterizing and resampling uniformly at the document resolution is probably a reasonable choice.
  14. It is possible that there is a technical flaw, but these might also be "just" memory issues. At times Acrobat Pro fails to render these kinds of files and shows an "Out of memory" error message, and when closed and relaunched, it might render them properly. The file can also be examined with the Preflight tools even if it is not rendered. The fact that Affinity Designer opens these files without issues seems to support the assumption of a "mere" memory error, too. I think that the "additional" grayscale raster layers are the result of the transparent backgrounds in two layers (connectors and globe), and they are probably used as kinds of masks. The huge file size is then simply caused by producing multiple large images with poor compression, rather than by the complexity of the file (though the original file WAS complex). The auditing tool shows the following:
  15. You're welcome! I forgot something that is important when working with large designs: Affinity apps will upsample images to the defined export DPI (in your case 250dpi) when exporting using PDF/X-1a (and in certain cases also PDF/X-3), but not when using PDF/X-4. They also do it for images that have been cropped with the Vector Crop tool (or manually masked), in that case using the document DPI (which in your case was also 250). Rasterizing on canvas also uses the defined document DPI, so images may be upsampled (using the Bilinear algorithm) if you rasterize on canvas, which you probably would not want (since you often intentionally use low-res images in jobs like these that are viewed from a distance). In that sense my advice above about rasterizing on canvas and flattening effects may actually backfire and produce poorer results (blurred and bloated images). It much depends on the details...
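Whether an export pass would upsample a placed image is simple arithmetic: effective resolution is pixels divided by placed size in inches, compared against the export DPI. A small sketch (the numbers are only illustrative, not taken from the files discussed):

```python
def effective_ppi(px_width, placed_width_in):
    """Effective resolution of a placed image, in pixels per inch."""
    return px_width / placed_width_in

def upsampled_px(placed_width_in, export_dpi):
    """Pixel width after resampling to a fixed export DPI."""
    return round(placed_width_in * export_dpi)

# A 1000 px wide image placed at 8 inches is only 125 ppi effective;
# resampling it to a 250 dpi export doubles its pixel width
# (and roughly quadruples its uncompressed data) with no added detail.
print(effective_ppi(1000, 8))  # 125.0
print(upsampled_px(8, 250))    # 2000
```

This is why intentionally low-res large-format artwork gets both blurred (interpolated pixels) and bloated (more pixels to compress) by a fixed-DPI export.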
  16. Possibly so, but it is still interesting that everything appears to work fine within the .afdesign file itself, and that using the original file, without touching anything, it is possible to produce a non-flattened PDF/X-4 export of 25MB. Yet when the same file is simplified to the point of having all effects flattened and multiple raster layers combined so that only three are left, hiding the vectors and leaving just the top bar, as follows: ...Designer goes on and creates -- very slowly -- a 165MB PDF/X-4 file consisting of 6 raster layers, 3 of them CMYK and 3 Gray, with sizes up to 33,811x29,528px. So simplifying the file, rasterizing on canvas, removing the layers that are supposed to be problematic, etc., and saving as: nothing helps make the export smaller, because the layered images basically just grow in size, and seem to be buried deep in the file structure beyond user control... The file still verifies as a valid PDF/X-4 without any issues, but Adobe Acrobat Pro cannot render it, and 64-bit Illustrator CS6 crashes when trying to open it. But no worries: Affinity Designer can still open the produced PDF without flaws (even if pretty slowly), thus verifying that everything was correctly produced 🙂 The printer would probably not be happy, though... Based on all this, it is of course possible that the file is corrupted, but as it exports properly when opened without any changes (at 25MB, even if over twice the size of the same content exported by AI), and each time after making changes to it, even if producing hugely bigger files, and as it opens correctly in Designer even when other apps fail to render it at all, I think the core of the problem is poor processing of file elements at export: an inability to merge raster layers where possible, and an inability to compress effectively.
The standard solution offered on this forum has often been: get more RAM (I have only 16GB on my laptop) and a faster processor. That will certainly make processing these kinds of files smoother, but it does not fix the fundamental flaws in PDF production that Affinity apps still have, especially when working with large designs.
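The raw numbers above already explain why compression dominates at these dimensions. Back-of-the-envelope arithmetic for the reported layer size, assuming 8 bits per channel:

```python
def raw_size_mb(width_px, height_px, channels):
    """Uncompressed raster size in megabytes at 8 bits per channel."""
    return width_px * height_px * channels / (1024 * 1024)

# One CMYK layer (4 channels) and one grayscale mask (1 channel)
# at the 33,811 x 29,528 px size reported above:
cmyk = raw_size_mb(33811, 29528, 4)
gray = raw_size_mb(33811, 29528, 1)
print(round(cmyk))  # roughly 3800 MB uncompressed
print(round(gray))  # roughly 950 MB uncompressed
```

Three CMYK layers plus three gray masks come to roughly 14GB of raw data, so the final export size is almost entirely a function of how well (and how adaptively) the exporter compresses, not of how "simple" the layer structure looks.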
  17. I probably misunderstand something, but if you, too, get an equally large export file when you completely remove that layer, then how can the issue be related to that specific layer? I can also hide both raster layers below it, and when I export, still get about a 70MB file (and a pretty long export time). Leaving all these layers visible gives me a 25MB file. To me that implies that there is something wrong with the file in general, or with the way Affinity processes the file's elements when exporting to PDF. I have done all the tests with these files using Windows 11 Pro and the latest Designer (2.3.0). Filesize-wise, both files come out significantly smaller when exported from AI (opening the same file, exporting using the same method), which IMO basically answers the OP's question. The thing with this other file, whatever is wrong with it (or with the Affinity / PDFlib code that processes it), is more of a secondary glitch (possibly related to the use of effects; such issues are not uncommon either, but they are secondary in the sense that the file size issue can be experienced also when no effects are used). The fact that export file sizes are significantly smaller when exporting from Adobe apps has been shown in numerous posts on this forum, and it is primarily explained by the poor compression of large images within Affinity apps: files are not analyzed at export time to determine the optimum compression rate and method, but a fixed quality setting (compression rate) is applied instead. Here are the file sizes for the other file (the one with the odd error where the content gets simplified -- the text layer removed, or the underlying image layers Cables and Connectors hidden -- and the file sizes get bigger than when leaving these layers visible or not deleting them).
The bottommost file was exported from .afdesign using PDF/X-4, and then this file was opened in AI and AD and again exported using PDF/X-4 with default settings (which on the second pass means 98% quality at 300DPI from AD). AI produces a PDF less than half the size with the same settings. It seems to be a coincidence that the original and the re-export from AD have more or less equal file sizes (the exact byte counts being 25,685,239 bytes vs. 25,685,251 bytes), but the point is that the same file content comes out at 11,530KB when exported from AI CS6:
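The point about fixed-quality compression can be illustrated even with lossless flate (zlib), which PDF also uses: the achievable ratio depends entirely on image content, which is why a one-size-fits-all setting wastes space on some images while an exporter that analyzes each image can do much better. A stdlib-only sketch with synthetic data:

```python
import random
import zlib

random.seed(0)

# Synthetic "images": a flat background compresses almost to nothing,
# while noisy, photograph-like data barely compresses at all.
flat = bytes([200]) * 100_000
noisy = bytes(random.randrange(256) for _ in range(100_000))

for name, data in (("flat", flat), ("noisy", noisy)):
    packed = zlib.compress(data, level=9)
    print(f"{name}: {len(data)} -> {len(packed)} bytes")
```

The flat buffer shrinks by orders of magnitude while the noisy one stays essentially full size; picking the method and rate per image (as Adobe apps appear to do) is what makes the difference at these file sizes.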
  18. I removed the whole layer, yet the size of the exported file was 216,058KB (PDF/X-4), so it is not the layer that is corrupt; it is some odd Affinity error. It is strange that it happens when making the file simpler (removing text), but not when making no changes at all.
  19. My standard when using InDesign is the PDF (press ready) based (default) routine, which uses PDF 1.4, so it does not flatten transparencies but converts everything to CMYK. I used PDF/X-1a with this job mainly because it simplifies the export file by converting everything to CMYK, and also because the PDF/X-4s that I exported from the original file caused bad rendering and memory issues in Adobe Acrobat Pro. Yes, I mean rasterizing objects and effects as much as possible on the canvas before exporting, while keeping vectors as vectors. So: save a copy of the current file, then rasterize part by part and export (or at least preview, since the preview window shows the file size realistically, as it appears to do pretty much the same work as the actual export). Btw, I did not experience issues with the other file: I opened it without making any changes, exported it to PDF/X-4 without issues, and the file size was 25,084KB, so a bit less than what you mentioned above.
  20. It is probably mostly a question of having many large raster images (plus effects that get rasterized) combined with ineffective image compression (Affinity apps do not seem to analyze images in any way, so the compression rate is fixed and cannot vary dynamically). I tested this by opening the larger document in Designer and exporting from there using PDF/X-1a with default settings, which flattens transparencies. I then opened that file in Adobe Illustrator CS6 and Designer v2 and re-exported to PDF/X-1a. The sizes are as follows (the bottommost is the one exported from the .afdesign file in Designer v2; the other two are the PDFs exported from Designer v2 and Illustrator CS6 using the PDF/X-1a export method, after having opened Showstand B_pdfx1a_ad_fromoriginal.pdf): I had problems opening the large Designer-created files in Adobe Acrobat Pro 2020, which gave error messages related to drawing, lack of memory, etc., or simply crashed. After some retries and closing and reopening the app, Adobe Acrobat could display the files. But I suspect there might be issues with ripping if files of this size and complexity are delivered for processing. I would try to simplify the job by rasterizing objects already on the canvas.
  21. That's the point of the whole thread: how to avoid inadvertent upscaling. No one wants that.
  22. This is a bit better on macOS (at least current 2.x versions) than on Windows, but the only thing that really helps is changing to list view:
  23. I think this is basically because Affinity apps use PDFlib, and as far as I know it does not support PDF 1.3 (I think I read on this forum that support was dropped at some point). I agree that having PDF 1.3 would be useful, but there is another issue with transparency flattening within Affinity apps: it basically always happens via rasterizing, so flattening by using Boolean operations is not supported. Now that Shape Builder is supported in Designer, you could do something like this manually in the Designer persona, but it is of course pretty clunky. As for color profiles, is your production workflow such that you want embedded CMYK profiles (in files that are not passed through) causing conversions at export time / rip time? Contrary to e.g. InDesign (which by default ignores them), embedded CMYK profiles are not discarded. On the other hand, if you leave placed RGB images in RGB color mode, their ICC profiles will be embedded regardless of the "Embed ICC profiles" setting. ICC-based workflows are confusing in Affinity apps, especially because of the so-called "compatibility rules", so aiming at resolved CMYK values (optionally with images left in the RGB color space with their respective ICC profiles) would generally be the safest way to get expected results.
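For anyone wanting to check what a file actually embeds: an ICC profile is just a tagged binary blob, and whether it is an RGB, CMYK or Gray profile can be read straight from its 128-byte header (the data color space signature sits at byte offset 16 per the ICC specification). A stdlib-only sketch that parses a hand-built, fake header fragment rather than a real profile:

```python
import struct

def profile_color_space(icc_bytes):
    """Return the data color space signature ('RGB ', 'CMYK', 'GRAY', ...)
    read from bytes 16-19 of an ICC profile header."""
    sig = struct.unpack_from(">4s", icc_bytes, 16)[0]
    return sig.decode("ascii")

# A fake 128-byte header with 'CMYK' at offset 16 (real profiles
# carry many more fields; this is only enough for the demo):
header = bytearray(128)
header[16:20] = b"CMYK"
print(profile_color_space(bytes(header)))  # CMYK
```

The same four-byte check on profiles extracted from an exported PDF would tell at a glance whether the export left images ICC-tagged RGB or converted them.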
  24. No, there is not. PDF 1.3 is not supported within Affinity apps, so the PDF/X-1a:2003 that they can produce is PDF 1.4. Where do you need color profiles (embedded in the export PDF) if you want to produce a transparency-flattened DeviceCMYK PDF (which is what you would get if you had the chance to produce PDF 1.3)?