Posts posted by lacerto

  1. Within Affinity apps, if you let a placed PDF pass through (the default), the color mode of the source file is retained (which is odd, and something you would not expect if you have experience with this functionality in other software). If you cannot edit the source, you basically have two options for getting these images in grayscale:

    a) Change the placing mode from "Passthrough" to "Interpret" and then export using e.g. PDF (for press), using the document color space (Gray) and the document color profile (D50).

    [attached screenshots]

    (If you additionally have RGB raster images, check "Convert image color spaces".) The shortcoming is that when you let the Affinity app interpret the image (basically the same as opening the PDF in an Affinity app), embedded fonts may get replaced if they are not installed, and with complex drawings miscellaneous rendering errors may also occur [some print settings, like the overprint attribute, will be lost too]. In CMYK images, K100 text and line elements would typically get converted to dark gray (via RGB interpretation of grays), but in RGB images, RGB blacks and grays (like the text and lines in the graph above) would be interpreted correctly as gray values. Note, too, that the way the colors are converted (e.g., the blues in the graph) depends on the target color profile. It may well be that you do not have alternative gray color profiles installed, in which case you would typically get a Gamma 2.2 interpretation of colors (which normally works fine), but to have more control, you would ideally open the source PDF and define the exact gray values.

    b) Use a Black and White adjustment layer on the images that appear colored. This, however, will rasterize the affected images on export.
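The K100-to-dark-gray drift mentioned in a) can be sketched numerically. This is a minimal, non-ICC illustration assuming a plain gamma 2.2 gray target; the function and the sample RGB triplet are hypothetical, not Affinity's actual conversion:

```python
# Illustration only: a plain gamma-2.2 gray conversion, NOT Affinity's
# actual ICC pipeline. Shows why K100 that has been round-tripped
# through RGB ends up dark gray instead of pure black.

def rgb_to_gray_gamma22(r, g, b):
    """Convert 8-bit RGB to one 8-bit gray value: linearize with
    gamma 2.2, weight by Rec.709 luminance, re-encode with gamma 2.2."""
    lin = [(c / 255.0) ** 2.2 for c in (r, g, b)]
    y = 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]
    return round(255 * y ** (1 / 2.2))

print(rgb_to_gray_gamma22(0, 0, 0))     # true RGB black -> gray 0
# A typical RGB interpretation of K100 is not (0, 0, 0) but something
# like (35, 31, 32) (hypothetical sample), which stays dark gray:
print(rgb_to_gray_gamma22(35, 31, 32))  # ~32, i.e. not black
```

Pure RGB black survives the conversion; only values that were already lifted off black by an RGB round trip stay lifted.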

  2. 4 hours ago, PaoloT said:

    That's not the method I know and use. It would be simply foolish doing a translation in a page layout document. This would mean less coherence and higher cost, by not reusing any existing translations.

    I do not know what translation tools and specific workflows professional translation services use (certainly the ones I have worked with use both translation memory of some kind and native translators), but the initial step I have performed has been delivering an IDML file saved from InDesign, and the final step has been receiving either a translated IDML (for simpler layouts not requiring visual or structural changes), or translated RTF documents initially exported by the translators from InDesign (including formatting), to be placed back in the layout by me (still with formatting intact). So I do not think that they typically translate anything "in place".

    Another scenario where IDML has been requested is when a client wants the chance to make minor edits in the layout, like a change of name or address; in these situations their "editor" has typically been InDesign. I have not seen it necessary to advise them to use (and learn to use) something more cost-effective.

    Whenever professional collaboration has been needed for a specific project, the required tool has always been CC, which I have then rented for the time needed (typically just a month or two). This has worked well.

    4 hours ago, PaoloT said:

    Their dominance is still basically untouched, and they will always get the advantage of the incumbent

    IMO their dominance is primarily based on trusted workflows. The cost and longevity of Adobe-based tools for a designer has largely been a question of whether one operates primarily on Windows or Mac; and, if subscribing, whether one app is enough and whether what is needed can be rented just for short periods. I originally took up Affinity apps in order to learn a backup suite. They can mostly be used as such once one has learned enough, but as no cause has arisen for making a "switch" and abandoning trusted workflows, I only use them for isolated tasks in areas where they are at their strongest. I do not think much about companies in my work, and I do not feel that I have been supporting villains while using software by Adobe or Microsoft. I try to stay practical and cooperative, but also economical.

  3. 14 hours ago, PaoloT said:

    You are referring to some use cases I'm totally unaware of. But which type of cooperation?

    I referred to collaboration that involves working on the same file using compatible (= common, same) tools. I do not think that exchanging IDML files back and forth between two different apps can be productive, as there are several things that do not fully translate. See e.g. the notes related to QuarkXPress 2024 IDML export. As a workflow example you mention handing over a finished layout for translation. Personally, I would not be happy to hand over a finished layout of any complexity to translators who would first import the document into e.g. Affinity Publisher, edit it there natively, and then send it back converted to IDML using a proprietary export method (which I would need to check, most probably spending a considerable amount of time making fixes and restoring features that were lost in translation). I have done these kinds of tasks a few times when everyone involved was using InDesign, even if different versions, and it was fine then.

    Today the files involved could also reside in the cloud, so there is no need to send lots of files back and forth. This is one more issue when working with Affinity or other third-party apps.

    14 hours ago, PaoloT said:

    If you can find a link in the Adobe site where to download it, I'd be happy to know.

    The most recent file format specs that I have are version 8, which I think came with the Master Suite CS6 (2012). I also have an IDML Cookbook that came with CS6. I am not sure if there are any later versions, but perhaps not, because as I understand it the point is that IDML files can be opened in different versions of InDesign, from the most recent one down to CS4. I have not tried to look for newer versions of these files on Adobe's developer sites -- the ones I have might still exist there in the context of legacy CS5/CS6 documentation, or perhaps as part of the XML documentation. The plug-in SDK I referred to has a reference manual for IDML (server-based) tools.

    I think that if someone is determined to develop IDML export support, they can find the required documentation. One prerequisite for developing such a tool is that multiple versions of InDesign are accessible (for testing alone), so that would at least be a guaranteed source.

    I responded because you seemed to imply that Adobe has hidden IDML-related documentation to make it more difficult for other software developers to create full-fledged competitive software with an equally rich feature set. If so, they are a bit late, since tools supporting IDML in both directions already exist, and they have not been game changers (e.g., there are 5 comments in about 4 years on the Markzware Q2ID introduction on YouTube). Who knows, maybe it is the other way around: making an agreement with developers of existing export plug-ins that the availability of IDML documentation is screened, to reward the efforts of the early birds.

    On the other hand, I think that support for IDML export is a double-edged sword: it also makes it easy to come back if it turns out that the competition was less competitive than hoped. Adobe has also taken big steps in creating new development environments and tools (e.g. UXP) for CC-based apps, and server-based plug-ins, so the focus of development has changed a lot in recent years.
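As background to the documentation question: an IDML file is an ordinary ZIP package of XML parts (designmap.xml, Spreads/, Stories/, etc.), which is easy to verify with the standard library alone. A minimal sketch; the sample package built here is a mock stand-in, not a real InDesign export:

```python
# An IDML package is a ZIP archive of XML parts, so its structure can
# be inspected with nothing but the Python stdlib.
import io
import zipfile

def list_idml_parts(data):
    """Return the part names inside an IDML package (a ZIP archive)."""
    with zipfile.ZipFile(io.BytesIO(data)) as z:
        return z.namelist()

# Build a tiny mock package in memory for illustration (not a valid
# document, just the characteristic part names):
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("mimetype", "application/vnd.adobe.indesign-idml-package")
    z.writestr("designmap.xml", "<Document/>")
    z.writestr("Stories/Story_u123.xml", "<Story/>")

print(list_idml_parts(buf.getvalue()))
```

This is also why IDML lends itself to translation workflows: the stories are plain XML that tools can extract and reinsert.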

  4. 17 hours ago, PaoloT said:

    Adobe has already hidden the IDML specs document. It should mean that they wouldn't be too happy, if IDML export from Publisher does happen.

    If you mean that the documentation is "hidden" amongst the 81,960 files of the InDesign plug-in SDK, so that a possible competitor gets exhausted and gives up already while extracting the roughly 3.5GB package (a task that takes about an hour), then perhaps so. But seriously, fear of competition, or of someone developing interchange support for Adobe-created formats, is not the first thing that comes to mind when visiting developer.adobe.com and seeing the sheer amount of open, free, version-targeted documentation, SDKs and tools available to anyone interested.

    Even with full IDML export support, true cooperation between diverse professionals already happens in the cloud and using common tools. I suppose IDML export is more of a one-off tool for providing a client with an editable document they wished to have (and many clients nowadays do, in addition to getting a production file as a PDF). If it is required, why not pick a tool that can already do it? There are multiple available, including ones that are not (necessarily) subscription-based (and not developed by Adobe, if that is important).

  5. If you mean a trick where Affinity Publisher is fooled into replacing images in one go by using a renamed folder, note that when you get a warning about missing images, you need to select "Yes" in the dialog. Using Resource Manager does not let you actually replace images, only locate saved-but-missing ones in a different folder (which I think is what Walt does above).

    UPDATE: So your assumption must be true: Publisher uses something extra to match an image; the mere filename is not enough.

  6. On 12/11/2023 at 10:45 AM, pigeon said:

    is there a better way?

    I do not think that there are alternative antialiasing options (other than what you mentioned, turning off antialiasing completely), so to simulate the PS and GIMP antialiasing method options:

    [attached image: Textrendering.png]

    ...you could try to achieve something similar by using e.g. sharpening and blurring live filters:

    [attached screenshot]

    EDIT: ...or using a Threshold adjustment (if no antialiasing at all is wanted, but something more usable at small sizes than what the blend option can offer):

    [attached screenshot]
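The Threshold adjustment mentioned in the EDIT boils down to a hard per-pixel cutoff, which a few lines of Python can illustrate (the sample values are hypothetical 8-bit pixels, not taken from any real image):

```python
# A hard threshold produces no antialiasing at all: every pixel at or
# above the cutoff becomes white, everything below it black.
def threshold(pixels, cutoff=128):
    """Binarize a list of 8-bit gray values."""
    return [255 if p >= cutoff else 0 for p in pixels]

print(threshold([0, 90, 128, 200, 255]))  # [0, 0, 255, 255, 255]
```

The result is crisp but jagged, which is why it tends to work better at small text sizes than a soft blend.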

  7. 11 hours ago, GregoryOR said:

    In the digital HQ pdf, the image is faded, and is a TIFF with RGB color space and no ICC profile. 

    Yes, it is the one tagged as having the sRGB2014 color profile. The transparency values of the downsampled and non-downsampled images are the same, but the one with the sRGB2014 profile has significantly and systematically brighter RGB values.

    I just rebuilt the file in Publisher 1.10.6.1665, and exported directly to PDF (Digital - high quality), and the issue does not happen there:

    [attached screenshot]

    ...so it is clear that this is an error introduced in version 2.

    BTW, I use Adobe Acrobat Pro 2020 (perpetual license) on Windows as a prepress tool, and on Mac I have Packzview (it exists for Windows, too), which is free (though the license only permits professional use) and more limited, yet very useful as a tool for getting print-pertinent information from PDFs.

  8. Checked this now on macOS (Sonoma 14.1.2, native M1 Publisher 2.3.0), and it is no different there:

    a) Non-downsampled:

    [attached screenshot]

    b) Downsampled:

    [attached screenshot]

    So it appears that the inadvertently applied sRGB2014 would explain this, and that the issue would be automatically fixed when downsampling and sRGB2.1 is applied?

    I say mysteriously because Resource Manager states this:

    [attached screenshot: Resource Manager]

    My guess is that this is partially an old issue related to Affinity apps not supporting indexed images: perhaps the original sketch and/or the paper background were originally paletted images that got converted to RGB when placed and were assigned the app's default working RGB profile, sRGB2.1. For some reason sRGB2014 gets involved on export, which I think is a later issue (somewhat similar to an issue with the renamed ISO Coated v2 CMYK profile)...

    I am not sure if color management within Affinity apps is getting any better, but perhaps it is a good sign that it gets different 🙂

     

  9. I had another look at this, and there is something odd going on with the handling of the PNG transparency of the top image that I cannot figure out. The transparency values behave erratically if the image is just passed through (e.g. exported using "PDF (Digital - high quality)"):

    [attached image: sketch_apub_nodownsampling.png]

    ...but work fine if the image is downsampled (e.g. exported using "PDF (Digital - low quality)"):

    [attached image: sketch_apub_downsampled.png]

    In both cases the transparent foreground PNG is handled as a 24-bit RGB image containing the RGB values plus a device gray image containing the transparencies, but the non-downsampled image has sRGB2014 as its assigned color profile, which I do not get, because the embedded image has the standard sRGB2.1 color profile, as does the externalized linked image.

    If I create the main image (sketch paper and the foreground image with transparency) in InDesign and export to "interactive" PDF without resampling, I get the following:

      [attached image: sketch_id.png]

    ...so basically the same as in Publisher, but with an indexed (paletted) RGB image plus a device gray image carrying the transparencies, and the ICC based on standard sRGB2.1.

    Somehow the error appears to be related to using sRGB2014. I do have that profile installed on the system, and I thought that this might be a Windows-only issue, but it appears that you are on macOS and experience the same issue. I need to examine this on my Mac to see if it is any different there...
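For reference, when the PDF splits the transparent PNG into an RGB image plus a device gray alpha image (an SMask), a viewer recombines them with ordinary source-over compositing. A one-channel sketch, illustrative only and not the Affinity or Acrobat code path:

```python
# Source-over compositing of one 8-bit channel: the SMask gray value
# acts as alpha (0 = fully transparent, 255 = fully opaque).
def composite(fg, alpha, bg):
    """Blend foreground over background using an 8-bit alpha."""
    a = alpha / 255.0
    return round(fg * a + bg * (1 - a))

print(composite(0, 255, 255))  # opaque black over white -> 0
print(composite(0, 0, 255))    # fully transparent -> background, 255
print(composite(0, 128, 255))  # half-transparent black -> ~127
```

Since the alpha channel lives in a separate image, a wrong color profile on the RGB part shifts the colors while leaving the transparency values intact, which matches what is observed above.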

  10. 2 hours ago, GregoryOR said:

    with no change to the output, so then chose PDF/X-1a like you suggested, and it looks great!

    Ok. Since Affinity apps cannot automatically flatten transparencies without converting them to CMYK, I would probably switch the color mode of the document to RGB, move the background image from the master page to the back of the first page, group it with the top image and then rasterize, to have the transparencies flattened, and then export to RGB. You could also rasterize the top image in the current CMYK color mode to convert it to the CMYK (= document) color space and then export directly to RGB (e.g. using the PDF - Digital preset), so that the transparency values produce the visual appearance that you see before exporting.

    There is something that I do not understand in the source Publisher file, since if I rebuild it in Photo in the sRGB color space (with the embedded RGB page background and main image, and the text and vector object group at the top), I get the expected RGB exports without merging the top and bottom images and without any image conversions. Perhaps the file was somehow corrupted because of the adjustment layers with gray values applied to the top group?

  11. On 12/10/2023 at 2:41 PM, firstdefence said:

    I'm surprised it's still used so much

    It is not so surprising in light of the fact that much vector graphics is still created using AI, and Illustrator EPS retains much of the editability of the document (even if basically only when using AI, and to some extent certain third-party apps), while also allowing pass-through similarly to PDFs (though not in Affinity apps, which always interpret EPS, convert fonts to curves, and lose spot colors and overprint attributes). Illustrator EPS is a better option e.g. for stock vector graphics providers than the complexity (e.g., mixed color spaces) and mediocre editability of the PDF format (unless native AI is preserved), or than providing multiple native file formats (which basically requires multiple source files and partial recreation of designs). EPS can still be useful for avoiding the complexities of color management, and sometimes because of its compactness (e.g. for transferring precisely placeable equations). But it is of course a legacy file format, increasingly losing support everywhere.

    I suppose native AI with PDF streams would be the modern alternative, but if universal editability is hoped for, non-Adobe users will be equally disappointed and confused as they are with non-editable, or only partially editable, inadequately or incorrectly interpreted EPS files: much of the expected information will be unavailable, and much of the color information wrongly interpreted. And which .AI format, then (considering those who still use Illustrator)?

  12. You have mixed color space (RGB and CMYK) images placed in a CMYK document with the default US Web Coated target profile. All the images, however, are relatively low-res, so it is unclear whether CMYK output is really wanted; but let us assume that CMYK was intended. The PDF created is also a mixed color space file, and it retains the transparency of the topmost image, so the color values have not been resolved.

    How the color values get resolved when the image is ripped (and the transparencies flattened) at print time depends on which transparency blending color space the flattening happens in. Affinity apps do not allow the user to explicitly specify this, but there are ways to resolve the problem so that you more or less get what you see and can reasonably expect.

    One part of the problem is that whenever you use the CMYK document color mode, everything in the document is shown as if proofed to simulate the target CMYK profile, and this view is not necessarily realistic as regards the final output you will get.

    One way to resolve this is to explicitly export to CMYK color space and additionally use "Convert image color spaces" option:

    [attached screenshot]

    This still leaves the file somewhat ambiguous, with the transparency blending color space in RGB mode, but the result on paper would already be pretty much what you expect based on what you see on the display:

    SketchTest_master_forced_cmyk.pdf   

    The safest choice is to flatten transparencies, either by merging the involved raster images before exporting, or at export time, using either PDF/X-1a or PDF/X-3 based exports, which both do the job. Using PDF/X-1a will make the exported file DeviceCMYK, while PDF/X-3 can optionally leave non-flattened images in RGB color space.

    SketchTest_master_pdfx1.pdf

    SketchTest_master_non_forced_pdfx3.pdf

    Without a PDF preflight tool like Adobe Acrobat Pro, IMO it is safest to go with PDF/X-1a, though there are some serious flaws [within Affinity apps] with this production method whenever placed PDFs are involved, or when vector-based transparencies need to be flattened. Neither is involved in your job, so forcing flattening and going DeviceCMYK is a safe choice.
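The "Convert image color spaces" step discussed above can be illustrated with the textbook (non-ICC) RGB-to-CMYK formula. This is only a sketch: real conversions go through the document's CMYK profile (here, US Web Coated) and will give different, ink-limited numbers:

```python
# Naive, profile-free RGB -> CMYK conversion, for illustration only.
# A real ICC conversion through the target CMYK profile differs.
def rgb_to_cmyk(r, g, b):
    """Convert 8-bit RGB to fractional CMYK via simple black extraction."""
    rf, gf, bf = r / 255, g / 255, b / 255
    k = 1 - max(rf, gf, bf)
    if k == 1:
        return (0, 0, 0, 1)  # pure black maps to K100
    c = (1 - rf - k) / (1 - k)
    m = (1 - gf - k) / (1 - k)
    y = (1 - bf - k) / (1 - k)
    return tuple(round(v, 3) for v in (c, m, y, k))

print(rgb_to_cmyk(0, 0, 0))    # (0, 0, 0, 1)
print(rgb_to_cmyk(255, 0, 0))  # (0.0, 1.0, 1.0, 0.0)
```

The point of converting at export time is exactly this: every RGB value is resolved to concrete CMYK ink values before the file leaves your hands, instead of at the RIP.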

     

  13. On 12/8/2023 at 5:59 PM, Hangman said:

    One thing I do notice with the original v2 version of the file when the FX is enabled on the Text layer is that as soon as I go to the Export window my mouse becomes juddery, no longer moving slowly and I see the spinning beachball when editing values in the export window so it physically feels as though my Mac is being impacted heavily. This doesn't happen when removing the FX on the text layer.

    That could perhaps also be an indication of an underlying flattening process going on.

    Anyway, opening your recreation of the file showed that version 1 does not have the F/X bug that causes export-time rasterization of everything beyond the text layer. The resulting PDF, however, is still one that causes drawing / out-of-memory errors in Adobe Acrobat Pro (so that the raster layers consisting of cables and connectors are not rendered at all). As with the original version, the file can still be opened without issues in Affinity Designer and also in Illustrator CS6. Maybe it is my laptop with just 16GB RAM, or maybe it is an issue with Adobe Acrobat Pro 2020, but I would not be comfortable sending this kind of file to a printer.

    Btw, opening the 1.7.3 reconstruction in 2.3.0 causes the same issues, which more or less proves that it is not a question of file structure or file corruption, but an error in code / PDF handling. What actually triggers the F/X error (as I tested, it is not universal, at least with simpler designs) remains a mystery.

    Things learned along this thread suggest (to me) that even if effects, adjustment layers and generally non-destructive design are great to have at design time, just rasterizing everything at export time is the safest choice -- at least with designs this large and complex. As noted, the partial rasterizations (flattening of effects and adjustments) that I did to simplify the design did not help at all in reducing the file size -- quite the contrary.

    Non-flattened, mixed color space, ICC-based PDF/X-4 would be a challenge even without bugs and production challenges (with RAM and rendering), or issues with huge file sizes (which exporting to PDF/X-4 should in principle help to avoid). E.g., I noticed that when exporting the original design (with the F/X on the top text layer turned off), the rotated separate cables on top of the globe (possibly because of the Levels adjustment applied to them) caused transparency flattening in the PDF/X-4 export (but not with the simplified, merged image in your reconstruction). This is possibly universal, and would imply that adjustments and F/X cannot be handled selectively without flattening the whole area (down to the background layer) defined by the layer where they have been applied. One does not want to experience such anomalies in production, even if in this case the anomaly had the beneficial result of keeping the file size reasonable.

    UPDATE: Afterwards I realized that the out-of-memory and drawing errors that I experienced with Adobe Acrobat Pro when opening these files are probably related to the perpetual-license version of Pro 2020 still being available only as a 32-bit version [at least on Windows], so it has technical limits on the maximum raster image sizes it can render.

  14. 2 hours ago, Hangman said:

    so I'm starting to think this is an FX specific bug rather than file corruption.

    I have now tested this also in AI, transporting there the above-shown bunch of a couple of dozen images with miscellaneous effects and mixed color spaces, and exporting the lot to PDF/X-4 from AI. The result is technically a "valid" (verified) and equally complex file, even if "only" 55MB in size, but one that causes drawing errors in Adobe Acrobat Pro. So I think the correct analysis is that the shadow F/X applied to the top text object does cause inadvertent flattening (rasterization) of everything below. So yes, you were right: this layer explains what happens and is the trigger of this behavior. I have noticed this kind of behavior earlier when placing Affinity documents within Affinity documents, but does this really happen universally when applying F/X effects?

  15. 42 minutes ago, k_au said:

    Has this been reported as a bug yet? I couldn't find something like this on the forum.

    I do not know, and I am not sure it is actually a bug rather than by design. I have learned that early in development, upsampling always happened (or at least more generally) when exporting to PDF. I think this may be related to transparency flattening, which happens with PDF/X-1a and PDF/X-3, since that is basically always done (within Affinity apps) by rasterizing at the document resolution. Having images involved in transparency rasterization makes flattening complex, so rasterizing and resampling uniformly at the document resolution is probably a reasonable choice.

  16. 7 minutes ago, Hangman said:

    The fact that Adobe Acrobat Pro cannot render it, and 64-bit Illustrator CS6 crashes when trying to open it suggests file corruption.

    It is possible that there is a technical flaw, but these might also be "just" memory issues. At times Acrobat Pro fails to render these kinds of files and shows an "Out of memory" error message, yet when closed and relaunched it might render them properly. The file can also be examined with the Preflight tools even when not rendered. The fact that Affinity Designer opens these files without issues seems to support the assumption of a "mere" memory error, too. I think that the "additional" grayscale raster layers result from having transparent backgrounds in two layers (connectors and globe), and they are probably used as kinds of masks. The huge file size is then simply an issue of producing multiple large images with poor compression, rather than being caused by the complexity of the file (though the original file WAS complex). The auditing tool shows the following:

    [attached image: auditing.jpg]

  17. 16 minutes ago, Gigatronix Pete said:

    Cool :)  ...thanks for the info @lacerto

    You're welcome! 

    I forgot something important when working with large designs: Affinity apps will upsample images to the defined export DPI (in your case 250dpi) when exporting using PDF/X-1a (and in certain cases also when using PDF/X-3), but not when using PDF/X-4. They also do it for images that have been cropped using the Vector Crop tool (or manually masked), in which case they use the document DPI (which in your case was also 250).

    Rasterizing on canvas uses the defined document DPI, so images may be upsampled (using the Bilinear algorithm) if you rasterize on canvas, which you probably would not want (since you often intentionally use low-res images in jobs like these that are viewed from a long distance). In that sense my advice above about rasterizing on canvas and flattening effects may actually backfire and produce poorer results (blurred and bloated images). It much depends on the details...
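The upsampling described above is easy to quantify: once the exporter resamples everything to the export (or document) DPI, the exported pixel size depends only on the placed size and that DPI, not on the source image's own resolution. A tiny sketch with hypothetical numbers:

```python
# Hypothetical numbers: the pixel width an image ends up with when the
# exporter forces resampling to a target DPI.
def resampled_px(placed_width_inches, export_dpi):
    """Pixel width after forced resampling to export_dpi."""
    return round(placed_width_inches * export_dpi)

# A 10-inch-wide placement exported at 250 dpi becomes 2500 px wide,
# even if the source was, say, only 1000 px (i.e. it gets upsampled):
print(resampled_px(10, 250))  # 2500
```

This is why an intentionally low-res image in a large-format job can come out both blurred (bilinear upsampling) and bloated (more pixels to compress).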

  18. 5 hours ago, loukash said:

    Whereas IBC2023-SideWall_Version2.afdesign causes the same issues as reported above. My conclusion is that there must be something corrupt about the file structure.

    Possibly so, but it is still interesting to note that everything appears to work fine within the .afdesign file itself, and that when using the original file without touching anything, it is possible to produce a non-flattened PDF/X-4 export with a file size of 25MB. Yet when the same file is simplified to the point of having flattened all effects and combined multiple raster layers so that only three are left, hiding the vectors and leaving just the top bar, as follows:

    [attached screenshot]

    ...Designer goes on and creates -- very slowly -- a 165MB PDF/X-4 file consisting of 6 raster layers, 3 of them CMYK and 3 Gray, with sizes up to 33,811x29,528px. So simplifying the file, rasterizing on canvas, removing layers that are supposed to be problematic, saving a copy, etc.: nothing helps to make the export sizes smaller, because the layered images basically just increase in size, and seem to be buried deep in the file structure beyond user control...

    [attached screenshot]

    The file still verifies as a valid PDF/X-4 without any issues, but Adobe Acrobat Pro cannot render it, and 64-bit Illustrator CS6 crashes when trying to open it. But no worries: Affinity Designer can still open the produced PDF file without flaws (even if pretty slowly), thus "verifying" that everything is correctly produced 🙂 The printer would probably not be happy, though...

    Based on all this, it is of course possible that the file is corrupted. But it exports properly when opened without making any changes (at 25MB, even if over twice the size of the same content exported by AI), it produces hugely bigger file sizes after every change made to it, and it opens correctly in Designer even when other apps fail to render the exported file at all. So I think that the core of the problem is poor processing of file elements when exporting: an inability to merge raster layers where possible, and an inability to compress effectively. The standard solution offered on this forum has often been: get more RAM (I have only 16GB on my laptop) and a more effective processor. Doing so will certainly make processing these kinds of files smoother, but it does not fix the fundamental flaws in PDF production that Affinity apps still have, especially when working with large designs.

  19. 3 hours ago, Hangman said:

    That's not what I'm seeing on macOS, if I either toggle the visibility of that layer off or delete that layer completely and export the file (PDF/X-4) 92% Quality, both come in at 135.7 MB.

    I probably misunderstand something, but if you, too, get an equally large export file when you completely remove that layer, then how can the issue be related to that specific layer? I can also hide both raster layers below, and when I export, I still get about a 70MB file (and a pretty long export time). Leaving all these layers visible gives me a 25MB file. To me that implies that there is something wrong with the file in general, or with the way Affinity processes the file's elements when exporting to PDF.

    I have done all the tests with these files using Windows 11 Pro and latest Designer (2.3.0). 

    Filesize-wise, both files come out significantly smaller when exported from AI (opening the same file, exporting using the same method), which IMO basically answers the OP's question. The issue with this other file, whatever is wrong with it (or with the Affinity / PDFlib code that processes it), is more of a secondary glitch (possibly related to the use of effects; these kinds of issues are not uncommon either, but they are secondary in the sense that the file size issue can be experienced also when no effects are used). The fact that export file sizes are significantly smaller when exporting from Adobe apps has been shown in numerous posts on the forum, and it is primarily explained by the poor compression of large images within Affinity apps: files are not analyzed at export time to determine the optimum compression rate and method; instead, a fixed quality setting (compression rate) is applied.

    Here are the file sizes for the other file (the one with the odd error where the file content gets simplified -- the text layer removed, or the underlying Cables and Connectors image layers hidden -- and the file sizes get bigger than when leaving these layers visible or not deleting them). The bottommost file is exported from the .afdesign using PDF/X-4; this file was then opened in AI and AD and again exported using PDF/X-4 with default settings (meaning, on the second pass, 98% quality at 300DPI from AD). AI produces a PDF less than half the size with the same settings. It seems rather a coincidence that the original and the re-export from AD have more or less equal file sizes (the exact byte counts being 25,685,239 bytes vs. 25,685,251 bytes), but the point is that the same file content is exported at 11,530KB when done from AI CS6:

    [attached screenshot]
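On the fixed-quality compression point: the difference between a one-size-fits-all setting and a per-image choice can be sketched with zlib levels standing in for the Flate/JPEG codecs a PDF writer would use (illustrative only; this is not Affinity's or Adobe's actual codec logic):

```python
# A fixed quality setting is like always compressing at one level; an
# adaptive exporter could pick a level (or a different codec) per
# image after analyzing it. zlib is a stand-in for PDF's Flate filter.
import zlib

data = bytes(range(256)) * 4096  # ~1 MB of repetitive sample data

for level in (1, 6, 9):
    print(level, len(zlib.compress(data, level)))
```

For image data the real win usually comes from choosing the right codec and downsampling per image, which is exactly the analysis a fixed-setting exporter skips.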

  20. 40 minutes ago, Hangman said:

    Hide the Artistic Text layer, 12G SDI, 4K in that file and re-export it..

    I removed the whole layer, yet the size of the exported file was 216,058KB (PDF/X-4), so it is not the layer that is corrupt; it is some odd Affinity error. It is strange that it happens when making the file simpler (removing text), but not when making no changes at all.

  21. 1 hour ago, Gigatronix Pete said:

    Do you use PDF/X-1a for any PDF that will be printed?  ...I was told to use PDF/X4 for anything CYMK?

    My standard when using InDesign is the (default) PDF (press ready) routine, which uses PDF 1.4, so it does not flatten transparencies but converts everything to CMYK. I used PDF/X-1a with this job mainly because it simplifies the export file by converting everything to CMYK, and also because the PDF/X-4s that I exported from the original file caused bad rendering and memory issues in Adobe Acrobat Pro.

    Yes, I mean rasterizing objects and effects as much as possible on the canvas before exporting, but keeping vectors as vectors. So: saving a copy of the current file and then rasterizing part by part and exporting (or at least previewing, since the preview window shows the file size realistically, as it appears to do pretty much the same thing as the actual export).

    Btw, I did not experience issues with the other file: I opened it without making any changes, exported it to PDF/X-4 without issues, and the file size was 25,084KB, so a bit less than what you mentioned above.

  22. It is probably mostly a question of having many large raster images (plus effects that get rasterized) combined with ineffective image compression (Affinity apps do not seem to analyze images in any way, so the compression rate is fixed and cannot vary dynamically). I tested this by opening the larger document in Designer, exporting from there using PDF/X-1a with default settings (which flattens transparencies), then opening this file in Adobe Illustrator CS6 and Designer v2 and re-exporting to PDF/X-1a.

    The sizes are as follows (the bottommost is the one exported from the .afdesign file from Designer v2; the other two are the PDFs exported from Designer v2 and Illustrator CS6 using the PDF/X-1a export method, after having opened Showstand B_pdfx1a_ad_fromoriginal.pdf):

    [attached screenshot]

    I had problems opening the large Designer-created files in Adobe Acrobat Pro 2020, which gave error messages related to drawing, lack of memory, etc., or simply just crashed. After some retries, and closing and reopening the app, Adobe Acrobat could display the files. But I suspect that there might be issues with ripping if files of this size and complexity are delivered for processing.

    I would try to simplify the job by rasterizing objects already on the canvas. 
