Posts posted by lacerto

  1. Hi, @Jmes,

    If you are on Windows, you should be able to download both PS and PCL drivers for your Xerox -- I myself have a Xerox color laser printer with the same color capabilities as your model, and can confirm that I experience this problem both on Windows (11 Pro) and on macOS Ventura 13.2.1 with the latest drivers from Xerox (just updated). I have tested this on the latest 2.0.4 version of Publisher on both operating systems.

    This is basically what I get, depending on how I print:

    transparency_mixed_mode.jpg.e641f63336a47c95a53c3e14d956c773.jpg

    The topmost is what I get when I print via the printer's PostScript driver (directly from Affinity Publisher) on Windows. Such a driver does not seem to be available on macOS; if I use the macOS Preview app to create a PDF, everything looks fine on screen, but when printed from Preview I get mixed colors similar to yours, as shown in the middle of the screenshot above.

    In the middle is the mixed color output that I get if I print to the PCL driver directly from Publisher. It does not seem to matter whether I switch to RGB color mode or use different Color Management settings within the Print dialog box.

    At the bottom is the result (showing a darker color throughout) if I first export to PDF and then print from within Adobe Acrobat Pro. I get a similar darker violet tone if I print directly from InDesign (defining the color in CMYK and using the Coated Fogra 39 profile), but there I would get consistent color output on my desktop printer no matter which printer driver I use.

    IMO the problem is related to the transparency in the text part getting flattened in RGB color space when outputting to a PCL driver. I cannot say why the color is different; it should not be, because the color was originally defined in the CMYK color model and exists in, and is exported from, a CMYK color mode document. This specific color also looks brighter on screen when using Adobe apps, and would probably be accurate and similarly brighter when actually produced on a printing press using this profile.

    Desktop laser printers, however, even if they are CMYK devices, typically either use their own specific color driver (possibly combined with a specific color profile) or expect color input in (s)RGB and then make the conversion themselves. So unless you are on Windows and can use the PostScript driver, which supports PostScript passthrough (and produces acceptably realistic colors), you should probably work in RGB color space or define your colors in RGB, and do CMYK conversions only at export time.
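    To make that conversion concrete, here is a minimal sketch of the naive, profile-less RGB/CMYK math (the kind of device conversion that effectively happens when no ICC profiles are consulted). Real driver and export-time conversions use ICC profiles and gamut mapping, so actual results differ; this only illustrates the principle:

```python
def rgb_to_cmyk(r, g, b):
    """Naive device conversion, all values 0..1, ignoring ICC profiles."""
    k = 1 - max(r, g, b)
    if k == 1.0:  # pure black
        return (0.0, 0.0, 0.0, 1.0)
    return ((1 - r - k) / (1 - k),
            (1 - g - k) / (1 - k),
            (1 - b - k) / (1 - k),
            k)

def cmyk_to_rgb(c, m, y, k):
    """Inverse naive conversion."""
    return ((1 - c) * (1 - k), (1 - m) * (1 - k), (1 - y) * (1 - k))
```

    Because these formulas know nothing about the device gamut, a saturated violet defined in CMYK can look quite different once an RGB-expecting driver reinterprets it -- one more reason to keep color definitions in a single space and convert only once, at export time.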

    You can also work around these kinds of issues by avoiding live transparencies, ideally using Boolean operations to preserve vector output (combined with compound objects, to also preserve the editability of objects that have Boolean operations applied). But these kinds of manual flattening operations can of course get rather complex, so using RGB color definitions is probably the most convenient solution for shapes and objects that will overlap, that carry live color data (transparencies, blend modes, overprint attributes, etc.), or that you want printed with an unnarrowed color gamut.

    I have experienced this a few times, but I cannot now find a file that causes it, and I think that when you post JPG files here on the forum they are resaved at upload, so the property causing the behavior is lost. Could you repost, placing a JPG file that shows this behavior in a zip file? It might also be related to a specific Photoshop version, so if possible, please specify it!

  3. 3 hours ago, Mike W077 said:

    We receive way too many PDF placeable files from customers and agencies, as well as elements — logos, other art — that we design up ourselves — to make anything this complicated work.

    Yes, this is a real problem -- it is unthinkable that 3rd-party providers would be required to deliver material in as specific a format as e.g. PDF/X-4 (and exclusively so, without mixed content in how they build up their PDFs). PDF/X-4 is often seen as a kind of magic wand because it is basically tolerant of mixed content and allows late-bound (print-shop/RIP-based) rasterization. It is absurd that it should be incompatible with non-PDF/X-based placed PDF content.

    In addition, I have learned over the last three or four years (the time I have used the Affinity app suite) that PDF production is not necessarily nearly as automated and "final" as I had thought. E.g. transparency flattening often seems to be done as a prepress job rather than on the RIP, and is basically something that requires skillful print personnel. Manual adjustments made by print personnel seem to be much more common than is generally known. Such routines may well be based on the assumption that an Adobe (or QuarkXPress or Corel) based production workflow has been used, so not only is the printer typically incapable of providing exact instructions (to avoid or correct erroneous output), but they might actually end up producing unsatisfactory or unexpected results on paper because of such assumptions.

    Affinity-specific "PDF compatibility rules" are not something that print personnel are likely to know about. There are other production quirks, too. All this means that in more complex productions some kind of prepress software is required, not only to do proper preflight but sometimes also to make corrections to the production PDFs themselves.

    The whole package is quite complex, but here are a few rules of thumb for whenever you place PDF content and export to PDF, and PDF/X is involved either in the export or in the placed content:

    1) If there is placed content using PDF/X-1a or PDF/X-3 (which are version 1.4 when produced from within Affinity apps), you need to export using PDF/X-3 or PDF/X-4, or any non-PDF/X-based method, to avoid rasterization / translated color values (e.g. rich black).

    2) If there is placed content using PDF/X-4 (which is version 1.6), you need to export using PDF/X-4, or any non-PDF/X-based method using PDF version 1.6 or later, to avoid rasterization / translated color values (e.g. rich black).

    3) If there is non-PDF/X-based placed content, you need to export using a non-PDF/X-based method with the same or a later PDF version than the placed PDF content, to avoid rasterization / translated color values (e.g. rich black). The most compatible choice would then be PDF 1.7. Whatever non-PDF/X-based choice is used, live transparencies (opacity values and blend modes) will not be flattened (because Affinity apps do not support PDF version 1.3, which would cause flattening). In Affinity apps transparency flattening is only used for PDF/X-1a and PDF/X-3.

    4) Placing a PDF as interpreted causes failure to read the overprint status of native objects. Also, if a placed file uses an embedded CMYK profile that conflicts with the export target, or does not use an embedded CMYK profile and the document working profile assigned to it (as defined in Preferences) conflicts with the export target, the CMYK color values of the placed PDF content will be translated at export time. This basically corresponds to a situation where a non-document CMYK profile is defined at export time. Letting Affinity apps interpret the placed content may also result in failure to map fonts (even when they are installed on the system), especially if the PDFs were created with non-Affinity software (e.g. Adobe apps or CorelDRAW), even on the same computer.
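    Rules 1-3 can be summarized in a short sketch. This is only my encoding of the observed behavior (the version numbers are those Affinity assigns to each standard), not any documented Affinity API:

```python
# PDF versions that Affinity apps produce for each PDF/X flavor (per the rules above).
XBASED = {"PDF/X-1a": 1.4, "PDF/X-3": 1.4, "PDF/X-4": 1.6}

def survives_passthrough(placed, export):
    """True if placed PDF content should pass through without rasterization.

    placed/export: a PDF/X name from XBASED, or a plain PDF version number
    (e.g. 1.7) for non-PDF/X-based content.
    """
    placed_ver = XBASED.get(placed, placed)
    export_ver = XBASED.get(export, export)
    if export == "PDF/X-1a":
        return False  # demo (a) below: PDF/X-1a export rasterizes all placed content
    if placed in XBASED:
        # Rules 1-2: X-based placed content needs an equal-or-later target.
        return export_ver >= placed_ver
    # Rule 3: non-X placed content needs a non-X target of the same or later version.
    return export not in XBASED and export_ver >= placed_ver
```

    For example, survives_passthrough("PDF/X-4", "PDF/X-3") is False (rule 2), and survives_passthrough(1.7, "PDF/X-4") is False (rule 3).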

    Here are demo files:

    a) Miscellaneous placed PDF content exported using PDF/X-1a (all placed content will be rasterized):

     pdfxcompatibility_pdfx1.pdf

    b) Miscellaneous placed PDF content exported using PDF/X-4 (non-PDF/X-based content will be rasterized):

    pdfxcompatibility_pdfx4.pdf

    c) Miscellaneous placed PDF content exported using a non-PDF/X-based PDF 1.7 (note that Adobe Acrobat Pro shows the color values of the non-PDF/X-based placed PDF at the lower left corner incorrectly; the true color values are shown by the Object Inspector of the Output Preview dialog box):

    pdfxcompatibility_pdf17.pdf

    d) Example of an export file (PDF/X-4) using interpreted placed PDFs (note the lost overprints):

    pdfxcompatibility_pdfx4_interpreted.pdf

    A couple of further notes:

    1. Affinity apps always convert RGB values of native objects (shapes and text) to CMYK, even when using PDF/X-3 or PDF/X-4, which allow RGB definitions (this is unlike e.g. InDesign). Additionally, PDF/X-3 also converts image color spaces to CMYK, disregarding the value of the "Convert image color spaces" setting... [EDIT: After rechecking, this only seems to happen if there is a need for transparency flattening, and InDesign behaves similarly here.]
    2. When using PDF/X-1a or PDF/X-3, all transparencies are flattened by rasterization instead of Boolean operations (which Adobe apps and QuarkXPress try to use when exporting).

    Long story short: export using PDF/X-4 (with the placed content set to be passed through), or using a non-PDF/X-based version 1.7, and if using the latter, remember to uncheck "Embed ICC profiles". The latter will keep placed non-PDF/X-based content unchanged; the former will not. In both cases your export files will contain live transparencies, unless they were flattened in the placed document or on the canvas. If you need to export transparency-flattened PDF/X-1a content and the transparencies are in placed PDF/X-4 content, you are out of luck: the content will be rasterized unless you can flatten it in the source files and regenerate them.

    Note: These tests and files have been created using the 2.0.4 Windows version of Affinity apps, but versions 1.10.6 and the latest v2 beta behave similarly, on both Windows and macOS.

     

  5. I do not think that it is a reasonable feature request.

    First, because this is not a standard practice in any notation system, as far as I know; and second, because there basically needs to be a unique one-to-one link between a note marker and the note (so that you can move from the note to the note marker in the text). There are also systems where numbering restarts per page or spread, and if multiple markers were allowed, the secondary markers would need to be forced to stay on the same page to keep the reference clear, which would just make things unnecessarily complex and confusing.

    If you want to use this kind of notation practice, it is always possible to just place a plain superscript, without an actual link.

  6. 2 hours ago, N.P.M. said:

    Aren't job options just textfiles which one can read in a texteditor?
    That way one can have the specs for that printshop at hand.

    Possibly so, but I mean the practice of print shops providing the job options, with the user then just loading the appropriate settings in the layout app through an interface like this:

    joboptions.jpg.2f0e638ee3c189b4aca3e2a11f8fbc8f.jpg

    ...from something like the settings provided e.g. by Helsingin Sanomat (the major newspaper in Finland):

    joboptions_hs.jpg.efaf6d90cffcddec3c34e5560c1cf9c2.jpg

  7. 9 hours ago, Ldina said:

    @lacerto Thank you, as always. If it's not fully clear to you, then those of us who are less knowledgeable about Publisher's "ins-and-outs" and PDFs are in trouble!! 😳

    Thanks for the kind words, but I want to make clear that my comments are really just based on "user experience" rather than on a true understanding of the underlying technology.

    I have decades of experience in desktop publishing but have never needed to be concerned about these kinds of things when exporting from Adobe apps, because print shops would provide the necessary information, giving step-by-step instructions or creating job options files so that creating production files would be easy for anyone, even without an understanding of printing procedures.

    This (a screenshot from InDesign CS6) is basically the de facto standard for PDF exports (variations just depend on whether the basic settings are based on American or European standards):

    image.png.0c60c9d4241e597d5c211eab8e071528.png

    Affinity apps have PDF version 1.7 as the default, which is practical considering their internal PDF version compatibility rules (so having the latest version possible within Affinity apps, 1.7, as the default certainly makes sense), but this makes no difference as regards the "necessity" of including the profiles.

     

    Very good questions, and I am not sure I can answer them in a way that is fully satisfactory.

    1) I have not been able to figure out exactly what this option does, or why it is enabled by default, but I regularly give this instruction (to uncheck it) because:

    • It causes issues with apps like Adobe Acrobat Pro when using its Output Preview, because it marks CMYK objects as ICC-dependent even when they are clearly meant to be passed through with original or native color values, needing no interpretation when ripped to plates. This means that unless the implied target CMYK profile is used as the simulation profile in Adobe Acrobat Pro, the color values are translated according to whatever is chosen as the simulation profile (which would be e.g. Coated Fogra 39 in a European context). This is at least confusing, but might also result in actual false reproduction of colors, largely because of misunderstanding (on the part of the print shop).
    EDIT: PackzView shows the PDF's native color values despite this setting. It is not clear whether the simulation-based ad hoc color conversions experienced in Adobe Acrobat Pro might also result in wrong color values when the file is ripped. But they might cause "human error": print personnel misinterpreting the color content and performing an unnecessary retranslation of colors that are actually already targeted to the final color space and would only get worse from retranslation.
    • I have not found a situation where embedding ICC profiles could be useful, because in a press-related context Affinity apps basically always convert native RGB colors (those applied to shapes and text) to CMYK (even when it is not necessary, as when exporting to PDF/X-3 or PDF/X-4). Checking or unchecking the option does not seem to have an effect on the handling of placed RGB content, so whether it is e.g. in sRGB or Adobe RGB color space, embedding or not embedding does not seem to have any significant effect.
    • The equivalent export methods in InDesign, whenever CMYK values are involved, never embed ICC profiles. CMYK values are basically "Device CMYK" and are not meant to be translated, while RGB values always get translated based on the target CMYK.

    2) I did not specifically check whether leaving "Embed ICC Profiles" checked would cause additional file-specific issues, but because it basically by definition does in the context of apps like Adobe Acrobat Pro, I regularly just recommend unchecking the option. I have not seen Serif comment on my regular recommendation to uncheck the option, or explain why or in which situations it should be left checked.

    3) I think the option basically refers to conversion of placed raster-based content. It does not seem to apply to native content (shapes and text). I am not sure, but I think that whenever converting to a CMYK target (PDF press-ready or any PDF/X), Affinity apps always convert native shape and text objects to CMYK, and in the context of PDF/X-3 (which by definition allows RGB content), even all placed raster content.

    I need to check whether what I mentioned here is still valid in the context of the latest versions. This is not the easiest thing to understand among PDF production methods, which are already very complex in the context of Affinity apps because of the "compatibility rules".

    The reason you are getting duplicate content is that you have a facing-pages layout. The record pointer does not automatically advance per page, so you get fields from the same record when placing multiple receiving controls for the same field on a page spread (and, on the other hand, if you just place controls on one page, the other page would not get filled with data). In a spread layout, you need to use Data Merge Layout containers to make the record pointer advance correctly. An alternative could be generating all pages with a single-page layout (in which case each generated page would automatically have the record pointer advanced, without using Data Merge Layouts), and then changing the document layout to Facing Pages, starting from the left side if needed, or inserting a right-hand page before the merged pages to get the first record on the left side of a spread.

    1) Create a facing paged document and start it on the left page.

    2) Place repeating items on the master page spread.

    3) Specify Data Merge source file (e.g. an Excel sheet or a .csv file) in Window > Data Merge Manager.

    4) Place a Data Merge Layout both on the left and right page of the first actual spread. The layout needs to have one row and one column. Use the default record behavior according to which index pointer is moved by one after processing each row of source data.

    5) Populate each Data Merge Layout with receiving controls like text frames, picture frames, etc., then assign them with source fields using the Fields panel.

    6) Generate the pages using Data Merge Manager.
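    The record-pointer logic described above can be sketched in plain Python. This only illustrates the behavior; it is not Publisher's actual implementation:

```python
def merge_spreads(records, layouts_per_spread=2):
    """Each Data Merge Layout advances the record pointer by one, so a
    facing-pages spread with one layout per page consumes two records."""
    spreads, pointer = [], 0
    while pointer < len(records):
        spreads.append(records[pointer:pointer + layouts_per_spread])
        pointer += layouts_per_spread
    return spreads

# Without Data Merge Layouts, both pages of a spread would read the same
# record, which is exactly the duplicate content described above.
```

    For example, five records with two layouts per spread yield three spreads: two full ones and a final spread holding the last record.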

    a) Setup

    image.thumb.png.6cc7d785b7078fb3545390be7a8ed981.png

    b) Generated pages

    image.thumb.png.c6deaa18a7993e645ccfbd2ea32b94a6.png

    testfacingpages.afpub

    facingpages.xlsx

  11. 7 hours ago, Romaleon said:

    Sounds like a weird bug to me, probably caused by working in an outdated file with outdated elements

    I think the "compatibility rules" are "features", though. Exporting placed PDFs into a file with a different PDF version number, and accordingly different capabilities (e.g., live transparencies when exporting to a format that does not allow them), is probably quite complex, since it requires retaining all essential properties of the source files (color values, print attributes, embedded fonts, etc.) despite the version conversion. Hopefully this will be supported in future versions.

  12. 3 hours ago, kenmcd said:

    Could you please attach the PDF so I can take a look?

    Here is one created by Affinity Publisher and another by InDesign CS6. It is interesting that Adobe Acrobat Pro 2020 cannot find the ligature words unless a space character is inserted between the ligature and the rest of the text, even in the InDesign-created PDF. As mentioned, PDF/X-Change can find the texts (that is, it ignores the space). I am not sure if I used kerning (which could perhaps explain the additional space character?). The swash example, however, cannot be found (without adding a space in between) by either Adobe Acrobat Pro or PDF/X-Change.

    opentypedropcaps_apub.pdf

    opentypedropcaps_id.pdf

  13. 10 hours ago, Ken Hjulstrom said:

    so it seems as if there might be some inconsistency between the different leader types

    Yes, at least in 2.x versions (tested on macOS). I had some difficulties in general trying to specify paragraph indents and tab stops (e.g., the leading character field is not properly enabled when needed, and sometimes even the paragraph alignment setting does not seem to "take"). It seems there can be hidden settings affecting paragraph formatting, so using the Revert defaults button on the Toolbar is sometimes necessary.

    More specifically, there seem to be certain prerequisites for the operation of leader characters in the context of hanging indents, so workarounds are needed to get the feature to operate as desired, e.g. something like this:

     image.png.329f8156d32be0ec263521f1f7c373ab.png

    Here the same trick is applied to your example:

    TabLeadersInTableCells_Workaround.afpub

  14. 12 hours ago, thomaso said:

    How does the "Auto" option work in your sample? To me (mac, V1) "Auto" seems to ignore the number in the value field, e.g. 3 …

    It does not seem that its operation has changed. To me it appears that the function of "Auto" is to sense the presence of a ligature and adjust the number of characters the drop cap attribute is applied to, but the feature misfires in more than one way. In addition to failing to handle ligatures properly, it assumes that the paragraph must start with a capital letter (as it does when the ligature is "Th"), in which case the auto-count applies the drop cap attribute to two characters, so the character following the "Th" ligature becomes a dropped character as well, like "Thi" in the example below. But if the paragraph starts with "ffi", the count stays at "1", and only "ffi" gets the attribute. On the other hand, if "All caps" is applied to an "fi" ligature, "Auto" stays at "1" and the manual setting has no effect: in the screenshot below the manual setting is "6" characters in the bottom two paragraphs starting with a hard-coded "fi" ligature, but it only works when the ligature does not have "All caps" formatting applied.

    dropcapsauto.thumb.png.f018eb986d74dc96978e132147591738.png

    In InDesign the feature works so that the character count always affects the specified number of characters, whether a ligature is involved or not, so "fi" is two characters and "ffi" three. There is no "Auto" feature, but if the user specifies the number of characters to drop as "1" in "finland", the ligature is broken and only "f" is affected. If it is set to "2", the ligature (if applied to the text) is kept and "fi" is affected. The drop cap effect is applied regardless of the case of the letters, but if "All caps" formatting is involved, the ligature is discarded, since it does not make sense there: an "fi" with "All caps" applied separates into two characters and the ligature is lost. This is the correct and logical behavior, and it works identically whether or not ligatures or capital formatting are involved: the specified number of characters, no more and no less, is always affected.
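    The counting behavior can be illustrated with the Unicode presentation-form ligatures (U+FB00..U+FB06), which NFKC normalization expands back to their component letters. This sketch only illustrates counting underlying characters; it cannot see font-internal ligatures like Adobe Caslon's "Th", which have no Unicode codepoint:

```python
import unicodedata

def drop_cap_prefix(text, n_chars):
    """Return the shortest prefix whose underlying character count
    (with Unicode ligatures expanded via NFKC) reaches n_chars."""
    expanded = 0
    for i, ch in enumerate(text):
        expanded += len(unicodedata.normalize("NFKC", ch))
        if expanded >= n_chars:
            # If expanded > n_chars, the count lands inside a ligature,
            # which would have to be broken (as InDesign does).
            return text[:i + 1]
    return text
```

    For example, drop_cap_prefix("ﬁnland", 2) returns the single "ﬁ" glyph, because the ligature counts as two underlying characters.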

    It might depend on the font (or perhaps the version of the app; these tests were run on the v2 beta on Windows), but it seems that ligatures manually entered from the Glyphs panel do retain their text properties in an exported PDF and are searchable and copy-pasteable. A manually entered ligature also accepts manual kerning normally. This seems to apply also to e.g. the standard ligature "Th", which in this font is at location G+013c. The other standard ligatures like "ff", "fi", "fl", "ffi" and "ffl" have standard Unicode names and are categorized under "Alphabetic Presentation Forms".

    On the other hand, OpenType features like swashes and ornaments (alternates), at least in the example font (Adobe Caslon Pro), seem to work fine in context of drop caps in Affinity apps. 

    opentype_dropcaps.thumb.png.fc7c60fe4d85f95a85e0006b2c93aa4a.png

    Interestingly, when searching text containing ligatures in Adobe Acrobat Pro, it is necessary to add a space between the drop cap and the rest of the text (also when searching an InDesign-encoded PDF). In PDF/X-Change this is not necessary. On the other hand, in the example, the InDesign-encoded "S washes..." is searchable without entering a space, while the Affinity-encoded "S washes..." requires it. In both files "Affi nity" and "Th e" can be found without entering the space in between.

    opentype_dorpcaps_id.thumb.png.07fdd29fbec130aeb514b79e2b30d6ce.png

    opentype_dropcaps_apub.thumb.png.c5c94f8a20caad6dbd27ba9cf8ae4288.png

    I had a closer look at Affinity's upsampling behavior, and it is baffling.

    If you have a document like the one below, containing hard-edged hatching, a low-res screenshot, and a basically low-quality but antialiased image (all well below the "standard" print PPI of 254+ that is needed to get decent halftones on a printing press), note that screenshot and hatch-type images are customarily left low-res on purpose, to be upsampled by the RIP (using a nearest-neighbor kind of algorithm that basically just duplicates the "missing" pixels). This is what you get in Affinity apps:

    a) In workspace, you get the lowres images correctly flagged in Preflight:

    image.thumb.png.f80ab37f2e356f8ad8272d6065b1a26c.png

    Here's a clip from Packzview showing how upsampling is done when exporting from Affinity Publisher using different kinds of PDF export presets:

    I had not noticed these oddities before, as over years of experience "low-res" images have very seldom been placed in layouts unless done on purpose, e.g. when working with large documents like posters where high (placed/effective) PPI is not needed. The tests above were done on the latest macOS v2 beta, but version 1.10.6 shows exactly the same behavior. It may be that auto-upsampling in the context of PDF/X-1a and PDF/X-3 is intentional (to avoid failures in automated preflight routines), but why then does PDF/X-4 behave differently? IMO auto-upsampling should never happen. I can understand it in software targeted at hobbyists and semiprofessionals, but it should still behave consistently, and there should be an option to turn auto-upsampling off (whatever its default state).

    Note that a cropped image (when the Vector Crop tool is used) creates a mask-based crop, which causes auto-upsampling. In my earlier post I intentionally used this "feature" to get upsampled images, but it does not always work. I have not quite figured out the exact conditions, but it may be related to the kind of image that has been cropped; in this case it is a grayscale image in a CMYK/8 document, and there auto-upsampling works as I hoped it would also have worked in the demo I created (where a more convoluted mask had to be used). If auto-upsampling is not wanted, cropping in Affinity apps should always be done using clipping.

    Here, for reference, is how Affinity Publisher and InDesign (CS6) create PDF/X-1a exports from the same source:

    pdfx1_autoupsample.pdf

    pdfx1a_id.pdf

    Anyway, considering the OP's dilemma, simply exporting using PDF/X-1a or PDF/X-3 would auto-upsample the low-res images to the document DPI. The upsampling method is probably bilinear, so blurring will occur. But that may be OK if there are enough pixels to work with, and it is an OK routine for avoiding complaints from automated preflights.
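    The difference between the two upsampling strategies is easy to show with a toy sketch (pure Python, one row of grayscale values 0..255). No RIP actually implements it this way; this is just the principle:

```python
def upsample_nearest(row, factor):
    """Nearest neighbor: duplicate each pixel; hard edges stay hard."""
    return [p for p in row for _ in range(factor)]

def upsample_linear(row, factor):
    """Linear interpolation: a hard edge gains intermediate (blurred) values."""
    out = []
    for i in range(len(row) - 1):
        for j in range(factor):
            t = j / factor
            out.append(row[i] * (1 - t) + row[i + 1] * t)
    out.append(row[-1])
    return out
```

    On a one-pixel black-to-white edge, nearest neighbor yields [0, 0, 255, 255], while the linear version yields [0, 127.5, 255]: exactly the blur that ruins screenshots and hatching.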

    Your placed PDF (using the default setting of "Passthrough") is version 1.7 and you are exporting to PDF/X-4, which uses version 1.6. In Affinity apps there are "compatibility rules" (not known in other apps) according to which the target must use the same or a later version number than the placed PDF. Otherwise the output will be rasterized. [There are compatibility issues also when placing non-PDF/X-based files and exporting using PDF/X-based methods, so care must be taken whenever working with placed PDF files.]

    Preflight by default warns about this "incompatibility", but the fix it offers is to export the placed PDF as "interpreted", which then typically causes many other issues: e.g. the embedded fonts need to be installed and mapped correctly, the overprint status of source objects is lost, etc.

    The way to resolve this would be to use the "PDF (press-ready)" preset (which by default uses PDF version 1.7) and uncheck the "Embed ICC profiles" setting. If you also want the images to be converted to CMYK, check "Convert image color spaces". The issue with this method is that live transparencies won't be flattened (that only happens if you choose PDF/X-1a or PDF/X-3 in Affinity apps), but as you used PDF/X-4, this probably does not matter (since that mode also keeps live transparencies).

    image.png.1124cd803e0eb134d5875d6a666d4926.png

    Here's a fixed version created by using the settings above:

    Problem_PDF_Rasterization_fixed.pdf

    UPDATE: The other problem that you mentioned, rasterization immediately after import, is illusory since the app basically uses rasterized preview instead of trying to fully render the document.

    The easiest way to upsample to the desired document DPI would be to simply rasterize all image layers that are below the required PPI. That, however, means that the affected images have their color mode permanently converted to the document color space, which is probably not what is wanted if there is a need to export to multiple color spaces.

    One solution would be to mask the low-res images with a 300 DPI pixel layer, as that keeps the color mode of the original images. The following clip shows both methods, and shows how rasterization in CMYK color mode results in desaturated colors when later exporting to RGB mode (the raster mask could actually be disabled when doing an RGB export so that the original image is used instead of the upsampled one, assuming that the original DPI is high enough for the purpose):

    Upsampling is of course a somewhat questionable operation, so to get satisfactory results it may be necessary to do it manually. In some situations (e.g. low-resolution, hard-edged line art or screenshots containing text), the best choice might be to just leave the images low-res and let the RIP handle upsampling (it is typically done using nearest neighbor to avoid blurred images). Sometimes an effective solution could be to use Resource Manager to place the low-res images in a separate folder, take a backup copy of the originals, upsample the images in a batch using the "by 2" upsampling (2x, 4x, etc.) of a dedicated app like Topaz Gigapixel, and overwrite the low-res files, so that the images can simply be updated in Resource Manager after they have been processed.

    Note too that in order to simulate InDesign-like functionality, you should make the swatches global. This can be done using the button to the right of the button clicked in the clip above, but then the color names would be more or less meaningless rather than showing the color values (the swatch names also cannot be programmatically updated to match changed definitions).

    FORGOT TO ADD: So instead of using that button, right-click the created swatch that has the color values as its name, and choose "Make Global" from the context menu that appears.

    Global swatch assignments are useful not only for having linked color assignments but also when applying tints, since it is possible to change in one go all objects sharing the same parent color (but using varied tints), simply by changing the color definition of the parent swatch. (However, you cannot create a child swatch from a tint of a parent swatch, so the original link is lost once such a swatch is created.)

  20. 15 hours ago, HarryW said:

    I converted the linked image files to Tiff and the PDF is now clean, with no artifacts!

    Thanks for the update -- very useful information. I have noticed similar issues when placing Designer files in Publisher. Both files can get very complex, involving multiple color modes, rasterizations, etc., and it is not always obvious how the files interact and how complex compositions are resolved when exported. In such situations old-school workflows, importing fully resolved content and applying pre-export flattening, are more fail-safe. Working non-destructively is fun and allows experimentation, but it might require creating a copy of the original file when it is time to create production files. Export-time flattening may be impossible, and even when the production file seems successful, the end result may be surprising when a complex PDF with live effects is rasterized on the printer.

  21. On 3/2/2023 at 6:06 PM, laurent32 said:

    Knowing that I'm exporting to PDF/X-1a:2003, using CMYK format document (apub) with undefined color profile yet, would you say :

    - I can use RGB files with undefined color profiles ?

    - I can use RGB files with defined color profiles that would match the color profile I would set to the apub document ?

    The inner workings of color management are not easy to describe, so seemingly simple questions like those related to ink control often require a longish answer to be answered at any depth (or at all). Some of what is said below has been mentioned in another recent post I made on this forum, related to changing a CMYK profile while keeping existing color values, and in numerous posts made earlier both by me and by other users. Apologies for the repetition and wordiness, but here goes:

    When you create a publication in RGB or CMYK color mode, you always have both an RGB and a CMYK profile assigned to the document at its creation time. Whatever color mode you choose for your publication, you will be offered the default color profile for that color mode and given a list of profiles installed on your system for the chosen mode. The defaults in Affinity apps are sRGB for RGB documents and US Web Coated v2 for CMYK (to mention just these two color modes). The user can change the defaults via Preferences > Color. The latent color profiles (e.g., CMYK if you create an RGB document, and RGB if you create a CMYK document) will be assigned to a new document based on these settings (they are called global or working color profiles).

    You can subsequently change the latent secondary color profiles by switching to the desired color mode (e.g., in Publisher and Designer, using File > Document Setup > Color), selecting the desired profile, and choosing "Assign" to keep the current color values unchanged. (Note, however, that switching between the modes causes conversion of (Pixel) layers, so switching between document color modes can normally be done without problems only in Publisher and Designer, where raw pixel content is typically not used.) You could then switch back to the former principal color mode, if preferred, but whenever you export to secondary color spaces, the underlying profiles determine how the colors will (by default) be converted at export time, and how their values are displayed when you switch the color model in the Color panel while having the lock turned on (which is the default setting).
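    The practical difference between assigning and converting can be demonstrated outside Affinity, too. A small sketch using Pillow's ImageCms module; since Pillow cannot build CMYK profiles itself, sRGB and Lab (which it can) stand in for the real profiles so the example stays self-contained. Assigning only changes how the existing numbers are interpreted; converting recalculates the numbers for the new space:

```python
from PIL import Image, ImageCms  # Pillow (third-party)

srgb = ImageCms.createProfile("sRGB")
lab = ImageCms.createProfile("LAB")  # stand-in for the real target profile

im = Image.new("RGB", (1, 1), (255, 0, 0))

# "Assign": embed a profile; the pixel values stay exactly as they were.
assigned = im.copy()
assigned.info["icc_profile"] = ImageCms.ImageCmsProfile(srgb).tobytes()

# "Convert": transform the pixel values into the new space.
converted = ImageCms.profileToProfile(im, srgb, lab, outputMode="LAB")

print(im.getpixel((0, 0)), assigned.getpixel((0, 0)))  # identical values
print(converted.mode, converted.getpixel((0, 0)))      # new mode, new values
```

    With real .icc files, the same `profileToProfile` call (with `outputMode="CMYK"`) is what an export-time conversion conceptually does.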

    If your principal target is CMYK and you do not yet know the exact color profile, I'd choose CMYK as the document color mode and, as the profile, something close to the final target: e.g., if you already know that the job will be printed on coated, uncoated, or newspaper stock, choose something like ISO Coated v2/US Web Coated v2, Uncoated Fogra 29/US Web Uncoated v2, or ISO Newspaper/US Newsprint as the CMYK profile. As the RGB profile, I would choose sRGB (you could then place RGB files without a profile, which would be assigned the sRGB profile, or RGB files with a wider color gamut that have their profiles embedded).

    For RGB files, the profile-based TAC limit would automatically be applied when exporting to CMYK. Within Affinity apps, native RGB objects like vector shapes and text are always converted when exporting to CMYK formats (even when using PDF versions that do not require it; this is a clear difference from InDesign). Whether placed RGB images (most importantly raster images) will be converted depends on the export settings, but e.g. PDF/X-1a:2003 will always convert RGB images to CMYK, so the total amount of ink can be checked with e.g. Adobe Acrobat Pro, or by opening the PDF in an Affinity app, whenever using export methods that create pure device-CMYK output. Other PDF/X-based export methods would by default leave RGB images in RGB color mode [but if I remember correctly, in Affinity apps PDF/X-3 will convert image color spaces to CMYK, even when not necessary] and would just include the necessary ICC information so that the separation can later be made on the RIP by the printer. Using PDF tools like Adobe Acrobat Pro, it would be possible to check the TAC that would be applied with the target CMYK profile.
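    For an already separated CMYK image, a rough TAC check can also be scripted when Acrobat Pro is not at hand. A sketch assuming Pillow: the TAC of a pixel is simply the sum of the four ink percentages, so the maximum over all pixels shows whether a limit like the 300% of typical coated profiles is exceeded:

```python
from PIL import Image  # Pillow (third-party)

def max_tac(im: Image.Image) -> float:
    """Return the highest total ink coverage (in percent) in a CMYK image."""
    if im.mode != "CMYK":
        raise ValueError("expected a CMYK image")
    # Channels are stored 0-255, where 255 corresponds to 100% ink.
    return max(sum(px) for px in im.getdata()) / 255 * 100

# A 'rich black' of C60 M40 Y40 K100 sums to a TAC of 240%:
rich_black = Image.new("CMYK", (1, 1), (153, 102, 102, 255))
print(f"{max_tac(rich_black):.0f}%")  # -> 240%
```

    This only inspects raster data, of course; it says nothing about overprinting vector objects stacking ink on top of it.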

    For placed CMYK files, and native objects defined in CMYK, working with an indeterminate (non-final) CMYK profile is more problematic. When you finally know the profile and need to prepare the production PDF with the correct specs, you would need to convert the existing CMYK values according to the new CMYK profile, which always causes K-only black color values to change to four-color values and also loses swatch assignments, so post-conversion tasks would be needed, like changing the former K-only parts back to K-only. (The conversion would, however, also apply the TAC limit and automatically take care of excessive ink usage.) If the current and final CMYK targets do not differ much, it would be fine to just assign the new profile (via File > Document Setup > Color), which keeps the existing color values, and then manually adjust some key colors if necessary and ensure that there is no excessive ink usage.

    As for placed CMYK files with no embedded ICC profiles (which would, e.g., include EPS files), the situation is more problematic still, since reassigning a new CMYK profile does not auto-tag placed files with the final CMYK profile, so when exporting, the original CMYK values of these files would change (including K-only definitions, which would certainly be unwanted). The assigned ICC profiles of these files would need to be updated by first changing the global working CMYK profile to the required project-specific CMYK profile using Preferences > Color, and then physically replacing the files (removing them from the canvas and placing them again); in version 1, updating could be done by "replacing" the files via Resource Manager while still keeping the files placed on the canvas. A trick where the linked source folder is renamed and the document is then reopened, letting the app re-read the linked files in their new location at document open time, would probably help automate the profile update process. But because of the complexity of reassignment, it is best to avoid placing CMYK images in publications at all if the final target is not known and it is likely that the default CMYK profile needs to be changed. As for CMYK files with embedded profiles, they should be changed in their source apps to avoid "inadvertent" conversion of CMYK values when producing the final PDF exports.
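    If replacing the files through the app is impractical, untagged files can also be tagged outside it before being relinked. A minimal sketch assuming Pillow and that the project's CMYK profile exists as an .icc file (both paths in the commented call are hypothetical); only the profile is embedded and the pixel data is left untouched, which corresponds to assigning rather than converting:

```python
from pathlib import Path

from PIL import Image  # Pillow (third-party)

def embed_profile(image_path: str, icc_path: str) -> None:
    """Embed an ICC profile into an image file without touching pixel data."""
    icc = Path(icc_path).read_bytes()
    with Image.open(image_path) as im:
        im.load()  # make sure the pixel data is read before overwriting
        im.save(image_path, icc_profile=icc)

# Hypothetical paths -- substitute the actual linked images and profile:
# embed_profile("links/chart.tif", "profiles/CoatedFOGRA39.icc")
```

    Note that this works for formats that can carry an ICC profile (TIFF, JPEG, PNG); it cannot help with EPS, which is one more reason to prefer the former formats.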

    This is precisely why InDesign by default discards embedded CMYK profiles of placed content and just passes through the original CMYK values; one would need to go to a lot of trouble and do image-by-image manual tweaking to get InDesign to behave like Affinity apps now do with regard to handling placed CMYK content. As it is, the best way to avoid these problems is to avoid placing CMYK content, and when it is necessary (e.g., when a publication contains illustrations or charts where K-only values need to be specified or retained), (re)import it after the final CMYK profile is known. Because of the complexity of reassigning images with no embedded CMYK profiles, it is best to use image formats that allow embedding ICC profiles. Having CMYK content placed in PDF files that can be passed through (basically ignoring profiles) would otherwise be the best method, but then there are complexities involved in Affinity's PDF compatibility rules.

    It is unfortunate that the situation has not improved with the version 2 apps (and has rather got even worse in some respects), so we can only hope that color-management-related features, and especially PDF production, will improve in the future. Much of what is said above probably does not apply to most users of Affinity apps, though. The complexity related to production primarily affects projects where lots of third-party CMYK content needs to be placed and the files either contain mixed or conflicting ICC profiles as regards the target CMYK profile (i.e., profiles that InDesign would just discard, using the native CMYK values), or contain no profile at all, so that the import-time-assigned working CMYK profile would conflict with the final CMYK target. For these kinds of workflows, using a dedicated prepress tool for PDF post-processing would probably be a wise investment.

    Many users, especially those who print just their own content, can safely use a workflow where all placed content is in RGB color mode and CMYK definitions are used only for text, which is defined as K100 (CMY 0) or created with in-app tools without needing to import anything. Often these kinds of jobs are also printed at self-publishing print shops, which specifically request either device-CMYK, transparency-flattened content produced with methods like PDF/X-1a (and which would simply discard all embedded profiles if included), or plain RGB content (including text, defined as RGB 0, 0, 0). When using these kinds of workflows, there is generally no need to worry about TAC limits and ink control. If there is a real need to include a specific CMYK profile and it differs from the one currently used, this can normally be done just by using File > Document Setup > Color and then assigning the correct CMYK profile. This keeps the existing color values (specifically the K100 of text) and applies the requested TAC at export time.

  22. 1 hour ago, Ldina said:

    @lacerto can probably provide more in-depth information, since he seems to be quite familiar with all things PDF

    I too have personal experience only with Adobe Acrobat Pro for tasks like these. Acrobat Pro 2020 can also still be purchased as a perpetual license, both on Windows and macOS (but the platform needs to be chosen). Other tools I know of that can perform these kinds of tasks are pdfToolbox by Callas software (both platforms, a bit pricier than Adobe Acrobat Pro) and, on Windows only, PDF Tools by Tracker Software, which is more limited but quite robust and does not cost much (and comes with a very capable PDF editor and an RGB-based virtual printer in the same package, as well). It can also convert to the PDF/X and PDF/A standards, check compatibility issues, and do Acrobat Pro-style user-defined color conversions, like this:

    pdftools.thumb.png.cbee671c9a523c0401c7be24f92f53c4.png
