Posts posted by lacerto

  1. I think there are issues with the latest versions of PackzView:

    [attached screenshot: packzview_issue.png]

    As you can see, it fails to show the shadow and the black plate. The files display identically in Adobe Acrobat Pro 2020. Neither file has overprints or transparencies, and both are DeviceCMYK.

    EDIT: Something has changed, though, because the file produced by an earlier version still shows correctly. 

    EDIT2: I cannot reproduce the issue from the macOS version of Designer 2.2.1. Perhaps there is some specific setting that causes this? Or it could be a hardware (display) or OS-specific issue: I am running macOS Sonoma 14.1.1 in native Apple Silicon mode, with no performance settings disabled.

  2. As suggested by @kenmcd, this is likely a result of having multiple fonts with identical names installed on the system. In particular, the legacy Type 1 Gill Sans (from Adobe Font Folio 8, supplied by "Monotype") and Apple's supplemental TrueType versions (supplied by "Agfa Monotype") have fully identical names:

    Without a name conflict, embedded TrueType subsets (the Apple version) should appear in Acrobat's font list under their trademark names (specifically their PostScript names, e.g. GillSans-Bold) instead of generated names. Apple's use of trademark names in its supplemental fonts, which prevents installation [or, more accurately, proper use] of commercial versions of these fonts (often from the same manufacturer) even when the Apple supplementals are hidden and cannot be accessed, is a common issue on Macs. Gill Sans, however, can still be accessed up to Sonoma, and commercial OpenType versions now carry additional identifiers such as Gill Sans MT (basically still the same font from Monotype).

     

  3. Sorry, I did not check the results. It seems that to achieve what I attempted, a specific fixup that includes embedded files should be used instead. But as confirmed by your printer, there is no reason for this, as the job is basically DeviceCMYK, despite the apparent conflict with the sRGB blending mode shown at the top of the job. I could not see any significant difference between the files provided, so if there is any, it is probably just a viewer-related difference and not significant in production.

  4. 22 hours ago, Hayz said:

    When I check the pdf in Output Preview in Acrobat Pro, the 'drop shadow/outer shadow' on the new image is showing as "Not Device CMYK" and "ICC CMYK", while everything else is DeviceCMYK.

    Yes, this is what Affinity apps do. The only way I know to deal with this from within Affinity apps, while keeping the shadow effect in the DeviceCMYK color space, is to export using PDF/X-1a, which both forces CMYK and flattens live transparencies (Affinity apps basically always flatten by rasterizing).

    Forcing PDF/X-1a, however, is not necessarily a good idea, as that easily causes rasterization of placed PDF images. As you seemed to have a few RGB-based images in the example projects you included, it does not seem that the printer strictly requires CMYK only. Your job basically IS CMYK-only even if the added image puts an sRGB-based blend color space on top of the job. I do not think this causes problems if you choose NOT to embed any ICC profiles (embedding them is the default). That removes the risk of false readings of translated colors confusing the printer; including ICC profiles is unnecessary here, where you want to pass through native color values without any ICC translation and keep everything in CMYK.

    Btw, I think it is ultimately the blur of the shadow effect that causes the sRGB blending color space on top of the image. I thought it could have been the transparent background of the new PNG image, but I replaced it with a CMYK TIFF with a white background, assigned the document's US Newsprint profile, and clipped it to a circle using a native ellipse shape, and that did not avoid the issue. I also tried turning off the Layer FX Outer Shadow and producing the blur with the Live Gaussian Blur filter borrowed from the Photo persona, using K100 (instead of RGB-based black) to make the shadow similar to the other shadows in the ad, but that did not have the desired effect either. It is precisely this effect (an RGB-based blending color space covering the whole job), combined with the default setting of embedding ICC profiles, that causes much confusion and false readings in apps like Adobe Acrobat Pro.

    Affinity apps do not allow a user-defined blending color space (as e.g. InDesign does), which would avoid the issue. You could fix this in Adobe Acrobat Pro (see the attached file below) by using the Flattener Preview and choosing DeviceCMYK as the blending color space.

    2. Aff rast unsupp to show drop shadow WT SNAP2007 INL_557 - TYM TS25 WT.pdf

  5. It might be that what the printer wants is a DeviceCMYK print file with an explicit output intent. The fact that the job seems to be based on the PDF/X-4 standard additionally suggests that possible live transparencies should be retained.

    To have a similar output using Affinity Publisher, you would first need to ensure that the document CMYK profile is set to the one suggested by the printer: for coated paper, PSO Coated v3, and for uncoated paper, PSO Uncoated v3. If you need to change the profile, use File > Document Setup > Color, and either use the Convert option (if your current definitions are for clearly different media), then ensure that text you intend to print in black ink only is changed back to K-only (it would be four-color black after conversion); or use the Assign method to keep all current color values.

    After you have the correct CMYK document color profile, use the following export settings:

    [attached screenshot: export settings]

    a) Compatibility: PDF/X-4 forces the use of an output intent (which the printer obviously wants). It also retains all live transparencies in the document.

    b) Color Space: As document keeps the PDF color space as CMYK (meaning that all native shapes, whether defined in RGB or CMYK, will be converted to CMYK). PDF/X-4 does not require this, but as the printer instructed it, they obviously want CMYK output.

    c) ICC profile: Use document profile means that the PDF target profile will be the same as the document's, e.g. PSO Coated v3. In InDesign you can change the color profile at this stage without converting CMYK color definitions, but in Affinity apps you cannot, so you need to ensure that your document CMYK profile is the same as that of the exported PDF.

    d) Embed ICC profiles: this setting cannot be changed here, but it is irrelevant, because you are going to convert everything to the target CMYK, that is, make a DeviceCMYK PDF, which is no longer ICC-dependent and does not need embedded profiles. So none will be included.

    e) Convert image color spaces: checked. This converts the placed images to the target CMYK and leaves no alternate color spaces in the document.

    So you will essentially produce a DeviceCMYK document that has no ICC profiles embedded (because none are needed) but does declare the correct target CMYK profile explicitly as the output intent (for whatever reason). You can verify this with preflight tools such as Adobe Acrobat Pro:

    [attached screenshot: preflight result]

    Why the printer wants specifically this kind of file is not obvious, but as long as their InDesign-based specs are correct, the instructions above will produce what the printer expects.

  6. On 11/9/2023 at 7:19 PM, Gianni Becattini said:

    In this way, the size of the complete file shrunk from 9.1 to 3.4 G. Until now I received no complaints.

    But what happened with this? Was there some problem with the images when they were converted to 8-bit using this method?

    5 hours ago, Gianni Becattini said:

    You suggested me to reduce the jpeg quality and this seems have a great impact on the size: producing a CYMK file with the settings below, with 85% the size went down to 2.38 GB. To my unexperienced eyes, it seems perfect, even enlarging much on the screen, but they refuse to follow this way.

    Without knowing the printer's standard workflows, I cannot understand why they do this, but generally, converting to the target CMYK would be the standard procedure. I assume they are trying to resolve the problem by manipulating the production PDF file (that is, using e.g. Acrobat Pro to convert to 8-bit RGB, reduce the file size, etc.), which are both standard procedures in Adobe Acrobat Pro, so perhaps there is some more fundamental issue that remains a problem? If you wish, you can tell me more about the project and the printing specs in a private message (available by clicking a poster's icon) if you do not want to share this kind of information in public.

  7. 10 hours ago, Gianni Becattini said:

    In the end of the day, they are getting again big files. In my opinion, the problem is without solution, i.e., if you want a certain amount of information, you need space to store it and also compression has its limits...

    Ok, but it is good that they are willing to find a solution. If one ends up in a situation where the job is simply rejected (possibly by an automated preflight checker), the worst-case scenario is that the whole project needs to be prepared (at least partially) again.

    IMO Publisher does offer the means to resolve the issue (without needing to edit or switch images): it is possible to convert images to the correct CMYK space, provided that the appropriate CMYK profile (and media) is known, or to keep the RGB images and convert them to 8-bit by using PDF 1.4. But without knowing the details and the printer's requirements, this is just an opinion.

    I hope they manage to resolve this in a way that is fully satisfactory to you... 

  8. 5 hours ago, Meorge said:

    This gives me mostly what I'm looking for, except the actual printed page doesn't extend all the way to the edges of the sheet. This sounds like it might be an issue with my printer, rather than with Publisher, but maybe there's something else I'm missing?

    This depends entirely on the printer. I do not know whether it is possible for e.g. a laser printer to print to the edges of the paper, but it certainly is for many inkjet printers:

    [attached screenshot: borderless print settings]

    For commercial printing, one would need to add bleeds (typically with crop marks) and then the job would be printed on oversized paper and trimmed to the desired width and height.
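The trim-plus-bleed arithmetic described above is simple; here is a minimal sketch (the A4 page size and the 3 mm bleed are illustrative values of my own, not from any specific job or printer spec):

```python
# Bleed arithmetic for commercial printing: the job is laid out at the
# finished (trim) size plus bleed on every edge, printed on oversized
# paper, and cut back to the trim size.
trim_w, trim_h = 210, 297   # finished page size in mm (A4, illustrative)
bleed = 3                   # assumed bleed per edge in mm

bleed_w = trim_w + 2 * bleed
bleed_h = trim_h + 2 * bleed
print(bleed_w, bleed_h)  # 216 303
```

The exported PDF would then carry the 216 x 303 mm bleed area around a 210 x 297 mm trim box, and the trimming removes the extra 3 mm on each side.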

  9. On 11/11/2023 at 8:11 PM, MikeTO said:

    Acrobat Pro and PDF Expert can compare two PDFs so you could use one of them to help communicate changes to clients.

    I am not sure whether PDF Expert has more functionality in the Premium version of the app, but the regular version that I have only "compares" two PDF documents by placing them side by side, without any intelligence.

    The only other app, besides Adobe Acrobat Pro, that I know can do useful comparisons is PDF-XChange (Windows only), but e.g. its image comparison does not cover attribute changes:

    [attached screenshot: PDF-XChange image comparison]

    Otherwise its comparison features are pretty much copied from Adobe Acrobat Pro:

    [attached screenshot: PDF-XChange comparison features]

    Whether PDF comparisons are useful depends much on the kind of document being compared. For long texts I still use the method available in InDesign, which allows story-wise export to formatted RTF documents. That includes e.g. footnotes as well, and when captions are also kept in one story, this method makes comparison of long documents with the document-compare feature of Word or LibreOffice fast and effective. For very complex documents with text parts in independent stories, I use a script that collects style-tagged texts page by page, in user-defined order, and copies them with page references into a new document. This makes it possible to extract captions, table headings, footnotes, etc. as separate text elements and compare them to the original texts (including local formatting such as the use of italics).
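For what it is worth, once the texts have been extracted from the two revisions, even a plain-text diff does most of the work. A minimal sketch using Python's standard difflib (the sample strings are invented placeholders for extracted story text):

```python
# Compare two extracted text versions line by line with the stdlib difflib.
import difflib

old = ["The quick brown fox.", "Jumps over the dog."]
new = ["The quick brown fox.", "Jumps over the lazy dog."]

# unified_diff marks removed lines with "-" and added lines with "+".
for line in difflib.unified_diff(old, new, fromfile="v1", tofile="v2", lineterm=""):
    print(line)
```

This is, of course, far cruder than Acrobat Pro's comparison, but it is free, scriptable, and works on any text you can extract.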

  10. An alternative explanation is, of course, a confused reading (especially in Adobe Acrobat Pro) caused by embedding an ICC profile in a job where CMYK values are already resolved (DeviceCMYK).

    Your screenshot implies that you have (correctly) used the document color profile (and NOT an export-time profile deviating from the document profile, which always results in recalculated CMYK values for native objects defined in CMYK, e.g. a K100 black being converted to four-color black). BUT the same screenshot also shows that you have not disabled the default option "Embed ICC profiles", which makes the exported PDF ICC-dependent even though there is no need for print-time calculations. This means that anyone reviewing the color values of the job in Adobe Acrobat Pro with an incorrect simulation profile would get ad hoc translated values instead of the native color values of the inspected objects.

    I have no clear understanding of whether this kind of confusion might actually result in recalculated color values being printed on paper, but it might, either by confusing print personnel (as may have happened in your case) or by causing a wrong interpretation in RIP software (confused by the mixed color spaces included, without an explicit output intent profile).

    Therefore I would recommend checking the "Convert image color spaces" option and unchecking the "Embed ICC profiles" option, as this produces pure DeviceCMYK output (similar to what InDesign and QuarkXPress do with standard press exports) that does not need any embedded ICC profiles. If it is important to leave placed images in the RGB color space, it would be a good idea to use the PDF/X-3 or PDF/X-4 export methods instead (which embed all relevant ICC profiles, and the output intent profile, as required for print-time resolution).

  11. 9 minutes ago, thomaso said:

    Curious: What was the reason to do a clean install of Ventura first if you actually wanted to get Sonoma?

    For some reason the reinstaller did not offer Sonoma; I would have chosen it if it had. But after a direct upgrade from Ventura, no disk space was lost in the upgrade. I do not know where I had lost all that disk space, but now I am basically back where I was three years ago, with about 150GB of free disk space. It took me three years to waste it, pretty much the time Apple thinks one should spend money on a new Mac 🙂 Really, no clue what the now-recovered 140GB was wasted on, but there is no way a cleaner could have done the same! Just to encourage anyone in doubt to do the same: only a few hours spent on the job...

  12. 15 hours ago, Andreas Scherer said:

    I guess my gist is: Don't listen to the fearmongers! 😇

    It of course depends on the workload and on personal needs and preferences, but I tend to agree... I have had an entry-level MacBook Air M1 2020 with 8GB RAM and a 256GB SSD for 3 years now. I recently got into a situation (after upgrading to Sonoma) where I had only 10GB of free disk space left and did not want to install a cleaner to mess around with my system, not even the cache files. I backed everything up on a portable SSD with Time Machine, did a clean reinstall using Ventura, which I immediately upgraded to Sonoma 14.1.1, and had about 220GB of free disk space when finished, just a couple of hours later. Now I have installed all my apps back (including Affinity and Adobe apps, a bunch of other graphic design apps, Visual Studio and Xcode), plus the most important documents, and have about 150GB of free disk space.

    I already checked what I would get for this "toy" if I traded it in for a new Apple laptop: EUR 512 (EUR 362 for the computer and EUR 150 as an Apple trade-in offer), so I would basically only need to pay about EUR 600 to get a new M2, or about EUR 800 more for an entry-level M3 Pro. I am tempted, but might be able to resist 🙂 I am just writing this to let you know that you can get a well-performing computer for graphic design for the roughly EUR 1,100 that an entry-level M2 laptop (or Mini) costs, keep it for years, or trade it in after a few years and still get a good price. If you really do not need a high-end supercomputer, settle for something more reasonable and more suitable for your budget!

  13. 2 hours ago, Gianni Becattini said:

    Thanks again!

    You're welcome! Good to hear that this seems to work for you. As mentioned, if the size continues to be a problem, you could probably afford to increase compression without issues. It is interesting to hear about different print/press workflows.

    I have been following your posts for a while and thought that your project, with lots of large images and text often set white on black backgrounds and over images, might be one where traditional K-only black text, or even a CMYK-based job, is not important. Quite the opposite: producing an all-RGB print job, even without converting to 8-bit images, appears to work well and without requiring intermediary steps (big steps, given the sheer number of images). I suppose many printers could accept RGB 0,0,0 (body) text as well and just let prepress software convert it to K100, while similar black in graphic objects and larger areas would be converted to rich black (with optimum results, because the conversion is left to the printer).

  14. As Affinity apps basically always convert native elements (text, shapes, etc.) to CMYK whenever a CMYK PDF is created (even when this is not necessary), it is strange that RGB output was created. Delivering a sample page (preferably of both the Affinity document and the created PDF) would therefore save time, simply to double-check the readings and rule out confusion. There are multiple reasons why production of a press PDF might fail and why original color values change, as proven by countless posts on this forum trying to explain why the expected output is not achieved (or why native object color values get incorrectly interpreted or read).

  15. 5 hours ago, Gianni Becattini said:

    Is there a way to obtain this reduction in an automatic way? I tried many setup but with no success. Probably I am not finding the correct profile?

    I tried the settings you used (producing an RGB-based PDF using PDF 1.7, which does not convert placed 16-bit RGB APhoto images to 8-bit) against exactly the same settings with PDF 1.4 as the compatibility setting (which does perform this conversion), and that reduced the size of the produced PDF from 6,317KB to 2,685KB for a document with just one such image.

    If your printer accepts everything else in your production file (basically all in RGB), then this is the method for reducing the file size. You might also try adjusting the JPEG compression (quality), especially if you have big images: if you have plenty of pixels and good-quality images, you can typically afford to increase compression significantly without noticeable (visible) deterioration in image quality.

    EDIT: Note, too, that the APhoto format of a 32-bit CMYK image is not comparable to the JPEG-compressed version of a 32-bit CMYK image that you get as a result of the export. When I exported the same image mentioned above, converting it to CMYK, the resulting file size was 2,555KB, so even slightly smaller than the 24-bit RGB version.

  16. It is because your PDF export is probably based on "PDF (for print)", which uses the document color mode (in your case RGB) rather than a specifically CMYK-based export. Unless your printer has asked you to create a specifically RGB-only production file, you should choose a "PDF (press-ready)"-based export method, or at least explicitly specify "CMYK" as the Color Space and "Use document profile" under ICC profile:

    [attached screenshot: export dialog]

    This kind of export uses the document CMYK profile, which, if you have not explicitly defined it at any point in the document's history, is the one that was in force under File > Preferences > Color at the time the document was created. The default in Affinity apps is U.S. Web Coated v2, which works fine with regular coated media. You should ask your printer which CMYK profile they prefer.

    Anyway, if your images are really all Affinity Photo documents, this setting means that all your 16-bit RGB placed images will be converted to CMYK (32-bit images, that is, 8 bits per channel) using the document CMYK color profile, even if you do not explicitly check "Convert image color spaces". For other kinds of placed images (JPG, PNG, TIFF, etc.) the color space stays RGB. For those images to be converted from 16-bit to 8-bit RGB, you should choose the PDF/X-3 export method (which, as a side effect, flattens transparencies). EDIT: To keep transparencies and RGB images but force them to 8-bit, you could also choose PDF 1.4 (Acrobat 5) from the Compatibility list shown above (so basically keep your current settings and just change Compatibility to PDF 1.4).

    I recommend that you ask your printer for their recommended production method; what you need to do depends much on this. [E.g., some printers may prefer an all-RGB production file, in which case there is no point in converting anything to CMYK.]

  17. 7 hours ago, Gianni Becattini said:

    Is there a solution different from editing about 800 images by hand? (even with 3.rd party tools)

    When I tested this with 16-bit APhoto documents placed in a Publisher file, these images (placed linked in the Publisher document) were always converted to 8-bit CMYK images when exported to a press-based CMYK PDF. However, e.g. TIFF-based 16-bit RGB images were exported as 16-bit images (even if the Publisher document is 8-bit).

    The easiest fix is probably to use the "Convert image color spaces" option when exporting a CMYK production PDF, which converts them to 8-bit CMYK images. Just make sure that the correct target CMYK profile is active in the document; if it is not, apply the correct target CMYK profile (via File > Document Setup > Color) using the Assign option, so that existing CMYK color values are not converted to the new target profile.

    Another possibility would be using e.g. the PDF/X-3 export method (without forcing image color space conversion), which keeps the images (e.g. JPGs, PNGs, and TIFFs) in RGB but reduces the bit depth to 8 bits per channel (the maximum supported in PDF/X-3). This option would, however, flatten transparencies.

    Using an external editor like Adobe Acrobat Pro would also let you convert 16-bit images to 8-bit as a standard single fixup.
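As background, the 16-to-8-bit reduction these tools perform is conceptually tiny. A minimal sketch of the idea (my own illustration, not how any particular app implements it):

```python
# Reducing 16-bit channel samples (0..65535) to 8-bit (0..255).
# Keeping the high byte works; adding 128 first rounds to the nearest
# 8-bit value instead of always truncating down.

def to_8bit(sample16):
    """Map one 16-bit channel sample to 8 bits with rounding."""
    return min(255, (sample16 + 128) >> 8)

samples = [0, 128, 32768, 65535]
print([to_8bit(s) for s in samples])  # [0, 1, 128, 255]
```

Halving each channel from two bytes to one is also why the exported file sizes drop so sharply, even before JPEG compression enters the picture.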

  18. 8 hours ago, thomaso said:

    In your FontLab screenshot the "Line gap" = 0. How come the descender and ascender overlap in your example? Wouldn't this value rather mean they only slightly touch each other, without overlapping?

    Difficult to say. This specific version comes from the Apple-provided supplemental fonts and might be defective in some ways, or deliberately crippled. There are 10 stylistic sets with widely varied swash lines, some of which extend clearly beyond the specified ascenders and descenders. Basically this is not an issue, since the line spacing of this font should ideally be set manually anyway, watching how descenders and ascenders overlap and whether that creates visually disturbing artifacts.

    I have no idea why the Units Per eM value is 400 in this font, far from being the ascender minus the descender, but as this is not required by typography specs, these kinds of anomalies can be expected.

    Interestingly, the stylistic sets (and swashes) are not exposed in the macOS version of this font, but e.g. Pages can use them pretty cleverly.

  19. Considering that your end product is a sprite, it may well be that what you need is a non-antialiased bitmap rather than an anti-aliased one. If so, you can turn off antialiasing for bitmap exports using the Layer Blend Options (accessed by clicking the cog icon at the top of the Layers panel).

    [attached screenshot: antialiasing_turned_off.png]

    Above, the view mode has been changed to Pixels (Retina) to see the effect already on the canvas. Instead of just forcing antialiasing off, you can use the Coverage Map to control exactly how the object is rendered (whether an edge pixel is turned on or off), but in your case forcing it off is probably what is needed.

  20. On 11/5/2023 at 11:35 PM, MikeTO said:

    Affinity is doing it the way apps are supposed to be doing it. But the problem is we live in a world dominated by Adobe, and they just hardcoded it to 120%, so font designers don't bother setting the default values correctly. Until the font design world catches up and uses the font settings properly, I hope that Serif will change it.

    I have understood that line spacing can be calculated in multiple ways: Apple apps often use the hhea table's ascender, descender, and lineGap values, while Windows apps use the now generally recommended (and required) OS/2 table's sTypoAscender, sTypoDescender, and sTypoLineGap values (while legacy apps can still use usWinAscent and usWinDescent).

    While the em is normally equal to sTypoAscender - sTypoDescender, the OpenType specs clearly advise that line spacing should not be calculated directly from the em, but from sTypoAscender, sTypoDescender, and sTypoLineGap:

    Quote

    The sTypoAscender, sTypoDescender and sTypoLineGap fields are used to specify the recommended default line spacing for single-spaced horizontal text. The baseline-to-baseline distance is calculated as follows:

    OS/2.sTypoAscender - OS/2.sTypoDescender + OS/2.sTypoLineGap

    It is often appropriate to set the sTypoAscender and sTypoDescender values such that the distance (sTypoAscender - sTypoDescender) is equal to one em. This is not a requirement, however, and may not be suitable in some situations. For example, if a font is designed for a script that (in horizontal layout) requires greater vertical extent relative to Latin script but also needs to support Latin script, and needs to have the visual size of Latin glyphs be similar to other fonts when set at the same text size, then the (sTypoAscender - sTypoDescender) distance for that font would likely need to be greater than one em.

    https://learn.microsoft.com/en-us/typography/opentype/spec/recom#tad
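The spec's formula is easy to check with a few lines. The metric values below are illustrative numbers for a hypothetical 1000-UPM font, not taken from any real font:

```python
# Baseline-to-baseline distance per the OpenType OS/2 recommendation:
#   OS/2.sTypoAscender - OS/2.sTypoDescender + OS/2.sTypoLineGap
# (sTypoDescender is normally negative.)

def default_line_spacing(s_typo_ascender, s_typo_descender, s_typo_line_gap):
    """Recommended single-spaced baseline-to-baseline distance, in font units."""
    return s_typo_ascender - s_typo_descender + s_typo_line_gap

units_per_em = 1000  # assumed UPM for this example
spacing = default_line_spacing(s_typo_ascender=750,
                               s_typo_descender=-250,
                               s_typo_line_gap=200)
print(spacing)                  # 1200 font units
print(spacing / units_per_em)   # 1.2
```

With these particular (invented) values the result happens to equal 120% of the em, which is why Adobe's hard-coded default often looks about right; fonts whose sTypo values differ will diverge from it.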

    E.g. FontLab 8 correctly cites this practice in its documentation:

    image.png.a6a35dfb197a9bbfab27baf6852eb5ed.png

    Considering the variation in the real world, I would rather say that Adobe and Quark chose the practical and more user-friendly method of determining default line spacing, instead of basing it on the way it "normally is" or "should be" calculated. But saying that Affinity apps calculate line spacing the way "other apps are supposed to be doing it" (and specifically that Affinity apps follow the OpenType specs), or that the variation exists because Adobe and Quark chose to ignore the typographic specs, is simply wrong.

    UPDATE: The Glyphs documentation has a good discussion of the complexities and the different "strategies" a type designer needs to consider when determining the "vertical metrics" of the fonts they design:

    https://glyphsapp.com/learn/vertical-metrics

  21. Your problems are probably caused by placing non-PDF/X-based or PDF/X-1a-based PDFs for passthrough and then exporting with any PDF/X method; these kinds of exports are typically rasterized by Affinity apps. If you have non-PDF/X-based placed PDFs that you want to pass through, the best choice is "PDF (press-ready)", which by default uses PDF version 1.7; convert color spaces to CMYK and deselect profile embedding. This creates a DeviceCMYK PDF with native color values and the original fonts embedded, respects overprint attributes, etc. [EDIT: and passes through all kinds of PDFs]. If your placed PDFs are all PDF/X-based [EDIT: and you need to use a PDF/X-based export method], you should choose PDF/X-4 to make sure that the exported PDF uses the same or a later PDF version as the placed PDFs.

    Converting to EPS might work, but note that overprint attributes are not honored and spot colors are not supported; these are Affinity-specific limitations (and embedded fonts would automatically be converted to curves, too). Letting Affinity interpret placed PDFs, especially if fonts have been converted to outlines, might also work, but then you need to pay attention to possible color profile conflicts, because interpreted (CMYK) PDFs are assigned the working (CMYK) color profile, not the document color profile (and profiles are never discarded and simply passed through, as happens by default in Adobe apps). And if there is a profile conflict at export time, all CMYK color values will be recalculated (causing e.g. K100 blacks to become four-color blacks).

  22. If you select "Black" from the Apple palette, it will be RGB 0, 0, 0. If you select the black well (one of the four "factory" wells in the Swatches panel), its color value depends on your document color mode: for CMYK it will be C0 M0 Y0 K100; for RGB and Gray it will be R0 G0 B0.

    The RGB hex #231F20 you see as your color value in the Color panel is the ad hoc translation of the C0 M0 Y0 K100 black to RGB hex, according to your current color profile environment (probably from U.S. Web Coated to sRGB). As long as the lock to the left of the sliders is on, this is just a computed translation of the CMYK value; but if the lock is off, then each time you switch the color model within the Color panel, the app actually converts the color value of the selected objects into the other color space, according to your profile environment.
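To illustrate why a value like #231F20 must come from a profile-based translation: a profile-free ("naive") conversion of C0 M0 Y0 K100 gives pure #000000. The formula below is the common naive CMYK-to-RGB conversion, shown only for contrast with ICC behavior; it is not what any color-managed app actually uses:

```python
def naive_cmyk_to_rgb(c, m, y, k):
    """Profile-free CMYK -> RGB conversion; inputs 0..1, output 0..255 per channel."""
    r = round(255 * (1 - c) * (1 - k))
    g = round(255 * (1 - m) * (1 - k))
    b = round(255 * (1 - y) * (1 - k))
    return r, g, b

print(naive_cmyk_to_rgb(0, 0, 0, 1.0))  # (0, 0, 0) -> hex #000000
```

The gap between this #000000 and the #231F20 shown in the panel is exactly the ICC translation at work: K100 in a press profile such as U.S. Web Coated is not a perfectly dark black on paper, so sRGB represents it as a dark warm gray.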

    Unfortunately, there is a bug in the macOS versions that prevents the true color values from being refreshed immediately when you release the lock, so you need to deselect and reselect an object to display its true color value in the Color panel. On Windows the true value is shown immediately when the lock is released.

    As for the "best black" to use, it really depends. If your project is a four-color job and the object the black is applied to is not small-size text, you would do well to use RGB 0, 0, 0, which will be converted to "rich black" (four-color black) at export time or when printed; that is the darkest black you can get. You would also want rich black when black areas need to overlap objects with other colors, since plain K100 typically gets overprinted and then looks uneven, differing between the parts where it overlaps other colors and the parts where it prints alone on the paper.

    On the other hand, if it is small text, or anything where a slight misregistration of the CMYK plates risks visibly blurred edges, it is best to use K ink only. The same is of course true if you are using a commercial press and have agreed to use black ink only (e.g. to save costs).
