Everything posted by lacerto

  1. Two standard methods for me to save time when sorting out Affinity curve anomalies have been exporting to PDF and opening the file for editing (sometimes usable for simple designs, though it did not work in this case), and using VectorStyler for the same job (Expand) and pasting the result back: that did work for this one. @toreador -- is there any chance that the letters were at some point based on TrueType character shapes? Some of the nodes (and their positions) show traces of having been converted from quadratic segments, a segment type not supported in Affinity apps (see the conversion sketch below), though in this case the problematic node was not non-standard in any way. Also, exporting the shapes to EPS or PDF and opening them back for editing cleaned the curves so that all nodes were of the "regular" kind, yet expanding the stroke still failed. Simplifying the shape in Illustrator (which actually more than doubled the number of nodes in order to retain the curvature of the shape) and copying this shape into Designer resolved the problem. It would be interesting to know if there is any history of that kind in this design.
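     For reference, a quadratic segment can always be converted exactly to a cubic one by degree elevation -- this is standard Bézier math, not anything Affinity-specific, and it is effectively what happens when quadratic TrueType outlines end up in a cubic-only format. A minimal Python sketch:

         # Exact degree elevation of a quadratic Bezier (P0, Q, P2)
         # into an equivalent cubic one (P0, C1, C2, P2).
         def quad_to_cubic(p0, q, p2):
             c1 = (p0[0] + 2.0 * (q[0] - p0[0]) / 3.0,
                   p0[1] + 2.0 * (q[1] - p0[1]) / 3.0)
             c2 = (p2[0] + 2.0 * (q[0] - p2[0]) / 3.0,
                   p2[1] + 2.0 * (q[1] - p2[1]) / 3.0)
             return p0, c1, c2, p2

     The curve shape is identical after elevation, but the resulting control point positions (and any rounding along the way) are what leave the kinds of traces mentioned above.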
  2. I noticed that the ICC profile extracted by ExifTool was corrupted (none of the Adobe apps recognized it as a valid ICC profile, even though Affinity apps did) -- it seems that ExifTool has problems extracting profiles correctly from ZIP-compressed TIFFs. So I embedded the profile in a JPG file and re-extracted it, and this time the resulting ICC profile was accepted by Adobe apps as well; see the sketch below. So here is a demonstration of applying the correct, matching simulation profile to an Affinity-created default press PDF which has the document CMYK profile embedded in the created file: correct_simulation.mp4 And here is the corrected ICC profile packed in a zip file: USWebCoatedSWOPd30.zip
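     For anyone wanting to script the same workaround, here is a minimal Python/Pillow sketch of the JPG round-trip (the file names are placeholders, and the carrier can be any small JPG):

         # Round-trip an ICC profile through a JPEG with Pillow.
         from PIL import Image

         with open("USWebCoatedSWOPd30.icc", "rb") as f:
             icc = f.read()

         # Embed the profile in a JPG copy of any image...
         Image.open("carrier.jpg").save("carrier_icc.jpg", icc_profile=icc)

         # ...and read it back out; profiles embedded in JPEGs seem to
         # survive extraction intact.
         recovered = Image.open("carrier_icc.jpg").info.get("icc_profile")
         with open("recovered.icc", "wb") as f:
             f.write(recovered)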
  3. The ICC profile would be important if it is embedded in the PDF (as it is when using the default press-ready CMYK PDF export preset of Affinity apps) and when using a viewer like Adobe Acrobat Pro, whose Output Preview dialog lets the user pick an ICC profile to simulate when interpreting the shown (calculated) color values. However, when using a custom color profile like the one in my demo -- based on US Web Coated SWOP v2 but with the Dot Gain value increased to 30% in Photoshop, and then extracted with ExifTool from a TIFF file where it was embedded -- the resulting ICC profile is not one that Adobe Acrobat Pro lists (I thought it lists everything placed in the system color profile folder, but obviously it does not). That means that I cannot select the correct simulation profile to match the ICC profile embedded in the file, and would accordingly see misleading color values in the Output Preview window, as demonstrated in the video below: icc_preview.mp4

     The video also shows (in the PDF viewed at top right) how the selection of a simulation profile is irrelevant (has no effect) for a job that is DeviceCMYK (not ICC-dependent), and how a PDF/X-based job has an obligatory output intent indicating the target CMYK profile of the job, automatically selected as the simulation profile. Here the actual ICC profile used in the job can be used by Acrobat even though it cannot list it for jobs where no output intent is included.

     The files used in the video also demonstrate how grayscale 0 and RGB 0, 0, 0 black (virtually identical color definitions) result in even builds of CMY values plus 100% K when converted to a CMYK-based PDF (a sketch reproducing the conversion follows below). So one possible explanation for what the OP may have experienced is having originally defined black as a grayscale value and expecting it to be handled as K100, as e.g. in InDesign, while in Affinity apps gray values in a CMYK document are treated as RGB values that always result in four-color blacks. [UPDATE: Note, though, that grayscale images placed in a CMYK document have the "K-Only" option automatically applied, forcing the gray values to be handled as K-only; so if the unexpected color values are only related to TIFF and other placed black images, this does not explain what happened, and it might have been a "view-only" kind of issue.] blacks_pressready_icc_embedded.pdf blacks_pressready_icc_not_embedded.pdf blacks_pdfx1a.pdf

     Here is the ICC profile I created; perhaps somebody can get Adobe Acrobat Pro to show it in its list of simulation profiles: USWebCoatedSWOPd30.zip
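     The four-color black behavior can also be reproduced outside the apps. A sketch using Pillow's ImageCms bindings to LittleCMS (the CMYK profile path is an assumption; any press profile will do):

         # Convert RGB 0,0,0 to CMYK through ICC profiles to show that
         # it becomes a four-color build rather than plain K100.
         from PIL import Image, ImageCms

         srgb = ImageCms.createProfile("sRGB")
         cmyk = ImageCms.getOpenProfile("USWebCoatedSWOP.icc")  # placeholder path

         black = Image.new("RGB", (1, 1), (0, 0, 0))
         converted = ImageCms.profileToProfile(black, srgb, cmyk,
                                               outputMode="CMYK")
         # Prints a 0-255 per-channel CMYK tuple; the exact build depends
         # on the profile and rendering intent, but it will not be K-only.
         print(converted.getpixel((0, 0)))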
  4. Hello @thoy, My first question is: which version of Publisher are you using? I have been reviewing Affinity color-management-related issues for a few years and do not expect rapid changes, but it seems (and I hope I have not just made some stupid mistake) that the recent 2.x versions finally show some development and improvement -- at least from the perspective of users with experience of other professional page layout apps. Namely: it seems that Affinity apps (at least Publisher) now, similarly to e.g. InDesign, discard embedded CMYK profiles from TIFF and JPG files, meaning that placing a raster image with an embedded CMYK profile that conflicts with the document CMYK profile, or with the CMYK profile selected at export time, no longer results in conversion of the CMYK values of these images. I am not sure if this is optional (e.g. is, or will become, a color profile policy setting), but possibly -- and hopefully -- so, since needs are different in the three apps of the suite. Native CMYK color values (of shapes and text) will still be converted on a profile conflict, as they basically should be (there is no option that would explicitly prevent this), as will the color values of .afphoto and .PSD files with a conflicting embedded CMYK profile -- probably because of their potential complexity, since they can hold vectors, text, etc. and be complex jobs.

     Anyway, to keep things simple when placing CMYK images, it is advisable, if at all possible, to use TIFF or JPG images with no embedded color profiles, in which case they will be assigned the document CMYK profile and their native colors will be passed through, also when changing the CMYK profile at export time (something that normally should not be done if CMYK definitions are used in text and other native objects). As mentioned, you can now (at least in recent 2.x versions) also place CMYK TIFF and JPG images with embedded, conflicting profiles, because the profiles will be ignored and the native colors of the images passed through. Note, though, that in version 1 apps JPG and TIFF CMYK images with conflicting profiles will have their native colors recalculated. Along with this change, many issues with K100 definitions becoming inadvertently converted to four-color blacks at PDF export will be avoided. The attached PDFs demonstrate how K100 definitions are translated and retained in different situations. The first PDF is exported using the document color profile, the second to a different CMYK profile. webcoated_dg30.pdf isouncoated_yellowish.pdf Note that the Publisher document in these examples uses pretty much the kind of CMYK profile you mentioned, based on SWOP Coated with max 280% TAC and 30% dot gain, as do some of the placed K100 images. As can be seen, the only problematic images are .afphoto files, which are recalculated when there is a profile conflict. RGB black, and K100 and 400% black in native shapes and text, will also be converted when there is a conflict, as expected.

     The second big question is: which app are you using to verify the color values of the exported PDFs? The 30% 30% 30% 100% build especially sounds dubious -- not something that would appear as a result of an ICC-profile-based conversion of CMYK values, but rather a CMYK representation of an RGB/grayscale black. Anyway, it is good to notice that Affinity apps by default embed the document ICC profile when exporting to press-related PDFs, which is in many ways problematic, because it often makes an all-CMYK (already fully resolved) file ICC-dependent; when such a file is viewed in e.g. Adobe Acrobat Pro, the color values displayed in Output Preview depend on the selected simulation profile, and the correct profile will not be automatically selected (except in PDF/X-based exports). This means that inherent values are recalculated ad hoc on display if the correct document color profile is not activated in Output Preview. Most PDF viewers do not support selection of a simulation profile and do not suffer from this issue, but this confusing default export method (which other page layout apps can also produce, if the default and recommended settings are deliberately rejected and changed) continues to trouble both users of Affinity apps and production personnel, and may actually result in inadvertently converted (or unnecessarily changed) print jobs.

     There are also other related issues -- though not relevant in your situation -- in Affinity-produced PDF exports (e.g. the so-called PDF "version incompatibility rules", dealing basically with transparency flattening), which may cause inadvertent rasterization, which in the context of K100 objects results in translation of mere black to four-color blacks. These kinds of problems cause much insecurity, and lacking a proper preflight / prepress tool, it might be a good idea to open exported PDFs in Publisher (or another Affinity app) and examine the color values; a quick way to check for embedded profiles in images before placing them is sketched below. In most situations (but not always) an opened PDF file shows colors accurately and at least helps in making correct production decisions.
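     As a quick way to do the embedded-profile check mentioned above, a Python/Pillow sketch (the file name is a placeholder):

         # Report the embedded ICC profile (if any) of an image to be placed.
         import io
         from PIL import Image, ImageCms

         img = Image.open("to_be_placed.tif")
         data = img.info.get("icc_profile")
         if data:
             prof = ImageCms.ImageCmsProfile(io.BytesIO(data))
             print(img.mode, "embedded profile:",
                   ImageCms.getProfileDescription(prof).strip())
         else:
             print(img.mode, "no embedded profile; the document profile will be assigned")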
  5. They did not ask for a Photoshop EPS file, which is a specific, press-oriented, nowadays probably more or less legacy format allowing extended features like color management, similarly to .DCS EPS files allowing color separations, and, as mentioned, things like duotones. My guess is that they might have asked for an EPS file once they heard about an unspecified vector job, might have first tried to open it in Illustrator, and, not being happy, might have taken it into Photoshop for rendering, without getting the expected results there either. Photoshop is still a kind of last resort for preparing a messy vector job that cannot be satisfactorily processed for print as vectors when the job needs to be at the press already. But the OP's situation might of course have been different... A properly created PDF might nevertheless be what would work.
  6. I suppose that's the way film emulation "packages" typically work: they are essentially film grain emulators (normally, I suppose, based on scans and curves built from real film data) that then apply HSL and other adjustments to get the tones and other factors right and create a good emulation. So they are basically presets of the multiple settings available in the package, whether or not they expose the preset values, and allow the user to fine-tune the image and combine it with other available features. E.g. FilmPack 7 is an independent app but also comes with a Photoshop plugin, and can additionally be integrated with DxO PhotoLab so that it can be used in combination with everything else available in that app in one go.
  7. They might actually want to have an Illustrator EPS file to be rasterized in Photoshop (kind of foolproof), and you cannot produce such a file. Try/ask if a PDF will do (and perhaps also ask which kind of PDF they want, CMYK or RGB, etc.)
  8. One possibility: a) Configuration: b) When merged: How functional this could really be depends much on how much the image sizes vary.
  9. Do you mean placed at 100% relative to their dimensions as determined by internal PPI and document dimensions, meaning that there will be no scaling at placement? If so, I suppose that just creating a large enough document and picture frame, and placing the merged images with a scaling property of "None", should do what you ask. a) In configuration: b) When merged:
  10. I suppose most of the film emulation plug-ins have additional controls; certainly the two I mentioned above do. As for G'MIC, there is a short article by the author of this plug-in about creating film presets and going down the "rabbit hole" of endless adjustments, which might interest anyone using these kinds of filters: https://patdavid.net/2013/08/film-emulation-presets-in-gmic-gimp/
  11. Of these, I can tell that DxO FilmPack v7 (both for Windows and macOS) works fine as a Photoshop plug-in with at least Photo 2.3.1:
  12. For free ones, if you are on Windows, you could try the G'MIC plug-in, which works with Affinity Photo. It has a number of presets for b&w, negative, print, and slide films:
  13. These two PDF files, exported from the .afphoto file shown above and comparing pixel-selection-based and vector-based simulated feathering, relate to the post above, but for some reason could not be attached to that post unpacked. feathering_compared.zip
  14. Here is yet another, a bit more complex, take at simulating pixel-selection-style feathering with shapes and text applied on a mask. feathering_compared.afphoto I might be wrong, but I think rasterization happens whenever this kind of mask-based effect is exported to vectors. Anyway, the components involved stay editable, so it is a live effect, and definitely useful. But I have not been able to produce something like this InDesign-based feathering, where all elements involved stay vectors at export and the text is still text. feathering_id.pdf feathering_id_transparent.pdf But as mentioned above, this was not specifically what the OP asked.
  15. Note that my understanding of feathering is not based on (mere) blurring, which in general, I think, is one of the fx that survives rasterization. I had wrongly assumed that transparency applied in the context of a gradient results in rasterization, but it does not (necessarily). Blurring is applied to edges and may be used to finish a feathering simulation, but I think transparency is more essential in simulating feathering (so that a larger part of the object where the effect is applied is affected, not just the edges). [In addition, transparency gradients allow similar directional and radial applications as the feathering features in e.g. InDesign.]
  16. The text actually does convert to curves (and is not rasterized) when the transparency gradient is applied directly to the text and not to a separate object. feathering_simulation.afphoto feathering_simulation_on_vectores_improved.pdf
  17. Yes, in that respect the simulated feathering works fine: the shapes and strokes it is applied to stay as vectors and remain editable, as does the text.
  18. Yes, you're (partially) right. In the vector version the feathered shapes do not cause rasterization of the underlying ellipse (as flattening of transparencies would), and both objects can be moved freely over other parts of the image while keeping their transparency effect. But the text is rasterized (I wrongly assumed it was only converted to curves). Still, it is better than I initially assumed.
  19. Sorry, my judgement was premature. Earlier, when I tried this with the Gradient tool, shapes and text, my PDF exports were rasterized, but they were not when I tested this again just now (rasterization occurs only if transparencies are flattened using PDF/X-1a or PDF/X-3). feathering_simulation.afphoto feathering_simulation_on_bitmap.pdf feathering_simulation_on_vectors.pdf
  20. Selecting all and exporting to PDF, then opening it (in Designer 2), lets me add the objects to a curves object in one go. Perhaps the PDF export somehow reorganizes the job and removes whatever makes the Boolean Add fail initially.
  21. Yes, it is insane that you can basically apply e.g. all feathering and other transparency-related effects (including blend modes) and flatten them to PDF/X-1a [or to plain flattened PDF 1.3, in either CMYK or RGB color mode] without causing rasterization.
  22. No, it is not possible in the sense that it is possible e.g. in InDesign, where you can apply feathering to shapes, strokes and text and retain editability, without causing rasterization. feathered.pdf
  23. I think the trigger is leading getting so small that the "drop cap" of the next paragraph hits the baseline of the preceding line, so it also depends on the character that acts as the "drop cap". UPDATE: If it is a continuing line (instead of multiple drop caps on one-line paragraphs), the trigger is any of the character shapes on the next line hitting the bounding box of the "drop cap"; then the wrap occurs [in my demo above it is the character "h" that hits the baseline of the drop cap]. This does not happen when using the Initial Word trick. UPDATE 2: The actual baseline can be shifted up or down, so it seems to be the zero baseline that counts!
  24. It appears that a placed PDF will always be rasterized/shown at max 300 dpi, no matter the resolution (e.g. 1200 dpi) of the Photo document it is placed in. You get a better, less blurred rendering by opening the PDF, scaling it to the desired size, and rasterizing only after that (this differs from PS, where a placed image is rasterized at full resolution, although you cannot e.g. choose whether to use antialiasing, as you can when you open a PDF in PS). a) Rasterized from a directly placed PDF on a 2400 dpi image: b) Rasterized from an opened and scaled PDF on the same image: UPDATE: If you have all resources of the placed PDF available, you can choose to "Interpret" the placed PDF instead of using "Passthrough" (with its rasterized preview of the file content), in which case fully rendered PDF content is displayed. If you rasterize the image at this stage, it will be rendered at document DPI. Photoshop does this purely from file contents, so even if a font embedded in the placed PDF is not installed on the system, the rasterized text is rendered at full resolution using the embedded content. Affinity apps cannot do this, so embedded but not installed fonts will be replaced with another font installed on the system; vectors, however, are rendered at full resolution, so if your landscape plans are vectors, showing the PDFs as interpreted renders them sharp when you rasterize the placed and interpreted PDFs on your canvas. PDFs placed for "Passthrough" only have their already rasterized preview image resampled onto a pixel layer at document DPI, and as for fonts, installed versions are not used for optimal rendering even if available. (A way to pre-render a PDF at full resolution outside the apps is sketched below.)
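     If you want to skip the open-scale-rasterize detour altogether, one option is to pre-render the PDF at the target document DPI outside the apps and place the resulting bitmap instead. A sketch using PyMuPDF (one renderer among several; the file names and DPI value are placeholders):

         # Pre-render a PDF page at a chosen DPI so the placed file is
         # already a full-resolution bitmap (pip install pymupdf).
         import fitz  # PyMuPDF

         doc = fitz.open("landscape_plan.pdf")
         page = doc[0]
         zoom = 1200 / 72.0  # PDF user space is 72 units per inch
         pix = page.get_pixmap(matrix=fitz.Matrix(zoom, zoom))
         pix.save("landscape_plan_1200dpi.png")

     Note that this renderer, too, applies its own font substitution if fonts are not embedded, so it helps most with vector content.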
  25. Consulting the referenced PDF creation guide, e.g. at https://assets.lulu.com/media/guides/en/lulu-book-creation-guide.pdf it appears that sRGB color mode is recommended, and as for CMYK, the recommended CMYK color profile is Coated GRACoL 2006 -- but the recommended maximum ink coverage is 270%, which is something to look out for, since this CMYK profile allows max 330% coverage, so excess ink is not automatically decreased when exporting (a quick way to check coverage is sketched below). This is why it would be ideal to produce a pure RGB-based PDF export, since color production would then be performed by Lulu.

     The only way to automatically produce a flattened PDF with Affinity Publisher (other than rasterization, which you want to avoid) is to export using PDF/X-1a or PDF/X-3. Whether to try either depends much on what your document contains (just text and photos, or e.g. placed PDF files) and which color mode (RGB or CMYK/8) it is in at the moment, since both of these export methods convert native elements (text and shapes) to CMYK [the former also converts images to CMYK]. The way to avoid problems with excess ink when converting RGB objects to CMYK is to make sure that text and native shapes are already defined in CMYK color mode, using max 270% ink for colored objects and 100% K (CMY 0) for black text.

     Another option for flattening transparencies is doing it manually in the layout. If the transparencies only consist of e.g. PNG bitmaps with transparent backgrounds and your document is in RGB color mode, you can rasterize them on the canvas and then continue producing your PDF the way you have so far, because Lulu appeared to accept the export (except for the transparency). If there are other kinds of transparencies (layers with <100% opacity, etc.) it can be trickier, but still doable, especially if your document color mode is RGB. EDIT: I later tried to produce a manually transparency-flattened RGB PDF with poor results (trying in vain to flatten e.g. PNG files with transparent backgrounds), so it really seems that creating a PDF/X-3-based PDF (with any photos in RGB color space) is your best choice. In this case it is a good idea to make sure that all black text is defined as C0 M0 Y0 K100, and that all shapes have CMYK color definitions with a total ink coverage below 270%.
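     To check the 270% limit beforehand, the total ink coverage of an already-CMYK image can be computed directly; a NumPy/Pillow sketch (the file name is a placeholder, and note that the image must already be in CMYK, as Pillow's own RGB-to-CMYK conversion is not profile-aware):

         # Report the maximum total ink coverage (TAC) of a CMYK image.
         import numpy as np
         from PIL import Image

         img = Image.open("cover_cmyk.tif")
         assert img.mode == "CMYK", "expected a CMYK image"
         a = np.asarray(img, dtype=np.float64)
         tac = a.sum(axis=2) / 255.0 * 100.0  # percent of ink per pixel
         print(f"max TAC: {tac.max():.0f}% (Lulu's recommended limit: 270%)")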