lacerto
Posts: 6,419
Reputation Activity
-
lacerto got a reaction from Nightjar in Publisher: Why file size is bigger when exporting from RGB -> 8 bit sRGB
I am not sure I understood your goal correctly, but if it is acceptable for the PDF to stay in RGB color mode with the placed images as small as possible, I would first make sure that the document color mode is RGB/8 rather than CMYK. Then you could export forcing rasterization of "Everything" and specify downsampling to the desired DPI (unless the placed images are already at the desired resolution). This reduces the bit depth to 24-bit (8 bits per channel) and avoids image-specific (typically redundant) inclusion of ICC profiles. You would not be able to extract anything image-wise from the PDF, but you should get the end result at an optimized file size and quality.
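To illustrate the downsampling step, here is a minimal Python sketch using Pillow (the function name and parameters are my own for illustration, not an Affinity or PDF-export API) that resamples a placed image so it carries just enough pixels for its placed size at the target DPI:

```python
from PIL import Image

def downsample_to_dpi(img, placed_width_in, target_dpi):
    """Resample so the image holds just enough pixels for its placed size."""
    scale = placed_width_in * target_dpi / img.width
    if scale >= 1.0:
        return img  # already at or below the target resolution; never upsample
    new_size = (round(img.width * scale), round(img.height * scale))
    return img.resize(new_size, Image.LANCZOS)

# A 3000 px wide image placed at 5 inches only needs 1500 px at 300 DPI:
big = Image.new("RGB", (3000, 2000), "gray")
small = downsample_to_dpi(big, placed_width_in=5, target_dpi=300)
print(small.size)
```

This is essentially what "downsample to DPI" does in any exporter: pixels beyond what the output resolution can use are discarded before compression, which is where most of the size savings come from.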
-
lacerto got a reaction from LionelD in Color Profile and Metadata
Well, as demonstrated above, Affinity apps often bloat the file size when metadata is omitted. And in practice, it is enough to have this simple tag included to carry the color profile information and have the file properly color managed:
There is certainly no need to drop this just to save some disk space.
But it is a different issue when the full ICC profile is redundantly (and inadvertently and unnecessarily) embedded just because the app cannot produce a lean DeviceCMYK PDF (unless told to, using PDF/X-1a:2003, with all its unwanted side effects). These are the file size differences when exporting four different RGB TIFFs to a press PDF using InDesign and Affinity Publisher; in practice the differences consist of (unnecessary) ICC inclusions:
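As a rough back-of-the-envelope illustration of why per-image inclusion matters (the profile size below is an assumed, typical figure, not measured from the files above), the bloat scales with the number of placed images:

```python
# Assumed sizes, for illustration only: table-based CMYK press profiles are
# commonly several hundred KB, while matrix RGB profiles like sRGB are a few KB.
profile_kb = 557          # hypothetical CMYK press profile size
images = 4                # four placed TIFFs, as in the comparison above

per_image_bloat = profile_kb * images   # profile repeated once per image
shared_once = profile_kb                # profile stored once per document
print(per_image_bloat, shared_once)     # 2228 557
```

In a plain DeviceCMYK output the profile can be dropped entirely, which is the leanest case of all.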
...something confusing like this:
UPDATE: The forum software is not color managed, so it simply takes the color values without conversion, as can be seen below: the two images, the left one with a Pro Photo RGB ICC profile embedded and the right one with sRGB embedded, would closely match if color managed:
Managing first and then dropping the profile is of course useful when there is no longer any use for it (e.g. when creating DeviceCMYK files that need no further processing at the RIP), to save disk space, but for general purposes there is not much point in being this frugal, since it may result in a heavily distorted visual appearance when the image is reused in varied contexts...
-
lacerto got a reaction from blackbird9 in White ink layer workflow
In addition to Packzview (which is free but selectively available, and must be begged to be "reactivated" after a forced update) and Adobe apps, there are affordable tools like PhotoLine (available for both Windows and macOS) that let you check the overprint and spot status of PDF document colors and individual objects:
Proper prepress software like callas pdfToolbox (expensive) would of course also expose, and allow editing of, all press-related properties of a PDF file, but in addition to Adobe apps, e.g. CorelDRAW (also for both Windows and macOS) lets you verify the spot and overprint status of objects within imported PDFs. The value of apps like Adobe Acrobat Pro is that they allow both viewing/analyzing PDF properties and changing them, without actually opening and reinterpreting the whole content. There is really no reasonably priced replacement for this tool.
The problem with Affinity apps is that even if they can produce, from native objects, a PDF that has an overprint attribute set (applied via a swatch attribute), they not only cannot simulate overprinting, but they also cannot read the overprint attribute within an opened (interpreted) PDF (or EPS) file, nor pass it through within a placed PDF or EPS file. They also lose the spot color status of a placed PDF if there is a PDF compatibility issue, and do not recognize spot colors or overprint attributes within standard EPS files.
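For a sense of what these tools are actually reading, the overprint state lives in the PDF's graphics state (ExtGState) dictionaries. A minimal stdlib-only Python sketch follows; the fragment is hand-written for illustration, and since real PDFs usually compress their streams, a proper tool (qpdf, pdfToolbox, Acrobat preflight) is needed in practice:

```python
import re

# Hand-written ExtGState fragment of the kind an exporter might emit for a
# swatch with the Overprint attribute: /OP = stroke overprint,
# /op = fill overprint, /OPM = overprint mode.
pdf_fragment = b"<< /Type /ExtGState /OP true /op true /OPM 1 >>"

def find_overprint_flags(data: bytes):
    """Scan raw, uncompressed PDF data for overprint-related keys."""
    return re.findall(rb"/(OP|op|OPM)\s+(true|false|\d+)", data)

print(find_overprint_flags(pdf_fragment))
```

An app that "cannot read the overprint attribute" is, in effect, ignoring exactly these keys when it interprets or passes through a placed file.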
-
lacerto got a reaction from sfriedberg in Printing black text on colour produces unwanted white stroke/noise
As mentioned, the initial cause of this behavior is that you have black text that knocks out the other inks (Cyan, Magenta and Yellow), causing "holes" in them; if there is any misalignment of the print heads, it shows like this, as minor white gaps between the ink edges.
In commercial press this is called misregistration, and the usual ways to prevent it are overprinting small black text on lighter colors, adding trapping (e.g. specifying an outline that overlaps a bit with the underlying ink), or specifying a common ink for bordering objects so that no blank paper shows between them when the job is separated. Trapping is partially automatic in some apps (not in Affinity apps), or it can be applied by the printer or by the RIP software, but often it needs to be applied manually.
You might have been able to avoid the problem, without the print head alignment routine, by manually applying the overprint attribute to your text, like this:
Simply having the text as K100 would semi-automatically work when exporting to PDF, because Affinity apps by default overprint objects that have K100 applied to them. You had RGB 0,0,0 applied to the text objects, which would typically be output as four-color black when printing or exporting [in which case you would not have experienced the problem in the first place], but as I mentioned, the printing routine might convert the RGB black to (near) K100 ink (in my test print to a virtual printer, it was rendered as 1% Cyan and 98% Black, knocking out the other inks, i.e. not overprinting), which is why I suggested trying whether defining the text as K100 would automatically make it overprint (as it does by default when exporting). But it did not work as expected when printing.
One step further would be adding C0M0Y0K100 as a swatch in a document palette, making the swatch global, and finally applying the "Overprint" attribute to it, as shown in the screenshot above. This SHOULD guarantee that the text overprints also when printing, so that any registration issues are avoided (and it would have been interesting to see whether the overprint attribute is truly honored in Affinity printing routines; it might not be, if Affinity apps send the printer driver converted and interpreted color data anyway).
Keeping the printer in good condition is of course always a proper thing to do, and helps avoid all kinds of quality issues. But the primary cause of the gaps might still be there in your job, so to make sure it does not recur when printing with another printer, I'd recommend making the above-mentioned change (at least making sure that the text is K100 with the other inks at 0, which by default causes PDF exports to have the black text overprinting).
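The difference between RGB black and K100 is easy to demonstrate with Pillow (a stand-in color pipeline, not Affinity's actual conversion): a naive, non-ICC RGB-to-CMYK conversion turns RGB 0,0,0 into a three-ink black with no K at all, while ICC-based conversions typically produce a K-heavy rich black; neither yields the plain K100 that triggers the default overprint-on-export behavior:

```python
from PIL import Image

# Pillow's built-in RGB->CMYK mode conversion is the naive formula
# C=255-R, M=255-G, Y=255-B, K=0 (no black generation, no ICC involved).
rgb_black = Image.new("RGB", (1, 1), (0, 0, 0))
cmyk = rgb_black.convert("CMYK")
print(cmyk.getpixel((0, 0)))  # all CMY, no K at all

# Plain K100, by contrast, is (0, 0, 0, 255) in 8-bit CMYK terms:
k100 = Image.new("CMYK", (1, 1), (0, 0, 0, 255))
print(k100.getpixel((0, 0)))
```

This is why text defined as RGB 0,0,0 is at the mercy of whatever conversion the print path applies, whereas text defined directly as C0M0Y0K100 has an unambiguous single-ink value the exporter can overprint.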
-
lacerto got a reaction from sfriedberg in Printing black text on colour produces unwanted white stroke/noise
This is just guessing, but when I print this on Windows to a virtual printer (Adobe PDF), color management defaults to the app, and this results in the RGB 0,0,0 black text being converted and rendered as something like C1M0Y0K99, and the text knocks out. If the print heads are not fully aligned, this would produce the kind of registration error described.
Currently out of office, I cannot test this on a physical laser printer; when directed to a PDF on macOS, the text black is converted (pretty much as expected) to four-color black (in which case this issue would not happen).
How the document colors are printed is probably very much a printer-specific matter, but if there are no options in the app user interface or in the printer driver to change the current behavior (e.g. determining whether colors should be managed by the app or the printer, turning off ink saving, a direct option to force text to overprint, etc.), I would test whether changing the text to K100 in the document (which you have in CMYK mode) automatically makes the black text overprint (instead of knocking out).
-
lacerto got a reaction from matisso in Printing black text on colour produces unwanted white stroke/noise
-
lacerto got a reaction from Nightjar in Publisher: Why file size is bigger when exporting from RGB -> 8 bit sRGB
I am not sure whether images actually get embedded as specific file types, or just as e.g. clip groups consisting of transparent or non-transparent images with a specific width and height, bit depth, overprint fill/stroke state, an ICC profile in case their color space has not been resolved, and a compression method. When a PDF is opened and interpreted, transparent images get interpreted within Affinity apps as embedded TIFFs (since you cannot have transparent JPGs, nor CMYK PNGs should the image be interpreted in CMYK mode), while flattened images get interpreted as embedded JPGs (even if originally RGB TIFFs or PNGs). Another app, e.g. Illustrator, would unembed the same images as PSDs or TIFFs.
As noted above, how images get handled (whether converted to CMYK, flattened, and their ICCs included) depends on the PDF export method. The lowest PDF version Affinity apps support is PDF 1.4, which already allows mixed color modes, so placed raster images can be left in RGB color mode even when interacting with native CMYK objects; when this is done, in a color-managed environment the source ICCs need to be included. The only way Affinity apps can automatically resolve mixed color spaces is by using PDF/X-1a:2003, which forces CMYK and flattens transparencies and blend modes. Even then, when you open and interpret a PDF, you can choose to interpret its elements in RGB or CMYK mode, and might get a mixed bag depending on how they were treated when exported. Interpreting a PDF is a kind of art form in itself, and Affinity apps are actually quite skillful at doing it and making PDFs editable again...
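The format constraints mentioned above (no transparent JPGs, no CMYK PNGs) are easy to verify with Pillow, which refuses both combinations outright:

```python
import io
from PIL import Image

def save_refused(mode, fmt):
    """Return True if Pillow refuses to save the given mode/format pair."""
    try:
        Image.new(mode, (8, 8)).save(io.BytesIO(), format=fmt)
        return False
    except OSError:
        return True

print(save_refused("CMYK", "PNG"))   # CMYK is not a valid PNG color type
print(save_refused("RGBA", "JPEG"))  # JPEG has no alpha channel
print(save_refused("RGB", "JPEG"))   # plain RGB JPEG is fine
```

So an interpreter that needs a transparent CMYK-capable container has little choice but TIFF (or PSD), which matches the behavior described.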
-
lacerto got a reaction from Ldina in Publisher: Why file size is bigger when exporting from RGB --› 8 bit sRGB
-
lacerto got a reaction from Ldina in Printing black text on colour produces unwanted white stroke/noise
-
lacerto got a reaction from NotMyFault in Printing black text on colour produces unwanted white stroke/noise
-
lacerto got a reaction from boelens218 in Color Profile and Metadata
You can do this to any number of source files by using the File > New Batch Job command of APhoto, choosing to save into the original folder:
In the format options, you need to leave both "Embed ICC profiles" and "Embed metadata" unchecked. The former drops the full ICC profile, and the latter omits the photoshop XMP tag, which would otherwise include information about the file's color profile.
But as mentioned by @lepr, stripping metadata this way will in practice often increase the file size. E.g., below, a JPG exported from Photoshop containing metadata and an ICC profile and saved at maximum quality grows in file size after being saved/exported from Affinity Photo using the above method, embedding neither the ICC profile nor the metadata:
UPDATE: The bloat may also happen when the source file was saved by Affinity, using otherwise identical settings when saving/exporting. When I first tested this with a small JPG (around 60K), dropping the profile and metadata decreased the file size by a few kilobytes, but when testing with the file shown above, initially saved with Photoshop, the following happened:
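The mechanics can be sketched with Pillow (a stand-in for what any re-saving editor does, not Affinity's actual code path): re-saving a JPEG without passing the profile along drops it, but the image data is decompressed and recompressed in the process, so the net size change is the profile savings plus or minus whatever the new compression pass does, which is exactly why bloat is possible:

```python
import io
from PIL import Image, ImageCms

# Build a small JPEG with an embedded sRGB profile (generated by LittleCMS):
icc = ImageCms.ImageCmsProfile(ImageCms.createProfile("sRGB")).tobytes()
src = io.BytesIO()
Image.new("RGB", (64, 64), "orange").save(src, "JPEG", quality=95, icc_profile=icc)

# Re-save without icc_profile/exif arguments: the metadata is dropped, but
# the pixels are recompressed, which can offset (or exceed) the savings.
src.seek(0)
stripped = io.BytesIO()
Image.open(src).save(stripped, "JPEG", quality=95)

print(len(src.getvalue()), len(stripped.getvalue()))
```

With this tiny flat image the stripped copy is smaller; with a large, detailed photo recompressed at a different effective quality, the second number can come out larger, as in the tests above.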
-
lacerto got a reaction from boelens218 in Color Profile and Metadata
Checking "ICC Profile" will include the ICC Profile (full profile) property in the ExifTool data, like this:
Checking "Embed metadata" will only include the "photoshop" property of the XMP data, like this:
UPDATE: Note that including this information will result in e.g. Photoshop considering the ICC profile embedded (even though the full profile is not).
Leaving both empty will include neither.
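A quick way to check programmatically which of these a file ended up with: Pillow exposes an embedded full profile (if any) via the image's `info` dictionary. Note this detects only the full ICC profile, not the photoshop XMP tag:

```python
import io
from PIL import Image, ImageCms

icc = ImageCms.ImageCmsProfile(ImageCms.createProfile("sRGB")).tobytes()

with_profile, without_profile = io.BytesIO(), io.BytesIO()
Image.new("RGB", (8, 8)).save(with_profile, "PNG", icc_profile=icc)
Image.new("RGB", (8, 8)).save(without_profile, "PNG")

results = []
for buf in (with_profile, without_profile):
    buf.seek(0)
    results.append("icc_profile" in Image.open(buf).info)
print(results)  # embedded vs. not embedded
```

ExifTool remains the more thorough check, since it also reports the XMP "photoshop" property that the "Embed metadata" option controls.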
Both Affinity apps and Photoshop behave identically when assigning a working-space profile to a file that has no embedded profile. You cannot have an "empty" profile in a file that is being color managed. You might have been fooled by the Photoshop dialog box shown when opening a file that has no embedded color profile:
If you choose "Leave as is (don't color manage)", the file is assigned the default working-space RGB profile, which you can see e.g. by choosing Edit > Convert to Profile (here Adobe RGB (1998), because that is the default working-space RGB profile in Photoshop, just as sRGB is in APhoto, and I have not changed it):
-
lacerto reacted to lepr in Color Profile and Metadata
I would not use Affinity for stripping metadata from a JPEG. Affinity will decompress and then recompress the image data, introducing new artefacts and possibly bloating the file size, in addition to removing the metadata.
-
lacerto got a reaction from Ldina in Convert Format / ICC Profile
It seems this really explains it, as when I now tested this on macOS (APhoto 2.6.3) with the same source image, I could not reproduce the issue. [UPDATE: On the other hand, choosing Apple CMM as the color engine in PS would result in differences similar to those seen on Windows using Microsoft ICM.]
In respect of OS- and color-engine-dependent differences, it seems that leaving the rendering intent and other conversion options available (non-grayed-out) is meaningful after all:
Anyway, considering that these options DO have a significant effect in many if not most profile-based conversions (especially print-related ones, not just RGB to CMYK but also RGB-to-RGB media-specific conversions, for whatever purpose that might be useful...), perhaps graying out the controls profile by profile would just be too much information, causing confusion and many frustrated questions... Having the controls available, and seeing whether their use is meaningful, is in many ways more user-friendly (although it must be remembered that the effect of all options cannot always be previewed realistically; within Affinity apps, though, a preview is not even available).
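For reference, this is how a color engine exposes those controls. A sketch using LittleCMS via Pillow's ImageCms module (one possible engine, not the Affinity implementation), where the rendering intent and black point compensation are explicit parameters of the transform itself:

```python
from PIL import Image, ImageCms

srgb = ImageCms.createProfile("sRGB")
lab = ImageCms.createProfile("LAB")

# Rendering intent and BPC are chosen when the transform is built, which is
# exactly what the grayed-out/available UI controls map to underneath:
transform = ImageCms.buildTransform(
    srgb, lab, "RGB", "LAB",
    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC,
    flags=ImageCms.FLAGS["BLACKPOINTCOMPENSATION"],
)
out = ImageCms.applyTransform(Image.new("RGB", (1, 1), (200, 30, 30)), transform)
print(out.mode)
```

With matrix profiles like sRGB the intent choice often makes no numeric difference, which is one argument for graying the controls out; with table-based print profiles, the same two parameters can change the result substantially.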
-
lacerto reacted to Ldina in Convert Format / ICC Profile
Thanks, @lacerto. It could well be the result of different color engines, how different Operating Systems handle things, and rounding.
My understanding is that any colors that don't fit into the destination color space simply get clipped. Version 2, matrix-based ICC profiles don't support anything other than Colorimetric. All colors that DO fit will be redefined (in RGB terms) using the profile data and color engine. BPC shouldn't really be required when converting from one RGB editing space to another, since all standard RGB editing spaces encompass the same 0–100 Luminosity range (in Lab terms), so remapping blacks to preserve shadow detail shouldn't be required. 0/0/0 black in DCI-P3 = 0/0/0 black in sRGB, etc. Same with monitor white 255/255/255. In-gamut colors between black and white need new RGB numbers, depending on the destination color space, to preserve Lab color values. In the attached graphic, all P3 colors that lie outside the smaller sRGB 'triangle' will, during conversion, be clipped and redefined by the numbers corresponding to the closest sRGB border. Those clipped, remapped colors will be the same as any in-gamut colors that push the same borders, which is why Colorimetric conversions lose detail in that border area (as in a bright red rose on a sunny day). An in-gamut 255R, 0G, 0B red will get the same 255R, 0G, 0B as an out-of-gamut Lab red that is brighter and more saturated than sRGB can contain. Result = a blob of bright red without any detail or differentiation.
Converting to table-based print output spaces is very different... paper black is usually pretty weak compared to monitor black, especially on uncoated papers, so you end up with 'mud' and no detail using RC without BPC. Adobe invented BPC to deal with plugged, muddy blacks when using colorimetric intent, since this was not addressed in the ICC specs. (BPC is automatically built into Perceptual conversion tables, scaling everything between paper white and the paper's Dmax, so it usually doesn't matter if BPC is on or off with Perceptual Intent, at least it shouldn't.)
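The clipping and BPC behavior described above can be sketched in a few lines (a one-dimensional caricature of my own, not actual CMM code): relative colorimetric snaps out-of-gamut values to the boundary, so distinct source colors merge, while BPC linearly rescales the tone range so source black lands on the destination's weaker paper black:

```python
def clip_rc(v, lo=0.0, hi=1.0):
    """Relative colorimetric, no BPC: out-of-range values snap to the edge."""
    return max(lo, min(hi, v))

def rescale_bpc(v, src_black=0.0, dst_black=0.15):
    """BPC: remap [src_black, 1] onto [dst_black, 1], keeping shadow steps."""
    return dst_black + (v - src_black) * (1.0 - dst_black) / (1.0 - src_black)

# Two distinct bright reds, one in gamut and one out, merge under clipping:
print(clip_rc(1.0), clip_rc(1.3))   # both end up at 1.0: detail lost

# Without BPC, shadows below paper black (0.15 here) all plug to 0.15;
# with BPC, distinct shadow values stay distinct:
print(clip_rc(0.05, lo=0.15), clip_rc(0.10, lo=0.15))
print(rescale_bpc(0.05), rescale_bpc(0.10))
```

The "blob of bright red" and the "muddy blacks" are the same mathematical event at opposite ends of the range: many source values collapsing onto one boundary value.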
-
lacerto reacted to lepr in How do I add a node at the exact center of a segment?
@anxpara Using Node Tool, right-click on the start node of the segment and pick 'Split curve after node'. (There's also a button for the command in the context toolbar of Node Tool.)
-
lacerto reacted to carl123 in Convert Format / ICC Profile
What if you specified in the batch job not to embed it?
-
lacerto got a reaction from Ldina in Correct Approach to Printing (ICC Profiles)
Thanks for the update. I did not get much more out of the above, even though I have some developer background... Adobe defaults all conversions to the Relative Colorimetric rendering intent in the context of US and European printing, but to "Perceptual" in the context of Japanese printing. More on all four rendering intents in the linked text below:
https://helpx.adobe.com/acrobat/using/color-settings.html
I mentioned RGB-to-RGB conversions as my personal candidate for the perceptual rendering intent (at least worth testing for specific cases) mainly because the gamut differences between the most common RGB color spaces (P3, AdobeRGB, sRGB) are less radical and abrupt than in RGB-to-CMYK device-based conversions (even if the number of out-of-gamut colors can be large in both situations), and because converting from such (typical) photo color spaces to photo printers happens within the RGB color space, even if it is subsequently resolved into CMYK + n extra inks on the printer. I have no idea what actually happens when sending RGB data to a photo printer driver that uses an RGB-based, device-and-media-specific ICC profile, but since this kind of printing typically involves photographs (lots of continuous tones), the gamut differences would be better resolved using the Perceptual method, especially as the print gamut can typically be extended with extra inks (normally not available in the context of commercial press).
But our whole career has involved commercial press, and I have rather modest experience with photography and printing to photo printers, so I may well be totally lost here. But I can imagine that it takes lots of experimenting with specific devices to get these kinds of printing routines spot on, so I do not wonder why printer manufacturers provide printing software of their own. So far, unfortunately, I cannot say that e.g. Canon's tools have made local printing any less challenging an experience, so I have continued to use the (legacy?) PS printing routines, typically app-based. [And as mentioned, on Windows, APhoto does not behave identically here, even though it has look-alike controls...]
-
lacerto reacted to Ldina in Correct Approach to Printing (ICC Profiles)
@lacerto Affinity uses the Mac print driver, and that driver does not give you access to rendering intents at all, even with a printer hooked up. (Note that I print over WiFi to another Mac, which has a USB-connected printer, so there is no physical printer connected to my MBP, which is what I use with Affinity.)
I started with a very bright, saturated, pumped-up DCI-P3 file that I KNEW would greatly exceed both sRGB and what can be duplicated in print (Red River Polar Matte Paper, which I will refer to as RR henceforth), to help make things more obvious. I was too lazy and cheap to do a bunch of physical prints, so I did a few print tests directly from Affinity Photo v2.6.3 using PDF as output. I think your observations are correct.
First, I exported 3 DCI-P3 files from AP to JPG, using DCI-P3, sRGB, and the RR printer profile. Exporting seems to properly apply the chosen export profile, which reduces the color gamut to the destination color space. My Rendering Intent in Affinity Color Settings is Relative Colorimetric with BPC, so Affinity is probably using RC w/ BPC on Export. All three files look different, as I would expect.
Next, I printed to PDF (no rendering intents are available in the Apple Print dialog box). To me, it looks like AP is sending unmodified DCI-P3 file data to the Apple print driver and letting the driver decide what to do. It looks like Affinity also passes the printer profile off to Apple, otherwise the prints would look terrible, with a major color cast, which is not the case. This is using app color management (ColorSync on Mac) instead of printer color management.
Because I wanted access to rendering intents on my Mac, I installed the ColorSync Utility into my Mac PDF Services folder (using a ColorSync alias), which makes ColorSync available from the PDF dropdown box in the Affinity Print dialog (near the bottom). This allows me to pass the file from the Affinity Print dialog to ColorSync for final printing, and ColorSync DOES allow you to set rendering intents. Generating PDF output via AP -> ColorSync looks identical to printing to PDF directly from AP (without using the ColorSync Utility). So, again, it looks like all the DCI-P3 data in the file was sent without modification. Whether ColorSync was set to Relative Colorimetric or Perceptual, the resulting PDF files looked identical. They shouldn't be.
To me, it looks like AP passes original file data to the Apple Print Engine unmodified, letting the Apple Print Engine do whatever it does. Over the years, Apple has drifted away from the print & graphics world, and they've 'dumbed down' their previously excellent printing controls. I think they are probably a part of the problem. I don't know what rendering intent Apple uses natively for printing. PDFs generated via Affinity Print, or passing it off to ColorSync, all look identical and retain full P3 data, the same as a JPG Export using P3 profile.
Exporting seems to work more as expected. Exporting to sRGB reduces color gamut to fit into the smaller sRGB color space and is clearly visible. Exporting to the RR Paper Printer profile also greatly reduces the color gamut, which is smaller overall than even sRGB. This is visible in the RR Export. I tried changing the Affinity Color Settings Rendering Intent from RC to Perceptual, but the exported file looks identical (I didn't shut down and restart Affinity, since I wasn't prompted to do so). I am guessing that RC Intent is used during export. Honestly, I'm not sure what the Rendering Intent in Affinity Color Settings even does.
When I do print, I usually print using LightRoom and a TIFF or high quality JPG for anything important (I do print from Affinity when sending simple birthday and anniversary cards, etc., and they look good to me, but I'm not overly picky about them). Adobe handles output as expected with their custom-written print module, including rendering intents. Most of the Rendering Intent information in this discussion comes from my experience with Photoshop, InDesign, Lightroom, etc., and is correct. I haven't tried printing from other utilities (XnView MP, Graphic Converter, etc.). It's still not abundantly clear to me what is happening to the print stream behind the scenes when using Affinity apps for printing.
For best control and the most accurate output, it is probably best to open and print a TIFF, PNG, or JPG directly from Photoshop, LightRoom, or even ColorSync (and perhaps other apps), all of which allow the user to select rendering intents and apply them properly.
I apologize if this is confusing or unclear, but it still isn't 100% clear to me what is happening when printing directly from Affinity apps.
-
lacerto reacted to nickbatz in Rendering intent - display only?
Yes, I do what you say - let PP&L handle the color management, and I use Adobe RGB in Affinity Photo. However, because I use Affinity Photo to create abstract art rather than as a regular photo editor, I've taken to working with the ICC profile in a Soft Proof adjustment layer at the top of the stack (which I turn off when exporting).
Whether or not it's perfectly accurate, it's close enough to stop me from falling in love with colors that the printer doesn't understand. And in fact it's very close, as is the soft preview in PP&L.
The reason for my question is that the rendering intent is a global setting in the program as well as in the Soft Proof layer.
-
lacerto got a reaction from nickbatz in Rendering intent - display only?
Ok, it seems that I failed to understand your question. I think that if you are going to use Canon's Professional Print & Layout tool anyway, then you should handle all color management there, including media-specific (ICC-based) conversions and the selection of rendering intent, especially since you have achieved good results using this workflow, and feed it the native file (best and widest color input) as the source. The app's Print dialog box also has controls for color management (including selection of a media-specific ICC profile and rendering intent) for both app- and printer-defined color management (similar to what e.g. Photoshop has), but I think this is very much app- and printer-driver dependent and subject to experiment to see how these output workflows operate (it is especially confusing that color management can be left to the printer while a user-defined media ICC and rendering intent are still allowed).
UPDATE: Some second thoughts... Having tested export-time color conversions for a while in Affinity Photo (and also within Publisher), it seems that the rendering intent setting made in Settings > Color does not have any effect when exporting images and a color conversion is involved, even when trying to force conversion of image color spaces (as when exporting to PDF). This is odd, since I had assumed that this setting governs the default rendering intent in situations where it cannot be determined by an on-the-spot overriding setting, but no: it does not seem to have an effect on either native objects or placed image objects when their color space is converted. This is yet another example of the fallacy of assuming that color management happens the same way within the Affinity ecosystem as in the Adobe ecosystem (which Affinity apps often simulate in UI and controls).

I also noticed that while Affinity Photo has rendering intent controls in the context of profile conversion (under Document > Convert Format / ICC Profile), they are missing in Publisher and Designer (under Document Setup > Color) [while e.g. InDesign and Illustrator both have them]. This means that as far as color conversions go, the ability to export to diverse formats (like TIFF) within Affinity apps, including operations that involve ICC-based color conversions, while handy, is also inaccurate. So if you want control of (at least) image color conversions, you had better perform them destructively using Photo's Document > Convert Format / ICC Profile command rather than leaving anything to export. In this respect even the Soft Proof adjustment, which results in export-time rasterization of native objects, becomes a viable option for selective, user-defined color conversions.
-
lacerto got a reaction from matisso in Rendering intent - display only?
Rendering intent is a setting related to the conversion of colors from one color space to another according to the color profiles involved. Rendering intents determine which factors are prioritized when color values are recalculated. A typical press setting would be Relative Colorimetric with Black point compensation turned on, while other intents focus on retaining saturation or perceptual likeness.
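The difference between the intents can be illustrated with a toy one-dimensional "gamut". This sketch is entirely illustrative (not how a real CMM works, which operates in 3-D colorimetric spaces): relative colorimetric clips out-of-gamut values to the gamut boundary while leaving in-gamut values alone, whereas perceptual compresses the whole range so the relationships between colors are preserved.

```python
# Toy 1-D "gamut mapping": source values run 0..1.25, but the
# destination gamut only reaches 1.0. Real CMMs work in 3-D
# colorimetric spaces; this only illustrates clipping vs compression.

SRC_MAX, DST_MAX = 1.25, 1.0

def relative_colorimetric(v):
    """Clip out-of-gamut values; in-gamut colors are unchanged."""
    return min(v, DST_MAX)

def perceptual(v):
    """Compress the whole range; color relationships are preserved."""
    return v * DST_MAX / SRC_MAX

for v in (0.5, 1.0, 1.25):
    print(f"{v:>5}: rel. colorimetric -> {relative_colorimetric(v):.2f}, "
          f"perceptual -> {perceptual(v):.2f}")
```

Note how only perceptual changes the in-gamut value 0.5: that is exactly the trade-off between accuracy of reproducible colors and preserved gradation.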
The corresponding settings are meaningful in the context of an actual color conversion (in Publisher, e.g., when exporting, where the settings defined under Preferences > Color are applied; or in Photo when doing a conversion via Document > Convert Format / ICC Profile).
In the context of color proofing, the settings are intended for simulation, and the Soft Proof adjustment layer is meant to be turned off at export time (no matter which format). If you leave it on, it first of all causes rasterization of all affected objects, and it can also mess up things like overprint settings in the document. So no matter which format you export to, the soft proof settings will no longer be metadata (something embedded along with the ICC profile) in the exported file, but a factor in the calculation of actually converted color values. In non-conflicting (lucky) situations there may be no harm done in leaving the setting (inadvertently) on, but in others, the results could be highly “unexpected”.
UPDATE: To recap, and replying to the actual question: NO, the Soft Proof adjustment is not limited to display only; it becomes a crucial factor when you export objects affected by it. And YES, it is intended for display-only purposes (simulation). On the other hand, it could also be used "creatively" to achieve (export-time rasterized) color conversions of e.g. only a group of objects, using a color profile and rendering intent that deviate from those of the document.
-
lacerto reacted to Ldina in focus
Yeah...I'd not be interested in merging 450 exposures!
f16 and f22 have always been less sharp than f8 and f11 (on average, due to diffraction, etc.), even with film. No big deal either way (at least to me), since we have easy to use, effective sharpening tools in our digital editors. I typically shoot macros at f11 if I have enough light, which seems a good balance between lens sharpness and a slight improvement in DOF (very slight). I wouldn't worry about any of those "warnings". The resolving power of higher end, modern digital cameras is very good, so our lenses are often the limitation. Experiment and do whatever works. Rules are meant to be broken.
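To put a rough number on the diffraction effect mentioned above, the Airy disk diameter grows linearly with the f-number (d ≈ 2.44 · λ · N), and once it spans several pixels, stopping down costs visible sharpness. A quick sketch; the 550 nm wavelength and 4 µm pixel pitch are illustrative assumptions, not from the original post:

```python
# Approximate Airy disk diameter: d ~ 2.44 * wavelength * f_number.
# Green light (~550 nm) and a ~4 um pixel pitch (typical of a modern
# high-resolution full-frame sensor) are assumptions for illustration.

WAVELENGTH_MM = 550e-6   # 550 nm expressed in millimetres
PIXEL_PITCH_UM = 4.0     # assumed sensor pixel pitch in micrometres

def airy_disk_um(f_number):
    """Diffraction spot (Airy disk) diameter in micrometres."""
    return 2.44 * WAVELENGTH_MM * f_number * 1000.0  # mm -> um

for n in (8, 11, 16, 22):
    d = airy_disk_um(n)
    print(f"f/{n}: Airy disk ~ {d:.1f} um ({d / PIXEL_PITCH_UM:.1f} px wide)")
```

At f/8 the spot is around 11 µm (a bit under 3 assumed pixels), while at f/22 it balloons to roughly 30 µm, which is why f/8 to f/11 is so often the practical sweet spot.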
Yes, one can overdo it, and too much IS often too much. As I recall, the focus merge feature allows you to preview each image individually for sharpness when the clone tool is active, and optionally, clone portions of a selected image into your final merged result. But, I don't think it allows you to "hide or disable" one of your source images (I might be wrong about that). You can "delete" an image from the stack before merging, but I wish they had an On/Off toggle, like in the layers panel, to include or exclude individual frames. I don't use it that much, so maybe that capability is there. I've found that Focus merge usually struggles with specular highlights and reflections.
OK, I've bludgeoned this topic to death...haha. Have fun, Andy.
-
lacerto reacted to Ldina in focus
Thanks, Andy.
Damn photo gave me a headache!! haha. To me, it looks like the area inside the white rectangle, more or less, is where it's sharpest. 1/250s ought to be more than fast enough for an 80mm lens at that distance. That's a tough image to assess, so I can see why you are questioning your eyes.
If you rasterize the image in the Photo Persona (or better yet, a duplicate layer), you can run a "Find Edges" or "Frequency Separation" filter, and it shows the greatest detail in the area of the rectangle. ISO 3200 results in a fair amount of noise, which would show up in smooth, flat areas without much detail, but the texture of the rock hides it fairly well. Still, that noise does degrade the image. If you're photographing rocks, that's one thing. But if you're just testing your camera, lens, aperture, shutter speed and ISO for sharpness, noise, best settings, etc., I'd definitely pick a more suitable subject with sharper, high contrast edges, like a lens testing chart. It's worth doing some testing if you plan on doing a lot of detailed work with a given camera/lens combo. You might find that above a certain ISO setting things look bad, or that a given f-stop looks sharpest (which is often around f8-f11, depending on the lens). Some lenses are much better than others. Here's a typical lens testing chart example: https://www.lensrentals.com/blog/2014/02/setting-up-an-optical-testing-station/
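The "Find Edges to locate the sharpest region" trick can also be done numerically. A common focus measure is the variance of a Laplacian (edge) filter over a region: higher variance means more edge detail, hence better focus. A minimal pure-Python sketch (the synthetic 8x8 tiles are illustrative, not real image data):

```python
# Variance-of-Laplacian focus measure: sharper regions have stronger
# edge responses, and therefore a higher variance after filtering.

def laplacian(img):
    """4-neighbour Laplacian of a 2-D list of grayscale values."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (img[y - 1][x] + img[y + 1][x] +
                         img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
    return out

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def focus_measure(img):
    flat = [v for row in laplacian(img) for v in row]
    return variance(flat)

# Synthetic example: a hard-edged tile vs a smooth linear gradient.
sharp = [[255 if x < 4 else 0 for x in range(8)] for _ in range(8)]
blurry = [[255 - x * 32 for x in range(8)] for _ in range(8)]

print(focus_measure(sharp), ">", focus_measure(blurry))
```

Tiling an image and computing this per tile gives a sharpness map, which is essentially what eyeballing a Find Edges result does.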
To find the sharpest areas, you can also "temporarily" pump up the contrast significantly, which will make sharper and fuzzier areas easier to spot. Then undo those exaggerated settings and develop normally.
For macro work (or even medium close work), I prefer to use a tripod on stationary subjects. I'll typically shoot at f8 or f11, since depth of field is so narrow at close focus distances. Manual focus using 10X Live View allows me to get a very accurate focus on the most important area. No need to worry about AF focus points when focusing manually. Shooting with a tripod will allow you to pick the sharpest aperture, a much lower ISO speed for quality (preferably ISO 100 or 200), and use a slow shutter speed, as long as your subject isn't moving.
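Just how narrow close-up depth of field gets can be quantified. A common close-up approximation is total DOF ≈ 2·N·c·(m+1)/m², with f-number N, circle of confusion c, and magnification m. A sketch with an assumed conventional full-frame circle of confusion of 0.03 mm:

```python
# Approximate total depth of field for close-up / macro work:
#   DOF ~ 2 * N * c * (m + 1) / m**2
# N = f-number, m = magnification, c = circle of confusion (mm).
# c = 0.03 mm is a conventional full-frame value (an assumption here).

def close_up_dof_mm(f_number, magnification, coc_mm=0.03):
    m = magnification
    return 2 * f_number * coc_mm * (m + 1) / m ** 2

for m in (0.5, 1.0):           # half life-size and life-size
    for n in (8, 11):
        print(f"m={m}, f/{n}: DOF ~ {close_up_dof_mm(n, m):.2f} mm")
```

At life-size magnification and f/8 the usable zone is around a single millimetre, which is why a tripod and 10X Live View focusing pay off so much here.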
I thought your RAW file sharpened okay, but again, the nature of this image makes it hard to assess. I'm not sure this helps much.
-
lacerto reacted to Ldina in focus
@AndyV I guess everyone has their preferences when shooting. I usually activate ONLY the center AF point in my DSLR, so the other focus points are inactive. Using autofocus, I aim at what I want to be in sharp focus, make sure the center AF point is on that item, depress my shutter release halfway to activate a focus-lock, recompose and shoot. That way, I get to choose what is going to be sharpest. I found that having all my AF points active tripped me up too often and ended up grabbing focus on something other than what I wanted. I ruined too many good shots in the past with all the AF points active. I've been using my center AF focus point ONLY for years and it is now a part of muscle memory. That works well for stationary or slow-moving subjects.
For higher speed, motion shots, I typically work differently, but I don't do all that much of that sort of thing. Then, I activate all the focus points and shoot in AI Servo mode (Canon jargon for moving subjects). Those subjects are usually at a greater distance, so multiple AF points didn't usually trip me up. When doing closeup, macro, or very critical work, I often use a tripod and use 10X Live View on my camera LCD, focusing manually.
What I meant is that the Develop Persona doesn't automatically apply any sharpening "by default". The user has to apply the sharpening manually, themselves. LightRoom, for example, usually applies mild sharpening by default, even if you do not "add" any "extra" sharpening. Then again, you can create presets in LR to apply whatever you want by default, including aggressive sharpening. Affinity Develop Persona doesn't offer automatically applied presets.
A print needn't be a surprise when it comes to sharpness. You should be able to see that on your monitor. I always assess and apply sharpening at 100% magnification, which should always be the most accurate. If I need another view (larger or smaller) that is at least fairly accurate, I divide or multiply by factors of 2, e.g., 25%, 50%, 100%, 200%, 400% magnification, etc. Assessing sharpening at 19%, 33%, 86%, 138%, etc., isn't a great idea and will give you a false impression of sharpness. I've done a lot of 24x36 inch and larger prints and haven't experienced sharpening problems that weren't visible before printing. Even the process of converting pixels to ink dots loses a bit of sharpness, so many people apply "print sharpening" as a final step.
Also, you don't normally view a 36x48 inch print from 12 inches away, like you would a 4x6 or 5x7 inch print. The bigger the print, the greater the observer-to-print viewing distance. I used to design trade show graphics for my company and relied on that fact. The goal was to attract customers into our trade show booth, so the images needed to look great from 20 or 30 feet away. If the resolution wasn't adequate, they might look a bit softer when viewed from a distance of 2 feet.
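The viewing-distance rule of thumb above can be made concrete. Assuming normal visual acuity of about one arcminute, the finest detail the eye can resolve at distance d inches works out to roughly 3438 / d PPI (the acuity figure and the example distances are illustrative assumptions, not from the original post):

```python
import math

# Required print resolution for a viewer with ~1 arcminute acuity:
# the eye resolves about 1 / (d * tan(1/60 degree)) dots per inch at
# viewing distance d, which is roughly 3438 / d PPI.

def required_ppi(viewing_distance_in):
    one_arcmin = math.radians(1 / 60)
    return 1 / (viewing_distance_in * math.tan(one_arcmin))

for d in (12, 24, 120, 240):   # inches: handheld up to across-the-aisle
    print(f'{d}" away: ~{required_ppi(d):.0f} PPI is enough')
```

At 12 inches you need close to 300 PPI, but from 20 feet away even ~15 PPI looks sharp, which is exactly why big trade show graphics can get away with modest source resolution.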