
Posted

My client's printer has told them to ensure all images are 300 dpi minimum, so they have been checking all their images, finding many at 72 dpi, and have become rather preoccupied by the subject. My question is a broader one, though.

APub has a tick option at the PDF export stage to use a fixed dpi (let's say 300) or to use the document dpi. I assume the former will down-sample where necessary and the latter will send the images as they are. So, is there any disadvantage to ticking 300, or any benefit from sending them as-is?

The book will be self-published with an online printer onto decent silk paper using inkjet. Many (almost all) images are vintage 1930s/1940s and betray their history and method of storage: very many contain white dots/scratches and a good number show what look like water marks (though maybe something else causes that). Almost all are scans at 600 dpi and the client is now providing them at 1200 dpi (which highlights all the imperfections wonderfully). In the proofreading notes I am prompted to check whether the dpi of selected images is sufficient, despite having established in Resource Manager that the lowest is 350 and the highest is 1,600.

I'd be grateful for either reassurance or words of caution.

Thank you.

Clive

 

Posted

Oh dear. Scanning low-res (printed) images at high resolution just scales up the issues, unless you have a high-class negative or slide which actually contains good detail at that resolution. The focus should be more on restoring contrast and colors (where available) by using the best possible lighting and adjusting the settings individually for every image. And use a scanner which is actually capable of focusing on, e.g., a curved paper original; most scanners can't adjust focus and rely on perfectly flat sources, which is never the case.

Current-generation viewers are so used to artificial "improvements" like sharpening, denoising, heavy high-pass filtering and maximized global contrast.

It kind of makes sense to request 300 dpi minimum images, so any upscaling is done during the editing process, where you can identify artifacts and images too bad for use, instead of throwing low-res c**p into the file and hoping for wonders.

I have found that bad b/w images sometimes benefit from a blur filter followed by a mild high-pass filter to regain some edge contrast.

If you are able to upload one or two example images, we can advise whether it makes sense to edit them quickly. In most cases it is not worth the effort, especially if you need to edit many.

 

Mac mini M1 A2348 | MBP M3 

Windows 11 - AMD Ryzen 9 5900x - 32 GB RAM - Nvidia GTX 1080

LG34WK950U-W, calibrated to DCI-P3 with LG Calibration Studio / Spider 5 | Dell 27“ 4K

iPad Air Gen 5 (2022) A2589

Special interest into procedural texture filter, edit alpha channel, RGB/16 and RGB/32 color formats, stacking, finding root causes for misbehaving files, finding creative solutions for unsolvable tasks, finding bugs in Apps.

I use iPad screenshots and videos even in the Desktop section of the forum when I expect no relevant difference.

 

Posted

Up or downscaling images is a delicate topic.

Affinity is "medium" in its capabilities. There are more specialist apps and tools, and even print drivers may deliver better results (particularly if they factor in the parameters of the paper used and utilize six or more inks).

In the case of inkjet printers, the best result is often achieved by letting the print driver do the job (rescaling, rasterizing, color format and color profile conversion) and keeping the document in RGB format.


 

Posted
2 hours ago, roadcone said:

Client's printer has told them to ensure all images are 300dpi minimum so they have been checking all their images and finding many at 72dpi and they have become rather preoccupied by the subject. My question is a broader one though.

The dpi of the image as it is once placed is what the print shop has to deal with, not the dpi of the original file.

Regarding the images at 72 dpi: these may be 72 dpi files that are placed into a document and scaled so the effective dpi is much higher, 300 or 400 dpi. For example, I have several images at 72 dpi; I place them in a Designer document (8 1/2 x 11 inches) and the effective dpi is over 600. The original 72 dpi images are just over 6 feet wide.

Use the Resource Manager to see what the placed or effective dpi is. That is what the print shop has to work with.

[Attached screenshot: Resource Manager listing placed images with their effective DPI]
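As a quick sanity check on those numbers, the effective dpi is just the pixel width divided by the printed width in inches. A minimal Python sketch (the 73-inch width and the fit-to-8.5-inch placement are assumptions for "just over 6 feet" on an 8 1/2 x 11 page):

```python
# Effective (placed) dpi = pixel width / printed width in inches.
def effective_dpi(pixel_width: int, placed_width_inches: float) -> float:
    """Resolution the print shop actually receives for a placed image."""
    return pixel_width / placed_width_inches

pixel_width = 72 * 73                   # a 72 dpi file ~73 inches wide is ~5256 px across
print(effective_dpi(pixel_width, 8.5))  # ~618 dpi once scaled down to the 8.5 in page width
```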

Mac Pro (Late 2013) Mac OS 12.7.6 
Affinity Designer 2.6.0 | Affinity Photo 2.6.0 | Affinity Publisher 2.6.0 | Beta versions as they appear.

I have never mastered color management, period, so I cannot help with that.

Posted

@roadcone

Bruce is right.

What the client’s printer wants are images with high enough pixel resolution to be printed at the desired quality at 300 DPI. Photos don’t have DPI—they only have pixels. Cameras include DPI information (just a number!) in JPEG files as a printing reference. The image itself doesn’t “know” its size. DPI is merely a hint that tells printing software over how large an area the pixels should be spread on paper. Some photographers include a DPI value as a courtesy, but since the final print size is unknown, it’s really just a placeholder. Print size is determined by pixel dimensions and how the image is scaled, so the DPI stored in an image file is more of a pseudo-setting unless it is used deliberately in a print workflow.

That means, for example, a 16-megapixel image at, say, 4920 x 3264 pixels would give you around 400 DPI when printed at A4 size—plenty.

However, it wouldn’t have enough detail for A2, where the same 16 MP image printed at A2 size would only deliver around 200 DPI. You’re asking the printer to spread the same number of pixels over a much larger area. Since there aren’t enough pixels to maintain fine detail at that size (A2), the result can look blurry or pixelated up close.
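To see where those A4/A2 figures come from, here is a small Python check (paper sizes in inches; the pixel dimensions are the ones quoted above, and the limiting axis decides the usable resolution):

```python
# Print resolution of a 4920 x 3264 px image at A4 and A2 (landscape),
# long edge matched to long edge.
pixels = (4920, 3264)
papers = {"A4": (11.69, 8.27), "A2": (23.39, 16.54)}  # width, height in inches

for name, (w_in, h_in) in papers.items():
    dpi = min(pixels[0] / w_in, pixels[1] / h_in)  # the tighter axis limits the result
    print(f"{name}: ~{dpi:.0f} dpi")
# A4: ~395 dpi (plenty); A2: ~197 dpi (visibly soft up close)
```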

So it’s about calculating whether each image’s pixel dimensions are sufficient for the size it’s placed at in the publication. What Bruce showed was an example where Publisher displays the DPI of a placed image, which ended up being well over 300 DPI.

You can also select each individual image and check the toolbar to see what DPI it gets at its current size. In this example, I’ve scaled a small image to 122%, and then it only reaches 246 DPI.

[Attached screenshot: a placed image scaled to 122%, with the toolbar reporting 246 DPI]

In other words, not enough pixels for high-quality print at that size, so I reduce the size of Publisher’s picture frame to 75%, and now it’s enough:

[Attached screenshot: the same image with the picture frame reduced to 75%]
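The arithmetic behind those two readouts: effective DPI falls in proportion to enlargement, so if 122% gives 246 DPI, then 100% would show roughly 246 × 1.22 ≈ 300 DPI, and reducing the frame to 75% gives roughly 246 × 122 / 75 ≈ 400 DPI, comfortably above a 300 DPI minimum (figures rounded).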

Posted

Thank you, all three of you, for your responses. My client continues to tell me that we are coming from different directions and to please place the latest 1200 dpi images in the document. So the 60+ year-old, scratched images faithfully reproduce white dots and scratches, odd black dots, and parallel lines from, I guess, some piece of printing machinery. It is now stressing me.

Meanwhile NotMyFault has suggested that the printer and its driver know best (and will probably do a better job than Affinity). I think that is sound advice, so I will leave the 'fixed' box unticked and let Affinity send the images as-is. I've looked at the Resource Manager, sorted by dpi, and the lowest as placed is 350 with the highest at around 1,600. If I am sent any more 1200 dpi scans I must place them, but the work involved in cleaning each image of damage is considerable.

What I won't be doing again is a favour for a friend (at this rate they might not be a friend anyway).

Ta muchly.

Clive

PS: I don't have rights to the images so I can't upload any without client approval, and that will not be forthcoming; asking would just open another worm can.

 

x.jpg

Posted

Thank you for sharing the file. The quality is not too bad.

Adding only three filters can dramatically improve the quality:

  • Dust and Scratches removal
  • Unsharp Mask
  • High Pass filter

But you will start to notice strong JPEG compression artifacts (8 x 8 px patterns and false detail). For best results, the images should be stored (after scanning) in a losslessly compressed format, or in WebP or JPEG XL, which show far fewer artifacts.
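(As an aside, the same three-step idea can be roughed out outside Affinity. The Pillow sketch below is only an approximation under assumptions: a median filter stands in for dust-and-scratches removal, the radii and percentages are guesses, and the file names are placeholders.)

```python
# Rough Pillow approximation of the three-filter clean-up described above.
from PIL import Image, ImageChops, ImageFilter

img = Image.open("scan.jpg").convert("L")   # placeholder file name; b/w scan

# 1. Dust and scratches: a small median filter removes isolated specks and thin scratches.
despeckled = img.filter(ImageFilter.MedianFilter(size=3))

# 2. Unsharp mask: restore edge definition lost to the scan and the median filter.
sharpened = despeckled.filter(ImageFilter.UnsharpMask(radius=2, percent=80, threshold=3))

# 3. Mild high-pass: subtract a blurred copy (offset to mid-grey), then overlay-blend it
#    back to lift local edge contrast without boosting the grain too much.
blurred = sharpened.filter(ImageFilter.GaussianBlur(radius=8))
highpass = ImageChops.subtract(sharpened, blurred, scale=1.0, offset=128)
result = ImageChops.overlay(sharpened, highpass)

result.save("scan_cleaned.png")             # save losslessly to avoid new JPEG artifacts
```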

PS: the image was taken with the focus falling on the trees behind rather than on the woman, and maybe the exposure time was not fast enough, so some motion blur is possible too. These issues (from the day the image was captured, not from the scanning) are almost impossible to correct digitally.

Nevertheless, the image comes out really well for its age.

I left some grain showing through to keep an authentic look. I don’t like those over-smoothed images where all the noise is replaced by AI-generated content.

 

IMG_2619.png


 

Posted

NotMyFault

Well, that was quick. I would be grateful if you could reveal your technique, as I am used to working with modern digital RAW files that already have sufficient base quality and sharpness.

What software did you use? I would tend to use Affinity Photo, as my RAW processors do not do pixels. Then, what process and what settings?

Yes, the images are coming as JPEGs and I have no idea what export setting is used; and yes, even at high export settings there will be JPEG loss.

Clive

 

Posted

Using a combination of free online tools and Affinity Photo is currently the best way to approach old images like this. Affinity is great, but it lacks some of the advanced features you can now find online to speed up this sort of workflow.

parklife.jpg

To save time I am currently using an automated AI to reply to some posts on this forum. If any of "my" posts are wrong or appear to be total b*ll*cks they are the ones generated by the AI. If correct they were probably mine. I apologise for any mistakes made by my AI - I'm sure it will improve with time.

Posted

Carl123: Thanks for the image; it shows what can be achieved, but it is too much for me (and I don't intend any criticism of what you have done). That one is too smooth/sharp for my taste. The dirt is smooth, the coat looks like it is just out of the wrapping, and the facial features look like Botox. She is beautiful and doubtless looked like that in person, but not in an 80-year-old photo. And the face is hugely sharper than the neck and dress. Now, if the AI could also tackle the dress, maybe it would look more natural.

That sounds like I am ungrateful; I'm not, so thank you.

Clive

Posted

I'm busy for the next few hours but will provide a template later.

Add just three filter layers in Photo or the Photo persona. Those layers can be duplicated to other images, and parameters like radius can be changed even in Publisher.


 

Posted

AI can tend to over-smooth things, but you can specify how much smoothing, etc., in some of the more sophisticated (paid-for) apps. You can also add texture back and adjust areas you are not happy with. AI tends to be just a starting point; you can spend a lot longer refining what AI produces until you are happy with an image.


Posted
2 minutes ago, carl123 said:

AI can tend to over smooth things but you can specify how much smoothness etc in some of the more sophisticated (paid for) apps. You can also add texture back and adjust areas you are not happy with. AI tends just to be a starting point. You can spend a lot longer refining what AI produces until you are happy with an image

You're correct, Carl, and I accept the comments. I've been trying out the ON1 Resize trial and noticed the effect slider's range: at 0%, no effect ... at 100%, oh, that awful look on the face. The same client commented "WOW, you've taken ten years off my father", and that was with the slider at around 70%.
