ygoe Posted April 28, 2024

Hello,

I'm following tutorials to achieve superresolution with a number of exposures of the same scene. Basically it goes like so:

1. Take around 20 photos of the same scene
2. Open them all in a photo editor in layers
3. Resize them to 200% with nearest neighbour (don't make the software guess more details from a single frame)
4. Align all layers
5. Average all layers (this happens automatically already)

So I could align all image files on loading as a stack, but I need to resize them first. I could load the stack without aligning and then resize, but it seems I can't align them anymore at that point. Any ideas?

Using Affinity Photo 2.4 on Windows; the source images come from Lightroom 6.
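For anyone who wants to prototype the resize-and-average part outside a photo editor, here is a minimal Python sketch using Pillow and NumPy (my choice of tools, not from the tutorial). It assumes 8-bit TIFFs in a frames/ folder that are already aligned, e.g. shot from a tripod, since the alignment step (4) is exactly the part the editor does for you:

```python
# Minimal superresolution-by-stacking sketch (assumes pre-aligned 8-bit frames).
# pip install pillow numpy
from pathlib import Path

import numpy as np
from PIL import Image

frames = sorted(Path("frames").glob("*.tif"))  # ~20 exposures of the same scene

acc = None
for path in frames:
    img = Image.open(path)
    # Step 3: upscale to 200% with nearest neighbour so no detail is invented
    img = img.resize((img.width * 2, img.height * 2), Image.NEAREST)
    arr = np.asarray(img, dtype=np.float64)
    acc = arr if acc is None else acc + arr  # running sum for the average

mean = acc / len(frames)  # Step 5: average all frames
Image.fromarray(np.clip(mean, 0, 255).astype(np.uint8)).save("superres.tif")
```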
walt.farrell Posted April 28, 2024

Resize them first? Then load into a stack?
ygoe Posted April 28, 2024

Yes.

Edit: I guess you wanted to give me some kind of answer, I just can't see it underneath your questions. Here's one of the more comprehensive tutorials: https://www.dpreview.com/articles/0727694641/here-s-how-to-pixel-shift-with-any-camera
NotMyFault Posted April 28, 2024

I can only say that this method of "super resolution" / pixel shift is mostly snake oil. There are two variants:

- SR by sub-pixel micro-adjustment of the sensor's image stabilization, which many cameras support and which works to some degree (requires a tripod)
- SR from handheld shots (less usable)

If you shoot at high ISO, you get the benefit of inherent noise reduction from stacking, which improves sharpness a bit. But taking multiple images handheld introduces perspective distortions that cause blurry edges, even if the perspective is partially corrected by "align image" during stacking. And that's not even mentioning ghosting or fast-moving clouds in the sky.

It's a fun experiment, but nothing that can compensate for a higher-resolution sensor/lens, except in totally static scenes.

https://www.dpreview.com/opinion/6915548723/a-load-of-old-pixel-shift-why-i-just-don-t-care-for-high-res-modes
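To put a number on the noise-reduction part: here is a tiny synthetic NumPy simulation (made-up scene and noise levels, not real camera data) showing that averaging N frames cuts random noise by roughly √N:

```python
# Synthetic demo: averaging N noisy frames reduces noise by ~sqrt(N).
import numpy as np

rng = np.random.default_rng(42)
scene = rng.uniform(0, 255, size=(100, 100))             # the "true" static scene
N = 20                                                   # number of exposures
frames = scene + rng.normal(0, 10, size=(N, 100, 100))   # add sensor noise, sigma = 10

stacked = frames.mean(axis=0)
print("single-frame noise:", np.std(frames[0] - scene))  # ~10
print("stacked noise:     ", np.std(stacked - scene))    # ~10/sqrt(20) ≈ 2.2
```

That is why a stack looks cleaner even when no extra resolution is actually recovered.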
ygoe Posted May 5, 2024

Hm, I did some more experimenting with this and must say that I could achieve the expected result quite well. I did it in two ways to compare them.

First the harder and longer one:

1. Export all source images in Lightroom as TIFF at their original resolution
2. Batch-process them with Affinity Photo (or any other software like IrfanView) to scale them to 200% with nearest neighbour (this takes forever because it goes sequentially – using the parallel option likely runs out of memory sooner than out of CPU)
3. Load all upscaled images as a stack in Photo, with perspective alignment enabled
4. Flatten the layers
5. Sharpen with a radius of 2 px at 250%
6. Export as TIFF at superresolution
7. Resize down to the original resolution (50% now) with the bicubic setting
8. Export as TIFF at the original resolution

And the quicker one:

1. Export all source images in Lightroom as TIFF, resizing them to 200%
2. Continue above with step 3

The difference between the two is not noticeable to me, so I'll just use the upscaling in Lightroom instead. (I've sketched a script version of steps 5 to 8 below.)

The next comparison was whether it's worth it at all. First I compared the original-resolution result (second export above) with a single source image. And yes, it's considerably cleaner and has more details. Some small items and subtle patterns that could hardly be recognised before suddenly become visible.

Maybe I also have a bit of a special case here. I'm using a DJI Mini 3 drone that has a 48 MP sensor. By default it produces 12 MP images that look decent, but the full 48 MP resolution has a lot of colour fringes in it. So I guess that's really 48 mega-subpixels and the colour interpolation is just not good enough. Anyway, a direct comparison between a 12 MP and a 48 MP image already shows more details, e.g. cobblestone patterns that are blurry mush at 12 MP become apparent at 48 MP. So there is a use in that. And no, I cannot simply upgrade the drone's camera in any way. It is what it is, and if I want to get more out of it, I need to use technical tricks like this one. (Sure, there are better drones, but they cost a lot more money and face much stricter regulations due to their higher weight.)

Another comparison was the superresolution image vs. an upscaled (bicubic) source image. Here, too, more details are visible.

My final comparison was between the superresolution result and the scaled-down original-resolution result. Both were merged from 20 source images. Maybe fewer would also do, but I just had them all. The superresolution image was slightly sharper, but after sharpening the original-resolution result with 1 px and 0.8, no difference was noticeable anymore. So I'd guess that the superresolution result has no more details than the original-resolution result, and it's probably not worth keeping the big one (first export above). But the process itself definitely works.

My test subject was mostly static. Trees were slightly moving in the wind, and a cyclist was visible in the last 3 captures. Treetops are a bit blurry; the cyclist is completely gone. I might try this again in a more dynamic scene with people or cars moving, and try to cut them out in each source image before the averaging, if that works.

Here are three sample regions from my test image. Please view them at full size to compare them. For each area, first comes the source, then the result.
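The script version of steps 5 to 8 mentioned above, again with Pillow (the file names are made up, and Pillow's UnsharpMask only approximates Affinity's sharpen filter):

```python
# Steps 5-8 of the first workflow, approximated with Pillow.
# pip install pillow
from PIL import Image, ImageFilter

merged = Image.open("stack_flattened.tif")  # hypothetical flattened-stack export

# Step 5: sharpen with a radius of 2 px at 250%
sharp = merged.filter(ImageFilter.UnsharpMask(radius=2, percent=250, threshold=0))
sharp.save("superres.tif")  # Step 6: the 200% superresolution result

# Steps 7-8: back down to the original resolution with bicubic resampling
small = sharp.resize((sharp.width // 2, sharp.height // 2), Image.BICUBIC)
small.save("original_res.tif")
```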