Graham Posted May 6, 2019
Could someone please explain the difference between merging 3/5/7 RAW exposures in Affinity and using just one RAW file. Thanks
John Rostron Posted May 6, 2019
A RAW file typically holds a 12- or 14-bit-per-channel image. For a single RAW, the HDR algorithm will do its best to map these onto a 32-bit image, which can then be tone-mapped to a 16-bit image. The range of exposure values in your final image will not really be much different from that in the original RAW, so tone-mapping may not be needed. For a well-exposed single image, there may well not be much difference from a non-HDR image. For a moderately under- or over-exposed image the HDR result might be better.
If you have several images, they can encompass a wider range of exposure values, maybe up to 24 stops or more. These can readily be mapped to a 32-bit image but will require tone-mapping to fit into a 16-bit or 8-bit final image. This can encompass both very dark areas and very light areas that a single-image HDR would not be able to cope with.
John
Windows 10, Affinity Photo 1.10.5, Designer 1.10.5 and Publisher 1.10.5 (mainly Photo), now ex-Adobe CC. CPU: AMD A6-3670. RAM: 16 GB DDR3 @ 666MHz. Graphics: 2047MB NVIDIA GeForce GT 630
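As a rough back-of-the-envelope sketch of the arithmetic above (the bit-depth-to-stops equivalence is a simplification, since real sensors lose some of that range to noise, and the 2 EV bracket spacing is just an assumed example):

```python
import math

def raw_stops(bit_depth):
    # A linear raw value with b bits can represent brightness ratios up to
    # 2**b - 1, i.e. roughly b stops between the darkest and brightest
    # recordable levels.
    return math.log2(2 ** bit_depth - 1)

def bracketed_stops(bit_depth, ev_step, n_frames):
    # n bracketed frames spaced ev_step apart extend the captured range by
    # (n_frames - 1) * ev_step stops beyond what a single frame records.
    return raw_stops(bit_depth) + (n_frames - 1) * ev_step

single = raw_stops(14)               # one 14-bit raw: roughly 14 stops
merged = bracketed_stops(14, 2, 5)   # 5 frames at 2 EV spacing: roughly 22 stops
```

Either figure fits comfortably in a 32-bit working image, but only the merged set exceeds what a 16-bit output can hold without tone-mapping.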
Graham (Author) Posted May 6, 2019
Thank you for your reply. I noticed that from a single Nikon RAW file I am getting similar results to those from bracketed exposures. I will experiment more...
John Rostron Posted May 6, 2019
1 hour ago, Graham said: Thank you for your reply. I noticed that from a single Nikon RAW file I am getting similar results to those from bracketed exposures. I will experiment more...
It will really depend on the brightness range in the original scene. If there is less than 14 (or 12) stops of range, then HDR will make little difference.
John
Fixx Posted May 7, 2019
9 hours ago, John Rostron said: It will really depend on the brightness range in the original scene. If there is less than 14 (or 12) stops of range, then HDR will make little difference.
It depends also on camera specs. When I moved to Nikon full-frame I practically stopped making HDRs, as a single image usually had enough dynamic range, unlike my earlier crop-sensor cameras.
John Rostron Posted May 7, 2019
6 hours ago, Fixx said: It depends also on camera specs. When I moved to Nikon full-frame I practically stopped making HDRs, as a single image usually had enough dynamic range, unlike my earlier crop-sensor cameras.
An interesting observation. I can see how a camera with an increased bit depth can avoid the need for HDR, but I cannot see how the crop factor could affect it.
John
R C-R Posted May 7, 2019
2 hours ago, John Rostron said: An interesting observation. I can see how a camera with an increased bit depth can avoid the need for HDR, but I cannot see how the crop factor could affect it.
I think he means that DSLR cameras with so-called full-frame image sensors can capture more dynamic range than smaller ones due to lower noise levels and more light-gathering capability.
All 3 1.10.8, & all 3 V2.4.1 Mac apps; 2020 iMac 27"; 3.8GHz i7, Radeon Pro 5700, 32GB RAM; macOS 10.15.7. Affinity Photo 1.10.8; Affinity Designer 1.10.8; & all 3 V2 apps for iPad; 6th Generation iPad 32 GB; Apple Pencil; iPadOS 15.7
IanSG Posted May 7, 2019
2 hours ago, R C-R said: I think he means that DSLR cameras with so-called full frame image sensors can capture more dynamic range than smaller ones due to lower noise levels & more light gathering capability.
Is that due to sensor size or pixel size?
AP, AD & APub user, running Win10
John Rostron Posted May 7, 2019
8 minutes ago, IanSG said: Is that due to sensor size or pixel size?
It is more likely to be the fancy electronics associated with each pixel, which need to distinguish between many millions of light levels. I would guess that to do this the physical size of each pixel might need to be bigger. However, if you have bigger pixels, then you cannot fit so many on the full-frame sensor, which rather defeats the object. I am not an expert on micro-electronics; these are just my guesses from reading around the subject.
John
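One way to make the pixel-size question concrete: a pixel's engineering dynamic range is often described as the ratio of its full-well capacity (the largest signal it can hold before clipping) to its read-noise floor. A sketch with assumed, illustrative electron counts (not measurements of any real sensor):

```python
import math

def dynamic_range_stops(full_well_electrons, read_noise_electrons):
    # Engineering dynamic range of one pixel: the ratio of the largest
    # recordable signal (full-well capacity) to the noise floor (read
    # noise), expressed in stops (powers of two).
    return math.log2(full_well_electrons / read_noise_electrons)

# Illustrative figures only: a physically larger pixel collects more
# electrons before clipping, so its full-well capacity is higher.
small_pixel = dynamic_range_stops(full_well_electrons=30_000, read_noise_electrons=3)
large_pixel = dynamic_range_stops(full_well_electrons=90_000, read_noise_electrons=3)
```

On this simple model, tripling the full-well capacity at the same read noise buys log2(3), about 1.6 extra stops, which is one plausible mechanism behind the full-frame advantage discussed above.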
Fixx Posted May 8, 2019
18 hours ago, John Rostron said: An interesting observation. I can see how a camera with an increased bit depth can avoid the need for HDR, but I cannot see how the crop factor could affect it.
Full-frame sensors did have a bigger dynamic range than crop-size sensors. Possibly it is because they tend to be more technically advanced, or possibly there are other technical reasons (like pixel size); I have not followed the topic. Current crop-sensor models may well have a similar dynamic range now, but that was not the case a few years ago.
John Rostron Posted May 8, 2019
44 minutes ago, Fixx said: Full-frame sensors did have a bigger dynamic range than crop-size sensors. Possibly it is because they tend to be more technically advanced, or possibly there are other technical reasons (like pixel size); I have not followed the topic.
I would guess that manufacturers put their efforts initially into the full-frame sensors to increase bit depth and dynamic range, and only later applied this technology to the crop-size sensors.
John
Pebal Posted June 9, 2019
The HDR technique is useful when you need to lighten deep shadows. You can use one RAW file, but the gradients will be worse and the noise greater.
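The shadow-noise point can be shown with a toy simulation: pushing one frame keeps all of its read noise, while merging several frames averages it down by roughly the square root of the frame count. This is an illustrative model only, not Affinity's actual merge algorithm, and the signal and noise levels are assumed numbers:

```python
import random
import statistics

random.seed(0)

def noisy_shadow_reading(true_level=0.02, noise_sigma=0.01):
    # One simulated sensor reading of a deep shadow: a small true signal
    # plus Gaussian read noise (assumed illustrative values).
    return true_level + random.gauss(0.0, noise_sigma)

# One raw file pushed in post: a single noisy sample per pixel.
single = [noisy_shadow_reading() for _ in range(10_000)]

# Merging five exposures: averaging reduces the noise by about sqrt(5).
merged = [statistics.mean(noisy_shadow_reading() for _ in range(5))
          for _ in range(10_000)]
```

Comparing `statistics.stdev(single)` with `statistics.stdev(merged)` shows the merged shadows carrying less than half the noise, which is why bracketed merges hold smoother gradients in lifted shadows.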