azx Posted November 9, 2017

Hi, I'm a new user, so I'm not quite sure whether this is a missing feature or just me being unable to find the relevant option, but a cursory glance at the documentation and a quick forum search didn't really help, so I've decided to create this post. Apologies in advance if you consider this a trivial question.

In short: is there any way for me to inspect the individual 'raw' pixel values in an HDR image using Affinity Photo? Currently (1.6), if I load an HDR image I can inspect the RGB values of individual pixels, but they all appear to have been normalised (either to [0-1] for floating-point formats or [0-255] for integers). What I would like is to load an HDR image, look at any given pixel, and see the 'raw' floating-point value for each component, before any tone mapping or other transform is applied...
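To illustrate the distinction in question, here is a minimal sketch with NumPy. The array is hypothetical stand-in data (not an Affinity or .exr API); it only shows how clamping to a display range discards the stored floating-point values:

```python
import numpy as np

# Hypothetical 1x2 tile of an HDR image stored as 32-bit floats, the way
# an .exr file would hold it. Values above 1.0 are radiance beyond display white.
raw = np.array([[[1400.0, 1400.0, 701.0],
                 [0.5, 0.25, 0.125]]], dtype=np.float32)

# What a typical viewer shows after clamping to [0, 1] and scaling to 8-bit:
display = np.clip(raw, 0.0, 1.0) * 255.0

print(raw[0, 0])      # the 'raw' floats: [1400. 1400.  701.]
print(display[0, 0])  # the normalised readout: [255. 255. 255.]
```

Once the values are clamped, 1400.0 and 701.0 both read as 255, so the original quantities are unrecoverable from the picker readout alone.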
John Rostron Posted November 9, 2017

Why don't you just look at the component images that made up the HDR?

Windows 10, Affinity Photo 1.10.5, Designer 1.10.5 and Publisher 1.10.5 (mainly Photo), now ex-Adobe CC. CPU: AMD A6-3670. RAM: 16 GB DDR3 @ 666 MHz. Graphics: 2047 MB NVIDIA GeForce GT 630.
azx Posted November 9, 2017 (Author)

I'm afraid I don't quite follow... What component images? Do you mean individual channels? If I try to select a single channel and then use the colour picker tool, I still get normalised/tone-mapped values.
John Rostron Posted November 9, 2017

A High Dynamic Range image is formed by merging two or more images taken at different exposure values. Often these will be -1EV (one stop underexposed), 0EV (correctly exposed) and +1EV (one stop overexposed). These are then merged, with judicious masking, into a composite image, typically with 32 bits per channel. This 32-bit image is then tone-mapped to a 16- or 8-bit final image. In Affinity, you would load your component images into an HDR stack. If you created the HDR image yourself, then you will have these component images. If you did not, then you won't! Where has your HDR image come from?
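The merge step described above can be sketched as follows (NumPy, with made-up 8-bit sample values; real HDR merging also weights samples by how well-exposed they are and recovers the camera response curve, which is omitted here):

```python
import numpy as np

# Three hypothetical bracketed exposures of the same pixel, 8-bit values,
# at -1EV, 0EV and +1EV (relative exposure times 0.5, 1.0, 2.0).
exposures = np.array([[60.0], [120.0], [240.0]])
times = np.array([[0.5], [1.0], [2.0]])

# Each shot implies the same scene radiance: pixel value / exposure time.
radiance_estimates = exposures / times

# Merge into a single 32-bit float radiance value (a simple average here).
hdr = radiance_estimates.mean(axis=0).astype(np.float32)
print(hdr)  # [120.]
```

The merged value (120.0 here) lives on an open-ended float scale; tone mapping is what later squeezes it back into a displayable 8- or 16-bit range.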
azx Posted November 10, 2017 (Author)

Perhaps I should clarify that by 'raw' values I meant the untransformed pixel values as stored in the .exr (or .hdr) file -- nothing to do with the RAW images produced by cameras. I just want to be able to click or hover over a pixel and have the editor show me that the pixel at (say) 4, 20 has value [1400.0, 1400.0, 701.0] instead of (or in addition to) [255, 255, 180].

Most HDR images I work with are not created by merging multiple LDR images (and for those that are, I do not have access to the source images). In my case most of the HDR images are generated by a lighting simulation or a rendering system (some of those images contain various radiometric or photometric data rather than just 'colour'). Other images represent physical measurements of actual scenes -- individual pixels already represent the quantities I'm interested in, and the input images may not exist, be inaccessible, or otherwise be impractical to use. I already have tools that let me inspect such images, but they are somewhat lacking when it comes to other aspects of image editing. It's just that having that simple functionality available in a good image editor (with support for layers, channels, a nice UI and so on...) would significantly improve my workflow.

Another use case one could think of is vector displacement maps -- a bit more general than the normal maps commonly used in 3D rendering. In this case the RGB value doesn't just encode the direction of the perturbed normal but the direction in which the surface gets displaced and the magnitude of that displacement (and if it's absolute displacement, you're interested in the numerical value, so you don't want those values to be normalised).
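A small NumPy sketch of why normalisation loses information in the vector-displacement case (the encoding here is illustrative, not a specific map format):

```python
import numpy as np

# A vector displacement map stores, per pixel, the XYZ offset by which the
# surface point is moved -- in absolute units, so the magnitude matters.
vdm_raw = np.array([2.0, -0.5, 4.0], dtype=np.float32)

magnitude = np.linalg.norm(vdm_raw)  # sqrt(4 + 0.25 + 16) = 4.5

# If an editor normalises the stored floats (here: by the largest component),
# the direction can survive but the absolute displacement length is lost.
vdm_normalised = vdm_raw / np.abs(vdm_raw).max()

print(magnitude)                        # 4.5
print(np.linalg.norm(vdm_normalised))   # 1.125 -- magnitude gone
```

For normal maps a unit-length readout is fine; for absolute displacement, the 4.5 versus 1.125 difference is exactly the data one needs to read off.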