Hi,
Recently I became a fan of the panorama functionality in Affinity Photo. Unfortunately I have come across a problem I don't know how to address, and it seems to me to stem from the design of the panorama rendering workflow.
Here is the problem:
I stitch some drone photos with quite different exposures; please see an example below.
AP does a great job of stitching the images and aligning the non-matching exposures. However, during that process it loses a large part of the dynamic range, especially in the bright areas (in the examples below: around the sun). I have no control whatsoever over the levels applied by the rendering process. The image I receive after completing the panorama lacks most of the information in the bright area; the sun rays, clouds etc. seen in the middle picture are gone, and I see no way to recover them in either the "photo" or the "develop" persona.
I would expect one of two things:
either
- a set of color-correction tools in the panorama "persona", so that corrections can be made before the panorama is rendered and becomes a "final" object in the photo persona,
or
- that the output of the panorama "persona" is not a final image, but a set of appropriately aligned/attributed layers in the photo/develop personae, so that the user can make further corrections to these layers (something similar happens in the "stack" functionality, where the stacked images remain individual layers that one can modify if desired).
Or perhaps I am missing something? That would actually be awesome!
Best regards,
Pawel