Guillermo Espertino
-
Here's a quick and dirty processing of the images you uploaded. First the HDR stacking (done manually; I didn't use the dedicated AFPhoto tool), then the tonemapping: one version uses OCIO with the AgX Kraken transform as a baseline, the other uses the Tonemap Persona. I tried to produce a natural-looking image in both cases. Of course, how far to push things like saturation and contrast is a matter of taste and goals. In both cases, what you see on screen while editing is what you get when you export to an 8-bit image file like a jpeg.
-
@Max89 Hi, I know it's been a while since this thread, but I'll try to offer some insights that may help you understand the problem and find solutions. First of all, it's not a bug. A high dynamic range image created from multiple exposures is supposed to have more dynamic range than your screen can display.

Dynamic range is the "distance" between the darkest and the lightest pixels in your image. The brighter the highlights (the sun, a window when you're shooting interiors) co-existing with dim areas, the wider the dynamic range. Your camera has a limited dynamic range (that's why a single exposure won't cut it and those highlights will be clipped by the camera sensor, which can't handle so much brightness in one shot). Similarly, your monitor has a limited dynamic range too. An HDR image can store a dynamic range larger than your monitor's, so you need some way to accommodate that dynamic range within the limited monitor range.

The simplest method is to clip. This means that everything above 1.0 (in the 32-bit linear HDR image) is discarded and replaced by the maximum channel value of 1.0. That's what most software does when exporting from a 32-bit HDR image directly to an 8-bit or 16-bit image. Of course that's not great for image quality, so you need a method that compresses the wider dynamic range into the monitor's DR in a more elegant way. This is often called "tonemapping" (as it maps the tones of your image to the smaller range).

The Tonemap Persona in Affinity Photo offers presets and tools to compress highlights and boost shadows in order to fit the extra dynamic range into a displayable range. If that works for you, you're good to go; it's a quick and easy way to work around the problem. But if you find those methods too artificial or harsh on your image, and want something with a more "photographic" or "filmic" response, you can use other methods. Affinity has the right tools in the shape of the 32-bit preview using OCIO and the OCIO tools. There are freely available OCIO transforms that mimic filmic responses and are designed specifically to deal with HDR images. The latter might sound like rocket science, but I promise it's not: just think of them as recipes, chosen from a menu, that transform your HDR image into display-ready images. If you're interested in this method, let me know and I can give you some pointers.
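To make the clipping vs. tonemapping difference concrete, here's a minimal numpy sketch (this is not Affinity's actual algorithm; the Reinhard-style curve is just a stand-in for a proper filmic/AgX transform):

```python
import numpy as np

# Scene-referred linear values: shadow, middle gray, bright wall, window, sun
hdr = np.array([0.02, 0.18, 0.9, 4.0, 60.0], dtype=np.float32)

# 1) Hard clip: everything above 1.0 is thrown away (a naive 8-bit export)
clipped = np.clip(hdr, 0.0, 1.0)

# 2) Simple Reinhard-style rolloff: compresses highlights smoothly instead of
#    truncating them (real filmic/AgX transforms are far more sophisticated)
tonemapped = hdr / (1.0 + hdr)

print(clipped)     # [0.02 0.18 0.9  1.   1.  ]  -> window and sun become identical
print(tonemapped)  # [0.02 0.15 0.47 0.8  0.98]  -> highlights keep their separation
```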
-
Dynamic range has little to do with a rich, full colour palette. The chrominance of your image is defined by the photosites of your sensor and the demosaicing process. Your camera has a colorspace and you can't escape it; it won't capture richer colour because you took more exposures of your subject. The *only* thing you're going to get from an HDR workflow is detail: detail in the shadows that doesn't end up eaten by noise, and detail in the highlights that doesn't get clipped by sensor saturation.

So, long story short: analyze your subject and see what's in it. If there are dim areas with little light you want to make visible, or bright spots like the sun, chrome reflections, etc., then HDR is a viable tool for the task. You only need to take pictures with the exposures aligned to those detail areas, in order to avoid clipping or noise. Once you have "clean" zones, with good gradations and as little noise as possible, you combine them into a single image. It's just that, nothing else.

A clean HDR picture gives you room for editing. You can make the shadow areas brighter, you can turn down the intensity of reflections to make details more visible, etc. A clean image also gives you room to play with colour grading, changing your image's colours or saturation with fewer artifacts. But that's it. That's all that HDR images can give you. If you happen to have an HDR-capable monitor, you will be able to see your images at a contrast ratio and intensity closer to the ones captured, but it's never about better colour at all. It's just that expensive HDR monitors have a wider colour gamut than regular monitors, but that's another story. If you're creating HDR images for different reasons than the ones I described above, then I'm afraid you're doing it wrong.

*) Note that what I described above applies to the context of creating images for final delivery. The VFX industry uses HDR images in a different way that takes advantage of extended dynamic range, such as lighting CG, compositing effects and, more recently, shooting on "volumes". I intentionally left those specific uses out for the sake of clarity, as this thread discusses HDR as a tool to produce images intended to be displayed as final pieces of art.
-
@bololoco: exposure stops are f-stops. You can't reinvent terms used by the whole photographic community. Your images don't have 30-40 stops of dynamic range. Just answer this simple question: what's the highest RGB value present in your HDR file, and what's the lowest? If you take the photographic standard of reflective middle gray as 0.18 in your linear file, then every time you double that value it's a stop up, and every time you halve it it's a stop down. Notice that this is exponential: a value of 1.0 is barely 2.5 stops above middle gray, but only 7 stops above middle gray the value is already around 23. If your base exposure was pegged to 0.18, then you can consider the total dynamic range of your image as the number of f-stops below and above middle gray. So the distance between the minimum value actually recorded (not the noise floor) and the maximum value recorded tells you how many stops of dynamic range your file has. The range of stops you suggest your files have would be on the order of billions of units of linear light intensity, and I bet they are not.
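If you want to sanity-check this on your own files, here's a small Python sketch of the arithmetic (assuming a linear 32-bit file and middle gray pegged at 0.18; the sample values are made up):

```python
import math

MIDDLE_GRAY = 0.18  # photographic reflective middle gray in linear light

def stops_from_gray(value):
    """Stops above (+) or below (-) middle gray for a linear RGB value."""
    return math.log2(value / MIDDLE_GRAY)

def dynamic_range_stops(darkest, brightest):
    """Total dynamic range between darkest and brightest recorded values."""
    return math.log2(brightest / darkest)

print(stops_from_gray(1.0))               # ~2.47 stops above middle gray
print(stops_from_gray(0.18 * 2 ** 7))     # exactly 7 stops above (value ~23)
print(dynamic_range_stops(0.001, 100.0))  # ~16.6 stops for these made-up values
print(2 ** 30)                            # a 30-stop ratio is ~1.07 billion to 1
```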
-
That's quite remarkable. Probably way too remarkable? If you use the eyedropper on the sun in your 32-bit file and it reads less than 1 BILLION, you might want to check your numbers. Anyway, I said above that your process begins with an HDR image, but what you showed here are tonemapped images. Those images don't have a higher dynamic range than a regular jpeg, so calling them HDR is a misnomer. If your files have 30 stops of dynamic range, there's no monitor on earth that will help you; I think 4 stops over what you have now is probably the most you can get. Also, the process of stacking exposures into an HDR image is not a race where whoever piles up more stops wins. Producing HDR images from stacked exposures can give you excellent results (with little noise, as the process consists of extracting zones from the sweet spot of each exposure) without going insane with the dynamic range. Ideally you only need as much dynamic range as you can get detail from. Capturing 30 or more stops seems quite excessive. What do you want to achieve?
-
Terminology is important, so we should agree on what the term HDR means in this context. @Bololoco seems to mean a tonemapped image, while others like @Ldina mean an image with a wider dynamic range than a regular display-referred image. Since the term HDR is so loose and ambiguous, both uses can be either right or wrong depending on the context. It's a mess, really. You may have a 32-bit EXR whose image only spans 5 stops of dynamic range, so calling it HDR might be wrong as well; in that case, a regular jpeg using the 8 stops it can store technically has a wider dynamic range. It seems that HDR only has meaning when it is directly compared to SDR, so it has to be an image that encodes more than 8 stops of dynamic range. What we CAN say is that an 8-bit integer image can't be anything other than SDR, as encoding many more stops than its bit depth allows would result in posterization, so it's probably safe to say that an 8-bit jpeg/png can't be "HDR" (whatever that means). So I tend to agree that calling a tonemapped HDR just "HDR" is probably wrong, but many people do it anyway. For the sake of clarity, I think that in this context saying "Tonemapped HDR" is probably clearer.
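A rough back-of-the-envelope sketch of why the 8-bit case posterizes (this assumes a naively linear 8-bit encoding; real jpegs are gamma-encoded, which redistributes the codes, but the darkest stops still end up starved):

```python
# How many of the 256 available codes each stop below white would get
# if 8 stops were encoded linearly into 8 bits.
codes = 256
for stop in range(1, 9):
    hi = codes / 2 ** (stop - 1)
    lo = codes / 2 ** stop
    print(f"stop {stop} below white: ~{int(hi - lo)} codes")
# stop 1 gets ~128 codes, stop 8 gets ~1 -> visible banding in the deep shadows
```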
-
One of the difficult (and crucial) aspects of making software is defining an audience. It's not just a question of whether Canva has (or Serif had) the right coders for the task. Sometimes specific features fall outside the scope of the software, and there are design decisions about how far to go with them. HDR merging is probably a good example. The problem described here is probably not an issue for most users of HDR merging, as they tend to use jpegs as the source.

A display-referred image has a well-defined domain: everything is in the [0, 1] range (well, using integers in that case). In practical terms this means that no pixel in the image will be brighter than jpeg white. It also means that there is a well-defined range for shadows, midtones and highlights, so you can easily define zones based on luminosity (which is what you use to isolate the different exposure ranges when stacking the HDR).

When you bring in a raw image developed from a camera with a wider dynamic range, that scenario changes. In your developed 32-bit linear raw image there are certainly pixels that go beyond 1.0. You have no ceiling. The brightest pixel might be a white wall, a lightbulb, the sun... So if the HDR merging algorithm was developed for jpegs, the data it expects is bound to that range, and out-of-bounds data is expected to behave unexpectedly, as it's basically undefined.

So you can see now that this is not that easy to fix, and furthermore, the question the software maker will probably ask themselves is whether this is really an issue for a considerable part of their userbase. Sadly, it probably is not. Since this problem is so niche, I wouldn't hold my breath waiting for a fix, but rather try to understand the issue and learn how to work around this limitation/change of scenario. It's certainly possible to merge HDR images using luminosity ranges; you just have to learn how, instead of resorting to the HDR stacking tool that seems to be designed for a different use case.
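For what it's worth, the core idea of merging by luminosity ranges is simple enough to sketch. This is not Affinity's algorithm, just a minimal numpy illustration; the thresholds and the Rec.709 luminance weights are my own arbitrary choices, and the three frames are assumed to be linear 32-bit and already matched to the same exposure level:

```python
import numpy as np

def luminance(img):
    # Rec.709 luminance weights applied to linear RGB (H x W x 3 array)
    return img[..., 0] * 0.2126 + img[..., 1] * 0.7152 + img[..., 2] * 0.0722

def merge_by_luminosity(mid, bright, dark):
    """Blend three exposure-matched linear frames using smooth luminosity
    masks built from the mid frame: take the short (dark) exposure where the
    mid frame blows out, the long (bright) exposure where it gets noisy."""
    y = luminance(mid)
    w_dark = np.clip((y - 0.7) / 0.3, 0.0, 1.0)[..., None]    # highlights
    w_bright = np.clip((0.1 - y) / 0.1, 0.0, 1.0)[..., None]  # deep shadows
    w_mid = 1.0 - w_dark - w_bright
    return w_mid * mid + w_dark * dark + w_bright * bright
```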
-
@MKBastler I can confirm that the problem seems to be the HDR Merging tool, as my manual exposure stack seems to hold up when processed through the Tonemap Persona. Here's the link to my version (the manual HDR Merge, no post production): https://drive.google.com/file/d/1ytZZDvyJdNJXNmbQ7KlGbFG5YXZNTPvn/view?usp=drive_link
-
Hi, I was curious about this problem, so I tried the stacking tool in Affinity, and it's true that it's not great for this kind of image. So I did it manually, and I think the result is pretty good. I don't have any complaints about raw development in Affinity; I think it's just fine. Here's the process:
- Developed the three exposures as 32-bit HDR images, default settings.
- Stacked them in layers and adjusted the top and bottom exposures to match the reference (around 1.6 stops, check the waveform).
- Added luminosity masks to both the upper and lower exposures to keep the clean, noise-free areas and hide the noisy ones.
- Used AgX (OCIO) for the 32-bit preview as a starting point.
- When the image looked right, recreated the display transform in the layer stack with OCIO adjustment layers.
- Then the fun part: some local adjustments, saturation, curves to make the foreground more visible, etc.
It was a pretty quick job, probably 15-20 minutes. It's not an automated solution, but I think it shows that there is no problem creating a decent-looking HDR in Affinity Photo. The moral of the story: DO NOT USE HDR tonemapping algorithms; use a transform with a good highlight rolloff that doesn't suffer channel clipping, and the problem is mostly gone. What do you think?
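As a side note on the "match the exposures" step: the adjustment is just a gain expressed in stops. A tiny sketch (the 1.6-stop figure is the one from my stack; the frame variable names are hypothetical):

```python
import numpy as np

def match_exposure(img, stops):
    """Scale a linear 32-bit frame by a number of stops so it lines up with
    the reference exposure (positive brightens, negative darkens)."""
    return img * np.float32(2.0 ** stops)

# e.g. bring the frame shot 1.6 stops darker up to the reference level, and
# the frame shot 1.6 stops brighter down, before applying luminosity masks:
# dark_matched = match_exposure(dark_frame, +1.6)
# bright_matched = match_exposure(bright_frame, -1.6)
```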
-
I probably fall into the "elderly yelling at the cloud" bunch, and I have to admit that I don't like those kinds of images at all. But for the sake of the argument, let's say you are trying to experiment with methods for producing a non-photorealistic image that compresses a high dynamic range within the displayable range of a regular screen, which is perfectly fine. In that perfectly valid case, you need to address some problems to produce a good-quality image. The first and most glaring one is channel clipping and posterization. It's challenging, but you can try these ideas:
- Avoid processing your images in 8-bit precision and integers. Move to 32-bit floating point early in your process. That is, make sure your stacked exposures produce a true high dynamic range image where RGB values can go way above 1.0.
- Experiment with the 32-bit preview. Find a good OCIO config so you can experiment with good transforms like AgX or even Blender Filmic (try Joe Genco's "PixelManager", it's great).
- Mind the difference between data and display. Your "scene-referred" image must stay HDR; try to play non-destructively with operations before and after the transform (in other words, put an OCIO colorspace adjustment layer in place and experiment with adjusting your image before and after that OCIO operation).
I'm pretty sure you can produce great images that way, avoiding the posterization and clipping problems you had in your samples. Stacking so many exposure steps should get rid of most of the noise and artifacts, so with a smooth, high dynamic range original to start from, your result should have smooth and clean gradations.
Oh, and btw, it's not clear to me whether you stacked display-referred images (like jpegs) or 32-bit developed raw images. If you used the former, keep in mind that doing the latter should give you better results with far fewer exposure steps, as developed raw images are already sort of HDR (depending on how many stops of DR your camera has, of course).
EDIT: I just posted an example of HDR editing following the steps suggested above, in the sun halo thread from the previous comment.
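To illustrate the first point (why 8-bit integer data can't be rescued later but 32-bit float can), a small numpy sketch with made-up values:

```python
import numpy as np

# A scene-referred highlight ramp that goes well above 1.0
ramp = np.linspace(0.5, 8.0, 6).astype(np.float32)

# Stored as 8-bit integers, everything above 1.0 collapses to 255: the
# gradation is gone and no later adjustment can bring it back
eight_bit = (np.clip(ramp, 0.0, 1.0) * 255).astype(np.uint8)

# Stored as 32-bit float, the values survive; pulling the exposure down two
# stops afterwards still reveals a smooth gradient in the highlights
recovered = ramp * (2.0 ** -2)

print(eight_bit)  # [127 255 255 255 255 255]
print(recovered)  # [0.125 0.5   0.875 1.25  1.625 2.   ]
```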
-
@phantascy and anyone else having trouble with their OCIO config: keep in mind that an OCIO config file has different roles and sections that different applications use in different ways. For instance, Blender exposes looks while most other software doesn't, so it's necessary to create new colorspace definitions that include the contrast looks as part of the transform chain. If you don't have any experience editing OCIO configs and/or don't feel inclined to learn how they work, I recommend using a well-maintained, generic configuration that suits your needs. For that purpose I think this one is probably the best option for general use: https://github.com/Joegenco/PixelManager

It works well with Affinity out of the box, but if you want access to all the variations of view transforms and looks available, you have to make a minor edit to the config file, commenting out all the inactive_colorspaces (or choosing what to disable according to your needs). The config documentation explains what to do; it's quite easy. This configuration is updated regularly and reflects the changes released in new versions of Blender, so it works for material made with vanilla Blender, but it also includes other neat options, like ACES among others.

Using this config and commenting out the inactive colorspaces, you get the same combinations in the OCIO layer effects that you get in the 32-bit preview, which lets you replicate the appearance of the view transform in the layer stack and export that appearance to display-referred formats like JPG, PNG, TIFF, etc. It's also a good idea to add this config to your system's environment variables, so every OCIO-enabled application picks it up and makes the same transforms available.
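If you ever want to check outside Affinity what a config exposes and what a given conversion does to a pixel, the OCIO Python bindings make that easy. A minimal sketch, assuming PyOpenColorIO is installed; the config path and the two colorspace names are placeholders, use whatever your config actually defines:

```python
import PyOpenColorIO as OCIO

# Load the config explicitly, or set the OCIO environment variable and use
# OCIO.Config.CreateFromEnv() so every OCIO-aware app picks up the same file.
config = OCIO.Config.CreateFromFile("/path/to/PixelManager/config.ocio")

# List the colorspaces the config exposes (the ones you'd see in Affinity's
# 32-bit preview and OCIO adjustment layers once inactive ones are enabled)
for name in config.getColorSpaceNames():
    print(name)

# Build a processor from scene-linear data to a display-ready colorspace,
# roughly what an OCIO adjustment layer does in the layer stack
processor = config.getProcessor("Linear Rec.709", "sRGB")  # placeholder names
cpu = processor.getDefaultCPUProcessor()
print(cpu.applyRGB([0.18, 0.18, 0.18]))  # middle gray through the transform
```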
-
Hi, sorry for the delayed response. I tried to upload the example file to your Dropbox but the link no longer works. Here's a minimal file I could produce showing the bug: https://drive.google.com/file/d/1K7estgDhqX3YlipSlOQJQrlAtHNvOHLa/view?usp=drivesdk (Please note that the linked stock image is not copyright free. I only included it to illustrate the aforementioned behavior and it shouldn't be used for other purposes.)

About the bug: I start from an image downloaded from a stock library, usually in jpeg format. I link the original jpeg in a blank Designer document and then move to Photo, in order to produce a lightweight, non-destructive document that only stores the transparency mask and edits. It's much more convenient and efficient than having a huge flattened PNG that mostly duplicates the pixels of the original jpeg, just stored with non-destructive compression. I then link that .afdesign document as a design asset in my work documents. Everything works great, performance is fine, documents are lightweight, and I can use the linked assets several times without significant impact on performance and file size.

The only problem is that the low-resolution proxy used when the imported asset is scaled down or zoomed out doesn't seem to be refreshed to the full-resolution asset on export. It seems to depend on the scale or zoom level, so when the low-resolution proxy is used for the display, it is also used for exporting (check the export included in the zip file). Digging deeper into the file and trying to figure out what's going on, it seems that the original jpeg isn't loaded in the linked afdesign file when it should be, so the low-res proxy is passed instead. The export process should force a reload and replace the proxy if possible.
-
Oh god yes. This is driving me nuts. I have to move several designs from a 300 dpi original to a 1200 dpi template and 2.5 is RESIZING every text element to 4x the original point size. This is a horrible bug and also a regression, as 2.4 worked fine. Font size shouldn't be tied to the resolution of the document. Typographic points are NOT pixels. If a point is 1/72nd of an inch, it should be 1/72nd of an inch no matter what resolution is set in the document; it's a physical size. Please take a look at this problem asap. It's a serious bug, and it looks like there's no workaround other than manually changing the font size, or changing the resolution of one of the documents (which also means you have to rescale everything in Designer, as there is no way to change the resolution while maintaining the original physical size).
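The arithmetic behind the 4x jump, just to spell it out (a tiny sketch; the 12 pt figure is only an example):

```python
def points_to_pixels(pt, dpi):
    """A typographic point is 1/72 of an inch, regardless of document dpi."""
    return pt * dpi / 72.0

# The physical size stays the same; only the pixel count changes with dpi:
print(points_to_pixels(12, 300))   # 50.0 px at 300 dpi
print(points_to_pixels(12, 1200))  # 200.0 px at 1200 dpi

# The bug: keeping the pixel count fixed when moving from a 300 dpi document
# to a 1200 dpi one forces the point size up by 1200 / 300 = 4x (12 pt -> 48 pt).
```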