
About Jeroen

  1. Only now saw this reply... thank you for taking notice. I just made a reply to >|<, and you might refer to that. The following is a brief excerpt. In that post, I take the position that (a) a gentle renaming would help, (b) an information panel would be excellent, and (c) clamping should perhaps better be made optional, rather than being done automatically. Come to think of it, (b) in itself might obviate the need for (a). Speaking for myself, being primarily a landscape photographer, I am not normally into special effects, but when I experimented with 32 bit overlay blending I achieved some spectacular effects which I liked a lot. I would not block that road forever. Another issue is that clamping destroys a lot of information. The reason for 32 bit is precisely to have that information available when and where you want it. A pity to lose it. One idea might be to optionally "unclamp" after your operation, let's say blending. All pixels that are outside the visible range after blending would then revert to their original value, at least if that value also was out of bounds. The ramifications of that can be difficult to grasp intuitively, but in my view, so are many mathematical operations anyway. I can think of variations on this theme. Thanks again, and congratulations on all the fine work you are doing.
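The "unclamp after blending" idea above can be made concrete with a small numpy sketch. This is purely hypothetical, not an existing Affinity Photo feature; `overlay` and `blend_with_unclamp` are names invented here, and the behaviour follows the description in the post: pixels that are outside the visible range both before and after blending revert to their original value.

```python
import numpy as np

def overlay(a, b):
    # Overlay blend per the Wikipedia definition, applied per channel.
    # a is the base layer value, b the top layer value.
    return np.where(a < 0.5, 2 * a * b, 1 - 2 * (1 - a) * (1 - b))

def blend_with_unclamp(base, top):
    # Hypothetical "unclamp" variant: blend the unbounded values, then
    # let any pixel that was out of the visible range both before and
    # after blending revert to its original (out-of-bounds) value.
    blended = overlay(base, top)
    out_before = (base < 0.0) | (base > 1.0)
    out_after = (blended < 0.0) | (blended > 1.0)
    return np.where(out_before & out_after, base, blended)
```

With an over-white pixel of 1.73 blended onto itself, plain `overlay` gives roughly -0.066 (displayed black), while `blend_with_unclamp` hands back the original 1.73 (displayed white); in-range pixels are blended normally.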
  2. I now agree. With the help of your tip on viewing floating point values I gained a better understanding of 32 bit unbounded editing, and this is indeed how it should be done. Understood. So, the value range in 32 bit is unbounded, and the software displays from that range whatever lies between 0.0 and 1.0. The rest is visually mapped either to black (0,0,0) or to white (255,255,255). But internally everything retains its proper value, so that when you shift the values, as you can do in the preview panel, they come to life again if they then fall within the displayable range. I hope I have this right now. Also, "invisible" values still influence all kinds of mathematical operations, such as layer blending. This can make for surprising but very interesting effects. So, 32 bit editing is actually a very different beast from 16 bit editing (which, by the way, according to James Ritson (https://www.youtube.com/watch?v=UOM_MmM4rvk, 0:22), is based on integer representation of values, not floating point. Perhaps intermediate calculations are done in floating point to avoid cascading rounding errors, I don't know). Naively, one might think that 32 bit is about better precision, which might be useful in some cases. But much more importantly, unlike 16 bit, it keeps lots of information that is invisible but still, in principle, could influence operations you perform on what is visible. Weird but interesting. Having said that, I still find it very misleading that Serif has chosen the same name for operations in 16 bit and 32 bit, respectively, that actually are very different in their visual outcome. I can see that one can take the position "the formulas are equal, therefore we call it the same". But is it unreasonable that a user, seeing the same name of the operation in the virtually indistinguishable 16 bit and 32 bit UIs, expects the results of applying it also to be close to one another?
Especially where this operation has a generally (also outside of AP) recognised purpose, namely, to achieve a certain visual effect (a contrast enhancement in the case of overlay blending). I would not want to argue that, for example, overlay blending should clamp the values beforehand in every situation. Keeping them free can create very interesting effects, and why forbid those? But (a) it can confuse the user, who might expect an effect he is familiar with, and (b) it forces a workaround, such as your extra white layer, if the user wants to achieve that effect in the first place. If I could make a suggestion, it would be along these lines:
- Amend the name of the 32 bit operation to, for example, "(un)bounded overlay" instead of just "overlay". This communicates a gentle warning to the user. An optional information panel to explain the situation might also be nice.
- Create a setting on the operation to choose "clamp to visible", as per your suggestion. There could be a global default setting, but it should in any case be settable per individual operation. An extra white clamping layer would then not be necessary.
Clamping may have undesirable side effects: the invisible information is gone forever. (Actually, not necessarily. In theory, after calculation of the new visible range using the clamped values, the invisibles could be "unclamped" again. That could have interesting but difficult to grasp ramifications, I think.) There could be ways to deal with it better. One of the things I wished for in 32 bit, when I was experimenting, is a filter to apply a value shift to a layer. At the moment, I can use the preview to see the effect of a value shift, and thus change what is displayed, but that seems only to impact the visual and not to have a lasting effect. I would like to be able to apply it as a filter in the layer stack. Thinking further, there could even be a curve to map values to values.
Whatever comes out between 0.0 and 1.0 will be displayed with that value. Outside of that, it will be displayed either black or white. Then your white "clamp" layer could be replaced by a "clamp to visible" filter. Looks interesting to me, but needs further thought. BTW, you do not know of such a value shift filter, do you? I could not find it. Thanks for listening to these ramblings, and I would be interested in your views.
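The display mapping described above, together with the wished-for value shift, can be sketched in a few lines. This is one reading of the observed behaviour, not documented Serif internals, and `value_shift` is the hypothetical filter asked about in the post, not an existing one.

```python
import numpy as np

def to_display(values):
    # Presumed display mapping for unbounded 32 bit data: values in
    # [0.0, 1.0] are shown as-is; anything below maps to black and
    # anything above to white. The stored values are left untouched.
    return np.clip(values, 0.0, 1.0)

def value_shift(layer, offset):
    # The hypothetical "value shift filter": a lasting, uniform shift
    # of the stored floating point values, so over-range data can be
    # moved back into the displayable window.
    return layer + offset
```

An over-white value of 1.73 displays as 1.0 (white); after `value_shift(layer, -0.8)` it becomes 0.93 and is displayable in its own right.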
  3. Thanks, that clarifies a lot. I did not realise the answer was in your file all the time. Clever to use a regular white filled mask with Darken blend mode to clamp to regular white. And it confirms that, indeed, 1.6.7 has exactly the same problem in 32 bit development. But I think you apply the formula too hastily. The formula expects all values to lie between 0 and 1 to start with. From the same Wikipedia page: "In the formulas shown on this page, values go from 0.0 (black) to 1.0 (white)". This means that all values should first be normalised to lie between 0.0 and 1.0 before the formula may be applied. A value like 1.73 should not be used. No negative values can then ever be generated, and, in fact, white remains white: f(1.0, 1.0) = 1.0 - 2(1.0-1.0)(1.0-1.0) = 1.0. Now, I have no idea how Serif maps the internal 32 bit floating point pixel representations to normalised values between 0.0 and 1.0. Nor do I know how the demosaicing algorithm assigns those floating point values to pixels in the first place. I assume all raw development programs have their own way of doing that, with slightly different results. A trade secret, if you will, and very camera dependent. I am doing some experimentation to get a grip on what is really going on behind the scenes. For that I need some idea of what the floating point representation of a pixel is. You might help me with that: where do you see that my white has RGB value 1.73,1.73,1.73? That must be related to its floating point value. In the Info panel I see only the regular 8 bit RGB values, which are 255,255,255. And those values do not change after application of the fill layer. So that tells me nothing. But this must be too coarse a way of looking at it, since the floating point representations must be different or your approach would not work. So, where do you find that my white RGB pixels are considered 1.73,1.73,1.73 over-white, instead of 1.0,1.0,1.0 regular white? Thanks for your help.
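The normalisation argument is easy to check numerically. A minimal sketch using the Wikipedia formula quoted above: within the stated [0.0, 1.0] domain white maps to white, while the out-of-domain 1.73 drives the formula negative.

```python
def overlay(a, b):
    # Overlay blend per the Wikipedia definition; a is the base value,
    # b the top value, both nominally in [0.0, 1.0].
    if a < 0.5:
        return 2 * a * b
    return 1 - 2 * (1 - a) * (1 - b)

# Within the formula's stated domain, white stays white:
white = overlay(1.0, 1.0)       # 1 - 2*(0)*(0) = 1.0

# Fed the out-of-domain over-white 1.73, the same branch goes negative,
# which a later clamp to the visible range turns into black:
negative = overlay(1.73, 1.73)  # 1 - 2*(-0.73)*(-0.73), about -0.066
```

So blending a layer with itself only preserves white if the inputs were normalised first, which is exactly the point made in the post.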
  4. For the record: the problem is still present with the AP 1.7 beta on macOS Mojave 10.14.4.
  5. Are you sure about 1.6.7? I don't see it with my file on 1.6.7, 32 bit (macOS). All concerned pixels are white, as they should be. Before I log a bug with 1.6 based on your statement, could you provide an image where you see the problem with 1.6.7?
  6. My apologies if I angered you. I myself felt annoyed that you seemed to keep denying my position that there is a real problem in the software, without addressing my arguments. This might have coloured my replies in turn. Sorry again if that angered you. Perhaps it was all miscommunication. To be clear, my posting was not so much to ask for a way around the practical problem of how to deal with a faulty implementation, but to alert Serif to a possible issue with their beta software. After all, that's what betas are for. That I now, thanks to you, know that the same issue exists with 1.6 makes it all the more relevant. You suggested a practical way around the problem for now, and thank you for that. For myself, the simplest solution is to stick to 16 bit until the problem is solved. I can live without 32 bit. Thanks again, and no hard feelings I hope.
  7. A workaround, to me, is not a solution. There is, I think, nothing I can further say that might convince you. So unless something new comes up, I will close this discussion, file the bug, and move on. Thank you for your participation.
  8. Let's hope it's that simple. Anyway, I will log it and we'll see what happens.
  9. Sorry >|<, but Casterle is right. You keep saying that the behaviour is a consequence of the way the function is implemented. I get that. But the point is that this implementation does not result in a proper overlay blend as it is generally defined (and as it is realised in the 16 bit implementation). The only conclusion from this is that the 32 bit implementation is faulty. There is no other way of looking at it. To be very specific: if you look at the formula I quoted, you find that if you overlay-blend an image with itself, all white pixels come out white again. In my 32 bit example, they come out black. That is wrong, period. Since you tell me that this also occurs in 1.6 with 32 bit, and thank you for that, apparently this is a bug in 1.6 as well. I will log it as such in the proper forum.
  10. You are right, my formulation was imprecise, and you are correct in your response. But I stand by my opinion: overlay blending is done incorrectly in the 32 bit Photo persona. Let me try again. Suppose I have a raw file and develop it twice: once to a 16 bit image and once to a 32 bit image. These images will be very close in appearance. From you, I understand that in the 32 bit Photo persona the values are internally represented as floating point, not 32 bit integers. Fine. With 16 bit, they are certainly integers. Now suppose I perform the same user action on both, similar, images, in my case applying overlay blending. This rests on a well defined mathematical formula, which is independent of precision or of integer vs. float considerations. I therefore expect that the results will look similar as well. But that is apparently not the case. Which is what I consider a bug. As argued below, the 32 bit implementation simply does not achieve its goal: overlay blending. It is in error. By extension, I would like to formulate the following design principle: in all cases the user should be aware that choosing between 16 bit and 32 bit in the Develop Assistant might impact precision in the final image. But s/he should also be confident that, other than that, any operations on the image will have similar results. This is an opinion, but a very reasonable one in my view. I would be interested to hear arguments to the contrary. In any case, the current implementation violates this design principle. === The following elaborates my statement that "algorithms used in 32 bits should be a close approximation to those used in 16 bits, only more precise". Which does not really identify the problem, as you point out. But that does not mean that all is well. Let me summarise conceptually the steps taken during editing with overlay blending, as I understand them.
- In 16 bit, blending translates into a transformation from an array of 16 bit integer values to another array of 16 bit integer values. These arrays in turn are rendered on a screen or on paper. This will normally go via an intermediate step where an 8 bit RGB array is generated.
- In 32 bit it is different. Values are represented as floating point, not as integers. Very well. It gives much more precision, especially with many operations in a chain. At the point of rendering, there will be a conversion to an integer array. Whether these are internally 32 bit or 16 bit I don't know, but that is immaterial to the discussion. In either case, in the end 8 bit RGB is again generated for display or printing. For precision, calculations take place as long as possible in the floating point realm.
Now to the problem at hand. It is the task of the implementation to realise overlay blend mode. There is a mathematical formula for that, taken from Wikipedia (https://en.wikipedia.org/wiki/Blend_modes#Overlay):
f(a, b) = 2ab, if a < 0.5
f(a, b) = 1 - 2(1-a)(1-b), otherwise
where a is the base layer value and b the top layer value. These formulae are independent of representation as integer or floating point values, or of bit size. Any real implementation will be an approximation. A good implementation makes the approximation mathematically as close as possible given the circumstances. If, therefore, the 32 bit implementation of overlay blending gives very different results from the 16 bit implementation, one of the two must be at fault. And it is obvious in this case that it is the 32 bit implementation. Technically, the problem appears at the point where conversion to integer takes place. Either that conversion must take into account what floating point has done to the values, which is significantly different from what integer arithmetic does, or the implementation of the floating point overlay transformation must be adapted to the integer conversion algorithm. Or both.
Bottom line: overlay blending is done wrong in the 32 bit Photo persona. This is not necessarily due to either the way floating point calculations are done for overlay blending, or to the way floating point is converted to integer, but to the interplay of the two.
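The interplay described above can be contrasted in a few lines: blending already-bounded values versus blending unbounded values and clamping only at integer conversion. The clamp-then-scale conversion in `to_8bit` is an assumption about how display integers are produced, not a documented detail of Affinity Photo.

```python
import numpy as np

def overlay(a, b):
    # Overlay blend per the Wikipedia formula, per channel.
    return np.where(a < 0.5, 2 * a * b, 1 - 2 * (1 - a) * (1 - b))

def to_8bit(x):
    # Assumed conversion to display integers: clamp to [0, 1], then
    # scale. The clamp is where negative blend results become black.
    return np.round(np.clip(x, 0.0, 1.0) * 255).astype(np.uint8)

pixel = np.array([1.73])  # over-white pixel from raw development

# 16-bit-style pipeline: values are bounded before blending,
# so white blended onto itself stays white.
bounded = to_8bit(overlay(np.clip(pixel, 0, 1), np.clip(pixel, 0, 1)))

# 32 bit unbounded pipeline: blend first, clamp only at conversion,
# so the negative result becomes black.
unbounded = to_8bit(overlay(pixel, pixel))
```

Same formula, same pixel; only the order of clamping differs, and one path yields white where the other yields black.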
  11. I understand this explanation in the sense that the mathematical formulae for overlay blend mode, when applied to floating point values, can yield negative values that then become zeros when converted to integer values, thus leading to black pixels. In that sense, all is well. Mathematically, that is. But I find it surprising, to say the least, that the apparent behaviour of an action like applying a layer blend mode could be radically different depending on whether I work in 16 bit or in 32 bit. Of course there would be differences, but I would expect those to be related to precision (banding, etc.), not white pixels suddenly becoming black. Mathematically speaking, the algorithms used in 32 bits should be a close approximation to those used in 16 bits, only more precise. And the conversion from floating point to integer should also be as perceptually faithful as possible. So, regardless of the explanation of how it happens, unless someone convinces me otherwise, I consider this a bug.
  12. The attached beta project has one pixel layer with overexposed areas. These show as white. If I duplicate the layer and set overlay blend mode, the overexposed areas are shown as black. The Info panel shows them as 0,0,0,0. Upon export to jpg, these areas remain black. The same does not happen with 1.6. overexposed.afphoto
  13. Jeroen

    Layer mask issue - suspected bug

    Neither can I! It must have disappeared just between 122 and 123. All is fine now. Thanks!