Everything posted by Jeroen

  1. Hi Justin, Interesting. I can see that NaN values might certainly cause erratic behaviour. The question, of course, is where they come from. You are correct that I developed from a raw file, and I did it in AP. I do not keep records of develop adjustments. Normally I tend to keep development settings simple and leave most of the work to the Photo persona, but I do not remember exactly what I did here. I can provide the raw file, though, if that might be of use. It is a Canon .CR2 file. Let me know if you would like me to. Best regards, Jeroen.
  2. The attached project shows artifacts appearing when a live denoise filter is applied, even after resetting the filter: With the denoise filter deactivated the artifacts disappear: Exporting the project to jpg with the filter active also shows artifacts, although different ones: Exporting with the filter deactivated gives a clean picture. The project can be closed and reopened, and the behaviour remains. Jeroen. IMG_2333_Golf.afphoto
  3. See attached image. The HSL adjustments and WB adjustment suddenly stopped working. The Levels adjustment still worked. I made a recording of the "HSL lucht" adjustment not reacting, also attached. Maybe it is relevant that I have layer masks on the HSL and WB adjustments, and not on the Levels adjustment. After saving and reloading the project, the problem had gone away. I therefore suppose the project file would be of little value in diagnosing the issue, but I kept it safe and can upload it if requested. For the record, I had Metal compute activated and the project is in 32 bit HDR. Schermopname 2019-07-12 om 11.08.02.mov
  4. I could not replicate it, but I recall having noticed similar behaviour at other times. If and when it reoccurs I will report again, with as much information as I can.
  5. I uploaded "About this Mac" to the Dropbox link you provided. Please let me know if I can be of further help.
  6. Uploaded. Please let me know if I can be of further help.
  7. Uploaded. Please let me know if I can be of further help.
  8. Thanks for the encouragement. I now have a workaround. Yet, I may misunderstand you here... but it seems to me there is definitely something wrong at the moment. I have a 2019 Mac with all software up to date. Surely you are not suggesting that Metal support is fine as it is and needs no fixing? Then why is it not working as expected?
  9. Disabling Metal compute does indeed fix the issue. I have uploaded the file to the link you sent me.
  10. Attached is a screenshot of AP 1.7 with a picture loaded. It displays with horizontal stripes: I am not aware of having created these, nor do they show when I export the file to jpeg: I have not been able to reproduce the issue, but I have kept the history saved with the file. One unusual thing, for me, is that I applied tone mapping. The problem appeared after I cropped. Affinity Photo 1.7.0, iMac (Retina 5K, 27-inch, 2019), Radeon Pro 580X 8 GB, macOS 10.14.5 Mojave
  11. Quite right. I now also understand why the lines appeared only after cropping. This has to do with the display threshold value for the baselines. Issue completely solved. Thanks.
  12. I changed the zoom level up and down. The striping changes with it. Attached is a picture at zoom level 4. If I zoom further out, it does not look like pixels: so it is not in the image, which explains why it is not in the jpeg. It must be a sort of display issue, but one that scales with the zoom level. The lines are very neat, and their thickness remains constant.
  13. Hi James, Thanks for your ongoing educational efforts! Your videos are an indispensable resource for me. I do love the format and quality of the new videos. Will all the old videos eventually be adapted? I have a particular wish to review a video explaining how to select the sky from a foreground with a tree, by (if I remember correctly) temporarily converting to B&W and enhancing contrast, then selecting by tonal value. I hope this one will be available soon, since I have a question on it. Alternatively, same question as above: are the old ones still available? Cheers, Jeroen.
  14. I am trying to understand what is going on under the hood when I invert a selection. Situation: I have an image with a selection. The selection consists of two parts, one (at the left-hand side) made with the freehand selection tool, the other (to the right) with the elliptical marquee tool. The freehand part has a blurry boundary, the elliptical part has a sharp boundary. I now want to create two layer masks, one from the selection and the other from whatever is outside of it. I use Invert Selection for that. The idea is to stack two layers with the original image, with one of the masks applied to each of them. Hopefully, each mask will let through just enough so that together they blend into the original image. I expect normal blending mode should do the trick. My way of working is the following:
- The original selection I save into a channel that I call "base selection".
- I then invert the selection, and save the result in a channel called "base selection inverted".
- I create a pixel layer and fill it with a pink colour.
- I duplicate that layer.
- For each of the two layers I create a mask. The "base selection" channel I load into the alpha of the first mask, the "base selection inverted" channel into the alpha of the second mask.
- I stack the two layers with normal blending mode.
- I activate both layers and their masks.
- Finally, to see clearly what is happening, I create an extra blue pixel layer that I use as background.
This is the resulting layer panel: I would now expect, perhaps naively, to see the original pink image back, since the masks are complementary and each lets its own part of the original through. Instead, I see the following: whereas the elliptical selection to the right is invisible, the blurry freehand selection to the left lets part of the blue background shine through. I now have the following questions:
- How exactly is the selection converted into a mask through the channel? As I understand it, a mask is a mapping that tells what opacity to assign to each pixel of the (pixel) layer it is attached to. White means 100% opacity, black is fully transparent. Fine. For black and white positions in the mask, this works. But if there is a position in the mask with a "grey tonal value", an opacity between 0 and 1 is assigned. What is the formula for that? How is this grey tonal value determined, and once you have it, how does it translate to an opacity number?
- How do the opacity numbers relate between a mask derived from a selection and a mask derived from its inverse? Are they complementary, i.e., do they add up to 1 in every case?
- From what I am seeing, it looks like inverting a selection does not necessarily lead to masks that are complementary in the sense that they can work together to restore a full image. They may leave spots with less than full opacity if selections have blurry edges. This leads to the following question: is there a way to achieve what I am aiming for? Or am I missing the obvious here?
Thanks to everyone who read through all this. I do hope you can shed some light. For reference, I attach the project (done with AP 1.7.0.128). Jeroen. selection inversion test.afphoto
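To make the first question concrete, here is a minimal numeric sketch of the standard Porter-Duff "over" rule, which is my assumption for what Normal blending does with straight (non-premultiplied) alpha; AP's exact internals are not public. My further assumption is that the grey tonal value of the mask maps linearly to alpha, so a 50% grey mask pixel simply becomes opacity 0.5. The sketch reproduces exactly what I see: wherever the feathered mask has a value strictly between 0 and 1, the blue background leaks through the stack of the two complementary masked layers.

```python
# Sketch, assuming Normal blending is Porter-Duff "over" on straight alpha.
import numpy as np

pink = np.array([1.0, 0.0, 0.5])   # colour of the two masked layers
blue = np.array([0.0, 0.0, 1.0])   # background colour

def over(fg_rgb, fg_a, bg_rgb, bg_a):
    """Composite foreground over background, straight (non-premultiplied) alpha."""
    out_a = fg_a + bg_a * (1.0 - fg_a)
    out_rgb = (fg_rgb * fg_a + bg_rgb * bg_a * (1.0 - fg_a)) / out_a
    return out_rgb, out_a

for a in (0.0, 0.25, 0.5, 0.75, 1.0):           # grey mask value at one pixel
    rgb, al = over(pink, 1.0 - a, blue, 1.0)    # pink layer, inverted mask, over blue
    rgb, al = over(pink, a, rgb, al)            # pink layer, original mask, over that
    print(f"mask value {a:4.2f} -> pixel {rgb.round(3)}")

# The two pink layers together cover a + (1-a)^2, which is less than 1 for any
# 0 < a < 1 (0.75 at a = 0.5), so the blue background shines through there.
```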
  15. Thanks for pointing this out. It is simpler than what I did. Using channels was actually an attempt to keep the selections around, in case I need to revisit them later. But that is not needed for this demonstration.
  16. Thanks for pointing this out. It explains why things do not work out as I wished. Next question: how to achieve that goal? You seem to be explaining how normal blend mode is supposed to work on the alpha channel, and to be arguing that there is no bug in the way it is implemented in AP. Fine. I accept that. But my point is not to hunt for a supposed bug, but to understand what goes wrong and, from that, to achieve my goal. As I found out experimenting, and as you explain in terms of mathematical operations, my original approach does not work. Applying normal blend mode to the image masked by a selection, and the same image masked by the inverse selection, does not recreate the original image if there are partially opaque pixels in the selection. That is not to say that normal blending is implemented incorrectly ("bug"), only that this particular blending mode does not achieve what I am looking for. And that is, in my view (but also in that of others, if I understand the discussion correctly), a natural thing to ask for in the context of selections representing parts of images.
Now for a possible solution to the problem: a selection and its inverse from the same image, to be combined into the original image. In fact, for presentation purposes, I created this simplified version of a more general situation: an image with a number of selections, derived from the full image by repeatedly creating a selection, setting it apart, and continuing in the same fashion with its inverse. Eventually the whole image is "covered" by a number of selections that are mutually exclusive, except that they may overlap on their boundaries, where the opacity of each is below 1. At all places, however, all opacities together add up to 1. So what would composing the two masked images need to come down to in the simple case?
- Colour: unchanged from the underlying image.
- Alpha: the sum of the alphas of the parts. In the two-selection case, always 1.
So I need a blend mode where alpha comes out as the sum of the alphas of the selections being combined. On the colour channels one can have any reasonable blend mode, as long as it leaves identical pixels alone. A simple example would be a blend mode that acts as Normal on the colour channels, but as Add on the alpha channel. I tested my small example by taking the result, with two selections with a partially transparent boundary, and setting alpha to 1 everywhere (using Filter/Apply Image on the combined selections layer and setting the formula DA=1). The boundary, and with it the problem, was gone. So, in summary, there certainly seems to be a working solution to the simplified problem, and it is not difficult to achieve by hand. I am now looking for an elegant way to set alpha to 1 everywhere without touching colour information, and to generalize to more than two selections, which is my real use case.
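For completeness, here is a small sketch of the recombination I am after, under my assumption of straight alpha: accumulate each part premultiplied by its own alpha and let the alphas add up, instead of composing them with Normal/"over". Because the parts partition the image (their alphas sum to 1 everywhere), the original comes back exactly. The names are mine, not anything in AP.

```python
# Sketch: "Add on alpha, keep colour" recombination of complementary selections.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((4, 4, 3))             # stand-in for the original RGB image
a = rng.random((4, 4))                    # feathered selection, values in [0, 1]
parts = [a, 1.0 - a]                      # the selection and its inverse

acc_rgb = np.zeros_like(image)            # premultiplied colour accumulator
acc_a = np.zeros(a.shape)                 # alphas are summed, not composed
for part_a in parts:
    acc_rgb += image * part_a[..., None]  # premultiply each part by its alpha
    acc_a += part_a

recombined = acc_rgb / acc_a[..., None]   # un-premultiply (acc_a is 1 everywhere here)
print(np.allclose(recombined, image))     # True: the original image is restored
```

This also generalizes directly to more than two parts: as long as the part alphas sum to 1 at every pixel, the loop above reassembles the original image.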
  17. Sorry, I cannot replicate this in 1.6. The option to load a channel into a mask is not available there, and I do not know of any other way.
  18. Thanks for passing this to the dev team, and for coming back to me. For clarity, I was not saying it is a bug. I was asking how it worked. And I am still curious. Maybe once I know, I can use it to my advantage. And I would like to understand what the program is doing for me anyway. See the more detailed questions in my posting. But now that you mention it, I do wish it were different. I wonder if there is a reason to have it this way? It seems so logical that inverting a selection would also "invert" the mask, and I cannot think of a compelling reason to do it like this. I would like to know what I am missing here, if anything. Also, I am wondering if there is another way to achieve what I was trying? It seems like a natural thing to wish for. Again, I would like to understand where I may be wrong. Thanks, Jeroen.
  19. The attached beta project has one pixel layer with overexposed areas. These show as white. If I duplicate the layer and set overlay blend mode, the overexposed areas are shown as black. The info panel shows them as 0,0,0,0. Upon export to jpg, these areas remain black. The same does not happen in 1.6. overexposed.afphoto
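If AP applies the textbook overlay formula to unclamped 32 bit values, this behaviour would follow directly from the arithmetic. That is an assumption on my part, since the implementation is not public, but it matches what I see: duplicating a layer onto itself makes both blend inputs equal, and for values above roughly 1.7 the formula goes negative, which displays (and exports) as black.

```python
# Sketch: textbook overlay blend, assuming 32 bit values are not clamped first.
def overlay(bottom, top):
    if bottom < 0.5:
        return 2.0 * bottom * top
    return 1.0 - 2.0 * (1.0 - bottom) * (1.0 - top)

for v in (0.5, 1.0, 1.5, 2.0, 4.0):     # pixel value; > 1.0 means overexposed HDR data
    clamped = overlay(min(v, 1.0), min(v, 1.0))   # 16-bit-style behaviour
    unbounded = overlay(v, v)                      # 32-bit unbounded behaviour
    print(f"v={v:3.1f}  clamped: {clamped:4.2f}   unbounded: {unbounded:6.2f}")

# v = 2.0 gives 1 - 2*(-1)*(-1) = -1.0: negative, so it displays as black,
# while the clamped (16 bit) version stays at 1.0 (white).
```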
  20. Only now saw this reply... thank you for taking notice. I just made a reply to >|<, and you might refer to that. Following is a brief excerpt. In that post, I take the position that (a) a gentle renaming would help, (b) an information panel would be excellent, and (c) clamping should perhaps be made optional, rather than done automatically. Come to think of it, (b) in itself might obviate the need for (a). Speaking for myself, being primarily a landscape photographer, I am not normally into special effects, but when I experimented with 32 bit overlay blending I achieved some spectacular effects which I liked a lot. I would not block that road forever. Another issue is that clamping destroys a lot of information. The whole reason for 32 bit is to have that information available when and where you want it. A pity to lose it. One idea might be to optionally "unclamp" after your operation, let's say blending: all pixels that are outside the visible range after blending would then revert to their original value, at least if that value also was out of bounds (a rough sketch follows below). The ramifications of that can be difficult to grasp intuitively, but in my view, so are many mathematical operations anyway. I can think of variations on this theme. Thanks again, and congratulations on all the fine work you are doing.
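Here is a rough sketch of that "unclamp after the operation" idea, purely speculative and using my own names: run the unbounded operation, then let any pixel that ends up outside the visible range fall back to its original value, provided the original was itself out of bounds.

```python
# Speculative sketch of the optional "unclamp after blending" idea above.
import numpy as np

def blend_then_unclamp(src, blend_fn):
    result = blend_fn(src)                        # unbounded 32 bit operation
    out_after = (result < 0.0) | (result > 1.0)   # outside the visible range after...
    out_before = (src < 0.0) | (src > 1.0)        # ...and already out of bounds before
    return np.where(out_after & out_before, src, result)

hdr = np.array([0.3, 1.0, 2.0])                   # three sample pixel values
print(blend_then_unclamp(hdr, lambda x: x * 2.0)) # doubling as a stand-in operation
# -> [0.6 2.  2. ]: the last pixel reverts to its original out-of-range value
```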
  21. I now agree. With the help of your tip on viewing floating point values I gained a better understanding of 32 bit unbounded editing, and this is indeed how it should be done. Understood. So, the value range in 32 bit is unbounded, and the software displays from that range whatever lies between 0.0 and 1.0. The rest is visually mapped either to black (0,0,0) or to white (255,255,255). But internally everything retains its proper value, so that when you shift the values, as you can do in the preview panel, they come to life again if they then fall within the displayable range. I hope I have this right now. Also, "invisible" values still influence all kinds of mathematical operations, such as layer blending. This can make for surprising, but very interesting, effects.
So, 32 bit editing is actually a very different beast from 16 bit editing (which, by the way, according to James Ritson (https://www.youtube.com/watch?v=UOM_MmM4rvk, 0:22), is based on an integer representation of values, not floating point. Perhaps intermediate calculations are done in floating point to avoid cascading rounding errors, I don't know). Naively, one might think that 32 bit is about better precision, which might be useful in some cases. But much more importantly, unlike 16 bit it keeps lots of information that is invisible but still, in principle, can influence operations you perform on what is visible. Weird but interesting.
Having said that, I still find it very misleading that Serif has chosen the same name for operations in 16 bit and 32 bit, respectively, that are actually very different in their visual outcome. I can see that one can take the position "the formulas are equal, therefore we call it the same". But is it unreasonable that a user, seeing the same name of the operation in the 16 bit and 32 bit UIs, virtually indistinguishable, expects the results of applying it also to be close to one another? Especially where this operation has a generally (also outside of AP) recognised purpose, namely to achieve a certain visual effect (a contrast enhancement in the case of overlay blending). I would not want to argue that, for example, overlay blending should clamp the values beforehand in every situation. Keeping them free can create very interesting effects, and why forbid those? But (a) it can confuse the user, who might expect an effect he is familiar with, and (b) it forces a workaround such as your extra white layer if the user wants to achieve that effect in the first place. If I could make a suggestion, it would be along these lines:
- Amend the name of the 32 bit operation to, for example, "(un)bounded overlay" instead of just "overlay". This communicates a gentle warning to the user. An optional information panel to explain the situation might also be nice.
- Create a setting on the operation to choose "clamp to visible", as per your suggestion. There could be a global default setting, but it should in any case be settable per individual operation. An extra white clamping layer would then not be necessary.
Clamping may have undesirable side effects. The invisible information is gone forever. (Actually, not necessarily. In theory, after calculation of the new visible range using the clamped values, the invisibles could be "unclamped" again. That could have interesting but difficult-to-grasp ramifications, I think.) There could be ways to deal with it better. One of the things I wished for in 32 bit, when I was experimenting, is a filter to apply a value shift to a layer. At the moment, I can use the preview to see the effect of a value shift, and thus change what is displayed, but that seems only to impact the visual and not to have a lasting effect. I would like to be able to apply it as a filter in the layer stack (a small sketch of what I mean follows at the end of this post). Thinking further, there could even be a curve to map values to values. Whatever comes out between 0.0 and 1.0 will be displayed with that value. Outside of that, it will be displayed either black or white. Then your white "clamp" layer could be replaced by a "clamp to visible" filter. Looks interesting to me, but needs further thought. BTW, you do not know of such a value shift filter, do you? I could not find it. Thanks for listening to these ramblings, and I would be interested in your views.
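In case it helps the discussion, this is roughly what I mean by a value shift filter and a "clamp to visible" filter. Both names and functions are hypothetical, nothing of the sort exists in AP as far as I know; the shift is exposure-style, in stops.

```python
# Sketch of the two hypothetical 32 bit utilities discussed above.
import numpy as np

def value_shift(layer, stops):
    """Exposure-style shift of unbounded float values by a number of stops."""
    return layer * (2.0 ** stops)

def clamp_to_visible(layer):
    """Clamp every value into the displayable 0.0-1.0 range."""
    return np.clip(layer, 0.0, 1.0)

hdr = np.array([0.2, 1.0, 4.0])  # sample values; 4.0 is invisible highlight data
print(value_shift(hdr, -2))      # [0.05 0.25 1.  ] - the 4.0 comes into view
print(clamp_to_visible(hdr))     # [0.2  1.   1.  ] - the highlight data is destroyed
```

Applied as a live filter in the layer stack, the shift would change what falls into the displayable window without touching the underlying unbounded data, and the clamp would replace the extra white layer workaround.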