
Everything posted by shojtsy

  1. Also, please make the displacement map viewable/editable in the same way as a mask.
  2. Sounds like a recurring issue with understanding terminology used by many people. "Non-destructive" is often understood as the ability to do steps A, B, C, then return to whatever changes/settings were applied at step A and modify them without having to redo steps B and C, and still get the same image as if you had redone steps B and C. You might say that "non-destructive" is not a good name for this ability, but you should understand that it is viewed as a benefit by many. Affinity obviously aspires to be non-destructive in this sense with its many live adjustments, so it makes sense to point out that neither of the solutions presented can achieve that. Consider step A to be the creation of the shape, step B the creation of the blur layer, and step C the copying of the shape to also be a mask of the blur layer. You cannot change the parameters or type of the shape (step A) and get a consistent result without also repeating step C and copying the new shape to be the mask of the blur. Edit: Note that I am assuming that one would like to reproduce the original Background Blur, not just a blur masked to a shape. In proper background blur you have a shape/image with its own colors, and when that is blended in, the background color used for blending is blurred before blending. See the animation with the watch earlier, where the layer with background blur has a darker gray color and text as well.
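The order of operations described above (blur the backdrop first, then blend the layer over it) can be sketched per pixel. This is my own illustration on 1-D greyscale values, not Affinity's actual pipeline; the blur, the test values, and the fixed 0.5 opacity are all assumptions for the demo.

```python
# Sketch (not Affinity's actual code) of why Background Blur differs from
# blurring a copy of the blended image: the backdrop is blurred *before*
# the layer is composited over it, so the layer's own pixels stay sharp.

def box_blur(row, radius):
    """Simple 1-D box blur over a list of floats."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def over(src, src_alpha, dst):
    """Normal alpha compositing of src over dst, per pixel."""
    return [s * src_alpha + d * (1 - src_alpha) for s, d in zip(src, dst)]

background = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]   # hard edge in the backdrop
layer      = [0.9, 0.9, 0.1, 0.1, 0.9, 0.9]   # patterned, semi-transparent layer

# Background Blur: blur the backdrop first, then blend the layer over it.
background_blur = over(layer, 0.5, box_blur(background, 1))

# Naive order: blend first, then blur -- the layer's own pattern gets
# smeared too, which is why the two results differ.
blur_after = box_blur(over(layer, 0.5, background), 1)
```

Comparing `background_blur` and `blur_after` shows they diverge wherever the layer has detail of its own, matching the point about the watch animation.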
  3. It has already been answered. The feature is there in Affinity Designer. While editing your document in AD, you can create text on a circle, then return to Affinity Photo to continue other edits in your document. This was obviously a conscious decision on the part of the devs. In order to use the features of both Photo and Designer, you need to buy both Photo and Designer (DUH).
  4. This will again recolor the whole image. Anything not completely white would take on a shade of blue. I think you misunderstand the OP. He has an image with many colors, with a black grid on top, all within a single pixel layer, and would like to recolor the pixels of the black grid.
  5. Won't this make any non-white area of the image partially transparent, and thus partially affected by the blue coloration?
  6. LAB macros

    Hi, here is a library of macros for various LAB operations. It can do:
    - Selective desaturation of LAB channels
    - Separating LAB channels into greyscale layers without switching to LAB mode
    - Joining LAB greyscale layers back into a composite image without switching to LAB mode
    - LAB color contrast boost
    Enjoy! As for the file size being 4 MB for a few macros, I have no idea; it is probably a bug. SH LAB tools.afmacros
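For reference, here is what the L, a, and b channels the macros separate actually contain. The macros themselves do this with layer operations inside Affinity; this is just a from-scratch sketch of the standard sRGB-to-CIELAB math (D65 white point), not the macros' implementation.

```python
# Standard sRGB -> CIELAB conversion (D65), written out for reference.
# L is lightness (0..100); a and b are the two opponent color axes.

def srgb_to_lab(r, g, b):
    """r, g, b in [0, 1]; returns (L, a, b)."""
    def linearize(c):                 # undo the sRGB transfer curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)

    # Linear RGB -> CIE XYZ using the sRGB primaries
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    def f(t):                         # CIELAB companding function
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

L, a, b = srgb_to_lab(1.0, 1.0, 1.0)   # white: L near 100, a and b near 0
```

Desaturating a LAB channel then amounts to flattening a (or b) toward 0 while leaving L alone, which is what makes LAB edits useful for color work.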
  7. Hi, after recording a macro you can make blend mode modifications in the macro customizable by clicking the corresponding eye icon. However, unlike the opacity adjustment customization in macros, you cannot assign a label to the blend mode dropdown that is displayed when the macro is played back. This is a problem: the macro dialog may have multiple blend mode drop-downs with no way to communicate their purpose or role. Please add a modifiable text label for blend mode customization in macros, the same way as for opacity.
  8. Serif says it's OK to create threads for requests that were already mentioned some time before (see the second paragraph).
  9. I would actually like to record making a pixel layer or even a layer group into a mask, not via Rasterize to Mask, but in the same way as dragging it into the mask position. The Assistant doesn't help with this. All that would be needed is a menu item in the Arrange menu, same as Mask to Below. A pretty minor request, in my opinion.
  10. Hi MEB, this creates what looks like an Orton effect by varying the opacity of the blur. The middle portion is a blur with the contrasty image overlaid. If the radius of the blur were varied, it would be possible to create a gradually out-of-focus effect. Maybe Field Blur better fits the OP's needs? (A further improvement of that filter would be support for blur radius driven by a pixel mask.)
  11. Besides "why do you need this?", I wanted to point out the answer to the OP: Affinity does not provide a feature to turn an arbitrary pixel image into curves or vectors. It can only go the other way. If one has a distressed mask as curves to start with, one can do the cutting out you are looking for.
  12. Both single-image defocus estimation and neural networks are possible solutions for this. I am just saying that Affinity doesn't provide the tools for us to do either, and I think it also doesn't implement either of them itself. The Remove Haze filter probably uses some simple heuristic of detecting grayish gradients and making them contrasty.
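One published example of such a "simple heuristic" is the dark channel prior from single-image dehazing research. Whether Affinity's Remove Haze works this way is pure speculation; this sketch only shows the kind of per-pixel statistic such heuristics use.

```python
# Dark channel prior sketch (He et al.'s dehazing heuristic) -- shown only
# as an example of a simple haze estimate; it is NOT known to be what
# Affinity's Remove Haze filter actually does.

def dark_channel(img, patch=1):
    """img: list of rows of (r, g, b) floats in [0, 1].
    Returns the per-pixel minimum over channels and a local neighborhood.
    High values suggest hazy (washed-out, greyish) regions; saturated or
    dark pixels have at least one near-zero channel, so they score low."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            lo_y, hi_y = max(0, y - patch), min(h, y + patch + 1)
            lo_x, hi_x = max(0, x - patch), min(w, x + patch + 1)
            out[y][x] = min(min(img[yy][xx])
                            for yy in range(lo_y, hi_y)
                            for xx in range(lo_x, hi_x))
    return out

# A saturated red pixel scores low; washed-out grey pixels score high.
img = [[(1.0, 0.0, 0.0), (0.8, 0.8, 0.8)],
       [(0.0, 0.4, 0.0), (0.7, 0.75, 0.8)]]
dc = dark_channel(img, patch=0)   # [[0.0, 0.8], [0.0, 0.7]]
```

The resulting map is also a rough depth proxy (haze accumulates with distance), which is exactly why recovering depth information from a haze filter, as discussed below, is plausible at all.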
  13. Maybe you are looking for defocus estimation? Or some artificial intelligence that understands the objects? I don't think either is possible in Affinity.
  14. This thread gave me an idea to create an estimated depth mask: I have created a macro to recover some depth information from the Remove Haze filter itself: Create_depth_mask.afmacro. The way you use this is to select your pixel layer, run the macro, and then drag the resulting layer group into a mask position of some adjustment. The red tint is intentional, to help with previewing the mask. When the layer group is used as a mask of some other layer, the tint will be ignored and only the alpha channel will be used. See the example usage with a black fill layer being masked by the depth mask.
  15. Interesting idea. I have created a macro to recover some depth information from the Remove Haze filter itself. See in this thread:
  16. My experience is that clicking on the picker sets the brush color to what the picker memory stores. Picker memory is what you have last picked, and is application-wide, so it is transferred between documents, and is not saved inside your afphoto files.
  17. Groups of adjustment layers only (no live filters) work like this for me as well. Live filters in a group do not blend like this because they are buggy. Live filters outside group do blend like this. Do you have examples where blending works differently for a group of adjustment layers only?
  18. I had another one here: https://forum.affinity.serif.com/index.php?/topic/48650-adding-pixel-layer-w-blend-mode-to-grouped-layers-messes-up-visibility/&do=findComment&comment=243403 I think "a layer group ignores the background for its own calculation" is a simplified description when the group contains a pixel layer. When your layer group contains adjustment layers below the pixel layer, those adjustments can only be applied to the background, because there is nothing else for them to act on. Also, if you have a pixel layer in a layer group and the pixel layer itself has a blend mode other than Normal, the group needs the background first to calculate the image contributed by the layer group, and then uses the background again to blend the layer group with it using the blend mode of the layer group. You can experiment with this by giving a layer group the Darker Color blend mode and putting into it a pixel layer which itself has, for example, the Color mode. The resulting blend math can only be explained by the pixel layer inside the group also being blended onto the background before the layer group is blended again with the background.
  19. Now that we have established this rendering bug, please go back to post #1 in this thread, where the same thing is happening. Do you now agree that the two layer setups are supposed to produce the same result? Because they do after adding the empty layer trick.
  20. What I meant is that the very bright image (the rendering bug) is not immediately displayed after deleting the pixel layer. The reason, I suppose, is that the image area is not refreshed: there is probably some optimization in the application to avoid re-rendering when a change is assumed to be irrelevant, and deleting a disabled layer should be an irrelevant change for the purpose of rendering.
  21. Some caching occurs, which might be a reasonable feature given that deleting disabled layers should not change the result. Try to force a redraw after deleting the pixel layer by disabling the layer group and then re-enabling it. See the screenshots here.
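The caching guessed at above can be modeled with a toy dirty-flag renderer. This is purely an illustration of the hypothesis (deleting a disabled layer does not invalidate the cached frame), not Affinity's actual code; all names here are made up.

```python
# Toy dirty-flag renderer: changes judged irrelevant (deleting a disabled
# layer) keep the cached frame alive -- even if that frame was computed
# incorrectly, which is the situation suspected in this thread.

class Renderer:
    def __init__(self, layers):
        self.layers = layers          # list of (name, enabled) tuples
        self.cache = None             # None means "needs re-rendering"

    def render(self):
        if self.cache is None:        # only recompute when invalidated
            self.cache = [name for name, enabled in self.layers if enabled]
        return self.cache

    def delete_layer(self, name):
        layer = next(l for l in self.layers if l[0] == name)
        self.layers.remove(layer)
        if layer[1]:                  # enabled layers affect the image...
            self.cache = None         # ...so invalidate the cache
        # ...disabled ones are assumed not to, so the stale frame survives.

r = Renderer([("group", True), ("empty_pixel", False)])
first = r.render()
r.delete_layer("empty_pixel")         # "irrelevant" change: no invalidation
same = r.render() is first            # still the exact cached frame

r.cache = None                        # disabling/re-enabling the group would
forced = r.render()                   # invalidate like this, forcing a redraw
```

In this model the disable/re-enable trick works precisely because it is a change the invalidation logic cannot dismiss as irrelevant.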
  22. Please see the Affinity file attached. Experiment with deleting the empty and in any case disabled pixel layer. How do you explain the major change in the image resulting from deleting an empty, disabled pixel layer? layer_group_blending_workaround3.afphoto
  23. I came to the same conclusion, that this is a bug and my original setup should just work. We have discussed another workaround in this thread: using the adjustment and live filter layers not as child layers of the layer group but as clipping masks. This fixes the blending math, but hits another bug, a grid pattern in the render output, if any live filters are in the clipping-mask position. However, your workaround does produce the correct result, thank you very much! I have not yet been able to file a bug report because it seems particularly difficult to explain the problem in a way that people understand (see this thread). But your workaround will also serve as a good way to do that: just delete the empty and disabled pixel layer. Nothing should change, right? Yet it does, revealing the bogus behavior.