
Everything posted by kirkt

  1. Here is the result of applying the above suggestions to the concrete tile the OP posted. The Affine transform makes the seams at the edges move to the middle of the tile. This makes them easy to blend/clone out. Then, when you are satisfied with the blending and removal of the seams at the edges of the original tile, you perform the same (reverse) affine transform to get back to the original tile, but now the edges are contiguous. Now you can tile this image seamlessly. Of course, it will repeat, and the repetition might be pretty obvious, but that depends on the application in which the texture is used. Kirk
  2. @kirk23 - Also - I have noticed the increased number of CG/3D artists that are suggesting more and more features, or tweaking of existing elements of AP. This is great, in my opinion. I have dabbled in 3D rendering but I am primarily a photo/image processing person. I think the input from the 32bit, multi-channel, OCIO folks who deal with these things regularly in their workflow is solid gold. This is where image processing needs to go. Thank you! From someone who shot HDR mirror ball images and used Paul Debevec's HDRShop decades ago to light his CG scenes! Kirk
  3. Exactly. This is the model I would love to see Affinity follow. I have no problem with nodes that render the output to a low-res proxy throughout the node tree. This speeds up the editing process and gets you where you need to be so that you can then put a full res output node at the end of the tree. Blender is terrific for so many reasons and is an example of how an application can evolve with feedback from an incredibly diverse user base and a bunch of really talented designers and programmers who have support. I am also a fan of Photoline, for many reasons, but the interface can be clunky and a little obtuse, which adds to the learning curve. Kirk (t, not 23 - LOL - how many times does a Kirk run into another Kirk?! I've met three in my lifetime. Now, virtually, four.)
  4. @AlejandroJ - You're welcome. Have fun! Kirk
  5. You can try duplicating the original on a new layer above the original. Then apply a very large gaussian blur* to the image to obliterate the small details and leave the very large variations in tone (i.e., uneven lighting). Then invert the result and apply it to the original in screen, overlay or soft light blend mode. Adjust the black and white points of the composite to restore contrast and tonal range. Then you can apply an Affine transformation (Filters > Distort > Affine) and offset the image 50% in X and Y. Use this as the basis for your tile, cloning and inpainting the seams that intersect in the middle of the tile for a seamless tile. Kirk * Although the Gaussian blur slider in AP only goes to 100 pixels, you can enter larger values into the numerical radius field to get blur radii larger than 100 pixels. Try something like 300 for your image.
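The 50% offset step above is just a wrap-around shift: pixels pushed off one edge re-enter on the opposite edge, so the original seams meet in the middle of the frame. A minimal numpy sketch of that step (the function name `wrap_offset` is mine, not an AP API; the blur/blend steps are left to whatever tools you prefer):

```python
import numpy as np

def wrap_offset(img: np.ndarray, frac: float = 0.5) -> np.ndarray:
    """Offset an image by a fraction of its size in X and Y, wrapping
    pixels around the edges - the effect of AP's Affine filter with a
    50% offset when the edges are set to wrap."""
    h, w = img.shape[:2]
    return np.roll(img, shift=(int(h * frac), int(w * frac)), axis=(0, 1))
```

Applying a 50% offset twice returns the original image, which is why you can clone out the centered seams and then simply repeat the transform to get back to the original framing.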
  6. @Claire_Marie - As @Lee D notes, the Infer LUT operation compares the before and after color of an image and tries to reverse engineer the color in the after image based on the before image. Some images that are used in the Infer LUT operation may not have a very wide variety of tone or color represented in them, so when inferring a LUT from them, the inferred LUT only captures part of the toning (the toning restricted to the colors present in the image). One way that LUTs are stored in a graphical format is to use a before and after version of a special color image called an Identity (ungraded, neutral) HALD CLUT (color lookup table) image like this one: As you can see, this special image is essentially a grid of colors with a wide range of tonal and hue variation. Copy this HALD image and run it through your filter and then use the before and after versions of it as your Infer LUT base images. The Identity HALD image contains a lot of colors and will capture your filter's color transform fully. As with all LUTs, the HALD images need to be in the color space of the image you are editing for the color transform of the LUT to work as expected. Here is a link to a page of technical LUTs which includes the original HALD image I posted here: https://3dlutcreator.com/3d-lut-creator---materials-and-luts.html For example, here is a webpage that contains a link to several HALD CLUTs that capture color transforms for several film simulations. You can use these in AP to apply a film look to your image with a LUT adjustment layer and the Infer LUT feature. https://patdavid.net/2015/03/film-emulation-in-rawtherapee.html Kirk
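For the curious, an identity HALD image can be generated rather than downloaded. A level-n HALD encodes an n² per-channel LUT cube as an n³ x n³ image (level 8 gives the common 512x512 HALD). This numpy sketch assumes the usual layout with red varying fastest, then green, then blue; the function name is my own:

```python
import numpy as np

def identity_hald(level: int = 8) -> np.ndarray:
    """Build an identity (neutral) HALD CLUT image as a uint8 RGB array.
    Assumes the conventional ordering: R varies fastest, then G, then B."""
    steps = level * level          # samples per channel (64 for level 8)
    side = level ** 3              # image edge in pixels (512 for level 8)
    idx = np.arange(side * side)
    r = idx % steps
    g = (idx // steps) % steps
    b = idx // (steps * steps)
    img = np.stack([r, g, b], axis=-1) / (steps - 1)   # normalize to 0..1
    return (img * 255).round().astype(np.uint8).reshape(side, side, 3)
```

Running such an image through a filter and comparing it to the untouched original gives the Infer LUT operation every tone and hue it needs.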
  7. I took a look at your file - the cat artwork is grayscale and you are essentially processing it to look like a bitmapped pixel line art drawing. In your file, you use a Levels adjustment layer to try to squeeze the white and black points together to get the antialiased (shades of gray) edges to go to black or white. Try using a Threshold adjustment layer instead. Also, flatten the artwork onto the white background before applying the Threshold adjustment. I was able to export a JPEG that looked identical to the preview in AP. I think the problem lies in using the Levels adjustment to force the gray around the edges of the line art to black or white. When it comes time to export, there are probably edge pixels that are not completely black or completely white, and the export file type makes decisions about what should be black or white that do not agree with your intentions. Threshold should cure the problem because it explicitly makes things either black or white. In PS, you could convert the artwork to "bitmapped", but that option does not exist in AP, so you have to use the Threshold tool to effectively do the same thing. I used a value of 72% in the threshold tool for my test. Kirk EDIT - I see @Old Bruce and I are on the same page.
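The difference between the Levels squeeze and a Threshold adjustment comes down to this: Threshold is a hard cut, with no intermediate grays left for the exporter to second-guess. A one-line numpy sketch of what a Threshold at 72% does (my function name, not an AP API):

```python
import numpy as np

def threshold(gray: np.ndarray, level: float = 0.72) -> np.ndarray:
    """Force every uint8 pixel to pure black or pure white, like AP's
    Threshold adjustment; `level` is the cut point as a fraction."""
    return np.where(gray >= level * 255, 255, 0).astype(np.uint8)
```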
  8. If you want to use the dodge and burn tools, then you need to work on a layer with pixel information (for example, a duplicate of the original background image layer, to preserve an unedited version of the original background image). This is a destructive operation on the pixels of the duplicate image layer. If you are going to dodge and burn on a new, empty pixel layer, you cannot use the dodge and burn tools, as there is no data for those tools to "see" and edit. To dodge and burn this way (non-destructively) you need to set the layer blend mode of the empty pixel layer to Overlay or Soft Light, and then use a regular brush (not the Dodge or Burn tools) set to a gray value as I posted previously. Then when you apply brush strokes, the composite image will get darker or lighter according to the gray value of the brush stroke and the opacity/flow of the brush. This operation is non-destructive (it does not affect the original pixels in the background image layer) and does not involve using the dodge and burn tools. Kirk
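The reason 50% gray on an Overlay layer is "invisible" falls out of the standard Overlay blend math: at a blend value of 0.5 the formula returns the base unchanged, lighter grays push the result up (dodge), darker grays pull it down (burn). A sketch of that formula with values in 0.0-1.0:

```python
import numpy as np

def overlay(base: np.ndarray, blend: np.ndarray) -> np.ndarray:
    """Standard Overlay blend. blend = 0.5 is neutral; lighter grays
    dodge (lighten) the base, darker grays burn (darken) it."""
    return np.where(base < 0.5,
                    2.0 * base * blend,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - blend))
```

Because the painting happens on a separate layer, the base pixels are never touched, which is what makes the technique non-destructive.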
  9. @marciomendonsa You can model this with a Photo Filter adjustment layer placed under a Black and White adjustment layer. Set the Black and White adjustment to neutral and then dial in the color of the filter in the Photo Filter layer - uncheck "Preserve luminosity" to have the more optically dense color filter cut out more light, as a real filter would do. Kirk
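Conceptually, a physical color filter multiplies each channel by the filter's transmission before the light is recorded, and the mono conversion then takes the luminance of what is left. A rough numpy sketch of that model (the filter color here is a made-up warm filter, not an AP preset, and Rec.709 luma weights stand in for whatever the B&W adjustment actually uses):

```python
import numpy as np

def filtered_mono(rgb: np.ndarray, filter_rgb=(1.0, 0.3, 0.1)) -> np.ndarray:
    """Simulate shooting B&W through a color filter: attenuate each
    channel by the filter's transmission, then take Rec.709 luminance.
    rgb is an (N, 3) array of 0.0-1.0 values."""
    filtered = rgb * np.asarray(filter_rgb)   # light the filter passes
    return filtered @ np.array([0.2126, 0.7152, 0.0722])
```

With a red filter like this, reds stay bright and blues go dark, matching the classic red-filter sky-darkening look.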
  10. @AlejandroJ - Take a look at the Serif-produced tutorials listed here: They are terrific and will help orient you. Kirk
  11. Here is a split (upper left to lower right) before and after. Kirk
  12. Here is a screenshot of the resulting Spare Channel (BLUEMASK) applied to a HSL adjustment to target the blue in the tarnish and shift its Hue to the sepia of the rest of the photo. To apply the BLUEMASK spare channel to the HSL adjustment layer, add the HSL layer and make sure it is the active layer. Then right-click on the BLUEMASK spare channel in the Channel palette and select "Load to HSL Shift adjustment Alpha" - this will load your spare channel to the inherent mask of the HSL adjustment layer. I would think there are other adjustments to contrast, etc., that will need to be made to combat the effects of the tarnish, but you get the idea. Because these other adjustments probably also involve the exact same area as the blue, you can reuse the mask to target the same area. Have fun! Kirk
  13. @WG48 - In AP, you can emulate the effect of the monochrome checkbox that is in PS by: 0) - It looks like you added a HSL adjustment (A in the attached screenshot) to accentuate the blue of the tarnish. 1) Add a Channel Mixer adjustment layer (B) to the stack (this is where you will accentuate the tarnish by manipulating the R, G and B content in each channel) 2) The effects of the Channel Mixer (D) can be seen in monochrome by viewing just the targeted channel of the composite image. In this case, the blue area will be WHITE in the mask you ultimately want to create, and the rest of the image should be BLACK or dark, to avoid being affected by whatever adjustment you make. In other words, the adjustment you ultimately make will target the blue area in the image, which has been intensified by the HSL boost in step 0. In the Channels panel (C), click on the blue composite channel thumbnail to view just the blue channel, in grayscale. In the Channel Mixer adjustment, select the blue channel (D) and push the blue (dirty) channel slider up to 200% and the red (clean) channel down to -200% or so. The composite blue channel should look a lot like the mask you posted in the screenshot of the PS result. You can right-click on the blue channel and create a "Spare Channel" from the blue channel - this will be your mask (E - I renamed the Spare Channel "BLUEMASK"). Kirk
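The Channel Mixer step above is just a weighted sum of the source channels, clipped back into range. A numpy sketch of the blue-channel remix that produces the BLUEMASK (function name mine; percentages match the slider values suggested above):

```python
import numpy as np

def mix_blue_channel(rgb: np.ndarray, blue_pct=200, red_pct=-200) -> np.ndarray:
    """Rebuild the blue channel from a weighted sum of source channels
    (percentages as on AP's Channel Mixer sliders), clipped to 0..1.
    Blue-heavy tarnish pixels go white, warm sepia pixels go black."""
    r, b = rgb[..., 0], rgb[..., 2]
    new_blue = (red_pct / 100.0) * r + (blue_pct / 100.0) * b
    return np.clip(new_blue, 0.0, 1.0)
```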
  14. Which OS and version of AP? Are you using a pressure-sensitive stylus? The controls work as expected on my Mac, Catalina, v 1.8.3. Take a look at the brush options (in the context toolbar, click on the "More" button to examine the brush settings) and make sure your brush is not configured in something like Overlay mode or some other blend mode that is causing the brush to behave differently than a "normal" brush. Also make sure you are targeting the intended tonal range (highlights, midtones, shadows) properly with that dropdown menu. You can also dodge and burn on a separate layer with a regular brush and various shades of gray. Make a new pixel layer in Overlay or Soft Light blend mode and then paint with gray on it - anything lighter than 50% will dodge (lighten), darker than 50% will burn (darken). 50% gray will provide no effect, essentially erasing the previous effect on that layer (resetting the effect to nothing, locally with the brush). Good luck, Kirk
  15. @AlejandroJ - The HSL adjustment layer does pretty much most of what the Color EQ tool appears to do (based on some ACDSee videos - I am not an ACDSee user). See if that works for you. Kirk
  16. Take a look at this tutorial as well. There are a couple of small things that might trip up the process, if what you are experiencing is not simply a bug: Kirk
  17. @Pike11 You probably need to define how you are viewing the “raw file” in the rendering that looks low contrast, like a log-encoded image typically looks. Is that a camera JPEG? The raw file should be raw, without encoding, but may have a tag that is read by some viewers or converters to render it with log encoding. I do not know specifically how Sony might do this. You can create an SLog RGB file in AP by rendering the raw file to a 32 bit RGB file with no tone curve (linear output) and then using an OCIO adjustment layer to transform the linear file to SLog. That will give you the starting point it sounds like you are looking for. Kirk
  18. I assume this question is for Affinity Photo. When you click the New Mask button, a mask is created for the active layer and is, by default, filled entirely with white, meaning that the contents of the newly masked layer are fully revealed. You can then go about painting on the mask using black to conceal the masked layer and white to reveal the masked layer - shades of gray also partially conceal/reveal the masked layer. If you want to create a new mask that is all black when it is created, hold down ALT (PC) or OPT (Mac) while you click the new mask button. The newly created mask will be filled with black (concealing the active layer entirely) instead of white. Again, you can paint on the mask as usual. Adjustment Layers have their own mask built into them and, by default, they start as entirely filled with white (fully revealing the adjustment). You can click on the adjustment layer to make it active and use the keyboard shortcut CTRL-I (CMD-I on the Mac) to "I"nvert the built-in mask to all black. I hope I am understanding your question correctly. Kirk
  19. You would probably want to have a user-defined threshold (% of total pixels, for example) for that clipping icon to be activated, otherwise it might denote clipping when 5 pixels in a 6000x4000 pixel photo are clipping, something that would be misleading and tend to negate the usefulness of a single indicator with no visual context. Kirk
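The suggested user-defined threshold amounts to comparing the fraction of clipped pixels against a cutoff before lighting the indicator. A hypothetical sketch (names and the 0.1% default are mine, purely to illustrate the idea):

```python
import numpy as np

def clipping_warning(img: np.ndarray, threshold_pct: float = 0.1) -> bool:
    """Raise the clipping indicator only when more than threshold_pct
    percent of pixels are blown (255 in any channel of a uint8 image),
    so a handful of specular pixels does not trigger a false alarm."""
    clipped = np.any(img == 255, axis=-1)
    return 100.0 * clipped.mean() > threshold_pct
```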
  20. Both versions of "Flip" (Document and Arrange) are available for assignment to keyboard shortcuts. One less mouse movement and button press. Kirk
  21. @Peter Heinrichs - you can type a value in the radius input field that is greater than the slider maximum. This is true of a lot of sliders with an input field. Kirk
  22. The Nik Perspective Efex is not compatible with Affinity Photo, so you will have to call it from outside of AP. Kirk