Everything posted by smadell

  1. After Apple’s announcement of an M2 Mac Studio, I am about to drop some cash on that upgrade. Here’s the only real question I have, and I hope someone in the forum can give some advice. I will purchase an M2 Max Mac Studio with 64 GB of RAM and 2 TB of SSD storage. There is an option to increase the GPU from 30 cores to 38 cores. What will this achieve, and is it worth considering? For completeness’ sake: Affinity Photo and Capture One are the two most demanding applications I use, and if increasing the GPU cores will improve that experience, I’d consider it money well spent. What does the group think?
  2. Easiest thing that comes to mind: use a Hue Range Mask. Put the image with the bright red areas on top and the other image beneath it. Put a Hue Range Mask onto the top layer, and tune it so that only the bright red areas remain visible. The net result is approximated in this screenshot, and a rough code sketch of the underlying idea follows this post. On the other hand, Affinity Photo has some pretty advanced astrophotography stacking capabilities, and another forum member might have a better method. I know absolutely nothing about astrophotography and cannot comment on that part of Photo's features.
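For anyone curious what a hue range mask boils down to, here is a minimal NumPy/Pillow sketch of the same idea (the file names and the exact hue/saturation thresholds are my own placeholders, not anything from Affinity): keep the top image wherever the hue sits near red, and show the bottom image everywhere else.

```python
import numpy as np
from PIL import Image

# Placeholder file names - substitute your own stacked images.
top = Image.open("red_areas.png").convert("RGB")
bottom = Image.open("other_image.png").convert("RGB")

hsv = np.asarray(top.convert("HSV"), dtype=np.float32)
hue, sat = hsv[..., 0], hsv[..., 1]   # Pillow scales both to 0..255

# Red wraps around 0 on the hue axis, so accept both ends;
# the thresholds here are arbitrary starting points to tune.
mask = (((hue < 20) | (hue > 235)) & (sat > 80)).astype(np.float32)[..., None]

out = np.asarray(top, np.float32) * mask + np.asarray(bottom, np.float32) * (1 - mask)
Image.fromarray(out.astype(np.uint8)).save("composite.png")
```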
  3. I agree with @fde101 wholeheartedly. There is an inherent difference between a product and a service. A product is something for which a one-time purchase is appropriate; a service could very easily warrant an ongoing contractual payment. The very phrase SaaS (Software "as" a Service) betrays the fact that companies like Adobe recognize this; they are desperately trying to re-invent themselves as a "service" rather than a "product" to increase their cash flow and bump up their stock prices. Granted, they have accomplished both, but they have done so on the backs of those who are trapped in their ecosystems. I, too, have an ideological phobia about subscription software, and I feel strongly about not compromising that stance. If that comes at the expense of slower development, it's a hardship I'm willing to endure.
  4. I think so, @BiffBrown… The days when I shot on film are ancient history, and to be honest I never really knew all that much about what was going on when I dropped off my film at the drug store. However, that doesn't really answer the question. There's a fundamental problem with emulating film grain in digital media: film grain was more or less randomly sized, while pixels are uniform in size and shape. Without some really fancy programming (way more than my little mind can handle), the only way to get "film grain" in a digital image is to start with noise. It's an imperfect solution right from the start. I believe that color film uses "dye particles" rather than silver halide, and therefore the grain ought to be colored rather than monochrome. I'm not sure how one would emulate that in a digital world, so I'm stuck with using "Add Noise" as a starting point. There's one thing you could try, but I'm not sure it will give you anything you're going to like. With the Film Grain macros in the Library, right-click on the macro and choose "Edit Macro" from the drop-down menu. Find the step labelled "Add Noise" and click on its gear icon (on the right side of the line). Uncheck the box labelled "Mono" and then, optionally, save the altered macro to the Library under a different name. Basically, this lets you start with color noise rather than luminance noise; a rough sketch of the difference follows this post. However, the color noise you introduce is entirely random and has no connection to the image it's affecting. Does that give a result you like? I've tried this, and I've found that the only way to make it even approach acceptable is to lower the "Grain Intensity" when the macro is run. Let me know what kind of results you get if you try this!
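To make the mono-versus-color distinction concrete, here is a hedged NumPy/Pillow sketch; the additive blend and the intensity value are my own stand-ins for whatever the macro does internally, so treat it as an illustration of the idea rather than Serif's actual math.

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=np.float32)
rng = np.random.default_rng()
intensity = 12.0   # rough analogue of the macro's "Grain Intensity" slider

# Mono (luminance) noise: one random value per pixel, shared by R, G, and B,
# so the grain stays neutral gray.
mono = rng.normal(0.0, intensity, img.shape[:2])[..., None]

# Color noise: independent values per channel, so each speck picks up a tint -
# closer in spirit to dye clouds, but unrelated to the underlying image.
color = rng.normal(0.0, intensity, img.shape)

grainy = np.clip(img + color, 0, 255).astype(np.uint8)   # swap in `mono` to compare
Image.fromarray(grainy).save("photo_grain.png")
```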
  5. I noticed that no one has addressed your question yet, @maikm. There are two things to note in the Plug-ins part of the Preferences. First, you should definitely check the "Allow unknown plugins to be used" box. Second, since the Mac app is sandboxed, you have to click the button labelled "Authorize Global". This allows Affinity Photo to reach into folders it does not "own"; not clicking that button will definitely break at least some of your plugin use. That having been said, I'm not sure whether DxO PureRaw can be accessed as a plug-in on the Mac, but it's certainly worth a try!
  6. If a video is worth a thousand words (I agree with @KCP on that one), then watch the attached YouTube video from Serif. It's brief, and goes over most of the issues you've brought up.
  7. For the most part, I’ve resisted putting my 2 cents into this argument, but I think my view might be shared by others. 1) I would love it if Affinity Photo would simplify my editing with commands like Select Sky, Select Subject, and so forth. Call it AI; call it machine learning – that’s just semantics. It takes the drudgery out of my editing by automating (and speeding up) the process. I’m all for that. 2) Automated selection of photo elements (like those involved in automatic sky substitution) almost always gets things close, but not close enough. The results always seem to need fiddling with after the fact, and that takes the amazement out of it really quickly. I bought a copy of Luminar AI because of these types of features, and fell out of love with it within days (or was it hours?). 3) I have no particular beef with AI-generated art, even though I have no real interest in pursuing it. For me, telling a piece of software “show me a purple cat hanging off the Eiffel Tower” or anything similar is just silly beyond words. I’m willing to call this kind of thing art, but there comes a point where it’s no longer reasonable to call it photography. 4) Given the size-related issues of being a relatively small company (specifically, Serif as compared to Adobe), it seems much more reasonable to introduce any truly AI-related features by means of scripting and the associated software hooks that might allow users to create panels and so forth. Add-ins like this would presumably not rob Serif of precious resources, but would make such features accessible to those who want them. 5) I would be among those happiest if Affinity Photo remained a “photo editing” application and avoided becoming an “AI art generator.” Whether this is an “old school” attitude or not, I suspect it is an opinion held by many (though obviously not by all).
  8. To follow up, I know that it is de rigueur in Photoshop to duplicate the background layer as the very first thing one does. I assume this has to do with Photoshop's roots as a mostly destructive editor. That automatic habit of duplicating the background layer persists to this day, even though Smart Objects and Smart Filters seem to make it unnecessary. But… I am not a Photoshop user, so maybe I'm missing the bigger picture!
  9. I saw the same 2 screenshots, @walt.farrell - one is from Affinity Photo and the other from Photoshop. The Photoshop screenshot looks like a duplicate of just the dog's face, made into a Smart Object, to which a "smart" Gaussian Blur filter was applied. The Affinity Photo screenshot seems to be a background (on the bottom) and a duplicate of the dog's face (on top), to which a destructive Gaussian blur has been applied. My question about what @KCP was trying to achieve was based on those screenshots. Obviously, the same "Smart Object" functionality can easily be attained using a Live Gaussian Blur, constrained to the dog's face by virtue of its built-in mask (see the sketch after this post). And, although it may be a moot point, @KCP could just as easily have made the original background layer in Photoshop into a Smart Object and applied smart filters (the Gaussian Blur) directly, without the need to duplicate any part of the pixel layer.
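The live-filter-plus-mask idea is easy to show outside of either app; here is a small Pillow sketch (file names, radius, and mask are placeholders of mine) in which the original pixels are never modified and the blur shows through only where the mask is white.

```python
from PIL import Image, ImageFilter

img = Image.open("dog.jpg").convert("RGB")
mask = Image.open("face_mask.png").convert("L")   # white = apply blur here

blurred = img.filter(ImageFilter.GaussianBlur(radius=8))

# Non-destructive in spirit: `img` is untouched, and re-running with a new
# radius or mask simply produces a new composite.
out = Image.composite(blurred, img, mask)
out.save("dog_blurred_face.png")
```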
  10. Hi, @marshaj. I think part of the issue here is that "adjustment brushes" such as one finds in Lightroom and other apps work in the opposite order from Affinity Photo. With an adjustment brush, you create the mask first (by "painting" with the adjustment brush) and then apply the adjustment settings to the masked area (usually designated with a red overlay). In Affinity, you create the adjustment (and likely its settings) first by creating the adjustment layer and using the sliders; then you invert the adjustment layer and "paint" the adjustment back in where you'd like it. It's the same process, really, just done in a different order – the sketch after this post shows why the order doesn't change the result.
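Here is a toy NumPy sketch of that point (all the names are mine, not either app's): the final image depends only on the adjustment and the finished mask, so whether you paint the mask before or after dialing in the adjustment cannot change the output.

```python
import numpy as np

def masked_adjustment(image, adjust, mask):
    """Blend: adjusted pixels where mask is 1, original where mask is 0."""
    return adjust(image) * mask + image * (1.0 - mask)

image = np.random.rand(4, 4, 3)
brighten = lambda px: np.clip(px * 1.3, 0.0, 1.0)

# Lightroom-style: paint the mask first, then apply the adjustment.
mask = np.zeros((4, 4, 1))
mask[:2] = 1.0
a = masked_adjustment(image, brighten, mask)

# Affinity-style: build the adjustment, invert its mask to black,
# then paint the same region back in. Same inputs, same blend.
painted_back = 1.0 - (1.0 - mask)
b = masked_adjustment(image, brighten, painted_back)

assert np.allclose(a, b)   # identical results either way
```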
  11. Hello @KCP. I confess up front that I am not a Photoshop user, so I am not fully conversant in everything that Smart Objects can do. However, that having been said, just what function of Smart Objects are you trying to achieve? Many of the functions of Photoshop Smart Objects are present in Affinity Photo, usually without resorting to linked files and such. What specifically are you trying to do?
  12. As to the inability to specify a color for a Fill layer: I have found that (i) there is no way to specify a predetermined color for a Fill layer, but (ii) you can add a Pixel layer and fill it with a color, and you can specify that color as part of the macro. I'm not at my desktop right now, so I hope I get all this in the right order. (1) Add a Pixel layer, and optionally give it a name; (2) choose Fill... from the Edit menu; (3) in the dialog box that opens, click in the color box next to "Custom Color" (or whatever it might be called - I don't remember), and once the color picker dialog opens, specify the desired color with which to fill the pixel layer; and (4) close the color chooser and click "OK" within the Fill... dialog. What you get is a Pixel layer filled with a single color of your choice. The macro will remember the choice you made and will be repeatable in the future (irrespective of what the application's Color Picker currently shows). Changing the color of the layer after the macro has been run is obviously not as simple as it would be for a Fill layer, but it's a start...
  13. I’ve recently started watching some of the YouTube videos by Blake Rudis (his channel is called f64 Academy). They are all Photoshop-centric, but just about everything he does is easily translated into Affinity-speak. There are two that might give you some ideas about dealing with blown-out whites; I’ve linked them below. Having said that, the answer already given for this particular photo is spot-on, and Rudis’s method would not handle this situation nearly as well.
  14. I agree with you on both points - the second one looks “fresher,” and every day is definitely a school day!
  15. Here's another option, which occurred to me a moment ago. Instead of adjusting the concrete with a Curves layer, I placed a concrete texture over the entire image and applied a mask (using the Concrete bricks selection). This might be the better option. Using Concrete Texture.afphoto
  16. The problem is that concrete has texture, and simply blurring will get rid of that texture. However, there is too much texture here, and smoothing it out is needed. I made two selections: the concrete blocks and the grout. (To select the concrete, I used the freehand selection tool and then subtracted away the signs. I also used Refine… to edit the selection, designating the grout as "background".) The grout selection was, by and large, the inverse of the concrete one. I used two Curves layers – one to diminish the contrast on the concrete and a second to de-contrast and lighten the grout. I suppose this amounts not so much to a "new coat of paint" as to a "whitewash" of the concrete wall. Adding a Recolor adjustment (with the appropriate mask) could "paint" the wall. Attached are a JPG and my .afphoto file. Repaint the Wall.afphoto
  17. Good morning, @lphilpot. Here’s a really easy variation on your original method. First, instead of a Select All, make a luminosity selection. (Choose “Select from Layer Intensity” from the Select menu.) Second, with the selection active, create a mask. That’s it. You don’t have to convert the layer to black and white beforehand, and you don’t need the Channels panel as an intermediary unless you’ve already made the adjustment and then want to apply the mask. A minimal sketch of what a luminosity mask amounts to follows this post.
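Conceptually, that luminosity selection is just a grayscale mask built from each pixel's brightness. A minimal NumPy/Pillow equivalent looks like this; the Rec. 709 weights are my assumption, since I don't know exactly which formula Affinity uses for "Select from Layer Intensity".

```python
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=np.float32) / 255.0

# Weighted luminance (Rec. 709 coefficients); brighter pixels yield a more
# opaque mask, exactly like a luminosity selection.
luma = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

Image.fromarray((luma * 255).astype(np.uint8), mode="L").save("lum_mask.png")
```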
  18. Good morning, @eddietheone. I couldn't resist the chance to throw my hat in the ring. I darkened tones and corrected skin and hair colors (first and foremost) with 2 Curves layers. I gave the lips a tiny bit of extra color and got rid of the annoying (to me, at least) halos at the edges of the girls' faces. I added a touch of Vibrance to enhance the colors, darkened and blurred the background, and put a very light High Pass filter on at the end. JPG and .afphoto files attached… Overexposure Edit.afphoto
  19. Did you shoot in Raw? If so, uploading the raw file would be best!
  20. One more thing… If you have a raw file, you have a lot more wiggle room. If the whites along the left side of the lighthouse are completely burned out, there is no getting them back. But if you can get some detail in the whites (even if it flattens or darkens the rest of the image), that gives you a far better starting point. Working with an already-developed file in which there is no detail in those whites, you’ll never retrieve anything. If you’re doing your raw development in Affinity, try setting the Assistant to NOT apply a tone curve automatically. This is the equivalent of using a “linear profile” and is a good first step toward recovering those whites (see the toy example after this post). Also, when you’re done in the Develop persona, apply the changes to an embedded raw file rather than creating a pixel layer. This gives you the opportunity to go back to the Develop persona and work on the raw file itself instead of a developed version.
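As a rough picture of why skipping the automatic tone curve helps, here is a toy example; the gamma-style curve is a generic stand-in, not Affinity's actual default curve.

```python
import numpy as np

linear = np.linspace(0.0, 1.0, 11)    # idealized sensor values; 1.0 = clipping

# A generic display-style tone curve (simple gamma). Real tone curves differ,
# but they all push highlight values up toward 1.0 in the same way.
toned = linear ** (1.0 / 2.2)

print(toned[-3:])   # roughly [0.90 0.95 1.0] - the brightest values crowd together
# Once neighboring highlight values are squeezed that close (or rounded to the
# same 8-bit level), the detail between them is effectively gone. Editing the
# linear data first keeps that separation available for recovery.
```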
  21. Doing this with just adjustments is admittedly difficult. I am including an attempt, and it involves a mask, blend ranges, and Levels and Curves adjustments to target the darkest tones in the bottom portion of the lighthouse. (I also added a Vibrance adjustment because the colors were kind of flat.) I am including the JPG (for display) and an .afphoto file for your inspection. Not the greatest, but it does bring out the dark lines without altering the whites too much. It also avoids having to fuss with lines/vectors, although that’s not a bad approach if you can make it work. Edited Lighthouse.afphoto
  22. Hi, @Sven Ironside. If you have the original photo as a layer in an Affinity Photo document, here’s what I would do. (1) Select that layer and choose Copy from the Edit menu. (2) Select New from Clipboard from the File menu; you should now have a second document consisting of only the original photo. (3) With that new document open, choose Export… from the File menu and save a file to your hard drive in a lossless format like TIFF or PNG. You can then close the new document, as it has served its purpose. You should now have a new file on your hard drive, presumably equivalent to the original (since deleted) photo file.
  23. @arquata - Glad you got this one sorted, Jennie. The forum is almost always a quick and kind group of folks who, more often than not, can come to your rescue. They've rescued me plenty of times!
  24. Check the size of the brush when you have the Inpainting tool chosen. If it is quite small, the cursor will look like crosshairs. Also, if the brush is very small, it will appear not to be doing anything. Try increasing the size of the brush (top left, on the Context Toolbar) and see if the Inpainting tool works for you.