smadell

Members
  • Content count

    305
  • Joined

  • Last visited

About smadell

  • Rank
    Advanced Member

Profile Information

  • Gender
    Not Telling

  1. Dave... This has always confused me, as well. I think I understand the difference (but, honestly, this is only a guess). I think of the Adjustment and Live Filter layers as having a value going in (that is, the RGB values presented to the adjustment layer) and a value coming out (the RGB values after the adjustment has been applied). The graph on the right (underlying layers) tells the adjustment only to work on specific values coming IN, while the graph on the left (current layer) effectively limits the adjustment to work only on pixels that give a specific RESULT coming out.

    As an example, imagine I have a grid of numbers, all between 1 and 10, and an "adjustment" layer that adds 2 to every value. I can use the right graph to allow only numbers 6 or greater into the adjustment. That would result in a grid with some numbers between 1 and 5 (the ones that the blend range did not allow into the adjustment) and other numbers between 8 and 12 (the ones that were allowed into the adjustment layer and had 2 added to the original number).

    If I have that same grid of numbers 1 to 10, and the same "add 2" adjustment, I can use the left graph to limit the action of the adjustment based on the numbers that come OUT of the adjustment. Without the blend range setting, numbers between 1 and 10 go IN to the adjustment, all are increased by 2, and what comes OUT is a bunch of numbers between 3 and 12. If I use the left graph to limit the adjustment to results that are 6 or greater, I am telling the adjustment ONLY to apply where it would produce an OUTPUT of 6 or greater. In effect, this means that only the grid values between 4 and 10 are allowed into the adjustment. So, what I see with that adjustment in place, and the left graph active, is a grid of numbers between 1 and 3, and also values 6 to 12. The original grid values of 4-10 were allowed to have the adjustment applied, and gave values between 6 and 12. The original grid values of 1-3 were not allowed to have the adjustment applied, and they remain unchanged. The results of the left graph and the right graph are subtly different.

    All that having been said, it is an awful lot of mental math to try to keep straight, and I have yet to find ANY practical use for the left graph (current layer) on an adjustment or live filter layer. And I have also not come up with any situation where I wanted to use BOTH graphs on the same layer.
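    Below is a minimal sketch of that difference, using the same grid of 1-10 and the same "add 2" adjustment. It is only a toy model: real blend ranges work on tonal values and ramp off gradually rather than switching at a hard threshold, and the function and variable names here are made up for illustration.

    ```python
    # Toy model of the two blend-range graphs (not Affinity Photo's actual code).
    # "underlying" gates on the value going IN to the adjustment (right graph);
    # "current" gates on the value coming OUT of it (left graph).
    def apply_with_blend_range(values, adjust, threshold, mode):
        result = []
        for v in values:
            out = adjust(v)
            gate = v if mode == "underlying" else out
            result.append(out if gate >= threshold else v)
        return result

    grid = list(range(1, 11))          # the grid of numbers 1..10
    add_two = lambda v: v + 2          # the "add 2" adjustment

    print(apply_with_blend_range(grid, add_two, 6, "underlying"))
    # [1, 2, 3, 4, 5, 8, 9, 10, 11, 12]  -> 1-5 untouched, 6-10 become 8-12
    print(apply_with_blend_range(grid, add_two, 6, "current"))
    # [1, 2, 3, 6, 7, 8, 9, 10, 11, 12]  -> 1-3 untouched, 4-10 become 6-12
    ```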
  2. Here's another take on it:
    1) Select the Yellows to change the White Balance on them only. Do this by first selecting the Blues (Select > Color Range > Select Blues) and then inverting the Selection, since yellow is opposite blue on the Color Wheel.
    2) With the selection active, add a White Balance adjustment layer. Move the slider to the left, to make the previously yellow areas bluer in hue, while leaving the rest of the photo alone.
    3) Go back and select the Background image layer. Create a Luminosity Selection by holding down Command (Ctrl on Windows) and Shift, and clicking on the Layer thumbnail. Or do this with a macro (look up Luminosity Masks in the Resources section, which is available through the forum).
    4) With the Luminosity selection active, create another adjustment layer, using Brightness and Contrast. Turn the Brightness down (and, perhaps, make a change in contrast if needed).
    All of this lets you change the color of the yellows, and diminish the intensity of the most luminous areas of the photo.
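    If it helps to see what a Luminosity Selection amounts to, here is a rough conceptual sketch: each pixel's mask value is simply its luminance, so bright areas are selected more strongly than dark ones. The Rec. 709 weights and the blending step below are assumptions for illustration, not Affinity Photo's exact internals.

    ```python
    import numpy as np

    def luminosity_mask(rgb):
        """rgb: float array in [0, 1], shape (H, W, 3). Returns a mask in [0, 1]."""
        # Weighted sum of the channels (Rec. 709 luma weights assumed here).
        return 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

    img = np.random.rand(4, 4, 3)              # stand-in for the photo
    mask = luminosity_mask(img)

    # Turning the Brightness down "through" the mask: the darkening is applied
    # in proportion to each pixel's luminance, so highlights are tamed while
    # shadows are mostly left alone.
    darkened = np.clip(img - 0.2, 0, 1)
    adjusted = mask[..., None] * darkened + (1 - mask[..., None]) * img
    ```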
  3. And, larsh - there is a White Balance Picker in the Develop Persona. It is over on the left, in the Toolbar.
  4. The advantages of RAW format are really only available prior to "development." Since a RAW file is only data - not actual RGB values that define an image - taking advantage of that fact requires that the data be manipulated, not the image. Editing a RAW file assumes that 3 things happen in a specific order: (1) Open the RAW data in a "developer" and manipulate the data; (2) "Develop" the data into a usable image with RGB values; and (3) further manipulate the image, if needed. Once the data is developed by the RAW processor and an image is created, much of the advantage of RAW format goes away. The advantages of RAW are really only present in the RAW developer stage - that is, prior to assigning RGB values to each of the pixels that will make up the image. Don't ask me to explain the math of it - my understanding only goes so far!

    For me (and I only shoot RAW) I concentrate on adjusting White Balance, Exposure, and some Noise Reduction in my RAW developer, and leave the rest of the edits to the Photo Editor. To me, the advantages of RAW show up most in choosing a white balance, and in adjusting the shadows, midtones, and highlights to avoid clipping, etc. Trying to do that after RAW development is more constrained, and therefore less satisfactory.

    Furthermore, it is probably unreasonable to pack the extra RAW processes into the Photo Persona, since there is a point at which a RAW file must be "developed." This is, de facto, the point at which the RAW data becomes an image. To say that this duplicates your steps implies that you will try to come up with a Final Edit prior to RAW development, and then do this again in the Photo Persona. Why bother? If you've gotten a great image by doing everything in the Develop Persona, then hit the Develop button and immediately save the file. Or, selectively process the data in the Develop Persona (for instance, the White Balance and Exposure) and do the other stuff in the Photo Persona after development.

    If you're coming to Affinity Photo from Photoshop, this explains why RAW files always open first in Adobe Camera RAW (ACR) rather than opening in the main editing portions of Photoshop proper. ACR is the equivalent of Affinity's Develop Persona.

    What's really missing in Affinity Photo's RAW processing is the ability to deal with multiple images at the same time, assigning the same operations to each of several images. This can be done in ACR as well as in Lightroom. This is a feature that is rumoured to be coming in a DAM that someday may be available from Serif. For now, Affinity does RAW processing one image at a time.
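    For a concrete (if very simplified) sense of why the raw data has more headroom, here is a toy example. The numbers and the one-stop exposure pull are made up; real raw development is far more involved.

    ```python
    # Two bright highlight pixels on the sensor, stored as linear 14-bit values
    # (maximum 16383). They are distinct in the raw data...
    raw_a, raw_b = 9000, 12000

    # ...but suppose development's tone curve pushed both to pure white in the
    # finished 8-bit image:
    dev_a, dev_b = 255, 255

    gain = 0.5   # pull exposure down by roughly one stop

    # Adjusting the raw data keeps the two highlights distinct:
    print(int(raw_a * gain), int(raw_b * gain))    # 4500 6000

    # Adjusting the developed image cannot recover anything - both clipped
    # whites just become the same featureless grey:
    print(int(dev_a * gain), int(dev_b * gain))    # 127 127
    ```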
  5. There are SO many ways to do this, it boggles the imagination. Most of them involve separating the walls from the rest of the photo, and for that you need to select the walls. If there is enough contrast between the "white" walls and the rest of the photo, this will make it easier. I'm going to show you just a couple of suggestions that come to mind. For this, I downloaded a photo from Google Images - I believe it's a stock photo, and is probably protected by copyright. For educational purposes (this forum) this should be considered "Fair Use," but I'm sure the photo is otherwise protected.

    Method 1: Use the selection brush to select the man only. Use the "Refine" button to get a cleaner selection, and choose "Apply" to get the man selected. Now, invert the selection (this leaves you with everything BUT the man selected). Copy and Paste. This will copy ONLY the selected area (i.e., the walls). Do what you will to the walls. In this example, I put a Levels adjustment in as a child layer, so that it only affects the walls layer.

    Method 2: Use the selection brush to select the man, refine, and invert (just as in the first example). With the walls selected, place a Curves adjustment into the Layers stack. Because you did this with the walls selected (but the man NOT selected) the Curves adjustment will have a built-in mask that allows it to act only on the walls.

    Method 3: A little bit more complicated. Use the selection brush to select the man, refine, and invert (again in the same manner as Method 1). Now you've got the walls selected. Insert a new pixel layer into the Layers stack. WITH THE SELECTION STILL ACTIVE, choose the paint brush tool and select white. Make the opacity of the brush less than 100% (for this example, I chose 50%) and set the Brush Blend Mode (in the Context Toolbar) to OVERLAY. Now brush over the selection. The brush will whiten up the white areas, but will leave the blacker areas alone.

    There are probably a zillion other ways to do what you're looking for. These are just a few that I thought of.
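    To see why the Overlay brush in Method 3 behaves that way, here is a small sketch using the textbook Overlay formula on tones in [0, 1]. Affinity Photo's exact blending math may differ slightly; the 50% brush opacity is modelled as a simple mix.

    ```python
    def overlay(base, blend):
        # Standard Overlay blend: darkens below mid-grey, lightens above it.
        if base < 0.5:
            return 2 * base * blend
        return 1 - 2 * (1 - base) * (1 - blend)

    def brush_white_overlay(base, opacity=0.5):
        blended = overlay(base, 1.0)               # brushing with pure white
        return base + opacity * (blended - base)   # mix back at 50% opacity

    for tone in (0.05, 0.4, 0.8, 0.95):
        print(tone, "->", round(brush_white_overlay(tone), 3))
    # 0.05 -> 0.075   near-black barely moves
    # 0.4  -> 0.6     midtones brighten
    # 0.8  -> 0.9     light walls get pushed toward white
    # 0.95 -> 0.975   near-white stays near-white
    ```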
  6. I always save the Affinity Photo files. What I discard are the TIFF files. Without getting into too many specifics, I end up with three folders when I’m done with each batch: 1) a folder of Culled RAW files; 2) a folder of Affinity Photo files, with all the edits; and 3) a folder of JPG files, exported from Affinity Photo. The files I trash are the RAW files that I chose not to edit, and the TIFF files that only served as an intermediate, lossless file to go from DxO to AP.
  7. DxO can export to JPG, TIFF, and DNG. I export to 16-bit TIFF. I don’t use JPG, since this is not only “lossy,” but is also limited to 8-bit files. I also don’t bother with DNG, since I use the exported files only as intermediaries; typically I delete the exported TIFFs as soon as I’m done processing in Affinity Photo. After all, I can re-create the TIFF files from DxO at any time, and all they would do is chew up disk space. When I export, I tend to use the ProPhoto color space, since that affords me the widest gamut I could possibly need. I export the TIFFs into their own folder, and then open them one by one in AP for editing. After edits are finished, I delete the TIFF folder entirely. As an aside, I also use Fast Raw Viewer for culling my RAW files, prior to developing them in DxO. I find FRV to be fast and simple, and it’s much less expensive than Photo Mechanic (which many folks think of as the “standard” for culling).
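    The 8-bit vs 16-bit point is easy to see with a little arithmetic (this is general image math, nothing specific to DxO or Affinity Photo):

    ```python
    levels_8bit = 2 ** 8      # 256 tonal steps per channel
    levels_16bit = 2 ** 16    # 65,536 tonal steps per channel

    # Stretching a narrow tonal range - say, the darkest 10% of the image -
    # to full range leaves very few distinct steps in 8-bit, which can band:
    print(int(levels_8bit * 0.10))    # 25 steps
    print(int(levels_16bit * 0.10))   # 6553 steps - plenty of headroom
    ```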
  8. De-noise filters exist in the Photo Persona (as a Destructive process, or as a Non-Destructive process via Live Filter Layers). Also, noise reduction (and addition, if you're so disposed) can be done in the Develop Persona.
  9. Ask 10 members, and you'll probably get at least 15 answers (none of them particularly wrong). That having been said, I am one that always shoots RAW, so the first thing to do is RAW development. You can do this in the Develop persona of AP, or you can do this in third-party software. Personally, I use DxO PhotoLab, since it can batch process multiple photos at once, which is something AP can't do at this point.

    In RAW processing, I tend to take a minimalist approach. That is, I don't try to get a final result out of RAW development. I try to use my RAW developer to its strengths, which I see as setting white balance and exposure correctly. So, I try to get the exposure I like (and try to avoid getting any clipping, especially at the white end), and I try to get an acceptable white balance. After this, I export to TIFF and do the rest of my editing in Affinity Photo. If you use the Develop persona, keeping the RAW development within AP, this would mean setting exposure and white balance in Develop and then hitting the "Develop" button to move over to the Photo persona.

    In practice, this means that the image that comes out of RAW development is usually pretty flat, and needs increased contrast, saturation, and so forth. This is easy in the Photo persona. Getting the color and the tonality nailed is something that your RAW developer does better, though.

    That's at least how I start. Many folks like to get a finished product out of RAW development, and they're not necessarily wrong about it. I don't do it that way, which is not the same as saying that I'm right to do so. I think if you look at many of the video tutorials that James (Ritson) has produced, you'll see he uses the Develop persona the way I do, aiming to get a pretty "flat" result. The rest of it is done in the Photo persona. I'll bet you'll get lots of other opinions, many of them the polar opposite of what I just told you. Play around with it and find the workflow that you like. There are no rules, you know.
  10. This is almost certainly NOT a problem with Affinity Photo, nor with your printer, nor with your edits. Chances are overwhelmingly high that your monitor is too bright. The "proper" answer is to profile your monitor, but sometimes the simplest solution is just to lower the brightness on your monitor (a lot). If you have a way to measure the luminance of your monitor, aim for about 100-120 cd/m². If you lower the brightness of your monitor, your pictures will look darker on screen (but of course they will match what comes out of your printer). If you want lighter prints, a darker monitor will let you get a better approximation of the correct amount of lightness in your edits, without fudging things with a "Brightness" or "Levels" adjustment. Also, remember that your monitor is light-based, and this is additive; printers are ink-based, and this is subtractive. Printers will always tend to be darker than monitors, since that's simply the nature of ink vs light.
  11. smadell

    Macro Merge All Layers

    Great! You can include either of those commands in a macro. Also, Merge Visible has a keyboard shortcut, and you can assign a shortcut (through Preferences) to the Flatten command, if you want to.
  12. How much of a red tint are we talking about here? Could you post some comparison screenshots (e.g., a screenshot from DxO prior to export, then one from Affinity Photo upon opening)? Also, what color profile does DxO assign when you export? And, how is AP set up in Preferences when it comes to opening images - specifically, does AP assign a specific profile as a “working space”? When I look at a DxO edit, then open the exported TIFF in Affinity Photo, the colors are, perhaps, minimally different, but certainly not enough to be objectionable or even, for the most part, noticeable.
  13. mrh2... Are you (i) opening a RAW file in DxO, developing it, and exporting a TIFF; or (ii) opening a TIFF in DxO, editing it, and saving? If you are doing the latter, you probably should forget about either DxO or Affinity Photo, since AP will completely ignore any changes you made (since they reside in the incompatible .dop file). If you are doing the former, the TIFF file you exported should contain all of the work you did in DxO and the .dop file is no longer needed. If you are starting with a TIFF file and want to use DxO as a first step, only then to continue working in Affinity Photo, you will need to (a) open the TIFF in DxO; (b) do your preliminary editing; and (c) use the Export to Disk button (bottom right portion of the DxO window) to create a second TIFF file that contains your edits. This new TIFF file will contain the editing you did in DxO, so that the .dop file can safely be ignored. Then, open this newly created TIFF file in Affinity Photo.
  14. smadell

    Macro Merge All Layers

    Use menu command: Flatten (Document menu). This merges all layers down into a single layer, and is destructive. Or, use menu command: Merge Visible (Layers menu). This merges all visible layers into a new layer, placed on top of the layer stack, or above a selected layer.