James Ritson

Everything posted by James Ritson

  1. Hi Andrej, you can do this in Photo using Snapshots and the Undo Brush. Photo's Snapshots panel is very similar to the History panel in Photoshop. There's a video that covers using Dust & Scratches and the Undo Brush, but it's not as in-depth as the video you linked to: it doesn't use Snapshots, for one. To emulate the workflow you're trying to follow, try this:
     - Go to View>Studio>Snapshots to expose the Snapshots panel.
     - Click Add Snapshot (second camera button) and name it Sharp.
     - Run the Dust & Scratches filter (under Filters>Noise) using appropriate settings. You can do this on a duplicated layer if you wish.
     - Add another snapshot and call it D&S.
     - Click and select the Sharp snapshot, then choose Restore Snapshot (first camera button).
     - Select the Undo Brush from the Tools panel on the left (it's just underneath the Clone Tool) and set your desired blend mode settings (she uses Darken and Lighten, for example).
     - In the Snapshots list, click the little grey camera icon to the left of the D&S snapshot. This will set it as the active snapshot.
     You should now find when you hover over the image that the Undo Brush reveals areas from the D&S snapshot, just like in the video. Now you can just paint away and retouch the image quickly. A couple of pointers:
     - The Undo Brush has a default Hardness of 80%. For smoother retouching you might consider setting this to 0%.
     - You can always swap the layers for retouching, so you could work off the Dust & Scratches layer and instead choose to restore sharp areas from the Sharp snapshot. That might seem quicker and more intuitive depending on the image you're working on.
     Hope that helps! And it might also be time for an update to that tutorial.
  2. Hi, I've tried the RAW file on the iPad version of Photo and you're correct, it looks incredibly bad. It seems to be an issue with how Apple's Core Image RAW decoder is handling the file, since we use that engine exclusively on the iPad version. Switching to it on the desktop version of Photo yields similar results but without the blurriness; the purple and green chromatic noise is still there. The Serif Labs engine on desktop handles the image fine. The image is heavily underexposed; do you have any similar images that exhibit the same issues? What happens with images shot under better lighting conditions? This is possibly because Capture One's DNG export has already demosaiced and applied some processing to the image using its own methods. I'm not sure to what extent Adobe's DNG converter manipulates the image. Not sure what to suggest yet, but if you have any other samples shot with your camera they would be useful to help determine what circumstances cause the issue. I've tried some RAW samples from PhotographyBLOG and I can't reproduce the blurriness, even with shots at ISO 6400 or 12,800. It did surprise me that there was chrominance noise - Apple's RAW engine usually removes that - but there was nothing as bad as the result of the image you've provided. I notice you've mentioned "certain raw files" at the beginning of your post; is there a correlation here? Are they mostly underexposed? If so, clearly there's something off about the way Apple's RAW engine is handling them.
  3. Hi Daryll, Creating an empty pixel layer and adding content to it is far more economical (in terms of file size) than duplicating the entire image. There's no right or wrong method; just do whichever you prefer. I always use the new pixel layer approach as working in 16-bit gives you a base file size of around 120MB for a 20 megapixel image - if you're duplicating entire image layers all the time, you can see how that can quickly add up! Cropping is non-destructive anyway - those areas you've cropped away are simply hidden. Let's say you've cropped your image and want some of the original image back. At any point during your editing, you can either: a) Select the Crop Tool, extend the crop boundaries outside the current area, then click Apply. b) Go to Document>Unclip Canvas; this should, as the name suggests, "unclip" the canvas to the entire image rather than your chosen area. If you do want to crop destructively, right-click the Background pixel layer and choose Rasterise (this will effectively discard the areas hidden by the cropping). Hope that helps for now! Regarding the sunset question, I'll investigate further as it may form the basis for a suitable tutorial video...
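The ~120MB figure above is easy to verify: an uncompressed 16-bit RGB image's base pixel-data size is width × height × channels × 2 bytes. A quick sketch of the arithmetic (illustration only; Photo's real in-memory footprint will differ once alpha, history and caches are counted):

```python
def base_size_mb(megapixels: float, channels: int = 3, bits_per_channel: int = 16) -> float:
    """Uncompressed pixel-data size in (decimal) megabytes."""
    bytes_per_channel = bits_per_channel // 8
    return megapixels * 1e6 * channels * bytes_per_channel / 1e6

# A 20 megapixel image in 16-bit RGB:
print(base_size_mb(20))        # 120.0 MB -- matches the ~120MB figure above

# Duplicating the whole image as a second pixel layer doubles that,
# which is why an empty pixel layer is so much more economical.
print(2 * base_size_mb(20))    # 240.0 MB
```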
  4. Hi Jens, I can address a couple of these points quickly:
     - I appreciate it's not a solution, but there are a plethora of video tutorials that will show you the interface in detail. You can check them out on YouTube here: https://www.youtube.com/playlist?list=PLjZ7Y0kROWitLXsh6z4Z3qBYCS6xoIXHN and in particular there's a Discover video here which runs through the interface: https://youtu.be/_wwgo9Z9yYQ
     - If you open a JPEG or PNG straight into Photo, you'll be able to "write back" to them instantly based on edits you've made. If you've made purely destructive edits, using Save or Ctrl+S will write back to the file instantly. If you've added non-destructive adjustments, filters, etc., then you'll get a prompt to either flatten and overwrite the initial file, or to save as an .afphoto document to retain the non-destructive edits. You can also do this with PSD documents, but you need to enable write-back through Preferences. I appreciate that having common file formats on a typical Save dialog would be useful; for now, though, you could quickly use the Export shortcut (Shift+Ctrl+Alt+S) and it will remember the last image format and preset you were using.
     - Photo has live filter layers; they're accessed from the Layer menu. They behave like adjustment layers, so you can indeed tweak filter settings and edit the filter's mask whenever you want. Bear in mind that although you can add separate mask layers to adjustments and filters, there's no need to: just click on the layer's thumbnail - that's the mask. So you can invert it, paint on it using white/black to show/hide parts of it, Alt+click to isolate it, etc.
     Hope that helps, that's all I've got time for this evening! (Got to go and play with the dogs!)
  5. [Edit] Beaten! One solution is to switch the RAW engine over to Apple Core Image, which obeys crop instructions and will discard the edge pixels. With a RAW file open, you can click the suit/tuxedo icon to open the Assistant. On this dialog, you can then change RAW Engine from Serif Labs to Apple (Core Image RAW). The processing will differ slightly - Core Image RAW performs some automatic noise reduction and colours may differ slightly from Serif Labs, but your pixel resolution will be consistent with other software. Hope that helps! I've attached a GUI shot below:
  6. Hey, just thought I'd attach a result as well using Affinity Photo - not sure if the aim here was just to try and claw back the highlights but I've boosted the shadow detail too. There's not much precision there, as expected, but taking the tone curve off, sliding highlights all the way down and using a custom tone curve to boost the shadows/mid-tones produces a flat result that you can then do further work with. I've attached the flat version straight from Develop and then the edited version (a couple of adjustments, local contrast enhancement, etc). I've found that changing your mindset enables you to get the most out of Photo's RAW editing - I use it to get a flat result with no clipping, perhaps add some light noise reduction, then build the tones back up in the Photo persona where you have the entire toolset (and to be honest, that's where Photo's strengths lie). This could be because the iPad version uses Apple's Core Image RAW exclusively - on desktop, SerifLabs is used by default, but you can switch over to Core Image if you prefer the results. [Edit] Are you running the latest TestFlight beta by chance? The new shadows and highlights functionality is in that version too.
  7. Hey Seamaster, I've just tried to emulate your approach using Lightroom CC Classic by creating an HDR Merge, choosing not to tone in Camera Raw, then exporting as original (which does indeed export it as a DNG). In Affinity Photo, you can change the RAW development from 16-bit to 32-bit unbounded by using the assistant (the little suit icon on the top toolbar), but this change only applies to images opened afterwards. This isn't a problem, simply:
     - Open Affinity Photo with no document open
     - Click the suit (Assistant) icon
     - Click Develop Assistant at the bottom of the dialog
     - Set RAW output format to RGB (32bit HDR)
     - Open your DNG file
     The 32-bit output option will avoid clipping or rounding the pixel values in the DNG file. I've attached a screenshot to show my result and the visible highlight detail from scrubbing the 32-bit exposure preview slider. From a technical standpoint, the DNG exported from Lightroom will be in half-float, but you'll temporarily be importing and working in 32-bit within Photo. On the OpenEXR export dialog, you can always click the More option and set the Image pixels format to 16-bit (HALF) to save on file size. Is there a feature that's tying you to doing the merge in Lightroom mobile, e.g. does it stitch HDR 360x180 panoramas? (You mentioned using HDRIs.) If it's a single scene, I just wonder whether it would be easier to do the HDR merge in Photo (uncheck Tone Map on the dialog) and export straight to OpenEXR? Hope that helps!
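On the half-float point: both HALF and 32-bit float store HDR values above 1.0 without clipping; the trade-off is precision, not range. A NumPy sketch (NumPy is used purely for illustration here, it isn't anything Photo uses):

```python
import numpy as np

# Half-float (what "16-bit HALF" OpenEXR stores) has ~11 bits of effective
# mantissa precision; 32-bit float has ~24. Bright HDR values survive in
# both, but half-float quantises them more coarsely.
hdr_value = np.float32(123.4567)
as_half = np.float16(hdr_value)

print(float(as_half))               # 123.4375 -- nearest representable half
print(float(hdr_value - as_half))   # small quantisation error, not clipping

# Range is not the issue: values far beyond 1.0 remain finite in half-float.
assert np.isfinite(np.float16(1000.0))
```

So exporting as HALF saves half the file size at the cost of a little highlight precision, which is usually invisible for HDRI handoff work.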
  8. Hi travel bug, Photo ships with a very small set of profiles, the main one being ROMM RGB (for wide colour space editing in RAW). The other profiles you're seeing on your iMac were likely installed with other software. For example, Adobe Photoshop and Lightroom install ProPhoto RGB among several other profiles. Do you have software installed on your iMac that isn't on your MacBook? The reason Affinity Photo can locate these extra profiles is probably because they're installed to the top-level ColorSync directory on your Mac's hard drive. You can get to it by using Finder's Go to Folder feature and pasting it in (/Library/ColorSync/Profiles/). Sometimes, rather than install the actual profiles to that directory, software will create a symlink that redirects to the colour profiles in their application directory (I know Photoshop does this, for example). Either way, you can always copy these profiles across once you know where they are located if you need them. Alternatively, in a Finder search box, type .icc and then choose This Mac to locate all the profiles, so you can track down any that may not be in the ColorSync directory. Hope that helps!
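For tracking down stray profiles, a small script can do the same job as the Finder search. A sketch assuming the standard macOS ColorSync locations (the user-level path is an assumption; other software may install profiles elsewhere entirely, which is what the Finder ".icc" search catches):

```python
from pathlib import Path

# Directories where colour profiles typically live on macOS.
search_roots = [
    Path("/Library/ColorSync/Profiles"),           # system-wide profiles
    Path.home() / "Library/ColorSync/Profiles",    # assumed per-user location
]

profiles = []
for root in search_roots:
    if root.exists():
        # Walk the tree, picking up .icc and .icm profile files.
        profiles += [p for p in root.rglob("*") if p.suffix.lower() in (".icc", ".icm")]

for p in sorted(profiles):
    print(p)
```

Once listed, the profiles can simply be copied between machines as the post describes.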
  9. Hi, the issue is likely related to your use of the 32-bit Preview panel. It's designed for working with 3D renders/HDR images that have a large tonal range - the idea being you can work on the images, previewing different tonal ranges as you go, then export the result back to OpenEXR for handoff to other software or another artist. Because you are exporting to JPEG, it sounds like you are completing final editing in Photo. Make sure your 32-bit Preview panel options are:
     - Exposure: 0
     - Gamma: 1
     - Display Transform: ICC Display Transform
     When you export to JPEG, the results should be similar if not exact (converting 32-bit to 8-bit may result in some differences depending on the contents). As long as you are using the display transform colour management, you should be fine - do not use Unmanaged (linear light). I will just cover this in case there is confusion: the 32-bit Preview panel does not make tonal changes to your image; it is simply a non-destructive way of previewing different tonal areas. If you want to make tonal modifications, leave 32-bit Preview alone and use adjustment layers, brush work, filters, etc. Hope that helps!
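To see why Exposure 0 and Gamma 1 are the safe settings, here is a sketch of a typical exposure/gamma viewing transform (an assumed formula for illustration; Photo's exact internals aren't documented): with those defaults the transform is an identity, so the preview matches the data the JPEG export is built from.

```python
def preview(value: float, exposure: float = 0.0, gamma: float = 1.0) -> float:
    """Typical exposure/gamma viewing transform (assumed, not Photo's
    documented maths). Exposure is in stops; gamma is a power curve."""
    v = value * (2.0 ** exposure)        # each stop doubles the linear value
    return max(v, 0.0) ** (1.0 / gamma)  # gamma bends the tone curve only

# Defaults (Exposure 0, Gamma 1) leave every pixel untouched:
assert preview(0.5) == 0.5

# +1 stop of preview exposure doubles the linear value -- but only in the
# preview; the underlying 32-bit data never changes.
assert preview(0.25, exposure=1.0) == 0.5
```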
  10. Hi, sorry for not replying sooner - that behaviour is expected since the adjustments behave differently to account for the (potentially) huge tonal range in 32-bit float. An image developed from RAW still contains a relatively small amount of tonal range compared to what 32-bit can hold, so the adjustments will seem very sensitive. Things even out a bit if you're working with an HDR image or 3D render that has a large dynamic range. For all scenarios though, that's why you have Min and Max input options on the Curves dialog - these allow you to restrict the adjustment to particular areas of the tonal range (e.g. 0.2 to 0.8), and as a result the spline graph adjustments will be less sensitive. I might just ask if there's a reason why you're working in 32-bit? (Your image doesn't look like an HDR merge). For most single-exposure imagery I'd argue the benefits of working in 32-bit as opposed to 16-bit are negligible, with the exception of some edge cases like astrophotography... hope that helps!
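To illustrate the Min/Max input idea above, here's a sketch (a hypothetical helper, not Photo's implementation): restricting a curve to, say, the 0.2 to 0.8 input range means the spline only remaps values inside that window, leaving the rest of the potentially huge 32-bit range untouched, so the controls feel far less sensitive.

```python
def restricted_curve(value: float, curve, lo: float = 0.2, hi: float = 0.8) -> float:
    """Apply `curve` (a 0..1 -> 0..1 function) only within [lo, hi]."""
    if value <= lo or value >= hi:
        return value                       # outside the window: untouched
    t = (value - lo) / (hi - lo)           # normalise into 0..1
    return lo + curve(t) * (hi - lo)       # remap back into the window

boost = lambda t: t ** 0.5                 # a simple mid-tone lift

assert restricted_curve(0.1, boost) == 0.1   # below Min input: unchanged
assert restricted_curve(5.0, boost) == 5.0   # bright HDR value above Max: unchanged
assert restricted_curve(0.5, boost) > 0.5    # inside the window: lifted
```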
  11. It's a term that refers to masking the background or foreground for compositing. It's more commonplace when dealing with video (e.g. green screen keying, you have "matte spill" and other options related to the term) but also applies to what we're doing with selection refinement and masking in Photo. If you're after some light reading there's a Wikipedia article on it: https://en.wikipedia.org/wiki/Matte_(filmmaking) - hope that helps!
  12. Thanks for all the feedback so far. In response to the architecture workflow video posted above, I've produced a portrait retouching workflow video which you can see here: Portrait Retouching Workflow - YouTube / Vimeo. It's 25 minutes long and covers a variety of techniques including:
     - Initial RAW development
     - Working in a wider colour space
     - Frequency Separation
     - Retouching tools including Blemish Removal, Patch Tool, Healing Brush, Clone Brush and Inpainting Brush
     - Selection Refinement
     - Quick Mask mode
     - Mask layers and tweaking matte spill with Curves/Levels
     - Brush work to enhance tones
     - Masking on adjustments/live filters
     - Final sharpening
     - Export and conversion to sRGB colour space
     Hope you find it useful! I would hazard a guess that these more workflow-focused videos are quite useful, so I hope to produce more of them in the not-too-distant future. Thanks again.
  13. Affinity Online Help. Hello all, we're happy to be able to offer you an online version of the in-app help! Access Designer, Photo & Publisher Help here: https://affinity.help. Here are some of the additional features we're able to implement as a result of having proper browser support:
     - Dynamic language switching: the help will determine your language and (if it's available) serve you a localised copy of the help. If you prefer to read in another language, however, you'll find a combo box at the bottom left which will enable you to change languages, and stay on the page you're currently reading.
     - Print: sounds simple, but with full browser support we can now implement printing of the topic pages. The print icon in the bottom left will give you a nicely formatted printout of the current topic.
     - Share: clicking the clipboard icon will copy the current topic's URL to your clipboard, which means you can easily point other people towards topics that may help them.
     - Responsive: the help was responsive anyway, including off-canvas menu functionality so you could collapse the window and still read a topic, but this is taken further in this version of the help. The help is formatted nicely and usable even on a 4" iPhone SE screen.
     - Search: we've implemented our own bespoke search for the online help which is fast and accurate. Access it via the tab system along the top left.
     - Favourites: you can add topics to your favourites list to easily access them during future browser sessions. Simply click the + (plus) icon next to the "Favourites" tab to add the current topic.
     With this online version you'll be able to print out topics and view them on your tablets/phones, which are two of the most common requests when it comes to help feedback. As always, if you have any feedback or find any issues with this online version, please let us know! Hope you find it useful.
  14. The stickied thread at the top of this forum (Official Affinity Photo tutorials) has a list of project and workflow videos... they're at the bottom under the Projects and Windows Workflow headings and they all cover a variety of techniques for editing from start to finish - there are two portrait editing/retouching videos (although a newer one is due that is perhaps more extensive). Hope that helps!
  15. It sounds like Irfanview is showing you the embedded JPEG preview within the RAW file. As Merde said, Photo's RAW processing differs entirely from the processing that happens in-camera for the JPEG, so it's natural for the result to look different. The point of RAW is to have more flexibility at the editing stage - rather than have it look the same, wouldn't you prefer to try and bring back some of the shadow and highlight detail? If you want them to match, it looks like the RAW version needs desaturating and perhaps a little reduction in contrast. Some options only apply to RAW files, as Develop works in a completely non-destructive format where assistant options like the Tone Curve are applicable. Entering the Develop persona from an already opened file (e.g. JPEG) will limit your options because it's no longer working in that same format (your image is already processed). Hope that helps!
  16. Hey all, I've slowly been posting new videos over the last week, so here are three new ones for you! Using Adjustment Layers on Masks - YouTube / Vimeo HSL Tonal Separation - YouTube / Vimeo Nighttime Architecture Workflow - YouTube / Vimeo The Nighttime Architecture video is a standout; it's a 17 minute complete walkthrough of an image edit from start to finish - I took the photo during a recent trip to Bern, Switzerland and got a few interesting shots, but this one came to life with a bit of editing and careful treatment of colour. Let me know if you find these workflow-focused videos useful!
  17. Hey Jayvin, yes, you can achieve what Sean is doing very easily within Photo, it's all done through the HSL Adjustment. You can target individual colour ranges and tweak hue shift, saturation and luminosity. There are many other ways you can isolate and edit colour in Photo besides this approach, but you can certainly do exactly what you're asking. Hope that helps!
  18. Unfortunately that's not what the tutorials are supposed to look like... Are you watching on YouTube or Vimeo? And which browser are you using? It sounds like a decoding issue, but that kaleidoscope effect you describe is usually only prevalent for a few seconds at most. If you load up a video, click the little cog icon on the bottom right and choose a lower quality stream like 360p, does the issue persist? Hope to hear back from you!
  19. Hi Michael, are you using the File>Open dialog within Affinity Photo? We rely on thumbnail support from Windows in this case, which would explain why you can't see thumbnails for the RAW files; Windows simply doesn't have updated RAW format support. The best recommendation would be to use one of the apps mentioned above, such as Faststone, and "pass" the RAW file to Photo for editing. You can usually do this by setting up Photo as an external editor - some apps may automatically detect Photo, others you would have to add manually. FastRawViewer, for example, allows you to set up Photo as an external editor and bind it to a shortcut key, so you can simply browse through, select the image you want, and hit "R" (or whatever you've bound it to) to launch Photo and begin editing it. Hope that helps.
  20. Hi Tutster, I've created and edited 40-50 image astrophotography stacks without issue (using preprocessed 16-bit TIFFs). Whether you're stacking for noise reduction (Mean/Median) or for a star trail effect (Maximum), I would recommend you do a Merge Visible (from the Layer menu), which will create a single pixel layer from your stacked layers, then hide the stacked layer group. It will make editing much smoother, as Photo won't have to re-render your stack every time you change zoom level or add a new layer. The one thing you should be mindful of is memory - the machine I'm using has 24GB RAM so I can comfortably edit large stacks (especially in 16-bit where the memory requirement is much higher). If your document requires more memory than you have installed, you'll find it starts eating into the swap space on your hard drive, which will slow things down considerably. You can check memory usage via the Info panel (go to View>Studio>Info to toggle it) - look at Memory pressure. If it goes near or above 100% you're in trouble (and it means you'll probably need more RAM if you intend to edit documents that large on a regular basis).
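As a rough back-of-envelope for the memory point above, here's a sketch (counting uncompressed pixel data only and assuming RGBA; Photo's real usage will be higher once history, caches and render buffers are added) estimating what a stack of 16-bit layers costs before you Merge Visible:

```python
def stack_memory_gb(layers: int, megapixels: float,
                    channels: int = 4, bits_per_channel: int = 16) -> float:
    """Approximate uncompressed pixel memory for a layer stack, in GB.
    Assumes RGBA pixel layers; actual application usage will be higher."""
    bytes_per_px = channels * bits_per_channel // 8
    return layers * megapixels * 1e6 * bytes_per_px / 1e9

# Fifty 24 megapixel frames in 16-bit RGBA: ~9.6 GB of raw pixel data alone,
# which is why a 24GB machine copes but smaller ones hit swap -- and why
# merging the stack down to a single layer helps so much.
print(round(stack_memory_gb(50, 24), 1))
```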
  21. Hi Grazer, you can indeed do this in Photo. There are several approaches, one involving the Channels panel and blend modes, but here's something that is closer to your Calculations approach. In this example we'll add the red and blue channels together, invert the blue channel and add an offset - it seems this is used for alpha selections of hair?
     - Duplicate the image layer you want to create an alpha mask from. Uncheck/hide the original (so you can preview the alpha effect).
     - Go to Filter > Apply Image.
     - Click Use Current Layer As Source, then check Equations to enable channel equations.
     - In DA (Destination Alpha), you can type this to add the red and blue channels together: SR+SB
     - Now we need to invert the blue channel, so your equation becomes: SR+(1-SB)
     - Finally, to add an offset to the overall result, we can type: SR+(1-SB)-0.2
     - Click Apply and you'll now have a pixel layer with a modified alpha channel.
     At this point, you can either work on the pixel layer or go to Layer > Rasterise to Mask to convert it to a mask layer (which you can then drag inside other layers to mask them). Hope that's what you're after and that you find it useful. In theory, you should be able to achieve most if not all of the Calculations behaviour using the channel equations in Photo's Apply Image dialog. Bear in mind that values in channel equations go from 0 to 1 and are in float. You can use expressions like lerp(SR, SB, 0.5) to linearly interpolate between two channels, and as seen above use bracketed notation to isolate expressions.
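To see what an equation like SR+(1-SB)-0.2 actually does to pixel values, here's a NumPy sketch (for illustration only; Photo evaluates its equations internally, and the clamp to the 0..1 output range is my assumption):

```python
import numpy as np

def destination_alpha(sr: np.ndarray, sb: np.ndarray, offset: float = 0.2) -> np.ndarray:
    """Emulate the DA equation SR+(1-SB)-offset on float channels in 0..1."""
    da = sr + (1.0 - sb) - offset
    return np.clip(da, 0.0, 1.0)   # assumed clamp to the valid channel range

def lerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Equivalent of the lerp(SR, SB, 0.5) expression mentioned above."""
    return a + (b - a) * t

sr = np.array([0.9, 0.1])   # red channel: bright hair strand vs. background
sb = np.array([0.2, 0.9])   # blue channel: low on the hair, high on background

print(destination_alpha(sr, sb))   # hair pixel stays opaque, background drops out
print(lerp(sr, sb, 0.5))           # halfway blend of the two channels
```

Inverting the blue channel means areas that are strongly blue (often sky behind hair) subtract from the alpha, and the -0.2 offset pushes weak, noisy alpha values fully transparent.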
  22. Hi Mike, thanks for your comments, I do really appreciate your feedback as it's something we are aware of (and have been for some time now). In an ideal situation with more time, we'd have already been able to provide a more comprehensive, structured beginners' course. I have produced an initial set of beginners' videos that take you through the absolute basics (opening, saving, adjustments, filters, exporting, etc.), but from your comments it sounds like there's nothing for the skill level past that point, where it focuses more on A-to-B workflows. I know Simon's (Drippy Cat) video courses have received some great praise and he offers some great structured learning, which is what we struggle with as the in-house videos tend to be a mixture of new features, specific techniques, genres, etc. What did you find was the issue with his videos? I'm sure he would appreciate the feedback, as he's pretty active and is constantly working on new material or revising it. The problem we have with our video structure at the moment is that there are plenty of really useful techniques, many of which I'd say could be suitable for beginners to digest, but they're almost "hidden" in videos that cover a particular feature or workflow aid. This is something that we have plans to address, but can't really provide a timeline for. Ideally I'd like to knuckle down and produce a few more of what we call "Projects" videos, where it's a workflow demonstration that covers the start to finish of an image edit. I'm a keen landscape photographer too, so I have plenty of ideas to draw from for this subject! Sorry I can't offer more in the way of a resolution at the moment, but we are aware of the gap in the learning material we provide and it is something we're hoping to address. In the meantime, if you had any specific areas you were struggling with, I could try and point you towards videos that would cover those areas (as I mentioned previously, it's not always obvious by the video title). Do let me know, perhaps in this thread, and I'll do my best to help. Thanks!
  23. Hey Darragh, I think the option you're referring to is Metal compute acceleration? That's an extra option for integrated graphics on modern processors - Intel Iris should be supported. According to the integrated graphics page on Apple's site (https://support.apple.com/en-gb/HT204349), your iMac is possibly supported, but you'd need to switch over to integrated graphics. You should find a checkbox on that same Performance page called "Use only integrated GPU" - if you check that and restart the app, you may find you can now enable Metal compute. As far as I'm aware, though, you do need to be on High Sierra. Otherwise, whether you're using a discrete GPU or integrated graphics, hardware acceleration in general is already supported and defaults to OpenGL - you don't need High Sierra for this. Metal compute is just additional hardware acceleration for certain operations like filters, 360 projection, etc. You could also try the new Metal renderer, especially if you're still on Sierra (since it currently behaves much better on that OS). Hope that helps!
  24. Hey everyone, to coincide with the release of version 1.6, here are some new tutorials! It's a mix of new 1.6 features, revisions to old videos, and general tutorials covering functionality and techniques. Hope you find them useful!
     - Quick Toggling Panels - YouTube / Vimeo
     - Quick Inpainting Crooked Horizons - YouTube / Vimeo
     - 360 Live Editing - YouTube / Vimeo
     - 360 Roll Correction - YouTube / Vimeo
     - Light UI - YouTube / Vimeo
     - Brush Stabilisation - YouTube / Vimeo
     - 3D Relighting with Normal Map Passes - YouTube / Vimeo
     - Uplift Epic Skies (1.6 Bonus Content) - YouTube / Vimeo
  25. Sorry, I completely missed the fact that you had attached the file. I've seen this happen before, and the issue is with the actual merged HDR image. Certain manipulations, especially convolutions like Unsharp Mask, can exacerbate the issue and make it more noticeable. You can see the issue if you activate the 32-bit preview panel and scrub the Exposure slider. I've attached a screenshot to show you what I mean. At an Exposure value of 20 the canvas should ideally be pure white, but instead you have a large number of multicoloured pixels. If you zoom in, you'll start to see blotches of red, blue and green pixels. Without any adjustments, you may not have come across any issues, but the Unsharp Mask especially is further manipulating these pixels, resulting in an issue when the filter effect is rasterised (e.g. when you flatten the document). The loss of contrast when converting to 8-bit or 16-bit is expected: adjustments, blend modes, filters - they all behave differently in 32-bit float, so the parameters you have used will not be consistent when they are applied to a 16-bit or 8-bit image. So, on to why this has happened... having looked at your workflow, with the conversion to DNG and processing in darktable, I wonder if it would be possible for you to try an HDR merge in Photo using just the source RAW files? (Or if you would be willing to upload them, we can try.) It would be interesting to see if you get the same issue. Regardless, thank you for reporting the issue!
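A quick way to see why some pixels refuse to go white at +20 stops: any pixel with a zero, negative or NaN channel stays dark or tinted no matter how far exposure is pushed. A NumPy sketch with hypothetical pixel data, just to illustrate the diagnosis:

```python
import numpy as np

# Hypothetical 32-bit float pixels from an HDR merge: one healthy pixel,
# one with a zeroed blue channel, one with a NaN from a bad merge/filter.
pixels = np.array([
    [0.8, 0.7, 0.6],
    [0.8, 0.7, 0.0],
    [0.8, np.nan, 0.6],
], dtype=np.float32)

boosted = pixels * 2.0 ** 20   # simulate scrubbing Exposure up to +20 stops

# A pixel only reads as "pure white" if every channel is large and finite;
# zero/NaN channels survive any exposure boost and appear as coloured blotches.
reads_white = np.all(np.isfinite(boosted) & (boosted > 1.0), axis=1)
print(reads_white)   # only the first (healthy) pixel passes
```

This is also why a convolution like Unsharp Mask makes things worse: it mixes each damaged pixel with its neighbours, spreading the NaN/zero values outward before the result is rasterised.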