James Ritson

Group: Moderators
Content count: 409
Followers: 3
Rank: Advanced Member
Website URL: http://www.jamesritson.co.uk
Gender: Not Telling
Profile views: 307
  1. Hi vibuboneru, you can do all sorts of work with channels in Affinity Photo. The video you've linked to covers packing greyscale information into separate RGB channels, and you can certainly do this in Photo: see "Channel Packing". Additionally, you can isolate individual channels and perform filter/tool operations on them: see "Editing Single Channels". For a general overview, there's also "Channels (Affinity Photo iPad)" - it's for the iPad version, but the functionality is exactly the same as on desktop. Hope that helps!
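As a conceptual aside, channel packing just means storing three independent greyscale maps in the red, green and blue channels of a single image. Here's a minimal NumPy/Pillow sketch of the idea (illustrative only - the file names are hypothetical examples from a game texture workflow, and in Photo you'd do this through the Channels panel rather than code):

```python
import numpy as np
from PIL import Image

# Three independent greyscale maps (hypothetical file names);
# this sketch assumes all three share the same pixel dimensions.
roughness = np.asarray(Image.open("roughness.png").convert("L"))
metallic  = np.asarray(Image.open("metallic.png").convert("L"))
ao        = np.asarray(Image.open("ao.png").convert("L"))

# Each greyscale map becomes one channel of the packed RGB result.
packed = np.dstack([roughness, metallic, ao])
Image.fromarray(packed, mode="RGB").save("packed_rma.png")
```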
  2. Hi LKain, if it's not an HDR merge, you must have either converted the document to 32-bit yourself or opened a 32-bit format like OpenEXR/Radiance HDR. The problem here is that you should be using ICC transform colour management: 8-bit and 16-bit colour modes are already managed with ICC display transforms, whereas 32-bit also allows you to work in an unmanaged linear view (which, for your purposes, you should not be doing). If you want your image to look correct when exported and viewed elsewhere, you need to use the ICC display transform. I assure you that your image is not losing any detail; it may just seem hard to work with what is initially a brighter, "pastier" image. Try setting your view to ICC and using adjustments like Curves to darken the image and increase contrast. Alternatively, if you are familiar with LUTs, try something like an "sRGB to Linear" LUT, which should help - I have attached one to this post for you to try. For an HDR merge, Photo is set to an ICC transform view by default; do not change it to Unmanaged, because that won't represent what you see when you export and view the image externally. I hope that helps/makes sense. sRGB_to_linear.spi1d
  3. Hey LKain, the huge difference is likely because you're editing a 32-bit document in Unmanaged (Nessuna trasformazione, i.e. no display transform). That means you're seeing the document in linear light. When you export (presumably to JPEG or another format?), the image, like any other, goes through a display transform, which accounts for why it looks so different. If you're editing for general sharing and export, you need to set your 32-bit preview to "ICC Transform" rather than "Unmanaged" (or use an OpenColorIO transform if you have one configured). Once you set it to "ICC Transform", you will find the document in Photo looks exactly like your exported image - see the sketch below. So if you wanted your image to look like it does when unmanaged, you would also have to use a LUT or additional adjustments to darken it. It would help to know your workflow - is the image from an HDR merge? If not, was it your intention to edit in 32-bit to begin with?
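As a follow-up to the two posts above: to make the linear-versus-managed difference concrete, here's a minimal sketch of the standard sRGB transfer functions (per IEC 61966-2-1) that a display transform applies. This is purely illustrative - it isn't Photo's actual implementation:

```python
def linear_to_srgb(c: float) -> float:
    """Encode linear light in [0, 1] to display-referred sRGB (IEC 61966-2-1)."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def srgb_to_linear(s: float) -> float:
    """Inverse transform - roughly what an "sRGB to Linear" LUT applies."""
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

# The managed view lifts a linear 0.18 mid-grey to ~0.46 before it reaches
# the display - which is why it looks brighter and "pastier" than Unmanaged.
print(round(linear_to_srgb(0.18), 3))  # 0.461
```

Running the attached LUT over the image data applies the inverse curve, which is why it restores the darker look of the unmanaged view.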
  4. Hi kamone, you can use Nearest Neighbour resampling, which should give you the sharp-edged result you're after (XBR and HQX, whilst useful, won't give you the result you want here). Nearest Neighbour is available under normal document resizing: in the Resize Document dialog, look for the Sampler option and set it to Nearest. Hope that helps! Also worth noting: XBR and HQX only work as intended with proper 1:1 pixel art created from scratch - that is, where every pixel is unique and represents itself. Images that have been downscaled, or that are "emulated" pixel art, end up looking odd and "painterly" when upscaled with these filters.
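For anyone curious why Nearest Neighbour keeps pixel art crisp: it simply copies each source pixel into an N x N block, with no blending between neighbours. A minimal sketch of the idea (integer scale factors only; NumPy is used here as an assumed stand-in for what any nearest-neighbour resampler does):

```python
import numpy as np

def upscale_nearest(img: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour upscale: each source pixel becomes a factor x factor
    block, so hard pixel edges are preserved exactly (no blending)."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# A 2x2 checker of "pixel art" scaled 4x keeps its hard edges:
art = np.array([[0, 255], [255, 0]], dtype=np.uint8)
print(upscale_nearest(art, 4).shape)  # (8, 8)
```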
  5. Hey MikWar, the differences you're noticing are fairly expected - Apple Photos applies a slightly different tone curve and adds some sharpening which you can't control. The SerifLabs RAW engine applies its own tone curve (which you can disable) and some default noise reduction (which you can tweak); it doesn't add any additional sharpening - that's left up to the user. If you move across to the Details panel and play with the Radius and Amount sliders under Detail Refinement, you should be able to make your image "pop" a little more. Alternatively, if you'd prefer the result Apple's RAW engine gives you as a starting point, you can change the RAW engine through the Develop Assistant: click the "tuxedo" icon on the top toolbar and change the engine from SerifLabs to Apple Core Image RAW. Any RAW files you decode in the future will then use the Apple engine, and you'll get much closer to the results you see in Photos. Hope that helps! If you're interested, here's some additional video material that covers accessing the Develop Assistant and gives some insight into what to expect from the SerifLabs engine: "Raw Development Quality", "Maximising Raw Latitude" and "Salvaging Underexposed Images".
  6. Hello all, just a bump to let you know the first post has been updated with three new videos: Changing Eye Colour, Panoramas and Hiding Tricky Skies. The new videos are of course available in-app too!
  7. Just to add to Lee's post, have you tried adding the .ocio extension once the files are actually on your iCloud Drive? The Photo iPad app actually registers ".ocio" as an extension with iCloud Drive, so you should be able to dump the folder onto your cloud storage then rename and convert it to a package. Apologies as I only realised this after doing the video (I only added the extension once the folders were in the iCloud folder). I'll have to look at revising the video or adding an on-screen note.
  8. I can definitely confirm (been using that camera since January!), but I suspect support was just edged in not long before launch, so I'll add it to the list.
  9. Hi, the E-M1 mk2 has been supported since Photo 1.5's release in December 2016 (including its High Res Shot RAW format) - are you having trouble opening the .orf files?
  10. Hey Jacques, thanks for your feedback. Grouping of videos (in-app) into categories was initially planned but didn't make it for release - we hope to revisit that functionality soon. The forum thread, however, is something that can be addressed sooner: the more videos are added, the more difficult it becomes to read, so it will be revised shortly. You should hopefully find the newer videos easier to follow - I've done away with the camera view (which was at an angle and proved difficult to see) and instead used graphics to highlight specific icons, tools, gestures and so on. An example is the basic workflow video here: https://youtu.be/P17ai-zxG2I Hope that helps!
  11. Hi Dave, I'll try and give a quick breakdown, though it has prompted me to think that the blend modes topic in the in-app help needs fleshing out a bit... I'm not sure I can suggest practical use cases for them all yet.
     Reflect darkens the image using values from the composite layer. It's great for selectively enhancing parts of an image like reflections or areas of light.
     Glow performs essentially the same calculation as Reflect but flips the layer order, so it brightens the image using values from the composite layer. Great for widening the radius of artificial lighting and making it more intense.
     Average is exactly that: a mathematical average of the image and the composite layer. In most cases it's the same as setting the composite layer's opacity to 50%.
     Erase subtracts from the image using the composite layer as a "mask" - its strength is controlled through Opacity rather than the composite layer's actual contents.
     Negation is related to Difference (which subtracts pixel values between the composite and image layers) but works additively: pixel values from the layers are added together, folding back down once they pass white, resulting in brightening rather than darkening.
     Contrast Negate I'm not sure about exactly, but it seems to invert the pixel values of the composite layer based on the image layer's content. Useful for certain designs - I've attached a really rough example to this post.
     Not sure about Divide's no-show; I'll have to ask one of the developers at some point. Hope that helps! (See the sketch of the standard formulas below.)
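For reference, here are the formulas these modes are commonly documented with, values normalised to [0, 1]. I can't guarantee Affinity's implementation matches these exactly, so treat this as a sketch rather than a specification:

```python
import numpy as np

# 'base' is the underlying image, 'blend' is the composite layer on top.
# Erase is omitted - it manipulates alpha rather than colour values.

def reflect(base, blend):
    # Square of the base over the inverted blend; epsilon avoids divide-by-zero.
    return np.clip(base ** 2 / np.maximum(1.0 - blend, 1e-6), 0.0, 1.0)

def glow(base, blend):
    # Reflect with the layer order flipped.
    return reflect(blend, base)

def average(base, blend):
    return (base + blend) / 2.0

def difference(base, blend):
    return np.abs(base - blend)

def negation(base, blend):
    # Additive up to white, then folds back down rather than clipping.
    return 1.0 - np.abs(1.0 - base - blend)
```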
  12. Thanks for your feedback so far, Oval. I've removed the temporary (Google) translations from the clipboard icon for now - they'll be included when the help is translated for 1.6. As we're localising into at least nine languages (possibly more for 1.6), we need to make sure each translation of the phrase is correct (German, for example, has already proven to need different words for the adjective and the noun). All feedback is being considered as the help is revised for 1.6, so if you have more suggestions, please do let us know!
  13. Good afternoon all, just letting you know I've added three more videos to the first post: Basic Image Editing Workflow, Removing Lens Flares and Quick Masking. Hope you find them useful! The basic workflow video in particular is in response to customers requesting a tutorial to take them through a general import-edit-export process.
  14. When you say you've tried all three resampling methods, which ones did you try? There are actually five - and are you resampling on export or by resizing the document? The result you've posted looks like Lanczos resampling (either separable or non-separable). Whilst Lanczos is exceedingly sharp, it's renowned for ringing: its near brick-wall frequency response comes from a sinc-based kernel whose negative lobes cause overshoot around edges. If you have lots of fine detail with tricky variations in contrast (the trees and branches against the sky in your image being a prime example), I'd recommend Bicubic or even Bilinear. You should find Bicubic provides a good middle ground between sharpness and artefacting. Lightroom likely uses Bicubic or Bilinear - stick to these and you should find comparable results. I'd also avoid Nearest Neighbour for photographic content, as it will look blocky and aliased. Hope that helps.
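To illustrate why Lanczos rings: its kernel is a windowed sinc with negative lobes, so the filter overshoots around sharp, high-contrast edges. A quick sketch of the standard Lanczos kernel definition (not Photo's exact code):

```python
import numpy as np

def lanczos_kernel(x: np.ndarray, a: int = 3) -> np.ndarray:
    """Lanczos-windowed sinc: sinc(x) * sinc(x / a) for |x| < a, else 0.
    The negative lobes are what cause ringing/overshoot around edges."""
    out = np.sinc(x) * np.sinc(x / a)
    out[np.abs(x) >= a] = 0.0
    return out

x = np.linspace(-3, 3, 13)
print(lanczos_kernel(x).round(3))  # note the negative values around +/-1.5
```

Bicubic's kernel has much smaller negative lobes, and Bilinear's has none at all, which is why they artefact less on this kind of fine, high-contrast detail.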
  15. Ahem, nothing to see here :ph34r: