
James Ritson

Staff
Everything posted by James Ritson

  1. Hi, just some additional info that might help you: If you begin developing a raw file (and thus are in the Develop Persona), most of the processing is done in 32-bit unbounded with a linear colour space. Some of the colour operations are performed in ROMM RGB, which will allow you to saturate and intensify difficult colour tones (like artificial blue and red light) without clipping them. When you develop the raw file, by default the output is clipped to 16-bit integer and the colour profile is converted to sRGB. If you wish to remain in ProPhoto or work in another space, you can either check the Outputs option in Develop and choose an output colour profile, or go to Preferences > Colour and change RGB Profile to whatever you wish (e.g. ProPhoto), in which case all developed raw files will be converted to that profile by default. Just bear in mind that if you choose this option, any images you open without an embedded profile will have the chosen profile assigned. So, for example, if you open a JPEG with no ICC profile embedded, and you have ProPhoto set as your RGB profile, that JPEG will have the ProPhoto profile assigned to it and will look different. Just something to be aware of! But to summarise, yes, you can achieve end-to-end colour management in Photo, particularly if you're coming from raw. If you're pre-processing your images in other software I recommend exporting from that software as a 16-bit TIFF with ProPhoto (or another wide gamut profile) for optimum quality. Hope that helps!
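To illustrate the assign-vs-convert distinction above, here is a toy gamma-only model - the 2.2 and 1.8 values are simplifications of the sRGB and ProPhoto (ROMM) curves, and real ICC profiles also differ in primaries and white point, so this is a conceptual sketch only:

```python
import numpy as np

# Gamma-only toy model: sRGB ~2.2, ProPhoto (ROMM) 1.8. Real ICC profiles
# also differ in primaries/white point - this only shows assign vs convert.

def to_linear(encoded, gamma):
    return encoded ** gamma

def from_linear(linear, gamma):
    return linear ** (1.0 / gamma)

pixel = np.array([0.5])  # a value encoded for sRGB

# CONVERT: the numbers change so the appearance stays the same
converted = from_linear(to_linear(pixel, 2.2), 1.8)

# ASSIGN: the numbers stay the same, so the rendered appearance changes
assigned = pixel.copy()

print(np.round(converted, 4))
print(np.round(assigned, 4))
```

This is why a JPEG with no embedded profile looks different once ProPhoto is assigned: its pixel values are untouched but reinterpreted through a different curve and gamut.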
  2. Either is fine - it depends on your workflow and what you prefer. If you work a lot with layers and make more use of the Photo persona then you'd probably benefit from using a live filter layer to sharpen your image. If you prefer to do most of your editing in Develop then just use the detail refinement options there. With either scenario, as long as you preview sharpening at 100% or greater zoom then you should find the end result is exactly as you see it. Hope that helps!
  3. Just to expand technically on MEB's post, what you're seeing is unfortunately a side effect of the way Photo works with its dynamic preview. In order to speed up complex operations and keep previews as realtime as possible, it uses a technique called mipmapping - generating lower resolution versions of your image data that are displayed at lower zoom levels. This speeds things up significantly, but when you're applying certain effects (e.g. convolution filters like blurring, sharpening) the preview is generated on the lower resolution image and therefore may look different. You may notice similar issues with other software; for example, Adobe Camera Raw has a little disclaimer on the Sharpening/Noise Reduction panel telling you to zoom to 100% or larger for a more accurate preview. The reason your image still looks different at the same zoom level is because when you click Develop, the full resolution image is processed and a new set of mipmaps are generated from that - a new mipmap from the full resolution image with sharpening applied will still look different to the old mipmap (where the sharpening is being applied to the low resolution version). Hope that makes sense!
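A minimal numpy sketch of why the preview differs: sharpening doesn't commute with downsampling, so sharpening a mipmap gives a different result to building a mipmap from the sharpened full-resolution image. The 3x3 box-blur sharpen and 2x2 averaging here are illustrative stand-ins, not Photo's actual kernels:

```python
import numpy as np

def downsample(img):
    """Halve resolution by averaging 2x2 blocks - a crude mipmap level."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def sharpen(img, amount=1.0):
    """Naive unsharp mask: add back the difference from a 3x3 box blur."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    return img + amount * (img - blurred)

rng = np.random.default_rng(0)
img = rng.random((8, 8))

preview = sharpen(downsample(img))  # sharpening applied to the low-res mipmap
final = downsample(sharpen(img))    # mipmap built from the sharpened full-res image

# The two generally differ - which is why the zoomed-out view changes
# once the full-resolution image is processed on Develop.
print(np.allclose(preview, final))
```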
  4. Hey Michael, I take it you have your camera's aspect ratio set to 16:9? When you shoot raw, the camera still records the full sensor resolution (so in your case it would be the full 3:2 image) and writes crop information into the raw file that software can read, meaning the crop is applied later during editing. The benefit of this is that you still have that information outside of the aspect ratio crop should you want to use it. Photo currently doesn't act on crop information, which is why you're seeing the full 3:2 frame. It's likely that support will be added in the future (perhaps cropping initially but allowing users to "uncrop" if they want to) but for now your best option is to use the Crop Tool (shortcut is the C key), choose Custom Ratio from the Mode dropdown and enter 16 x 9. From the dropdown you could choose Add Preset to save this ratio and re-use it on all your developed images. Additionally, if you're shooting JPEGs alongside your raw files, those will be cropped in-camera (and the image information outside of the crop will be completely lost). Hope that helps - apologies that cropping isn't yet supported, it's one of many areas that will hopefully be addressed over time.
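For reference, the crop rectangle the camera intends can be computed from the full frame like this - a hypothetical helper assuming a centred crop, which is what most cameras record:

```python
def crop_to_ratio(width, height, ratio_w, ratio_h):
    """Centre-crop (width, height) to the largest region with the given
    ratio - roughly what a camera's 16:9 crop instruction describes."""
    target_h = width * ratio_h // ratio_w
    if target_h <= height:
        new_w, new_h = width, target_h
    else:
        new_w, new_h = height * ratio_w // ratio_h, height
    x = (width - new_w) // 2
    y = (height - new_h) // 2
    return x, y, new_w, new_h

# A 6000x4000 (3:2) sensor cropped to 16:9 keeps the full width:
print(crop_to_ratio(6000, 4000, 16, 9))  # (0, 312, 6000, 3375)
```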
  5. Hi Roger, There are several video tutorials that cover how to "live project" your 360x180 images and edit them, you can watch them here: https://www.youtube.com/playlist?list=PLjZ7Y0kROWisLWhUIenXLQuCWJVpyZCgg Additionally, the topics in the help are covered under the "Live Projection" category. The type of editing you're after is called equirectangular projection. Hope that helps!
  6. Do you mean you want each layer to be exported to its own individual EXR document? If so, you can use the Export persona to do so:
     • Switch across to the Export persona (top left, the final icon with the three nodes).
     • On the Layers panel, shift-click all of the layers you want to export, then choose Create Slice.
     • Switch across to the Slices panel, and shift-click all the newly added slices.
     • Set your export options above on the Export Options panel - you'll want to change the preset to OpenEXR 32-bit linear. If each layer is to be a separate EXR document then you probably won't need multi-layered export, so the regular preset should be fine.
     • Choose Export Slices and pick a directory to export to.
     That's it! Hope that helps; as Ben has mentioned above you also have the option of multi-layer export within one document which is a bit tidier - it depends on your needs.
  7. Hey Alain, When developing an image, click the suit/tuxedo icon on the top toolbar - this takes you to the Develop assistant. You'll be able to change several settings here, including whether or not to apply chroma/luma noise reduction by default (you can just set it to "take no action"). Hope that helps!
  8. Hello all, Just to let you know that I've gone back and updated three older videos to improve them - the Image/Canvas resize video has also now been split into two separate videos, covering document resizing and canvas resizing separately. Here are the updated videos:
     • Live Filter Layers
     • Mask Layers (was previously Layer Masking)
     • Document/Image Resizing
     • Canvas Resizing
     Hope you find them useful!
  9. While I'm on a roll, I figured I may as well post some more recent photography edited in Photo ;) Some of these you may recognise from the video tutorials: Lone Purple by James Ritson, on Flickr One of those nice surprises you get sometimes, I didn't think much of this photo until I started experimenting with blend modes - at which point it became very moody and striking. Spinning Out by James Ritson, on Flickr An abstract long-exposure shot of a spinning record. I was shooting some video for a personal project and decided to grab a few stills too. Monsal Tunnel by James Ritson, on Flickr A very wet day in Derbyshire, which kind of spoiled the view over Monsal Dale. There was this tunnel, however, which is really atmospheric with all the reflections from the damp areas. Sugar Factory At Night by James Ritson, on Flickr Shot at night, the initial image lacked impact until I brought in some brush work and blend modes - this resulted in the Creative Painting tutorial, where you can enhance colours in images with artificial lighting. Robin Hood Hotel by James Ritson, on Flickr A disused hotel building in Newark (Nottinghamshire), and I've brought some colour into the sign using brush work with blend modes (mainly Overlay, Reflect and Screen). That's all for now - just wanted to share some work, hope you find it interesting!
  10. Hey again, it's been ages since I actually shared some of my work in this forum. Over the course of getting imagery for the tutorials, I've also produced some final pieces that I'm happy with. Recently I've been tackling tutorials for astrophotography/star image editing as well as light painting, so there have been a few late nights! My favourite so far is a light painting composite near my home: Rail Tracks by James Ritson, on Flickr Then a light painting of Rufford Abbey (although I think I narrowly avoided a run-in with security): Rufford Abbey by James Ritson, on Flickr Followed by another light painting composite underneath an old railway bridge. This one's a little scrappy and I'm not sure how I feel about it at the moment: Under The Bridge 01 by James Ritson, on Flickr Moving on, a combination of star photography and light painting at Rufford Lake: Rufford Lake 01 by James Ritson, on Flickr And then a shot of the night sky, which was achieved by stacking 50 images (shot at ISO 6400, 1s long each) and pushing the tones quite severely to produce a vibrant result: Still Sky 01 by James Ritson, on Flickr That's about it for now - some of the above imagery is used in the recent tutorials and I plan to hopefully produce some more videos in the future covering these areas of photography. There are a few more images on my 500px and Flickr accounts as well if you're interested! Thanks, James
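As a rough illustration of why stacking 50 frames works so well: averaging N exposures with independent random noise reduces that noise by about √N, so 50 frames gives roughly a 7x improvement. A sketch on synthetic data (not my actual frames):

```python
import numpy as np

rng = np.random.default_rng(42)
scene = np.full((64, 64), 0.5)                       # the "true" signal
frames = [scene + rng.normal(0.0, 0.1, scene.shape)  # 50 noisy exposures
          for _ in range(50)]

stacked = np.mean(frames, axis=0)                    # mean stack

single_noise = np.std(frames[0] - scene)
stacked_noise = np.std(stacked - scene)

# Averaging N frames cuts random noise by roughly sqrt(N) - about 7x for 50
print(round(single_noise / stacked_noise, 1))
```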
  11. Hey folks, it's time for some new tutorials! (And an updated one) Had a heavy focus on doing light painting and astrophotography recently, so the videos reflect this: Stacking: Star Trail Effect (Updated) Pin Sharp Stars Light Painting Compositing Tone Mapping Portraits Hope you find them useful! Even if you're not into light painting or shooting the stars, there are some useful techniques explored in these videos that you could apply to other types of imagery.
  12. Hey RockvilleBob, The quick answer is no unfortunately - Photo can't merge pixel-identical brackets whilst focus merging at the same time. What it will do is produce a pseudo-HDR result (without the increased precision or unbounded tonal range), so I'd encourage you to at least try it and see how you find the result. Alternatively, if you were to HDR merge the individual brackets first, then either save them as .afphoto documents or export to OpenEXR, you should then be able to focus merge these into a 32-bit HDR result. Hope that helps!
  13. Hi Peter, detail refinement is indeed an unsharp mask filter. The use of percentage rather than pixels for the radius is basically an oversight: at one point, there may have been further designs (e.g. adaptive sharpening), but as it stands, the percentage is simply 1:1 with the pixel radius (so 100% = 100px). Apologies for the confusion - I'll raise it with the developers to look at. Hope that helps!
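In code terms the mapping is trivial - a sketch of an unsharp mask where the UI percentage is treated directly as a pixel radius. The sigma = radius/3 rule of thumb is my assumption for the sketch, not Photo's documented kernel:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detail_refinement(img, radius_percent, amount):
    """Unsharp mask where the UI percentage maps 1:1 to a pixel radius
    (100% == 100px), as described above."""
    radius_px = radius_percent          # the 1:1 percentage-to-pixel mapping
    sigma = max(radius_px / 3.0, 0.1)   # rule of thumb, not Photo's value
    blurred = gaussian_filter(img, sigma=sigma)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)

img = np.random.default_rng(1).random((32, 32))
out = detail_refinement(img, radius_percent=2, amount=0.5)
print(out.shape)
```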
  14. That's a lot of hard work! I can't imagine how much time you must have put into this - very handy for users, thank you. How many times did you fall asleep listening to me drone on? ;)
  15. Hi Shaw, I've had a look at the provided files; the issue seems to be clipping of the colour tones. It looks like the image you sent across of the blue goggles isn't the same one as in your screenshot, so I can't test for that particular issue with the blown out purple centre area (your screenshot is image 0091, the one you've provided is 0005). I've opened the 0005 image in Photo and apart from the slight purple colour cast (which is also present when viewing in Windows Photos), it looks fine. The red goggles also seem fine as the red tones aren't clipped. If you open up the blue goggle image then turn on the colour clipping warning (top right, the yellow/pink checkerboard icon) you'll see the eyepieces in the goggles turn yellow, which means all those tones are currently out of range. Bringing the exposure down should help in this case. Have you got any images of the blue goggles which are exposed so as not to clip the blue tones? It would be worth trying them to see if you get the same issue. In terms of dynamic range, artificial blue tones are incredibly difficult to capture in the same exposure if you want everything else to be well-exposed. Using bracketed exposures and merging them might be the best approach here. Would it also be possible for you to provide that raw file (0091) with the bad purple blowout for us to take a look at? That looks quite bad and may be a separate issue to the more general purple colour cast. Thanks!
  16. Hello all, hope you had a good weekend. Got five more videos to share with you, but this latest addition also marks a bit of a milestone: there are now 200 Photo tutorials (discounting the Windows ones)! Here's to the next 100...
     • Abstract Ideas #02
     • Raw: Exposure Bias
     • Tone Mapping 3D Renders
     • Enhancing Low Light Trails
     • Stacking: Star Trail Effect
     The star trail effect video is my favourite - it looks at using two types of stacking and compositing them using masking to achieve a clean foreground and starry sky. Hope you find them useful!
  17. Hey D23, this is because the SerifLabs raw engine doesn't look at crop information when decoding the raw file. To explain: in-camera JPEG processing discards edge pixels because it's more computationally expensive to interpolate them. Therefore, for parity, crop instructions are written into the EXIF data of the raw files so that the decoded raw image can be cropped to match. Because the SerifLabs engine ignores these crop instructions, you'll often find the X and Y dimensions of a raw file gain anything from 30 to 90 pixels extra (this is referring to the majority of DSLRs and CSCs). Most of the time this is fine, but I'm aware that the PowerShots are an odd case: either the lens's image circle doesn't fully cover the sensor, or the lens exhibits some quite strong vignetting and distortion at the edges - which is why it is usually cropped off. In your case there seems to be excess vertical resolution. Out of interest, if you open a raw file in Photo, what's at the top and bottom of the image? Is it mostly vignette/distortion or does it seem usable? The other use case for cropping information in the EXIF data is if your camera has different aspect ratios like 4:3, 3:2, 16:9, 1:1 etc. Unless your camera has a multi-aspect sensor (e.g. the Panasonic GH2 and GH1), all these ratios do is simply crop the image from its original aspect ratio. If you're shooting JPEG, this crop is performed destructively in camera. If you're shooting raw, however, the entire image is still written, and instead it's up to the raw processing software to apply the crop. In this case, Photo doesn't. Apologies that all I can offer at the moment is a long-winded explanation! I do think it would be a good idea to have a configurable EXIF crop option so that users have a choice, and will mention this to the developers. Hope that helps!
  18. Hi Peter, you're correct that raw handling is important, and raw converters will handle demosaicing differently with varying results. There are several more steps involved - which are again handled differently between raw converters. Demosaiced raw data starts as "scene referred", which is a measurement of light in the scene. You will very rarely (if at all) see a raw file in its linear "scene referred" form. It then goes through gamma curve correction, gets mapped to a colour space and has a tone curve applied to produce a result more in line with the user's expectations (similar to an in-camera JPEG). So to answer your question - yes, there's a difference between SerifLabs and Core Image RAW. SerifLabs will demosaic, gamma correct and tone map, but as you've found, the additional tone curve is optional. If you turn this off, you'll only see the image with a gamma curve correction. No additional sharpening is added by default - this is left entirely up to the user. Colour noise reduction is added by default; previously it wasn't, and we faced a lot of criticism over raw development quality because users are so used to having raw processing software apply it automatically. The harsh reality is that yes, your camera really is that noisy ;). You can turn this off if you wish on the Details panel, I just wouldn't recommend it. I've done some analysis, trying to make a SerifLabs-decoded image match a Core Image RAW-decoded image, and I've come to the conclusion that Core Image takes some additional steps. It adds sharpening whether you like it or not, there's no doubt about that. It certainly performs colour noise reduction, and I also believe it does some luminance denoising and then dithers slightly by adding in some fine noise to retain texture.
This approach wouldn't be entirely out of step; for quite a while, Apple's H264 decoder added some fine noise to reduce blocking and banding (this is back in 2007/2008 when its hardware-based H264 support was less comprehensive). I'm unsure of Apple's modern approach to H264 but I expect it's more refined now. At this stage of Photo's development, the raw handling could still use some improvements, and over time it will be improved: namely a better approach to demosaicing and some more effective noise reduction. Demosaic implementations are continually being researched and written about, and there is always scope to do better here. As far as advantages of SerifLabs go, there is one that I can strongly point out: because betas are made available in-between major releases (either for bug fixes or to introduce new features), new raw camera support is added frequently - so if you invest in a new camera, chances are you could grab a beta and be able to open raw images sooner rather than having to wait for an official update. For example, 1.5 was released in December and supported the new Olympus E-M1 mk2 camera which also shipped that month. It also opened images shot with the camera's high res mode (sensor shifting to produce 80MP raw files) - a feature that's still yet to be supported in some raw converters. At the end of the day, the best advice is to experiment and find which raw converter's results you like the most based on what you shoot; e.g. if you do a lot of high ISO urban photography you'll want some fairly robust noise reduction and, perhaps more importantly, good colour handling. If you're into landscape photography with difficult dynamic ranges perhaps you'd find the ability to remove the tone curve more useful. And so on and so forth... Hope that helps!
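The scene-referred → gamma correction → tone curve chain described above can be sketched as follows - the 2.2 gamma and smoothstep S-curve are illustrative stand-ins, not the actual transfer functions any particular converter uses:

```python
import numpy as np

def gamma_encode(linear, gamma=2.2):
    """Map linear scene-referred values to display-referred ones.
    (Simplified: real transfer curves like sRGB have a linear toe.)"""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

def tone_curve(x):
    """A gentle S-curve (smoothstep) standing in for the optional
    'film-like' tone curve applied on top of the gamma correction."""
    return x * x * (3.0 - 2.0 * x)

linear = np.linspace(0.0, 1.0, 5)   # demosaiced, scene-referred samples
display = gamma_encode(linear)      # what you'd see with the tone curve off
toned = tone_curve(display)         # with the default tone curve applied

print(np.round(display, 3))
print(np.round(toned, 3))
```

Turning off the tone curve, as the post describes, corresponds to stopping at `display` - midtones look flatter because only the gamma correction has been applied.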
  19. Hey PanoTom, Photo's panorama stitching currently doesn't identify bracketed sets of exposures to perform HDR panoramic stitching. I do believe this is something that could happen in a future version, but for now, you can always HDR merge your individual stitches, export them as OpenEXR, then stitch those for a 32-bit panorama. You can then tone map it like you would any regular HDR merge. This video runs through the process: HDR Panoramas (Vimeo link) Hope that helps!
  20. Hey pvprint, the issue is likely that you were trying to export to PNG - it's not intended for CMYK output and only supports the RGB colour model. If you were to open the exported PNG file in Photo, you'd see it has been converted to RGB/8 with an sRGB profile. Try exporting as a TIFF or PDF (if it's supported by Acrorip). If you export as a TIFF, just remember to choose "TIFF CMYK" under Preset on the Export dialog. As you've discovered, you can also export to JPEG, which would work fine (TIFF just offers lossless compression). Hope that helps!
  21. Apologies, I didn't have the app in front of me when I replied (in my defence, it was the weekend and I replied on a tablet ;)). You can indeed drag and drop a layer onto the thumbnail, and the vertical blue bar appears to the right of it. It's become such a force of habit to move the cursor exactly where the blue bar appears that I forgot you can target any part of the thumbnail to achieve the same thing.
  22. Hi Dooza, Photo has two concepts when it comes to child layers - clipping and masking. When you're dragging the Levels adjustment into your pixel layer I assume you're getting a horizontal blue bar - this is clipping, but it's not what you want for your desired result. Instead, if you offer the Levels adjustment just to the right of the pixel layer's thumbnail, you should get a vertical blue bar. This will mask or "restrict" the Levels adjustment to the pixel layer. Check out the video on Clipping vs Masking, that should demonstrate the differences (later in the video I start using adjustment layers and masking them, so maybe skip on a bit): Hope that helps!
  23. Hi Charles, the feedback is appreciated. As a very small team, we manage the documentation - which includes the fairly comprehensive built-in help and upcoming Photo workbook - and all the video tutorials. I'm currently responsible for the Photo videos and yes, you are correct, the majority of them assume some working knowledge of image editing software. I produced a small set of beginner videos as a more gentle introduction to the software - have you seen those? I'm hoping to plan and produce some additional videos in the future. Additionally, I'd point out the offline help system (you might be surprised, but many people assume there isn't one) as that documents all the features and tools. We're also currently in the middle of the Photo workbook, which will be very similar to the Designer workbook released last year; it will have a core skills section as well as full projects that span enthusiast, commercial and creative genres. That should go some way to covering the need for a hands-on written guide. We are of course looking at other ways of delivering training (e.g. written articles) for the future. Hope this answers your question!
  24. [edit] Good timing, dual post! Hi Tommy, you can indeed use the Nik plugins, we've got two videos - one for Windows, one for Mac - detailing how to install and register them with Photo. Check out the plugins playlist on the Affinity channel: Hope that helps!
  25. Just a quick post to say I've updated Soft Proofing - it's now more comprehensive and looks at how to use gamut check (as well as exploring the rendering intent options). Have a good weekend!