James Ritson

Moderators
  • Content count

    639
  • Joined

  • Last visited

About James Ritson

Contact Methods

  • Website URL
    http://www.jamesritson.co.uk

Profile Information

  • Gender
    Not Telling

Recent Profile Visitors

7,575 profile views
  1. Glad it's all working for you! I'll have to test the group export again. Did you have layers within the group that you gave .RGB affixes to in order for it to work? If you have any other questions regarding EXR/32-bit/linear compositing etc, don't hesitate to ask. All the best, James
  2. Hi again, I think the export error is because of your layer stack. Check out the help topics on OpenEXR at https://affinity.help — particularly this one: https://affinity.help/photo/English.lproj/pages/HDR/openexr.html

Generally, all of your pixel layers should have an .RGB or .RGBA affix (for example "Layer02.RGB", or "Layer02.RGBA" if it has a distinct alpha channel). If you don't specify layers as RGB pixel data, they will be discarded on export. The reason you have to do this is that Affinity Photo has lots of different layer types: pixel layers, image layers, curves, fills, groups, etc. The OpenEXR spec is much narrower, so you have to dictate what type of channel information each layer becomes when exporting to EXR. If you don't specify the channel type, Photo will ignore or discard the layer in question during export.

If you have a pixel layer with a mask inside it, you must use the .RGBA suffix if you want to export the alpha channel separately—if you just use .RGB, the alpha channel will be baked into the pixel layer and will be non-editable. Separate colour channel information can also be exported, so you could affix layers with .R, .G, .B and .A and they will be stored as individual channels in the EXR document. You don't appear to be working with any spatial layers like .Z or .XYZ, so you can probably ignore those. Embedded documents and placed images should be correctly rasterised as long as you give them an .RGB or .RGBA suffix. Groups you will have to rasterise, even if you give them an .RGB suffix, otherwise they won't export. Right click the group and choose Rasterise to do this (make a duplicated backup beforehand and hide it, though).

I'm actually struggling to reproduce the error message now—if you try the naming conventions listed above and are still getting the error message, please could I send you a private Dropbox link so you can upload the document for me to have a look at? It would only be used internally and then deleted once the issue is addressed. Thanks and hope the above helps, James
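If you want to sanity-check what actually made it into the exported file, you can list the EXR's channels outside Affinity Photo. Here's a minimal sketch using the OpenEXR Python bindings (the file name is a placeholder); layers exported with .RGB/.RGBA affixes should show up as channels like "Layer02.R", "Layer02.G" and so on:

```python
import OpenEXR

# Open the exported document and inspect its header.
exr = OpenEXR.InputFile("render.exr")  # placeholder file name
header = exr.header()

# Each layer exported with an .RGB/.RGBA affix becomes a set of
# named channels, e.g. "Layer02.R", "Layer02.G", "Layer02.B" (+ ".A").
# Anything missing here was discarded during export.
for name in sorted(header["channels"]):
    print(name)
```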
  3. Hi @spoon48, I've seen this before and can't quite remember how I fixed it—would it be possible to see a screenshot of your entire layer stack, and also your export settings (the expanded settings on the More dialog)? Thanks! James
  4. Hi @extremenewby, on Affinity Photo's Levels adjustment dialog you'll want to use the Gamma slider as the equivalent of Photoshop's mid-point slider. If you have any other questions about editing astrophotography, please ask; I've been working with various deep sky imagery lately and adapting various techniques to an equivalent approach in Affinity Photo. Hope that helps!
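For anyone curious what that slider is doing under the hood, both a Levels gamma and Photoshop's mid-point apply a simple power function to the normalised pixel values. A rough numpy sketch of the idea (my own illustration, not Affinity's actual implementation):

```python
import numpy as np

def levels_gamma(pixels: np.ndarray, gamma: float) -> np.ndarray:
    """Apply a Levels-style gamma (mid-point) adjustment.

    pixels is assumed normalised to 0-1; gamma > 1 lifts the
    mid-tones, gamma < 1 darkens them, while black and white
    points stay fixed.
    """
    return np.clip(pixels, 0.0, 1.0) ** (1.0 / gamma)

# Example: lifting the mid-tones of three sample values.
print(levels_gamma(np.array([0.25, 0.5, 0.75]), 1.8))
```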
  5. Hi @marisview, you should be able to select the mask layer, then use Layer>Merge Down (Ctrl+E or CMD+E), which will "rasterise" or flatten the mask into the alpha channel of the parent layer—does this work for you? This also works if you have the mask layer above the target layer. Note that this only applies to Pixel layers—Image layers (if you drag/place an image into your document) are immutable and cannot be merged. You must rasterise them first (Layer>Rasterise), but doing so will actually achieve the same effect as Merge Down, so no additional steps are required! Hope that helps.
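To visualise what "flattening a mask into the alpha channel" means, here's a conceptual sketch with Pillow (my own illustration of the underlying operation, not how Affinity Photo implements Merge Down; the file names are placeholders):

```python
from PIL import Image

# A hypothetical RGB pixel layer and a greyscale mask of the same size.
layer = Image.open("layer.png").convert("RGB")
mask = Image.open("mask.png").convert("L")

# Merging the mask "into" the layer stores it as the alpha channel,
# so it is no longer a separate, editable mask object.
merged = layer.copy()
merged.putalpha(mask)
merged.save("merged_rgba.png")
```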
  6. Hi @SeanZ, here's a quick breakdown of the pixel formats and what happens. As you've observed, when you open a RAW file in Affinity Photo it's processed in a working pixel format of 32-bit float, which is unbounded. This prevents values outside the range of 0-1 from being clipped and discarded, and allows for highlight recovery. Colour operations are processed in ROMM RGB (ProPhoto), which helps colour fidelity even if the intended output colour space is sRGB. You are essentially working in the highest quality that is reasonable.

When you click Develop, by default, the pixel format is converted to 16-bit integer, which is not unbounded. Any values outside the range of 0-1 will be clipped and rounded, so you should ensure you are happy with the tones before you proceed (e.g. using highlight recovery, changing the black point, etc). The colour space is also converted to whichever option you have chosen—by default this is sRGB, but you can change it by clicking the Profiles option on the right hand side and choosing another output profile like ROMM RGB or Adobe RGB.

I say "by default" because you can change the output format to 32-bit HDR on the develop assistant (the tuxedo/suit icon on the top toolbar). Be very mindful, however, that 32-bit in Affinity Photo is a linear compositing format as opposed to nonlinear. Adjustments will behave differently (typically they will seem more sensitive) and blend modes may produce unexpected results. I would avoid working in 32-bit unless you either want to author HDR/EDR content—this is not the same as HDR merging and tone mapping—or you need the precision for more esoteric genres like deep sky astrophotography. A lot of customers think working in 32-bit is going to offer them the best quality possible. Whilst this is technically true, there are many caveats and I would seriously advise against it unless you have a specific requirement for this format.

To answer your questions:

Technically, the answer is yes, but it's like I mentioned above: unless you have a very specific requirement to work in 32-bit, any loss in quality will be negligible. 16-bit nonlinear precision is more than enough for 99% of image editing use cases. Here's an example of my workflow: I will often choose a wider colour profile like ROMM RGB, then make my image look as flat as possible in the Develop Persona, usually by removing the tone curve and bringing in the highlights slightly if they are clipping. I'll then develop to 16-bit and do all of my tonal adjustments in the main Photo Persona. I have yet to complain about any loss in quality.

The functionality is more or less the same, but the Develop Persona has more intuitive sliders. In fact, in demos I will frequently explain to people that if you want to do some simple destructive edits to an image, you can simply go into that persona and use a few sliders rather than have to build up a layer stack. One key difference is the white balance slider: when developing a RAW file, it has access to the initial white balance metadata and the slider is measured in Kelvin. Once an image is developed, however, this slider becomes percentage based and takes on an arbitrary scale.

Whatever works best for you, I think. My approach, which I explained above, is pretty foolproof, but you can do as little or as much work during the initial development as you feel is appropriate, e.g. you might want to add luminance noise reduction during the development stage, perform defringing, change white balance, etc. Just be aware that if you perform certain steps like noise reduction during the initial development, you can't undo them. With noise reduction, I prefer to add a live Denoise layer in the Photo Persona, then mask out the areas I want to keep as detailed as possible. Again, though, it's down to personal choice. Hope that helps!
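To make the 32-bit float to 16-bit integer conversion concrete, here's a small numpy sketch of the clipping and quantisation described above (my own illustration, not Affinity Photo's actual conversion code):

```python
import numpy as np

# Unbounded 32-bit float values straight from RAW development:
# note the value above 1.0 (a recoverable highlight) and below 0.0.
linear = np.array([-0.1, 0.0, 0.5, 1.0, 1.8], dtype=np.float32)

# Converting to 16-bit integer clips everything to the 0-1 range
# and rounds to one of 65,536 discrete levels; out-of-range detail
# is permanently discarded.
clipped = np.clip(linear, 0.0, 1.0)
sixteen_bit = np.round(clipped * 65535).astype(np.uint16)
print(sixteen_bit)  # [    0     0 32768 65535 65535]
```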
  7. Hi @ijustwantthiss**ttowork, OK, so my suspicion is that the blade.icc profile is deficient—out of curiosity, is this the Razer Blade laptop? Can I ask where you got the blade.icc profile from: did you download it from a website, or did it come preinstalled with the laptop?

So if you switch the colour profile to the sRGB device profile, everything looks fine inside Affinity Photo? No issues, and when you export to JPEG/TIFF etc with an sRGB document profile everything looks good in an external image viewer? This is expected, as Affinity Photo needs to restart to colour manage with the newly chosen profile if you change it on the fly.

Definitely don't do this! The reason it looks OK if you use blade.icc as your document profile is because you're matching the document profile with the display profile, negating colour management entirely—the document numbers are being sent to the display with no alteration. It might look OK for you, but it won't for anyone else. Colour managed applications are supposed to use the display profile to translate the colour values in the document as they're sent to the display. This is especially important for wide colour profiles beyond the scope of sRGB.

The thing that seriously isn't working has to be the blade.icc profile—not sure where you got it from, but it's defective as regards compatibility with colour management. We've seen this a lot with various monitors: Windows Update seems to push specific display profiles that just don't work with colour management solutions at all. In the article that was linked above you'll see I invited people to search for "whites are yellow" in relation to apps like Photoshop, because it's not just Affinity Photo that is affected.

Have you got access to a colorimeter, e.g. i1Display Pro, Spyder, ColorMunki etc? If so, I would download either the manufacturer's profiling software or DisplayCAL and use that to profile your own monitor—any profile created by the software will be valid and work correctly with Affinity Photo. If you can't profile the display yourself, the best solution is simply to use the sRGB display profile rather than this factory-calibrated blade profile—when you say "factory calibrated", that instantly makes me very skeptical. Again, this all points to the blade profile being incompatible with colour management.

Not many applications are actually colour managed—to be honest, I've lost track of whether Windows Photos/Photo Viewer/whatever it is in Windows 10 is colour managed or not. I think web browsers by and large should be colour managed by now, but there's no guarantee there. The fact that things look different in Affinity Photo is a clear sign that it's performing colour management based on your active display profile, but unfortunately if that display profile is incompatible then everything is going to look whacked. As I just mentioned above, unless you can create your own accurate and compatible profile with a measuring device, I think your only solution here is to use the sRGB display profile.

From your screen grabs, you haven't touched Affinity Photo's default document colour profiles (they're left on sRGB), which is good—just avoid using blade.icc as your document profile altogether and stick with standardised device profiles like sRGB, Adobe RGB, ProPhoto/ROMM RGB etc. If you do use a wide profile like Adobe RGB, don't forget to convert to sRGB on export if you're going to view it in other applications that aren't colour managed—the article explains how to achieve that. Hope all the above helps!

[Edit] From a bit of searching, I found someone had posted their Razer Blade 2019 profile that they had created with DisplayCAL here: https://drive.google.com/file/d/1l07D8CtFjXYVsDpeTAyLBEkZpELNomYo/view (from this Reddit discussion: https://www.reddit.com/r/razer/comments/ase96x/razer_blade_15_2019_color_profile/). I definitely wouldn't recommend using it for professional/production work (I'd seriously advise getting a colorimeter and creating your own profile to the standards you require), but it's worth installing and switching to it just to see if it helps. It could go either way: if that person's display is wildly different to yours it will still look terrible, but it could also produce a much better result than the blade.icc profile you're currently using.
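As an illustration of what a colour managed app does with the display profile, here's a rough Pillow sketch (my own illustration, not Affinity Photo's pipeline; the file names are placeholders). The document's values are transformed through the display profile before being shown, which is exactly the step a broken profile like blade.icc corrupts:

```python
from PIL import Image, ImageCms

# Document profile (sRGB here) and the active display profile.
doc_profile = ImageCms.ImageCmsProfile(ImageCms.createProfile("sRGB"))
display_profile = ImageCms.getOpenProfile("blade.icc")  # placeholder path

# A colour managed viewer builds a transform from the document profile
# to the display profile and applies it before pixels reach the screen;
# a defective display profile skews this step for every managed app.
transform = ImageCms.buildTransformFromOpenProfiles(
    doc_profile, display_profile, "RGB", "RGB"
)
image = Image.open("photo.jpg").convert("RGB")
on_screen = ImageCms.applyTransform(image, transform)
```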
  8. James Ritson

    JR Workflow Bundle (Shortcuts, Macros, HDR Tools, Brushes)

    Hello all, hope you had a good Christmas break! Before we move into the new year, I've provided a small update to the macro pack which will benefit anyone needing to retouch images with dust spots (particularly if you shoot mirrorless), as well as astrophotography editors. The new additions are:
      • Retouching macro category: set up multiple high pass frequency separation layers, and reveal shadow detail and dust spots so you can easily spot imperfections in your images and retouch them.
      • Astrophotography macro category: retouching for deep sky imagery, and aggressive tone/detail extraction for stacked deep sky imagery in 32-bit or 16-bit. Also includes wide field retouching for general wide field imagery shot with a wide angle lens and fast aperture, including noise reduction, vignetting and structural enhancement.
    As always, hope you find it useful!
  9. Hi again @wlickteig, this sounds like you had OpenColorIO configured on the old MacBook (or you changed the 32-bit view to Unmanaged), whereas on your new MacBook it's likely defaulting to ICC Display Transform, especially if you don't have OpenColorIO configured. In this case, you don't actually need to add the nonlinear to linear layer; just try exporting without it. The reason I developed the nonlinear to linear macro was for users who were trying to match the view transform from 3D software like Blender, e.g. when using the Filmic view transform. You can emulate Filmic by using an OCIO adjustment layer and going from Linear to Filmic sRGB, but because you have the added non-linear transform when you convert to 8-bit or 16-bit (this is what ICC Display Transform emulates), you need to add that final correction. If you're using ICC Display Transform and are happy with the results you're seeing on screen, just export without adding the correction. The easiest way to check which view transform you're using is to go to View>Studio>32-bit Preview and see which option is checked. ICC Display Transform will show you what your image will look like with a non-linear colour profile (e.g. in 8-bit or 16-bit), whereas Unmanaged will use linear light. If configured, you can use an OpenColorIO device/view transform as well.
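For anyone who wants to reproduce that Linear to Filmic sRGB conversion outside Affinity Photo, here's a rough sketch with the OpenColorIO Python bindings. The config path is a placeholder, and the colour space names are the ones mentioned in the post; what's actually available depends entirely on your OCIO config:

```python
import PyOpenColorIO as OCIO

# Load the same OCIO config you point Affinity Photo at
# (path and colour space names depend on your setup).
config = OCIO.Config.CreateFromFile("config.ocio")  # placeholder path

# Build a processor from scene-linear to the Filmic sRGB view,
# mirroring an OCIO adjustment layer set to Linear -> Filmic sRGB.
processor = config.getProcessor("Linear", "Filmic sRGB")
cpu = processor.getDefaultCPUProcessor()

# Apply to a single RGB pixel (values in linear light).
print(cpu.applyRGB([0.18, 0.18, 0.18]))
```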
  10. Hi @wlickteig, are you using OpenColorIO, or have you accessed the 32-bit preview panel (from View>Studio) and set the view to Unmanaged (linear light)? If you switch the view to ICC Display Transform you will see what your document looks like with a non-linear gamma transform, which is what will be used when it is converted to 16-bit or 8-bit. Thankfully there's a simple way to emulate the linear view: add a live Procedural Texture filter with three channels and do a power transform of 2.2 for each one. Please see this thread for an explanation. Also, there's no real need to copy to clipboard and paste as a new image if all you want to do is export: just add the procedural texture layer at the top, then export to TIFF/JPEG etc; all the bit depth and colour profile conversion will be taken care of during export. There is also a macro that will automate this process in my workflow bundle; it might be useful to you. Hope that helps!
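Here's what that Procedural Texture setup boils down to mathematically: a per-channel power of 2.2 applied to the image. A minimal numpy sketch of the same operation (my own illustration, not the filter's actual code):

```python
import numpy as np

def emulate_linear_view(rgb: np.ndarray) -> np.ndarray:
    """Apply a power transform of 2.2 to each channel.

    This mirrors a Procedural Texture filter with three channels,
    each set to channel ** 2.2, emulating the linear (Unmanaged)
    view while working under ICC Display Transform.
    """
    return np.clip(rgb, 0.0, None) ** 2.2

# Example: a nonlinear mid-grey maps to roughly 0.22 in linear light.
print(emulate_linear_view(np.array([0.5, 0.5, 0.5])))
```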
  11. Hi @smadell, hopefully this will give you a clearer answer! What upgraded GPU option are you considering over the standard one, and is it the 21" or the 27" model you're after? Looking at the specs, for the 21" the cheapest model has Intel Iris Plus graphics, whereas you can upgrade to a Radeon 555X or 560X. With the 27" model the cheapest option is a 570X, and the most expensive upgrade gets you a 580X.

Any of the upgraded GPU options will give you an appreciable boost in performance with pretty much all operations in Photo: the vast majority of raster operations are accelerated, with a few exceptions—for example, blend ranges will currently fall back to software (CPU). Vector layers/objects are also not accelerated. The biggest advantage I've noticed is being able to stack multiple live filters without significant slowdown: I typically use unsharp mask, clarity, noise reduction, gaussian blur, motion blur and procedural texture filters with compositions and photographs, and being able to use compute is incredibly helpful as it keeps the editing process smooth and quick. Export times are also drastically reduced when you have live filters in your document: in software (CPU), as soon as you start stacking several live filters the export time can easily exceed a minute, whereas with compute on the GPU this is reduced to no more than several seconds.

However, the biggest limiting factor in my experience has been VRAM, and you will need to scale your requirements (and expectations) in accordance with a) the screen resolution, b) the pixel resolutions you typically work with and c) the bit depth you typically work in. To give you a rough idea, 4GB of VRAM is just about sufficient to initially develop a 24 megapixel RAW file and then work in 16-bit per channel precision on a 5K display (5120x2880). If you move down to 4K (3840x2160), 4GB becomes a much more viable option. This is somewhat subjective, but I would say forget about 2GB VRAM or lower if those are your baseline requirements—you simply won't have a good editing experience, as the VRAM will easily max out and swap memory will be used, incurring a huge performance penalty. Ironically, if you can't stretch budget-wise to a GPU with 4GB or even 8GB of VRAM, the Intel Iris Plus graphics option may provide a better experience, since it dynamically allocates its VRAM from main memory and can therefore grow to accommodate larger memory requirements. From early Metal compute testing I often found that disabling my MacBook's discrete GPU (with 2GB VRAM) and only using the Intel integrated graphics would alleviate the memory bottleneck. I believe the memory management has improved since then, but if you're on a budget that is an option to consider.

However, if you're looking at the more expensive iMac models, I think you should weigh up your requirements and what content you work with in Photo. Here are a few scenarios I can think of:
      • 4K resolution, light editing (development of RAW files, then adding some adjustment layers and live filter layers): you could get away with a 2GB VRAM GPU.
      • 5K resolution, light editing: definitely go with a 4GB VRAM GPU.
      • 4K resolution, moderate editing (development of RAW files, lots of adjustment layers and live filter layers, some compositing work with multiple layers): go with a 4GB VRAM GPU.
      • 5K resolution, moderate editing: 4GB VRAM GPU minimum.
      • 4K/5K resolution, heavy editing (working with compositions that have many image/pixel layers, clipped adjustments, live filter layers): absolutely consider an 8GB VRAM GPU.
      • 4K/5K resolution, 32-bit/16-bit compositing (e.g. 3D render work, using render passes, editing compositions in 16-bit): 8GB VRAM GPU minimum.

However, if budget allows, do also consider the possibility of an external GPU: this might even work out cheaper than having to choose an upgraded iMac model. In the office I have a MacBook with a 560X GPU that has 4GB VRAM—this is sufficient for demoing/tutorials at 4K or the MacBook panel's resolution (3360x2100), but I work with an external monitor at 5K, and for that I use an eGPU enclosure with a Vega 64 that has 8GB VRAM. The additional compute power is incredibly useful, but it's mainly the larger pool of VRAM that helps out here. You don't have to use a Vega: I believe the new Navi cards are supported, so you could look at a 5500 XT with 8GB VRAM, which is a reasonably cheap option (although you would still have to get the GPU enclosure...). As you mentioned, your timeframe is 6 months, so it might be worth waiting for Apple to refresh the iMac lineup, as they will hopefully switch to Navi-based cards like they have with the new MacBook range. No doubt the cheapest option will have 4GB VRAM, but models with 8GB should also be available. Apologies for the wall of text, but hopefully that gives you some more information to work with!
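As a back-of-the-envelope check on those VRAM figures, you can estimate how much memory a single full-resolution pixel buffer needs. A rough sketch (real usage is considerably higher, since the app keeps multiple buffers, undo/composite data and the display surface in VRAM):

```python
def buffer_mb(width: int, height: int,
              channels: int = 4, bytes_per_channel: int = 2) -> float:
    """Approximate size in MB of one uncompressed pixel buffer.

    bytes_per_channel: 2 for 16-bit integer, 4 for 32-bit float.
    """
    return width * height * channels * bytes_per_channel / (1024 ** 2)

# A 24 MP document (6000x4000) in 16-bit RGBA: ~183 MB per buffer.
print(buffer_mb(6000, 4000))
# A 5K display surface (5120x2880) also lives in VRAM: ~113 MB.
print(buffer_mb(5120, 2880))
# The same document in 32-bit float: ~366 MB per buffer.
print(buffer_mb(6000, 4000, bytes_per_channel=4))
```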
  12. Hi Sean, the best advice is not to touch the default colour settings in Preferences: leave the RGB/RGB32 colour profiles set to sRGB. RAW files are not mapped to sRGB/Adobe RGB etc; the camera setting is used for the in-camera JPEG and may also be used to dictate the initial output profile in RAW development software. After demosaicing, the colours are translated from the camera's colour space to a working space, then mapped to the output colour space, which you get to choose. By default this is sRGB. If you wish to use a wider profile, check the Profiles option in the Develop Persona and choose an option from the dropdown. If you want to edit in ProPhoto, you can pick ROMM RGB, which is the equivalent that ships with the Affinity apps. As mentioned above, the easiest option is to keep everything as sRGB.

If you choose to work in a wider profile like Adobe RGB or ROMM RGB and you export to JPEG/TIFF etc, you will generally want to convert to sRGB during export, because other apps/web browsers/image hosts may not be colour space aware or colour managed. To do this, click More on the export dialog, and from the ICC profile dropdown choose sRGB IEC61966-2.1.

There is a good argument for using a wider colour space if you tend to print your work, but you also need to be able to see these colours on your screen—do you know what coverage your monitor has of Adobe RGB/ProPhoto RGB, i.e. have you profiled it? If not, the safest tactic is simply to stick to sRGB again. Hope that helps!
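The same convert-to-sRGB-on-export step can be reproduced outside Affinity Photo. Here's a Pillow sketch of the idea (the file and profile paths are placeholders for wherever your wide-gamut profile and edited image live):

```python
from PIL import Image, ImageCms

# A hypothetical image edited in a wide-gamut profile (Adobe RGB).
image = Image.open("edited.tif").convert("RGB")
wide_profile = ImageCms.getOpenProfile("AdobeRGB1998.icc")  # placeholder
srgb_profile = ImageCms.createProfile("sRGB")

# Convert the pixel values to sRGB so apps, browsers and image hosts
# that aren't colour managed still display them correctly.
converted = ImageCms.profileToProfile(image, wide_profile, srgb_profile)
converted.save("export_srgb.jpg", quality=90)
```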
  13. James Ritson

    JR Workflow Bundle (Shortcuts, Macros, HDR Tools, Brushes)

    Hi all, I've now unhidden this topic. The workflow pack went through some major revisions including separating the macros into different categories, and there have also been some improvements and additions (e.g. 360 and architecture macros). Merry Christmas!
  14. James Ritson

    Rodeo

    Hi @GarryP, the new tutorials are very much focused on this approach, are you referring to them or the legacy set? For example, the new set (https://forum.affinity.serif.com/index.php?/topic/87161-official-affinity-photo-desktop-tutorials/) has a Selective Colour tutorial amongst other tutorials that look at specific filters and adjustments—Curves, Gradient map, Levels, Shadows/highlights, Displace, Denoise, Radial blur, Clarity, Channel mixer, White balance, Black & white, Zoom blur, Procedural Texture, etc... With the new tutorials, there's less of a focus on multiple techniques in one video, although sometimes this approach may be required. The videos also need to be kept fairly to-the-point because we now get them transcribed and translated into 8 different languages (which is particularly expensive)—there's also the issue of engagement, where many viewers don't make it through lengthy videos, and we have to take that into consideration as well. I would love to be able to produce more videos in-between major releases but time is quite limited, so it will tend to be during or just after updates to the app when you'll find more tutorials being made available. Hope the above helps, and thanks for your feedback!
  15. Hi kat, check out the hintline (at the bottom of the user interface); it will list the modifiers/shortcuts you can use with whichever tool you have selected. In the case of the Paint Mixer Brush, you can use L to load the brush and C to clean it. Hope that helps!