
James Ritson


About James Ritson

    Product Expert (Affinity Photo) & Product Expert Team Leader

  1. Hey @Announcement, would you be able to post a screenshot of your layer stack? Are you using any live filters or adjustments? Regarding a difference in sharpness, if you flatten your Photo document temporarily (Document>Flatten) and view at 100% zoom (Ctrl/CMD+1), does it then look the same as the exported JPEG at the same 1:1 zoom level?
  2. Hi @Sonny Sonny, as I mentioned above, it's not an issue. Possibly Photopea doesn't interpret unassociated alpha?
  3. Hi @DaQuizz and @Sonny Sonny, this isn't actually a bug but rather intentional behaviour: the TIFF from Blender will be written with metadata marking its alpha as unassociated. Previously, Photo was ignoring this metadata and associating the alpha channel, which was undesirable for certain workflows. For example, when you save a final beauty pass from V-Ray without a separate alpha pass layer, it will write unassociated alpha data into the TIFF. This allows the user to optionally remove the background detail using the alpha channel, but they may not wish to: instead, they may just want to use that alpha data for other types of masking, such as affecting just the sky or foreground with adjustments. I did a tutorial on the process here: Hope that makes sense and helps!
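To illustrate why associating (premultiplying) the alpha can be undesirable, here is a minimal Python sketch of the two alpha conventions. It is purely illustrative, not Photo's actual code, and the `associate_alpha` helper is a hypothetical name:

```python
def associate_alpha(rgba):
    """Convert unassociated (straight) alpha to associated (premultiplied) alpha.

    rgba: floats in 0..1 as (r, g, b, a). Premultiplying scales the colour
    channels by alpha, which discards colour detail wherever alpha is 0 --
    exactly what you may NOT want if the alpha channel is only meant to be
    used later as a mask (e.g. for sky/foreground adjustments).
    """
    r, g, b, a = rgba
    return (r * a, g * a, b * a, a)

# A bright sky pixel that the alpha channel marks as "background":
straight = (0.9, 0.7, 0.4, 0.0)
premultiplied = associate_alpha(straight)

# After premultiplication the colour information is gone:
print(premultiplied)  # (0.0, 0.0, 0.0, 0.0)
```

With unassociated alpha the colour channels survive untouched and remain available for masking; premultiplying bakes the alpha in and zeroes out the colour wherever alpha is zero.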
  4. Thanks again @playername, I'll do some more digging then will hopefully be able to discuss with the developers early next week. Just to clarify, the Filmic/AgX macros are meant explicitly for use with EXR/HDR formats—they're essentially an alternative to setting up OpenColorIO. Instead, you stick with the ICC display transform method and apply these macros above all your layer compositing work that needs to be done in linear space (e.g. blending multiple render passes together with Add, such as mist, volumetrics etc). Then you apply the macros to do the required transform and get the linear values into bounded gamma-corrected space. At this point you would treat the editing as if you were working with a gamma-encoded bitmap like JPEG/TIFF, e.g. using adjustment layers, live filters and so on for further retouching. Hope that makes sense—it just sounded from your post like you were trying to use the macros with gamma-encoded output from Blender, which already has the OCIO transforms applied.

The main reason I made the macros is that it's notoriously difficult to export from Photo to a gamma-encoded format when using the OCIO view transform. Using File>Export and going to a format such as JPEG doesn't actually use OCIO at all, it uses the ICC display transform instead—so people will be working in OCIO, then find their exported result looks very different. The macros allow you to work with the ICC display transform instead and then get a consistent File>Export result. OCIO's implementation in Photo was more meant for VFX workflows where the app would be an intermediary, not an endpoint/delivery.
For example:

  • The user would bring in an EXR document tagged with an appropriate colour space (which is then converted to scene linear). Alternatively, they would develop straight from photographic RAW using Photo's 32-bit HDR output option (which remains in scene linear).
  • The user would then perform matte painting and any other retouching work required, perhaps using the OCIO adjustment layer to 'move' between colour spaces if they are compositing layers from different source colour spaces.
  • They would then File>Export back to EXR, appending the file name with the colour space they want to convert back to.
  • The EXR would then be brought into the VFX/NLE software.

Being able to 'match' what is seen with the OCIO view transform when exporting to a gamma-encoded format is still an ongoing discussion internally for now... If you're working with gamma-encoded TIFFs, I believe Blender writes them out untagged, so Photo will assume sRGB primaries when importing them. You can change this in Settings>Colour by modifying the first "RGB Colour Profile" dropdown to your input space (e.g. Adobe RGB). Hope the above helps!
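To make the "linear values into bounded gamma-corrected space" step concrete, here is a minimal Python sketch of the standard sRGB transfer function. This is only the plain ICC-style display encode; the Filmic/AgX macros additionally apply tone mapping before any such encode, so treat this as a simplified illustration, not the macros themselves:

```python
def srgb_encode(x):
    """Encode a scene-linear value into bounded, gamma-corrected sRGB.

    Values are clamped to 0..1 first, which is why a plain display
    transform clips highlight detail that tone mappers like Filmic/AgX
    are designed to compress instead.
    """
    x = min(max(x, 0.0), 1.0)
    if x <= 0.0031308:                       # linear toe segment
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055    # power-law segment

# Mid-grey in linear light ends up around 0.735 after encoding:
print(round(srgb_encode(0.5), 3))  # 0.735
```

Anything above 1.0 in scene-linear space simply clips to 1.0 here, which is the core reason HDR workflows need a tone-mapping transform rather than a bare display encode.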
  5. Hi @playername, thanks for the information, it's very useful. I had a look at Eary's configuration and it looks very comprehensive. It doesn't appear to have any file rules defined, however, so I'm unsure which colour space Photo will convert from into scene linear. I'm hoping it just looks at the 'default' role, which is Linear Rec.709, but if your source colour space was something else then a custom file rule would need adding to the configuration file. I haven't really experimented with other colour spaces when using Blender—can you actually set it up to use something like ACES or Adobe RGB for all its internal compositing? I'll have to take a look and see if Eary's AgX transforms and output differ from Troy's, as I based my macros on Troy's version... (I've developed some macros so that users can easily apply AgX/Filmic transforms without requiring OCIO at all, which makes it far easier to just export as a gamma-encoded image format straight from Photo.)
  6. Hi @irandar, I'm assuming the files are all OSC since they're from a Nikon camera, in which case I wouldn't try to artificially split them into mono channel data straight away. What you could do is stack portions of the light frames and use the same calibration frames each time, e.g. stacking the light frames in groups of 0-100, 101-200, 201-300, 301-400. You could use file groups for each set if you wanted to do this. However, I'm unsure whether it would actually offer any speed benefit over just stacking all 400 frames in one file group, to be honest. If you did take this approach, you would end up with multiple data layers once you apply the stack. You can select these layers, then go to Arrange>Live Stack Group and change the operator to Mean. This will average the layers and reduce overall noise further. Then you could either flatten (Document>Flatten) or Merge Visible (Layer>Merge Visible) to create a new pixel layer from this result. At this point, you could try using the "Extract OSC Layer to Mono RGB" macro in the Data Setups category, then try either "Mono Stretch (RGB)" or "Mono Log Stretch (RGB)", the latter of which is more aggressive. Finally, you could then colour map using "RGB Composition Setup". There isn't currently a macro to synthesise a luminance layer, but I could add this in an upcoming version. Hope that helps!
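For intuition on why Mean-combining the sub-stacks works, here is a minimal pure-Python sketch of per-pixel averaging. It is illustrative only, not what the Live Stack Group actually runs internally:

```python
def mean_stack(layers):
    """Average a list of equal-sized 'layers' (flat lists of pixel values).

    Averaging N frames reduces uncorrelated noise by roughly sqrt(N),
    which is why stacking sub-groups and then Mean-combining the results
    lands close to stacking everything in one pass.
    """
    n = len(layers)
    return [sum(px) / n for px in zip(*layers)]

# Four noisy observations of the same "true" pixel row [10, 20, 30]:
frames = [
    [12, 19, 31],
    [ 9, 21, 29],
    [11, 20, 30],
    [ 8, 20, 30],
]
print(mean_stack(frames))  # [10.0, 20.0, 30.0]
```

The per-frame deviations cancel out in the average, which is exactly the noise reduction the Mean operator provides.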
  7. Hi @playername, there's something else that may be causing your issue: as per the OCIO v2 spec, Photo will now always convert from the document colour space to scene linear (in V1 this was optionally ignored if the EXR filename didn't contain a valid colour space name appended to it). In the AgX config file (at least with Troy's main Git repo), we have the following:

```yaml
ocio_profile_version: 2

environment: {}
search_path: LUTs
strictparsing: true
luma: [0.2126, 0.7152, 0.0722]

roles:
  color_picking: sRGB
  color_timing: sRGB
  compositing_log: sRGB
  data: Generic Data
  default: sRGB
  default_byte: sRGB
  default_float: Linear BT.709
  default_sequencer: sRGB
  matte_paint: sRGB
  reference: Linear BT.709
  scene_linear: Linear BT.709
  texture_paint: sRGB

file_rules:
  - !<Rule> {name: Default, colorspace: default}
```

Photo will be using the Default rule—you should see a toast in the top right every time you open an EXR file saying it has converted from 'default' to scene linear. The 'default' role, however, is defined as sRGB (see the roles above), so Photo will be converting from non-linear sRGB primaries to scene linear, which will look very wrong. If you change the file_rules section to:

```yaml
file_rules:
  - !<Rule> {name: Default, colorspace: default_float}
```

it will then convert from Linear BT.709 (Blender's default internal colour space) to scene linear, which should look correct. Does that fix it for you? Alternatively, you can go to Settings>Preferences>Colour and disable Perform OCIO conversions based on file name, which will stop the conversion entirely and assume the EXR primaries are already in scene linear. I'm not sure what the correct approach is here—according to the OCIO documentation (https://opencolorio.readthedocs.io/en/main/guides/authoring/rules.html), rules are now a requirement for V2 configs as a default colour space must be mandated. I'm not sure why the AgX configuration has defined the default colour space as non-linear sRGB; I suspect there must be a reason.
Perhaps adding some additional file rules may help make the configuration more flexible, e.g.:

```yaml
file_rules:
  - !<Rule> {name: OpenEXR, extension: "exr", pattern: "*", colorspace: default_float}
  - !<Rule> {name: Default, colorspace: default}
```

This would at least convert from Rec.709 linear (or whatever default_float is in another configuration) whenever an EXR file is loaded. There is another separate issue where the colour space transform appears to be bounded 😬 (this is already logged). That issue won't help matters, but it doesn't account for the radically different results you're seeing, which I suspect are caused by the configuration file making Photo convert from the wrong colour space. I'll discuss with the Photo developer next week when he's back off holiday—we may need to do some more investigating...
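As a rough illustration of how first-match file rules resolve a colour space, here is a simplified pure-Python sketch. It only models extension/pattern matching plus the roles table from the config above; OCIO's real rule engine is more involved, and `resolve_colorspace` is a hypothetical helper, not part of the OCIO API:

```python
import fnmatch

# Simplified model of OCIO v2 file rules: each rule may constrain
# extension and pattern; the first match wins, and the final rule is
# the mandatory catch-all Default.
RULES = [
    {"name": "OpenEXR", "extension": "exr", "pattern": "*", "colorspace": "default_float"},
    {"name": "Default", "colorspace": "default"},
]

# Roles as defined in the config quoted earlier:
ROLES = {"default": "sRGB", "default_float": "Linear BT.709"}

def resolve_colorspace(filename):
    """Return the colour space a file would be converted from."""
    for rule in RULES:
        ext = rule.get("extension")
        if ext is not None:
            if not filename.lower().endswith("." + ext):
                continue  # extension constraint not met, try next rule
            if not fnmatch.fnmatch(filename, rule.get("pattern", "*")):
                continue  # pattern constraint not met, try next rule
        return ROLES[rule["colorspace"]]

print(resolve_colorspace("beauty_pass.exr"))  # Linear BT.709
print(resolve_colorspace("render.tif"))       # sRGB
```

With only the catch-all Default rule present, every file (EXRs included) falls through to the sRGB role, which is the mis-conversion described above.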
  8. Hi @Andreas Wilkens, this is likely related to the brightness of the EDR panel in the MacBook constantly changing: there is an issue in V1 where many brightness change notifications are sent to Photo (which uses them to calculate dynamic range for the 32-bit preview panel). You can mitigate this by turning off automatic brightness adjustment in System Preferences. Please see this video for more detail: This issue is fixed entirely in V2 (and all future versions of the Affinity apps). It primarily affects Photo V1 because Publisher and Designer still use an OpenGL view (whereas Photo uses Metal). Please don't follow any advice that suggests disabling Metal Compute to fix this issue, as it will noticeably reduce performance.
  9. Hi @WB1, the Photo video tutorial on placing images is about five minutes in length, and only because it goes through all the various options you have. If you're not bothered with them, simply drag-dropping images onto the document will place them (or you can use File>Place). None of the official video tutorials are as long as thirty minutes to my knowledge, perhaps you are watching third party videos? I believe the longest video on the Affinity Photo channel is around 18 minutes long, and only because it is an in-depth workflow video on HDR editing and exporting. Most tool-based videos are on average three to five minutes in length. Nevertheless, the Affinity apps shouldn't be particularly more complicated than other similar software (or indeed, the previous Plus range)—have you checked the in-app help? It's rather "old school" in its approach, giving you a searchable list of books and topics that provide straightforward instructions on using a tool or feature.
  10. Hey @RTSullins, I came here to copy-paste my reply on Reddit but you've already found it! Glad to hear using File>New Stack works for you. For the benefit of anyone else with a similar issue, here's the reply I wrote:
  11. Hi @Stuart444, I shoot with an OM-1 and Affinity Photo V2 has supported it since its initial release in November of last year (V1 is unlikely to receive new RAW file support, only maintenance patches now). I'm not sure why you would have to purchase a universal license if you don't need to use Designer or Publisher, you can purchase Photo individually: https://affinity.serif.com/photo/#buy Hope that helps, James
  12. Hello, hopefully the following will help. I wrote this functionality a few years ago when we were developing the Windows versions of the Affinity apps. It's actually behaviour for the in-app help viewers which has been ported across to the web version. It detects the operating system and sets an appropriate stylesheet with the OS modifier keys. Therefore you may blame me! As has been mentioned above, the typical user experience is to be browsing the help with the same OS that they are using the Affinity apps on, so this is generally a sensible behaviour.

If you need to see the modifier keys for the other OS, however, you can use Alt/Option and the left and right arrow keys to toggle between the two. Make sure you are focused on the topic window and not the TOC (a single click over the topic area will suffice). The above modifier is not advertised publicly since it was implemented as a development tool, but there is no harm in exposing it, I believe.

Having a web version-only toggle for macOS/Windows would be a sensible approach. Another option would be to simply list both modifiers side by side with platform clarification, though that would likely frustrate both sets of users and create additional 'noise' in the topics—we receive enough criticism about this with the video tutorials, where both modifier variants are always mentioned. Any decision would however be up to the documentation team.
  13. Hey @David in Mississippi, this is more of a workaround than a solution, but you can double-click an asset in the Assets panel to 'load' it into the Place Image tool—you can then click-drag and place it at any size you want, using modifiers such as CMD/Ctrl to use a centre origin and Shift to toggle aspect-correct scaling. Assets are just pointers to layer content (if you drag images directly onto the Assets panel, they are instanced as Image layers)—I don't think the asset creation process should necessarily manipulate these in any way, such as resampling them to a resolution based on the current document, but you raise a good point that perhaps the asset could be optionally constrained to the current document bounds. Hope the above solution helps for now!
  14. Hi @grunnvms, the difference in file size is likely due to Affinity Photo applying lossless ZIP compression by default. You can change the compression method or disable it entirely when using File>Export by scrolling down to the Advanced section: it's at the bottom of the settings. It certainly sounds like Capture One isn't applying any form of compression, as you've observed that its output file sizes are all consistent. Rest assured, you're not losing any information. You may gain a slight increase in encoding speed by not using compression when exporting from Photo—it's up to you whether the increased file size is an acceptable trade-off. Here's the export dialog with the compression option:
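To see why ZIP (deflate) compression changes the file size without losing information, here is a small Python sketch using the standard zlib module, the same compression family TIFF's ZIP option uses. The byte string stands in for raw image data; it doesn't write an actual TIFF:

```python
import zlib

# Stand-in for raw image data: a smooth repeating gradient, which
# (like many photographic images) compresses well with deflate.
raw = bytes((x % 256) for x in range(100_000))

compressed = zlib.compress(raw, 6)   # ZIP compression in TIFF is deflate/zlib
restored = zlib.decompress(compressed)

assert restored == raw               # lossless: every byte comes back identical
print(len(raw), len(compressed))     # the compressed copy is substantially smaller
```

The round trip restores the data bit-for-bit, so two files of very different sizes can still contain identical pixel information; only the encoding time and storage footprint differ.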
