About Jamon
  1. The new Affinity Photo for iOS appears to update faster than my Windows 10 workstation, but if you pay close attention to the video, it updates with the same large blocks. I guess this is just how the Affinity engine works, and as much as I would otherwise have liked the software, this is a fundamental problem for me: I am uncomfortable without fluid updates. The sharp edges of updating tiles feel harsh and unhealthy to the senses, and they become a severe disturbance to workflow whenever there is any delay in screen redraws and the edges turn noticeable. I'd liken it to relaxing on the beach, feeling thirsty and drinking from the Affinity water bottle, only to feel sharp shards of glass in the water. Other software updates smoothly, without some tiles updating before others. To me this is a major human interface design flaw. Not everyone will notice or care; some people have higher sensitivity. It's like how I do not like to work on an LCD screen with slow PWM backlight dimming, because I see strobing effects with sharp lines in the image there too. I am using a $2,700 display for faster, flicker-free updates, and I'd pay a lot more money for an Affinity suite that offered a more biocompatible style of screen redraw.
  2. I've seen Affinity Photo referred to as "Photo". Lenovo, a computer hardware company, had a product called the "ThinkPad Tablet". Imagine if you saw Lenovo employees referring to "Tablet", with sentences such as, "Tablet is an amazing device! Not only can you watch movies on Tablet, but you can call your friends too!" In case you do not immediately feel what's wrong with that: "tablet" is a common generic term for an entire class of hardware in that space, just as "photo" is already common shorthand for "photograph" in this one. "I edited my photo with Photo" is convoluted. The product names aren't really parallel, either. Affinity Designer implies it designs, or is used by designers; but Affinity Photo does not photo, and is not used by photos. Regardless, "Affinity" is the unique identifier for the products. "Designer" and "Photo" on their own wouldn't mean anything to anyone; they are far too generic. "Affinity Designer" and "Affinity Photo" are unique. If one must use a shorthand, simply "Affinity" is better than "Photo", and it's usually obvious from context which product you mean. "Affinity Photo" isn't so long that it really needs a shorthand, but "AP" is even shorter if you want one. Please don't refer to it as "Photo", though: it overlaps existing terms, doesn't convey enough meaning, and sounds ridiculous: "Before, you could edit photos on a tablet with Photo on the Tablet using a Wacom stylus; but now, you can also edit photos on the iOS version of Photo using a Pencil on the iPad!"
  3. I emailed the Lensfun maintainer approaching a week ago, and there's been no response yet. I have doubts about relying upon Lensfun in this manner. At the very least, there should be a function to update the Lensfun library from within the current version of Affinity, so one need not wait for an Affinity update; and also the ability to add your own Lensfun lens profile data to your local library, for people who cannot wait for the Lensfun maintainer to include it. Who pays the Lensfun maintainer? What happens when they get ill? Does Serif pay people to profile lenses and contribute the data, to keep the library complete? My camera is extremely popular and was released almost half a year ago, yet there's no support in Lensfun. New cameras are released all the time, so this will be an ongoing problem, with people waiting on an open source project to support their commercial software. If Serif is going to depend on Lensfun, I hope they are contributing to the project, either in money or in data, and making an effort to keep it up to date. There should also be multiple maintainers, with a Serif employee holding commit access to the official repo; or Serif could fork the library to control and work on it independently, while sharing patches upstream, so that if the original project slows down or closes, Affinity is unaffected.
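On the point about adding your own profile data: Lensfun's database is plain XML, so a local entry can in principle be written by hand and dropped into your user database directory. A minimal sketch of one lens entry follows; the maker, model, mount, and ptlens coefficients are made-up placeholders, not measured values, so treat this only as an illustration of the file shape:

```xml
<lensdatabase>
  <lens>
    <maker>ExampleMaker</maker>
    <model>ExampleMaker 24mm f/2.8</model>
    <mount>ExampleMount</mount>
    <cropfactor>1.0</cropfactor>
    <calibration>
      <!-- placeholder coefficients; real values come from calibration shots -->
      <distortion model="ptlens" focal="24" a="0.012" b="-0.035" c="0" />
    </calibration>
  </lens>
</lensdatabase>
```

Real entries are produced from test shots with calibration tools, which is exactly the extra work most customers won't do, hence the concern above.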
  4. Affinity One - Unified Product

    In Affinity Photo you can press P for the Pen tool, draw a vector line, click the stroke parameter to increase it to 3 pt, then click the "fx" on the layer, enable "Gaussian Blur", and drag the slider. They're basically the same software. Affinity Designer is a kind of "persona" focused on vectors; Affinity Photo is focused on pixels. You could smash them together, but you'd still need to implement those modes to control which tools are displayed. Having dedicated apps is like having a fork and a spoon: you could make a "spork", but then it's optimized for neither case.

    If I want to do more complicated vector work, I can do it in Designer, Ctrl+G to group it, Ctrl+C to copy, switch back over to Photo, and paste. I can also use "Edit in Designer" to switch apps, then "Edit in Photo" to switch back. It's basically the same software, so it's like switching modes; whether they're a single app or not is a matter of organizational preference. The Mac approach might be unification, but the Unix approach is dedicated tools. A unified interface wouldn't necessarily be simpler: it would be tricky to organize, and could end up more complicated in practice than separate apps.

    The tools are made for people, and most "photographers" will never need to draw, while most "designers" will never need to develop raw images or merge multiple exposures. By separating the software, it's easier to focus on supporting each group of people in simpler ways. It's fun to think about and discuss, and I can certainly see why an "Affinity One" would be a neat idea; for more advanced users especially, having a mess of menus and options might be exciting and feel more powerful. But it's Serif's project, not ours, and they're doing just fine. Having Photo and Designer works, and the most important thing right now is to focus on the core basics and polish what they've got.
  5. I thought the only RAW engine available for Windows, which OP is using, is "Serif Labs".
  6. Create a macro:

     1. File > New HDR Merge
     2. Select files
     3. Uncheck "Tone map HDR image"
     4. View > Studio > Macro
     5. Click the red circle to "Start recording"
     6. Enter the "Tone Mapping Persona"
     7. Make HDR adjustments
     8. Save the HDR preset and apply it (the macro shows "Tone map")
     9. Make any other adjustments
     10. Click the white circle to "Stop recording"
     11. Click the grid with the plus in the Macro panel for "Add to Library"
     12. Name it "HDR test"

     Run the macro:

     1. File > New HDR Merge
     2. Select files
     3. Uncheck "Tone map HDR image"
     4. Double-click "HDR test"
     5. Repeat

     I think that's the best you can do today. Hopefully once they add JavaScript it will be possible to open a folder, loop through sequences of images, HDR merge them, apply a preset, and export to an incremental filename, all without doing anything manually.
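To make the batch-scripting wish concrete, here is a small sketch of the loop such a script would run: group a folder's bracketed shots into fixed-size sequences and pair each group with an incremental output filename. The grouping and naming logic below is runnable as-is; the merge, preset, and export calls in the comments are hypothetical, since Affinity has no scripting API yet, and the three-shot bracket size is an assumption.

```python
# Sketch of a batch HDR plan: split sorted files into bracket-sized
# groups, each paired with an incremental output filename.

BRACKET = 3  # shots per HDR sequence (assumption: fixed bracket size)

def plan_hdr_batches(files, out_dir="out"):
    """Return a list of (group, output_path) pairs, ignoring any
    trailing files that don't fill a complete bracket."""
    files = sorted(files)
    batches = []
    for i in range(0, len(files) - len(files) % BRACKET, BRACKET):
        group = files[i:i + BRACKET]
        out = f"{out_dir}/hdr_{i // BRACKET + 1:03d}.tif"
        batches.append((group, out))
    return batches

for group, out in plan_hdr_batches(["a1.raw", "a2.raw", "a3.raw",
                                    "b1.raw", "b2.raw", "b3.raw"]):
    # Hypothetical calls, if/when a scripting API exists:
    # doc = affinity.hdr_merge(group, tone_map=False)
    # doc.apply_macro("HDR test")
    # doc.export(out)
    print(group, "->", out)
```

Everything above the loop body would stay the same whatever the eventual API looks like; only the three commented calls depend on what Serif ships.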
  7. The administrator needs to enable it in AdminCP, then an RSS icon should be visible:
  8. Extract the attached file, then select it in Affinity with: File > Import ICC Profile

     It's copied to: C:\Users\$user\AppData\Roaming\Affinity\Photo\1.0\profiles

     ProPhotoRGB.zip
  9. Edit > Preferences > User Interface > ☐ Show brush previews
  10. In Affinity Photo, when an adjustment is made, the canvas is rendered in steps, where large tiles in a grid are updated. The first tiles to change are often near the middle, and they then update randomly in an outward motion. Affinity Photo tiles updating.mov

      Is it possible to provide an option to update the screen only after the entire grid has finished rendering? Perhaps the individual tiles need not be displayed as soon as they finish, but could be calculated independently and then all switched simultaneously after the last has completed. I find the jagged steps uncomfortable, and would prefer to see no changes until the entire screen can update at once. That would remove the harsh motion of square boxes updating all over the screen. When you're focused intently on an area, attempting to make subtle adjustments, seeing many sharp corners flashing everywhere becomes painful after a while. It feels too distracting for serious work.

      It also seems as though performance is lacking; one might expect adjustments could be made fast enough that these updates aren't visible to the eye. Creating a new UHD document, filling a single layer with a solid color, and adjusting HSL, the Xeon CPU at 100% cannot keep up, and the Quadro M4000 shows a maximum of 50% utilization. It is slower with GPU acceleration disabled, or in the LAB color format. This is in Windows with both 100% and 200% scaling on a 10-bit UHD display. For comparison, on the same machine in other software, the hue of UHD video can be smoothly adjusted while it plays, and it appears as a soft fade without any boxes no matter how fast I move the slider.

      (If the video doesn't play, rename it as "mp4"; the forum software denied that extension.)
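The "switch all tiles simultaneously" idea is essentially double buffering at the tile level. A toy sketch of the concept follows; this is my own illustration of the general technique, not anything to do with Affinity's actual renderer, and the tile/canvas sizes are arbitrary:

```python
# Double-buffered tile rendering: tiles finish in any order into an
# offscreen back buffer, and the visible front buffer changes only in
# one atomic "swap", so partial tile grids are never shown.

import random

TILE, W, H = 4, 16, 16  # tile size and canvas dimensions (toy values)

def render_tile(buf, x0, y0, value):
    """Render one tile into the offscreen buffer."""
    for y in range(y0, min(y0 + TILE, H)):
        for x in range(x0, min(x0 + TILE, W)):
            buf[y][x] = value

def adjusted(front, value):
    """Apply an 'adjustment' with double buffering: the caller never
    sees a partially updated grid of tiles."""
    back = [row[:] for row in front]  # offscreen copy of the canvas
    tiles = [(x, y) for y in range(0, H, TILE) for x in range(0, W, TILE)]
    random.shuffle(tiles)             # tiles may complete in any order
    for x0, y0 in tiles:
        render_tile(back, x0, y0, value)  # intermediate states stay hidden
    return back                       # single atomic swap to the screen

front = [[0] * W for _ in range(H)]
front = adjusted(front, 7)
assert all(px == 7 for row in front for px in row)  # one visible step
```

The cost is that nothing on screen changes until the slowest tile is done, which is exactly the trade-off requested: a later full update instead of an early partial one.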
  11. Edit > Preferences > Performance > Retina Rendering: High quality (Slowest)
  12. My camera is listed under "Supported Raw Cameras", but there is no automatic lens correction. It is not in the Lensfun library, but their website says, "Note that new camera models can be added very easily. Contact the Lensfun maintainers for this." Are we supposed to contact Lensfun, then wait for Affinity updates, to get better coverage for newer cameras? Or do you plan to acquire more extensive lens distortion models a different way? I can use the Lensfun software to produce the corrections, but I'm wondering how viable this approach is, where their database is missing popular cameras, and most customers are probably unwilling to do the extra work to get their camera supported. For now I thought I'd open an image from an older supported model of the camera, then make a preset from the automatic lens correction settings. But the "Lens Correction" panel is checked, with everything at 0% defaults. Unchecking it does not disable the "Develop Assistant" lens correction, and it does not display which settings were used, so one cannot discover them to adjust or apply to other unsupported camera files.