  1. Years later and this is still not available... I would want DXF export because laser engravers won't take PDF (obviously), and sometimes there is also a need to export to 3D CAD software for extruding accurate shapes. SVG is basically useless in both cases because it can't be exported in millimetre units, so it loses its purpose since the size won't be precise anymore.
  2. I was about to file the same report, but well, I guess this is a pretty common issue. My case is the opposite: my projector, which has a wide color gamut, is set as the primary monitor for my laptop, and the laptop itself has pretty much only an sRGB screen (both are color calibrated using an i1Display Pro). So when I open Affinity Photo on my laptop monitor, all the colors are washed out and reds become orange. The best workaround for now might be for the app to use the actual primary monitor's profile rather than the system default? That would also solve many of the compatibility issues. As long as you set the correct monitor as your primary monitor you should be fine (provided you don't put things like the color picker on a secondary monitor). Ultimately, color correctness comes down to whether the correct ICC profile is loaded in Windows.
  3. Given that they are still developing the enterprise version, my guess is they would just keep the current "pro" version as-is, maybe with some minor gimmicks, and shift the focus to integrating the software into their product design lines. That would be very bad news for artists who aren't product designers. I was never very into the software except on mobile (where I have no choice; I'm an Android user), and now I'll probably use it even less. I'm mainly a PaintToolSAI2 user. Affinity Designer got me into doing vectors, but Sketchbook pretty much got me into nothing (well, except maybe that their pencil textures are a bit better than other affordable software's). I'm still waiting for CSP EX to go on discount in China (the EULA says Chinese users can't use any version bought outside China, and I don't want to pirate it), and for Animation Paper, which has absolutely no ETA for an open beta.
  4. Recently I've been dealing with a lot of 32-bit renders. I've started using an ambient occlusion layer to add more depth, but I find the resulting image a bit too intense if I just multiply the AO layer onto the RGB layer. So besides lowering the opacity, I also tried applying a curves or levels adjustment to the AO layer. However, upon adding the adjustment layer, the whole image dims down a lot, even before I change any settings. It could be a bug, but it would also make sense by design (pre-processing enabled by default so that the adjustment feels more "linear" to the human eye). I can still get things done after some trial and error, but the fact that the adjustment's behavior feels somewhat "random" bugs me. I want to know how exactly the adjustment layers work in a 32-bit environment so that I can be more confident when using them.
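To illustrate the ambiguity described above: whether a curve is applied to the raw linear values or to a display-transformed view changes the result considerably. This is a minimal NumPy sketch, not Affinity's actual pipeline; the function names and the opacity-blend formula are my own assumptions about how such a compositor might behave.

```python
import numpy as np

def apply_ao(rgb, ao, opacity=1.0):
    """Multiply an AO layer onto linear RGB, then blend the result
    back toward the original by `opacity` (0..1) to soften it.
    Assumed blend formula; real apps may differ."""
    occluded = rgb * ao[..., np.newaxis]  # straight multiply in linear light
    return rgb * (1.0 - opacity) + occluded * opacity

def linear_to_srgb(x):
    """Standard sRGB transfer curve. 32-bit pipelines store linear
    values, so an adjustment that looks 'neutral' on screen may be
    applied after (not before) a transform like this one."""
    x = np.clip(x, 0.0, None)
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)
```

For example, a 50%-opacity AO of 0.5 over white linear RGB yields 0.75 in linear light, but about 0.88 once viewed through the sRGB transform, which is why a seemingly small linear change can look large or small depending on where in the chain the curve sits.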
  5. Scanner support has been asked for by a few people already, but I'd like to make it clearer why it makes sense to scan in AP rather than just import scans from other software. There are two things an external scanning tool usually won't provide: 16-bit depth and color management. So far I haven't found any software out there that supports 16-bit scanning (maybe Photoshop can, but I'm not using that, for obvious reasons), which makes archiving slightly more "destructive", since later color adjustments can introduce more banding. For drivers that have color management this might not be much of an issue, but for people who need to scan film negatives (I'm not one of those, at least not yet), the lack of 16-bit might mean more color artifacts than there should be. As for color management, well, that one is easier to explain. If a scanning feature is introduced one day, it should scan into the Develop persona.
  6. Hi, sorry for being slightly late, but I've uploaded the file. I don't have the original EXR file on me, but I uploaded an afphoto file that should replicate the issue if the denoise filter is inserted at the place shown in the screenshot. I'm using Affinity Photo, in case there has been any update I missed.
  7. Also, apparently even without the layer, the exported PNG file still has those areas glitched (though in a different way). So I suspect this is a bug in the 32-bit HDR engine.
  8. I was editing a 32-bit OpenEXR file. After tonemapping, I created a shape with the "soft light" blend mode to remove a shadow, then added a denoise filter to remove the unwanted noise. However, some black chunks appear after adding the filter layer, and only if I put the filter on top of the shape layer. Rasterizing the shape doesn't help. When I zoom out, the shape of the black chunks changes and the software crashes. I should probably mention that this happens on a Surface Pro 4, and the crash seems reproducible. This is a proprietary image, so I would like to share the file only with the dev team, privately.
  9. Add a button in the "Layer" menu that converts a layer with an alpha channel into a solid layer with a mask containing the alpha info.
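The requested conversion is conceptually simple. Here is a minimal NumPy sketch of the idea (my own illustration, not Affinity's implementation); the `premultiplied` branch is an assumption about sources such as OpenEXR, where color is often stored premultiplied by alpha:

```python
import numpy as np

def alpha_to_mask(rgba, premultiplied=False):
    """Split an RGBA layer into (solid RGB layer, greyscale mask).
    The mask carries the former alpha channel. If the source stores
    premultiplied alpha, un-premultiply so the color layer is solid."""
    rgb = rgba[..., :3].astype(np.float64)
    mask = rgba[..., 3].astype(np.float64)
    if premultiplied:
        safe = np.maximum(mask, 1e-8)[..., np.newaxis]  # avoid divide-by-zero
        rgb = rgb / safe
    return rgb, mask
```

Compositing `rgb` through `mask` reproduces the original layer's appearance, but the transparency is now editable as a mask.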
  10. For example, say I want to create a mask that gradient-blends a layer into the one below; I would have to "rasterize as mask". That works fine if I don't need to edit it further, but it's not uncommon that I do, and sometimes I might want an even more complex gradient mask (for example, a reflection mask built from boolean-combined vectors painted with gradients). To use it as a gradient mask rather than a clip mask, I'd have to rasterize it, which is irreversible and not very edit-friendly. The idea is simple: maybe I should be able to click on the mask icon and switch the mask type (between clip and gradient).
  11. This just came to mind. It might be a little crazy, but it may not be that hard to implement, and there would be plenty of use for it if it were. The idea is simple: allow users to write their own live filters in GLSL. Something like this:

      // Haven't written GLSL in some time, so this is more of a sketch
      uniform sampler2D source;  // the source image
      uniform vec2 sourceSize;   // the rasterized size of the source image, in px
      uniform Settings {         // user-definable sliders, document-driven
          /**
           * @name Slider A
           * @description The Slider A
           * @minValue 0
           * @maxValue 100
           */
          float sliderA;
          /**
           * @name Slider B
           * @description The Slider B
           * @minValue 0
           * @maxValue 100
           */
          float sliderB;
      } settings;

      in vec2 pos;     // normalized position/coordinates of the current fragment (pixel)
      out vec4 color;  // the result this filter should produce

      // The part above would be auto-generated, with the settings block built
      // from a GUI, although users could also type it in themselves. Below is a
      // dummy filter that simply outputs the source image unchanged.
      void main() {
          color = texture(source, pos);
      }

  This would already be GPU-friendly (it is basically a fragment shader), so GPU acceleration can be used automatically, and by default it is 32-bit image editing ready. I don't see many people wanting to write their own filters, but those who do could benefit from this, and everyone else could use their code without knowing how to write any, which means Affinity Photo could have a potentially unlimited number of live filters. There could be more features, for example side-chaining other images as secondary sources, but those would only make the roadmap if this lands first, so they are out of scope for this suggestion.
  12. The features of these two personas are actually pretty similar; the former is just for raw photos while the latter is for 32-bit HDR (at least that's what I assume they are for). I'm not going to make the full comparison, but my common-sense view is that OpenEXR is the "raw" format for 3D rendering, and I would use Affinity Photo to "develop" it into the final result; it contains the accurate color values, after all. So it feels weird that when I use the Develop persona, it treats the picture as a badly developed photo rather than as a RAW image. Therefore, I'd say that maybe the two personas could just be one. That might cut down on confusion when dealing with OpenEXR files, and might help camera RAW processing a bit as well: everything is converted losslessly into 32-bit, and then the whole workflow can be 32-bit at no additional cost (except performance and memory), which could let live filters produce better results (given that live filters work in 32-bit internally).
  13. Again, given the choice, I would not want to pick between having a circular hue picker and a square S-L picker. Currently you can only have one, not both, and that's why I posted the request in the first place.
  14. Aside from the fact that an exact 180-degree or 120-degree hue shift usually won't look that pleasing, it's still inefficient to type in the digits when you could just click the desired point on the wheel. That field is really only good for copying values in and out, or when no color picker is available (say, when you're quick-fixing a website's color palette).
  15. It's more that the linearity and consistency of the picker matter. The wasted resolution is really just a minor thing.