Everything posted by NotMyFault

  1. Could something be missing from your post (the preset file)? Besides the textual description, a screenshot of how it works and an example document would help us understand.
  2. Nice start 👍🏼 Thank you for sharing. It's all personal taste, but for me your edits are a bit too much (oversaturated colors look a tad unnatural). It seems you applied mostly global adjustments covering all areas. I would suggest using masks and applying more specific and subtle adjustments to individual areas, like the sky (reduce lightness, increase blue) and the trees on the left (black point, clarity to recover colors). My main advice (hardest to implement): the light was very unfortunate at the time the photo was taken; the castle lies a bit in the dark, and the trees to the left are too light and desaturated by too much sunlight. A different time with better light (on the castle) would give a better photo. It's almost impossible to achieve that in post with Photo (or any other app).
  3. It depends on your workflow style: if it is more pixel-oriented, use Photo (best for small social media logos, banners etc.); if it is more vector-oriented, use Designer (best for book covers, though Photo may provide all you need, see below). You may need both, which can simplify workflows where you have to combine functions of both apps. You can (and must) purchase a license for each OS. The document format is shared across all platforms and apps. Quick tip: for Windows and Mac, purchase via Affinity's website instead of the app stores. It has some crucial advantages: a money-back guarantee, fewer technical issues (yes), and Affinity staff can help you in case of trouble while purchasing.
  4. Good point. According to Wikipedia (https://en.m.wikipedia.org/wiki/Bézier_curve): "Some curves that seem simple, such as the circle, cannot be described exactly by a Bézier or piecewise Bézier curve; though a four-piece cubic Bézier curve can approximate a circle (see composite Bézier curve), with a maximum radial error of less than one part in a thousand, when each inner control point (or offline point) is the horizontal or vertical distance 4(√2 − 1)/3 from an outer control point on a unit circle. More generally, an n-piece cubic Bézier curve can approximate a circle, when each inner control point is the distance (4/3)·tan(t/4) from an outer control point on a unit circle, where t is 360/n degrees, and n > 2." I don't know which type of Bézier (cubic or quadratic) Affinity uses, but the radial error should be far smaller than what is visible here. If you reduce the circle diameter to 15 px, the edge color change is clearly visible. A small numeric check of the quoted error bound follows below.
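A minimal numeric check of the quoted error bound, assuming the standard four-piece construction with control-point offset k = 4/3·(√2 − 1). This is only an illustration of the Wikipedia figure; it says nothing about how Affinity actually constructs its ellipses:

```python
import numpy as np

# Quarter-circle segment of the classic 4-piece cubic Bezier circle
# approximation, running from (1, 0) to (0, 1) on the unit circle.
k = 4.0 / 3.0 * (np.sqrt(2.0) - 1.0)          # ~0.5523
p0, p1, p2, p3 = np.array([[1, 0], [1, k], [k, 1], [0, 1]], dtype=float)

t = np.linspace(0.0, 1.0, 10001)[:, None]
b = ((1 - t) ** 3) * p0 + 3 * ((1 - t) ** 2) * t * p1 \
    + 3 * (1 - t) * (t ** 2) * p2 + (t ** 3) * p3
radius = np.linalg.norm(b, axis=1)
print("max radial error:", np.abs(radius - 1.0).max())
# ~2.7e-4, i.e. well below one part in a thousand -- far too small to explain
# a clearly visible color change at the edge of a 15 px circle.
```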
  5. Actually, I think this might be the potential root cause:
     1. Create a circle (Ellipse Tool with a fill color, e.g. blue) at an integer position with an integer size (100 px).
     2. Select the Move Tool.
     3. Select the middle anchor point. Check that it has integer values (no fractional digits); if not, move it to an integer position.
     4. Rotate stepwise using the Transform panel.
     5. Check the rendering of the edge pixels: depending on the rotation value, the rendering changes.
     6. Set anti-aliasing to "Force Off". Repeat step 5 and see identical results.
     7. Increase the size to the odd integer 101 px. Repeat step 5 and see identical results.
     Conclusion: a rotated circle renders differently at the edge pixels depending on the rotation, with anti-aliasing on or off. This is totally unexpected, as a circle should be rotation invariant (see the small sketch below). The issue raised by abject39 might be related or independent. Tested on iPad with Photo (to have control over anti-aliasing). Note: the video shows no issue for the even circle size; this is due to a wrongly selected zoom level and crop for the video. The issue can be seen at about the 45 degree position of the circle, try it yourself. IMG_0389.MP4
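For reference, here is a minimal sketch of an idealized supersampling rasterizer for an exact circle. It only illustrates the expectation stated above (rotating a true circle about its own center cannot change its pixel coverage); it is not a model of Affinity's actual renderer:

```python
import numpy as np

def disk_coverage(size, radius, center, angle_deg, ss=8):
    """Per-pixel coverage of a disk rotated by angle_deg about its own center,
    estimated with ss x ss supersampling per pixel."""
    t = np.radians(angle_deg)
    c, s = np.cos(t), np.sin(t)
    n = size * ss
    coords = (np.arange(n) + 0.5) / ss            # supersample positions
    xx, yy = np.meshgrid(coords, coords)
    dx, dy = xx - center[0], yy - center[1]
    rx = c * dx - s * dy                          # rotate samples about the center
    ry = s * dx + c * dy
    inside = (rx ** 2 + ry ** 2) <= radius ** 2
    return inside.reshape(size, ss, size, ss).mean(axis=(1, 3))

cov_0 = disk_coverage(100, 50.0, (50.0, 50.0), 0.0)
cov_45 = disk_coverage(100, 50.0, (50.0, 50.0), 45.0)
print(np.abs(cov_0 - cov_45).max())
# Prints 0.0 (or a vanishingly small value): rotation about the center leaves
# every sample's distance to the center unchanged, so the ideal anti-aliased
# edge pixels are identical for any angle. Any visible change in the app must
# come from the curve representation or the rasterizer, not from the geometry
# of a circle.
```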
  6. I raised a similar issue about rotated objects several months ago, in Photo. As your objects have a size with an even number of pixels, rotating them around the middle anchor point gives you unavoidable rounding issues regarding the position. To rotate objects around the middle anchor point, you should use an odd pixel size; otherwise, anti-aliasing could lead to fractional positions. If the number of objects stays within reasonable limits, as a workaround you could use other alignment methods, like grids or rulers with snapping, or rectangular helper shapes (unrotated). You may even nest problematic rotated shapes inside rectangular shapes to allow perfect alignment / distribution.
  7. I didn't see any announcement in the release notes covering this functionality. As you have already started a different thread about this, and Affinity assigned a bug tracker label there, it would help keep everything organized if you posted in that existing thread instead of opening new ones.
  8. Welcome to the forum. Currently, none of the Affinity apps provides animation functionality. You could create and export files containing individual vector objects and use them in other apps that specialize in animation.
  9. You need to use the File > Export function to convert them into one of the formats that other software can read, such as PDF, JPG, PNG or SVG. https://affinity.help/designer/English.lproj/pages/ShareSavePrint/export.html
  10. Seems to be caused by the “mitre limit”: if you set this to zero, the issue is solved.
  11. The issue for me is that it could introduce an enormous amount of redundancy if you really want to add X/Y in every place. Just start with the image provided by @th_studio: in addition to the marked area, there is another box on top, “Page Preset”, where the X/Y labels would have to be added. For me (this is only a personal preference) that is redundant, superfluous and distracting. You have other preferences, which is OK. Regarding the Transform panel (screenshot from iPad): the panel shows 3 rows of input values. It makes sense to use the shortest possible labels (X/Y) there to distinguish between dimensions, position, and rotation/shear; otherwise the risk of editing the wrong row would be high. If the Transform panel showed only size or position, I could live without these labels.
  12. This seems a bit too much nannying to me. At least Affinity is consistent and always uses x first and y second. If you go down that route, you may also need the explicit information that x is counted from left to right and y from top to bottom, and that Affinity uses square pixels (not round, not rectangular with a different aspect ratio), etc.
  13. This might help: you can automate the process by creating 3 simple Channel Mixer presets and recording a macro that applies them and changes the blend mode to “Add”.
  14. Most new cameras offer tethered shooting via a smartphone app. This might be unfamiliar to you, but it really opens up a new dimension: you can control your camera more or less completely from your smartphone, including a live preview, from wherever you stand (so you can be part of a family photo instead of being stuck behind the camera).
  15. This might be a misconception. JPEG supports only 8 bit per color channel: RGB uses 3 channels × 8 bit = 24 bit, CMYK uses 4 channels × 8 bit = 32 bit. You are right that support for 32 bit (CMYK) JPEGs is limited, but that is caused by the CMYK color format, not by a higher bit depth per channel. So the best course of action is simply not to use CMYK if you want to export to JPG, unless you have a very specific reason for this combination. You could convert to RGB during export (which might cause a color shift), or export as PDF, which fully supports CMYK. To create a smaller file, you can reduce the DPI of the PDF during export.
  16. Hm, based on your screenshot I don't get the same iterating edge detection. If I combine one run of edge detection with e.g. High Pass and Threshold, I get the outer edge faster, but not the inner edges of the dice.
  17. I share @IPv6's opinion that there is a problem with the handling of the alpha channel in RGB/32 mode. I made a simple test document containing a white layer whose alpha depends linearly on the x-position (from 0 / black to max / white). If you use the Channels panel to show only the alpha channel, use the 32 bit preview, and adjust the gamma from 1 to any other value, the rendering of the alpha channel differs from gamma = 1. This is totally unexpected: the alpha channel should never depend on gamma. Gamma should only be applied to the RGB or CMYK / LAB color channels. rgb32 alpha linear.afphoto
  18. Maybe we are talking about completely different topics. I strongly assume Affinity doesn't use gamma correction on the alpha channel for layer blending in any color mode, except for the (faulty) case of fill layers; specifically, what is explained in the Wikipedia link about alpha compositing. I was not referring to other situations, e.g. converting between 8/16 and 32 bit modes, or the Channels panel / Channel Mixer etc. I've read your post in the linked thread, but couldn't follow your argumentation. I will try to make some artificial test documents on my own; that helps me to understand.
  19. I've run the example images from https://de.wikipedia.org/wiki/Sobel-Operator through the Affinity Edge Detection filter and it delivers very similar results (a minimal Sobel sketch follows below for reference). As Affinity tends to use standard libraries where available, I don't think they invented something different (or more capable) on their own. To be honest, Affinity is more a jack of all trades than a specialist tool for specific edge cases. So if you have a specialist tool like Substance Designer, I wouldn't hold my breath waiting for Affinity to provide the same level of functionality.
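For reference, here is a minimal sketch of the Sobel operator mentioned above, in plain NumPy/SciPy. It only illustrates the general technique; Affinity's actual Detect Edges implementation is not documented:

```python
import numpy as np
from scipy.ndimage import convolve

# Tiny synthetic grayscale test image: dark background with a bright square.
img = np.zeros((64, 64), dtype=float)
img[16:48, 16:48] = 1.0

# Classic 3x3 Sobel kernels for the x and y gradients; the gradient magnitude
# is the kind of edge map shown on the Wikipedia page.
KX = np.array([[1, 0, -1],
               [2, 0, -2],
               [1, 0, -1]], dtype=float)
KY = KX.T

gx = convolve(img, KX, mode="nearest")
gy = convolve(img, KY, mode="nearest")
edges = np.hypot(gx, gy)
print(edges.max())  # strongest response sits on the border of the square
```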
  20. Only a Fill Layer's opacity transforms the alpha channel with a gamma of 2.0, which seems to be wrong. As far as I know, no gamma correction is applied to alpha channels; see the comments by Gabe and https://en.wikipedia.org/wiki/Alpha_compositing: “Note that only the color components undergo gamma-correction; the alpha channel is always linear.” (A small compositing sketch follows below.)
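To illustrate the rule quoted from Wikipedia, here is a minimal sketch of straight-alpha “over” compositing in which gamma decoding/encoding is applied to the color channels only while alpha stays linear. It uses a simplified pure power-law gamma and is an illustration of the general principle, not Affinity's actual blend code:

```python
def over(src_rgb, src_a, dst_rgb, dst_a, gamma=2.2):
    # Straight-alpha "over" compositing. The color channels are decoded to
    # linear light before blending and re-encoded afterwards; the alpha
    # channel is linear coverage and is never gamma-corrected.
    lin = lambda c: c ** gamma           # simplified gamma decode
    enc = lambda c: c ** (1.0 / gamma)   # simplified gamma encode
    out_a = src_a + dst_a * (1.0 - src_a)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0), 0.0
    out_rgb = tuple(
        enc((lin(s) * src_a + lin(d) * dst_a * (1.0 - src_a)) / out_a)
        for s, d in zip(src_rgb, dst_rgb)
    )
    return out_rgb, out_a

# 50% opaque white over opaque black: 0.5 in linear light, which encodes to
# roughly 0.73; the resulting alpha is exactly 1.0, untouched by gamma.
print(over((1.0, 1.0, 1.0), 0.5, (0.0, 0.0, 0.0), 1.0))
```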
  21. Okt. 2020 opens normally on my PC (Win, 1.9.2.1015 and 1.10.1.11), but I get a message about missing fonts and missing linked resources. Maybe one of the linked resources is corrupted?
  22. Maybe I found a probable cause: when reducing the layer opacity of the fill layer to 50%, it becomes much lighter compared to all other methods. You can spot this by disabling all red layers. Maybe it is related to (missing / double) gamma correction in the blend formula: 0.7^2.2 roughly equals 0.5, and if I change the opacity to 0.7 it gives the expected color. (A quick numeric check follows below.)
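A quick numeric check of that hypothesis, assuming the app mistakenly runs the fill layer's opacity through a ~2.2 power curve (purely illustrative arithmetic):

```python
# If opacity is gamma-encoded with ~2.2, an entered value behaves like
# value ** 2.2 when blending:
print(0.7 ** 2.2)        # ~0.456 -> an entered 0.7 effectively acts like ~0.5
print(0.5 ** (1 / 2.2))  # ~0.730 -> the opacity needed to get a true 50% blend
```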
  23. I think Affinity provides this already (Detect Edges), but only as a non-live filter. As the Detect Edges filter has no input parameters, this restriction is kind of acceptable. It becomes more of a restriction if you want to adjust the image layers below while seeing the impact on the edge detection. Could you please provide a use case where the existing non-live filter does not help? I don't understand why it should be integrated into the Procedural Texture filter: edge detection needs to access neighboring pixels, so it is more in the direction of the blur/sharpen filters. Procedural Texture filters are unable to access RGB data from neighboring pixels.