dmstraker

Members
Content count: 762

Reputation Activity

  1. Like
    dmstraker got a reaction from Patrick Connor in Vibrance Saturation slider algorithm?   
    Apologies for the confusion. If it will help, more detail is below that hopefully answers your questions and explains my motivations.
    1. Monochrome conversion
    In a colour image, Red, Green and Blue have different values in each pixel. When Red, Green and Blue values in a pixel are equal, the result is white, grey or black (in other words, monochrome). The puzzle in converting to monochrome is how to shift the Red, Green and Blue in each pixel so they are equal. Examples of this include:
Take the average. So if R=100%, G=0% and B=0% (in other words bright, saturated red), the monochrome pixel value is R=33%, G=33%, B=33%.
    Go half-way between the maximum and the minimum. So if R=100%, G=0% and B=0%, the maximum is 100% and the minimum is 0%, so the monochrome pixel value is R=50%, G=50%, B=50% (which makes the pixel a mid-grey, brighter than the first method).
    Use the gap between the maximum and the minimum. So if R=100%, G=0% and B=0%, the gap between the 100% maximum and the 0% minimum is 100%, so the monochrome pixel value is R=100%, G=100%, B=100% (which makes the pixel brighter still; in fact it will be white).
    So the question is: 'What method will you use to convert the colour image into monochrome (ie black and white)?'
A further problem is that when we look at different colours, we perceive them to be of different brightness. Taking just the primary and secondary colours (red, yellow, green, cyan, blue and magenta), yellow is seen as bright while blue is seen as dark, with the other colours in between. If you use any of the calculations above to create monochrome, a fully saturated yellow pixel and a fully saturated blue pixel will convert to the same grey. This can make an image look poor (a good example is a bunch of different-coloured flowers which all come out in similar shades of grey).
    So someone found that when converting to monochrome, it worked well to take different amounts of Red, Green and Blue. Because of the eye's sensitivity to different colours, within each pixel in the coloured image, you take about 30% of the Red, 59% of the Green and 11% of the Blue and then create the monochrome pixel by setting Red, Green and Blue all to the same value from this calculation. As a formula, this is R*0.30+G*0.59+B*0.11. The result is a monochrome image that looks more realistic.
So if R=100%, G=0% and B=0%, the 30/59/11% calculation is 0.3*100% + 0.59*0% + 0.11*0% = 30%, so pure red appears as a darkish grey. If R=0%, G=100% and B=0%, the calculation is 0.3*0% + 0.59*100% + 0.11*0% = 59%, so pure green appears as a lighter grey. In this way, different colours, even when their channel values are equally strong, are converted to appear as different shades of grey.
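The four approaches above can be sketched in a few lines. This is an illustration of the arithmetic only (function names are mine, and it is not Affinity Photo's actual code), working on channel values from 0 to 1:

```python
# Four ways to collapse an RGB pixel to a single grey value.
# Illustrative sketch, not Affinity Photo's implementation.

def mono_average(r, g, b):
    """Plain average of the three channels."""
    return (r + g + b) / 3

def mono_midrange(r, g, b):
    """Half-way between the maximum and minimum channel."""
    return (max(r, g, b) + min(r, g, b)) / 2

def mono_range(r, g, b):
    """The gap between the maximum and minimum channel."""
    return max(r, g, b) - min(r, g, b)

def mono_weighted(r, g, b):
    """The perceptual 30/59/11% weighting described above."""
    return 0.30 * r + 0.59 * g + 0.11 * b

# Pure red (R=100%, G=0%, B=0%):
print(mono_average(1, 0, 0))   # about 0.33 -> darkish grey
print(mono_midrange(1, 0, 0))  # 0.5        -> mid grey
print(mono_range(1, 0, 0))     # 1.0        -> white
print(mono_weighted(1, 0, 0))  # 0.3        -> darkish grey
```

Running the same functions on pure green shows the point of the weighting: the first three methods give green exactly the same grey as red, while the weighted method gives 0.59, a noticeably lighter grey.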
    2. The Vibrance question
    A way in Affinity Photo to create a monochrome image is to turn down the saturation. A question is whether this takes account of the 30/59/11% tweak above.
    When you turn down the Saturation in the HSL adjustment, it uses a formula that means a similarly saturated Yellow and Blue will appear as the same shade of grey. In some situations this is just fine, but if you want 30/59/11% conversion, so the human eye makes more natural sense of the colours, you need a different method.
    The Vibrance adjustment also has a Saturation control, but turning this down has a different effect to the Saturation control in HSL. I had thought the result was just from using the 30/59/11% calculation, but it seems to be slightly different.
    A way to get a perfect 30/59/11% calculation is to use Levels and simply change the RGB drop-down control to Grey.
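The yellow-versus-blue point can be made concrete. Assuming HSL desaturation keeps the HSL lightness, which is half-way between the strongest and weakest channel (a common definition, not confirmed as Affinity's exact formula), a fully saturated yellow and a fully saturated blue land on the same grey, while the 30/59/11% weighting keeps them apart:

```python
# Illustrative comparison of HSL-style desaturation vs 30/59/11% weighting.
# Not Affinity Photo's actual code.

def hsl_lightness(r, g, b):
    """Grey an HSL-style desaturation would keep: (max + min) / 2."""
    return (max(r, g, b) + min(r, g, b)) / 2

def weighted_grey(r, g, b):
    """Grey from the perceptual 30/59/11% weighting."""
    return 0.30 * r + 0.59 * g + 0.11 * b

yellow = (1.0, 1.0, 0.0)
blue = (0.0, 0.0, 1.0)

print(hsl_lightness(*yellow), hsl_lightness(*blue))  # both 0.5: identical greys
print(weighted_grey(*yellow), weighted_grey(*blue))  # roughly 0.89 vs 0.11
```

So under HSL desaturation the flowers example collapses to one grey, while the weighted conversion keeps yellow bright and blue dark, matching how the eye sees them.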
    3. My motivation
    I am both a curious old techie and the author of the InAffinity YouTube channel where I explore and explain Affinity Photo. In this, I talk about such things as the 30/59/11% thing and the difference between HSL and Vibrance Saturation controls. Hence my original question. This helps me learn as well as giving the buzz of helping others. And hopefully also helping Serif, a great British company, to sell more product. With nearly 10,000 subscribers and lots of nice comments, I'm guessing I'm nudging the needle a little.
    Slightly perplexingly, Patrick seems to think I am trying to get hold of company secrets and distracting the developers from their important work. Some people would be insulted by this rebuke, but I am not. I was both a developer and then a QA engineer and, if I'm honest, was a pretty prickly geek who sometimes got irritated by perceived challenges. But then I studied psychology, looked in the mirror and hope I'm a bit cooler now. In particular I don't assume I know the thoughts and intent of others. In fact I've found that thinking kindly about them is better for me as well as for my relationships.
    So all's good.
  2. Like
    dmstraker got a reaction from Old Bruce in Vibrance Saturation slider algorithm?   
  3. Like
    dmstraker got a reaction from h_d in Vibrance Saturation slider algorithm?   
  4. Like
    dmstraker got a reaction from Max P in Vibrance Saturation slider algorithm?   
  5. Like
    dmstraker got a reaction from NotMyFault in Vibrance Saturation slider algorithm?   
  6. Like
    dmstraker got a reaction from NotMyFault in help doc for procedural texture (x y w h rx ry ox oy)   
    Elementary. Of course. gr=golden ratio. Take a bow, NMF.
  7. Like
    dmstraker reacted to NotMyFault in help doc for procedural texture (x y w h rx ry ox oy)   
    A bit Sherlock Holmes:
My forensic tool: a PT filter with A=gr/x delivered a smooth gradient. With x=2 the info panel shows A=206. OK, next try A=gr-z/255 using input z of type Z. Starting with z=412 and poking around, I find gr-413/255 equals 0, so gr = 413/255 = 1.6196. Google finds this is the golden ratio. Similarly you can find pi = 801/255. I remember seeing this a long time ago (maybe a video tutorial), but you cannot find any hints in the help file.
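The deduction checks out numerically. Assuming (as the post infers) that the Procedural Texture filter scales these constants by 255, the fractions sit very close to the true values:

```python
# Checking the inferred Procedural Texture constants against the real values.
# The "scale by 255" behaviour is the post's inference, not documented.
import math

phi = (1 + math.sqrt(5)) / 2   # golden ratio, 1.6180...

print(413 / 255)  # 1.6196..., within about 0.002 of phi
print(801 / 255)  # 3.1411..., within about 0.001 of pi
```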
  8. Like
    dmstraker reacted to NotMyFault in Vibrance Saturation slider algorithm?   
I don't know if Photo handles this the same way as PS. Here you can find some more info about the definition.
    https://www.photo-mark.com/notes/analyzing-photoshop-vibrance-and-saturation/
  9. Like
    dmstraker got a reaction from NotMyFault in help doc for procedural texture (x y w h rx ry ox oy)   
    Yup. I keep bumping into them. For example I use gn for green as gr means something (grads?). re and bl for red and blue are ok.
  10. Like
    dmstraker reacted to walt.farrell in Typing Framed Text on Shape (Mark Teague)   
Oh. Text on a path, aka Path Text. That's new in Photo, but it has been in Designer and Publisher for a long time.
  11. Like
    dmstraker got a reaction from mbrakes in Deconvolution?   
    I just had a question on my InAffinity channel as to whether APh uses deconvolution in sharpening algorithms. Apparently Photoshop uses it for its smart sharpening. Deconvolution? I had to look it up.
So I thought I'd pass it on to you good folks. Any moves in such a direction? Or maybe even more intelligent stuff?
    Tx
  12. Like
    dmstraker got a reaction from walt.farrell in Typing Framed Text on Shape (Mark Teague)   
    Disclaimer 1: The video was done a while ago. I don't have previous versions of APh so tricky to check.
    Disclaimer 2: I'm more of a photo guy so make little use of text (so claim limited expertise here).
    Disclaimer 3: My brain hurts! (any brain surgeons out there?)
    Otherwise I get both the original question and the explanation. It is a bit of a confuser if you don't know what's happening. Like what happens with the Artistic Text tool at shape boundaries (nice new feature, btw).
  13. Like
    dmstraker reacted to MEB in Curves Alpha not working when child layer   
    Hi @dmstraker, @NotMyFault,
Sorry for the delay getting back to you. We've been quite busy lately due to the amount of new posts/support tickets generated by the latest releases and the 90 day free trial offer/discounts.
    The Channel Mixer adjustment seems to be working fine for me when set to alpha in all positions. Can you record a clip please? Maybe I'm missing something...
    Curves and Levels are not working. I've logged this to be looked at. Thanks for reporting it.
  14. Like
    dmstraker reacted to firstdefence in Typing Framed Text on Shape (Mark Teague)   
What you are doing is converting the shape into a text frame; that is why the shape loses some of its shape properties.
  15. Like
    dmstraker reacted to NotMyFault in Curves Alpha not working when child layer   
    Here you go @MEB
Same issue when you group both layers instead of nesting.
    Same issue with the Levels adjustment & alpha channel.
    Same issue with the Channel Mixer (select alpha channel, set blue participation to 100%).
    In all tested cases, every adjustment layer stops working when nested or put into a group:
    Windows 10, OpenCL active, OpenCL inactive, WARP driver; Photo 1.9.1.979, Beta 1.9.2.1005.
    curves alpha not working 1.9.1.afphoto curves alpha not working 1.9.1.zip
  16. Thanks
    dmstraker got a reaction from smadell in Affinity Photo from 10,000 Feet - Free PDF   
    Nicely done, @smadell, and thanks for the InAffinity reference.
    For a future version if you like, I keep a parallel web-based index (and resources) here: http://changingminds.org/disciplines/photography/affinity_photo/affinity_photo.htm
  17. Thanks
    dmstraker reacted to smadell in Affinity Photo from 10,000 Feet - Free PDF   
    I am attaching a free PDF called “Affinity Photo from Ten Thousand Feet.” This is a 41 page book that explains many of the concepts and questions that forum members ask frequently.
    Much of the information is based on presentations given to my local photography club. Some of it is based on a series of articles written for the club’s newsletter. Much of the book is newly written material, based on years of working to understand digital photography, how color works, and how Affinity Photo fits into that picture.
    Please enjoy the book. All I ask in return is that you look it over and leave a comment.
    Affinity_Photo_from_Ten_Thousand_Feet.pdf
  18. Thanks
    dmstraker reacted to Artur in New Icons Pack   
    Hi,
    some time ago I found a post with free icons packs. Today I would like to share with you another 5 great libraries:
Evil Icons by evilmartians.com (1.9.0) / MIT license
    fontkiko (1.0.8) 521 icons x 3 (light, regular, solid) / SIL OFL 1.1
    Phosphor Icons (v1.1.2) 683 icons x 5 (thin, light, regular, bold, solid) / MIT license
    Heroicons (0.4.2) 224 icons (outline, solid) / MIT license
    Boxicons (2.0.7) / ~1500 icons / SIL OFL 1.1
    You can download them @ https://aiac.pl/affinity
    Cheers,
    Artur.
  19. Haha
    dmstraker got a reaction from o0Spyder0o in Unsplash Stock photos crashing when dragged into image   
    Should have gone to Specsavers
     
  20. Like
    dmstraker got a reaction from Max P in Value box in Procedural Texture variables   
    ...and of course the corollary, to include a slider where there is now just a value box.
    So: Add value boxes to 0-1 and -1 to 1 range variables, and sliders to real numbers and integers.
    Thanks!!
  21. Like
    dmstraker got a reaction from crgreen in How to Edit the Alpha Channel   
    You're welcome. Ta works. Also Diolch (Welsh).
  22. Like
    dmstraker got a reaction from crgreen in How to Edit the Alpha Channel   
    Wow. Long conversation. I won't pretend to understand some of the discussions.
    However something I haven't seen here is mention of the Erase blend mode.
A Pixel layer with the Erase blend mode seems to work much like a Mask layer: where it is white the content below becomes transparent, and where it is black the content stays opaque. Non-pixel layers work in the same way. Coupled, for example, with Blend Ranges and inserted into a group (to limit the erase layer's extent), it seems this could be of use in at least some applications.
    Am I barking up the wrong tree (or just barking)?
    Edit: Here's video:
     
  23. Haha
    dmstraker got a reaction from Chris B in Unsplash Stock photos crashing when dragged into image   
     
  24. Haha
    dmstraker got a reaction from Alfred in Unsplash Stock photos crashing when dragged into image   
     
  25. Like
    dmstraker reacted to Chris B in Placed image colouring   
    Hey, this is by design.
    An example of where this is useful is when we want to recolour stuff using say, a Pantone. I believe we do this when creating our workbooks. I agree it should probably be documented though.
Please note there is currently a delay in replying to some post. See pinned thread in the Questions forum. These are the Terms of Use you will be asked to agree to if you join the forum. | Privacy Policy | Guidelines | We have placed cookies on your device to help make this website better. You can adjust your cookie settings, otherwise we'll assume you're okay to continue.