Reputation Activity
-
rui_mac got a reaction from h.ozboluk in Blend tool in Designer
I requested a Blend Tool when Affinity was still in the beta stage.
That was MANY, MANY YEARS AGO!!!!
-
rui_mac reacted to CaroleA in Scripting
As I am reading this thread, anxious to see some scripting capabilities added to Affinity, I notice that users' needs are pretty varied. I am coming from PaintShop Pro, where I use scripting extensively to create elements and effects, and in Affinity I have tried using Macros (which seem the closest thing to scripting in the current situation). However, Macros lack many possibilities that I am hoping to find when scripting becomes available. Of course, if a macro is enough for a task, that is great, but here are some features that I am looking for:
- ability to edit a sequence of steps after recording them
- ability to calculate based on the starting image. Ex: using a smaller-scale pattern if the starting shape is smaller, and vice versa
- ability for the user to input a choice and have the steps continue based on those choices. Ex: Do you want a single or a double frame? Do you want the edges to be straight, slightly warped, or very warped? How many rows/columns do you want?
- ability for the user to select colors during the execution of the steps, to be applied in specific places
- ability to create multiple files. Ex: create individual images for each letter in a word, or create 20 different buttons based on a selection of colors given by the user
- ability to randomize settings so that every execution of the steps yields something slightly different
- ability to pause the execution to allow the user to do something before continuing. Ex: draw a shape, move a selection to a different location
I have been using scripting in PSP for over a decade and have used similar functions to code hundreds of scripts, so if I want to offer the same thing to Affinity users, I will definitely have to wait for scripting to be added, since Macros just won't cut it for 90% of what I want to do.
But I am curious to see what others are hoping to use scripting for. What do you need scripts to do that macros can't do?
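For illustration, here is a minimal sketch, in Python (the language PSP scripting itself uses), of the kind of script the list above describes: a user choice steering the steps, randomized settings, and multiple output files from one run. Every drawing/export call is a hypothetical stub, since no Affinity scripting API exists yet; only the control flow is the point.

```python
import random

def render_button(name, color, warp):
    """Hypothetical stand-in for the drawing and export steps."""
    print(f"button_{name}.png <- color={color}, warp={warp}")

# A user choice steering how the steps continue (impossible in a recorded macro):
warp = input("Edges: straight, slightly warped, or very warped? ")

# Colors picked by the user, applied in specific places; multiple files from one run:
colors = ["#D94F30", "#2E86AB", "#F6C85F"]
for i, color in enumerate(colors):
    jitter = random.uniform(0.9, 1.1)   # randomized setting: every run differs slightly
    render_button(f"{i:02d}", color, f"{warp} x{jitter:.2f}")
```

None of this can be expressed as a macro, which can only replay a fixed sequence of recorded steps.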
-
rui_mac reacted to NotMyFault in Why does Dodge and Burn tools don't work on masks?
You are mixing up 2 separate things:
1. the numeric representation of images, including the alpha channel (the image data)
2. the visualization of that image data using colors on a display
Affinity and all other apps simply use greyscale colors (or red overlays, or whatever the user wants) to visualize the data. And all 8 billion humans except one simplify this and use greyscale color names when speaking of numeric alpha values. This does not mean the values are colors or have color; it only means that 80% opacity is visualized as 80% grey. The alpha channel is rendered, or displayed, as a greyscale image. Super simple, handy, and all but one are happy. And you can use all brush tools on this greyscale layer; Affinity will happily convert the colors into numbers and store the numbers (not the color shown on the display) in the alpha channel.
You can insist on your interpretation forever; that is totally fine. All others will insist on their interpretation, and nobody will ever convince the other side. Again, this is totally fine, and we simply want to stop arguing about this topic.
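A minimal sketch (using NumPy, which is an assumption here; Affinity's internals are not public) of that distinction: the alpha channel is just a grid of numbers, and the greyscale "image" you see is merely a visualization of those numbers.

```python
import numpy as np

# Alpha channel: a plain 8-bit grid of numbers, one per pixel.
alpha = np.array([[255, 204, 128],
                  [ 64,  26,   0]], dtype=np.uint8)

# Visualizing it as greyscale just reuses the same number for R, G and B.
preview = np.stack([alpha, alpha, alpha], axis=-1)  # a (2, 3, 3) RGB preview

# "80% opacity is visualized as 80% grey": the value 204 is both the stored
# opacity (204/255 = 0.8) and the grey level shown on screen.
print(alpha[0, 1] / 255)   # 0.8
print(preview[0, 1])       # [204 204 204]
```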
-
rui_mac reacted to Bound by Beans in Why does Dodge and Burn tools don't work on masks?
While it's true that masks represent opacity and are not color channels per se, they are effectively grayscale pixel maps where brightness values correspond directly to transparency. Since Dodge and Burn tools operate on pixel luminance, it makes perfect sense to allow them on masks. Photoshop and other tools support this because there's no technical limitation—only a design decision. Treating masks as grayscale layers is not only logical but essential for advanced workflows.
Rui_mac, influxx, and catlover focus on practical use and functionality—they argue that it's both useful and technically feasible to allow Dodge and Burn on masks.
R C-R sticks to a narrow technical definition, emphasizing what masks are not, without addressing actual workflow needs or established practices in other software. I don't know what Serif's customers are supposed to do with that kind of approach, other than be driven away from Affinity and the forum.
Therefore, Rui_mac, influxx, and catlover present the stronger argument, both technically and functionally. 🌟
"Without addressing actual workflow needs or established practices in other software"... it strikes me as descriptive of Serif’s core problem. One could say that Serif prioritizes internal logic over professional workflows and industry standards, which risks making the software less appealing to experienced users and enterprise clients. And that risk, after 10 years of development, has become a permanent reality.
Not something I need to discuss any further. Reality out here matters more than the parallel universe of the Affinity forum.
-
rui_mac got a reaction from NotMyFault in Why does Dodge and Burn tools don't work on masks?
I explain it in the video.
Dodge_Burn2_0.mp4
-
rui_mac reacted to influxx in Why does Dodge and Burn tools don't work on masks?
C'mon man, don't waste your time with this fool. You of all people have tried and tried over the years in this thread. He's just too stubborn to get it.
-
rui_mac got a reaction from R C-R in Why does Dodge and Burn tools don't work on masks?
If you create a 3D rendering and ask the 3D application to save the RGB with an alpha channel, but you choose a file format that does not allow for alpha channels (JPEG, for example), you have to save the alpha channel as an additional file (one JPEG for the RGB and one JPEG for the alpha).
What type of image will the JPEG with the alpha be? A colourless image? A JPEG without any coloured pixels?
Please, tell me what will be in that JPEG.
-
rui_mac reacted to influxx in Why does Dodge and Burn tools don't work on masks?
Sorry to dredge up an old thread, but you are completely incorrect with the statement you have made several times in this thread.
I have been in this industry for 40 years and have used PS every day in that time, even back when layers didn't exist. I suggest you go back and read the works of Kai Krause. I suggest you go back and read the works of Zax Dow. I suggest you go back and read the works of John Knoll. I suggest you go back and read the works of Alex Lindsay. You will find that you have it completely backwards.
Layers appear to us as transparent BECAUSE, behind the scenes, the alpha channel is TELLING the layer which pixels to display. There is no such thing as colorless opacity levels. It's all just pixels.
You need to stop telling people the wrong info.
-
rui_mac reacted to influxx in Why does Dodge and Burn tools don't work on masks?
Thanks for sharing your workflow. I will try that, as it seems the closest way to achieve what I want. However, with this method there is still a disconnect between the mask and the image: you are working on them separately. Totally undesirable. The superior method is to work on the mask directly, so you can see exactly, in real time, how the mask affects the composite. Because, after all, the composite is what we are going after, not masking as some arbitrary exercise.
Anyway, cheers for sharing your process.
-
rui_mac got a reaction from influxx in Why does Dodge and Burn tools don't work on masks?
The main reason that I don't ditch Photoshop completely is the way Affinity Photo deals with alpha channels.
It is so much more intuitive in Photoshop, since alphas are just greyscale images and you can do to them whatever you can do to a regular greyscale image.
-
rui_mac got a reaction from influxx in Why does Dodge and Burn tools don't work on masks?
R C-R, let me try to explain this slowly and clearly.
Internally, alpha channels are stored as a list of 8-bit or 16-bit values. Let us assume an 8-bit list, for the sake of this example.
A value of 0 means transparent. That is the opacity that will be applied to the RGB values, meaning that they will simply not be shown.
A value of 255 means fully opaque. That is the opacity that will be applied to the RGB values, meaning that they will be shown in full color.
Any other value between 0 and 255 will show the RGB values with intermediate opacities.
But those alpha values (between 0 and 255) are shown as greyscale values because, in fact, they describe a greyscale image. And that is how you see an alpha channel: as a greyscale image.
Since it is shown as a greyscale image (because it is simply a list of 8-bit values), and you can perform painting operations on it, there is simply NO REASON whatsoever for not being able to perform ALL the operations that you could perform on a regular/official greyscale image.
You just have to imagine that an alpha channel is simply an extra channel, besides the Red, Green and Blue channels. Since the Red, Green and Blue channels are each simply greyscale images by themselves, the extra (alpha) channel is also a greyscale image.
If you export a set of frames from a 3D or video application and ask for the alpha to be exported as independent files (required when exporting frames in file formats that DON'T support extra channels besides RGB), guess what you get... an RGB file and a greyscale file.
So, alphas ARE greyscales.
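A minimal sketch (NumPy assumed) of exactly those rules, compositing one pixel over a background with an 8-bit alpha value:

```python
import numpy as np

def composite(fg_rgb, alpha, bg_rgb):
    """alpha is an 8-bit value: 0 = transparent, 255 = fully opaque."""
    a = alpha / 255.0
    return a * np.asarray(fg_rgb, float) + (1.0 - a) * np.asarray(bg_rgb, float)

red, white = (255, 0, 0), (255, 255, 255)
print(composite(red, 255, white))  # [255.   0.   0.]  -> shown in full color
print(composite(red, 128, white))  # ~[255. 127. 127.] -> intermediate opacity
print(composite(red,   0, white))  # [255. 255. 255.]  -> simply not shown
```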
-
rui_mac got a reaction from influxx in Why does Dodge and Burn tools don't work on masks?
I also did some darkroom stuff, and I still think of dodging and burning in terms of "hiding light" or "letting light pass through" ;)
-
rui_mac got a reaction from influxx in Why does Dodge and Burn tools don't work on masks?
Yeah, black=transparent sounds weird to graphic artists because, to them, black is "all colors combined", since they use the subtractive system (CMYK). But it is obvious to the video guys, because they use the additive system (RGB) and, there, black stands for "no light information" or "absence" :)
Anyways, all of those artists (print or video) assume that the alpha is just another image that is used to convey information about how the transparency/opacity of the color components should be shown.
And, since it is just another image, it should be possible to do whatever is possible to do with an image.
-
rui_mac got a reaction from influxx in Why does Dodge and Burn tools don't work on masks?
I have been giving classes for over 20 years now, and many of my students are professionals in the graphic arts and video areas (I give specialized classes to recent graduates and long-time professionals), and ALL of them know that, when it comes to masks/alpha channels, white stands for opaque and black stands for transparent. It is almost a mnemonic for them, because when they visualize a mask (an alpha channel), that is how it is shown.
Most of them deal with alpha channels, straight or pre-multiplied, as extra channels or as independent files, and all, and I really mean ALL, of those alpha channels (masks) are simply greyscale images. And they must be, since they all have to manipulate those alpha channels (masks) like any other image: blurring, contrasting, smudging, sharpening, etc.
And, internally, an alpha channel (mask) is stored as a grid of 8-bit or 16-bit values, just like any other greyscale image.
So, there is no reason whatsoever to treat an alpha channel (mask) as something other than a greyscale image. There are only advantages in treating it as a greyscale image and being able to manipulate it as such.
But if anyone can point out to me any huge disadvantage in doing so that greatly surpasses the advantages, please, tell me.
-
rui_mac got a reaction from influxx in Why does Dodge and Burn tools don't work on masks?
Painting with a regular brush set to a low value (let's say 10%) of black and set to Multiply will simply add 10% of black with each stroke, no matter whether you are painting over highlights, midtones or shadows.
In the same way, painting with a regular brush set to a low value (let's say 10%) of white and set to Screen or Add will simply add 10% of white with each stroke, no matter whether you are painting over highlights, midtones or shadows.
This is no substitute for the true Dodge and Burn tools, which weight their effect by tonal range.
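A crude numeric sketch of the difference, with values in 0..1. The midtone weighting below is hypothetical, not Photoshop's or Affinity's actual formula; it only illustrates why the two behave differently across the tonal range:

```python
def multiply_brush_black(v, opacity=0.10):
    # 10% black in Multiply: every stroke scales the value down uniformly.
    return v * (1.0 - opacity)

def burn_midtones(v, exposure=0.10):
    # Hypothetical tone weighting: strongest at 0.5, fading to zero at the
    # extremes, so deep shadows and bright highlights are largely preserved.
    weight = 4.0 * v * (1.0 - v)
    return v - exposure * weight * v

for v in (0.9, 0.5, 0.1):  # highlight, midtone, shadow
    print(f"{v:.1f} -> multiply {multiply_brush_black(v):.3f}, burn {burn_midtones(v):.3f}")
```

The multiply brush darkens the highlight just as hard as everything else (0.900 -> 0.810), while the tone-weighted burn barely touches it (0.900 -> 0.868).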
-
rui_mac got a reaction from influxx in Why does Dodge and Burn tools don't work on masks?
When you are dodging/burning a mask, you are just adding more contrast to the textures that are already present in it, since they are derived from the color channels.
So, imagine a wool sweater. It has a distinct edge that results from the coarse wool. It even has a kind of... let's call it an "organic feathered edge". Painting would create a harder edge. Simply dodging and burning creates a nice mask with that "organic feathered edge" in a very controlled manner. I could even decide to increase the density of some parts of the edge and keep it softer in other places.
The same with hair. Sometimes I just want to keep the thicker hairs and ditch the thinner ones. By dodging and burning on a channel that was derived from a color channel, I can create a near-perfect hair mask that is completely controlled by me, not the result of automatic processes. Oh, I can use automatic processes. But dodging and burning allows me to edit the result of an automatic process in a very controllable way. Painting hairs is not efficient, especially when they are already there, in the mask, just requiring some brightness/contrast adjustments.
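A minimal sketch (NumPy assumed, with a synthetic stand-in for a color channel) of that workflow: the mask starts as a channel copy, so the "organic" texture is already there, and local dodge/burn-style adjustments reshape it without repainting it:

```python
import numpy as np

rng = np.random.default_rng(0)
green = rng.random((64, 64))       # stand-in for a photo's green channel, 0..1

mask = green.copy()                # the mask inherits the channel's texture

# Local "burn": push one region toward transparent, scaled by the existing
# values so the soft, feathered edge survives (unlike flat painting).
mask[10:30, 10:30] *= 0.6

# Local "dodge": lift another region toward opaque, again preserving texture.
mask[40:60, 40:60] = 1.0 - (1.0 - mask[40:60, 40:60]) * 0.6

print(mask.min(), mask.max())      # still an ordinary greyscale grid in 0..1
```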
-
rui_mac got a reaction from influxx in Why does Dodge and Burn tools don't work on masks?
I already did.
I have been asking for more control over alpha channels ever since Affinity Photo was released.
Between 50% and 80% of a professional photo retoucher's/calibrator's work has to do with alpha channels.
If editing alpha channels doesn't get more flexible and powerful, lots of things that are simple in Photoshop will be a pain (or almost impossible) in Affinity Photo.
And, believe me, I really want to use Affinity Photo for everything :)
-
rui_mac got a reaction from influxx in Why does Dodge and Burn tools don't work on masks?
Have you seen my video?
R C-R, I have been an image-processing professional for more than 20 years. And I can assure you that dodging and burning a mask is A MUST for all professionals.
Don't think of a mask as simply a transparency. A mask can be used as a selection. I use masks/additional channels very, very often, because I need to create very precise selections for compositing or for color calibration.
So, I often start with a copy of a color channel or a combination of channels. Then, I have to edit that new, additional alpha channel.
To do so, I need to use ALL the arsenal of tools, adjustments, filters, etc., that I have available for a simple grayscale image. Because alpha channels/masks are just that: grayscale images.
If I can't edit an alpha channel/mask with all the tools, adjustments or filters, that diminishes the methods and techniques available to professionals.
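A minimal sketch (NumPy assumed; the particular channel blend is hypothetical) of starting a selection mask from a combination of channels, as described above:

```python
import numpy as np

rng = np.random.default_rng(1)
r, g, b = (rng.random((4, 4)) for _ in range(3))  # stand-ins for the channels

# e.g. isolate warm areas: wherever red exceeds blue, clipped into 0..1.
mask = np.clip(r - b, 0.0, 1.0)

# From here the mask is an ordinary greyscale image, open to the full
# arsenal: curves, blur, sharpen, smudge, dodge, burn...
print(mask.round(2))
```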
-
rui_mac got a reaction from influxx in Why does Dodge and Burn tools don't work on masks?
I know this method, Crabtrem.
I don't mean dodging and burning on an image.
I mean, on a mask.
For example, when I have a mask that I derived from a color channel, sometimes I need to darken or lighten some areas of the mask. And I mean in a very specific way, not globally.
Check out this example I created sometime ago:
https://www.youtube.com/watch?v=W6H_8gjX-eI
-
rui_mac got a reaction from influxx in Why does Dodge and Burn tools don't work on masks?
My mask creation workflow (in Photoshop) requires that I use most paint/edit tools on a mask. After all, a mask is just a greyscale image.
So, why is it that the Dodge and Burn tools (two of the tools I use most often when creating masks) don't work on masks?
-
rui_mac got a reaction from BBG3 in Scripting
This is just a very simple example that I coded just now in Cinema 4D. The script is attached to the top object, called "Null"; this would be the equivalent of a layer or another object in Affinity. Then, no matter how many objects are set as children of the top object, the script always distributes them evenly between the positions of the first and last child. As you can see in the movie, I can add as many objects as I want, and they will always distribute evenly. And it is fully dynamic.
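For the curious, here is a minimal reconstruction (not the exact script from the movie) of what such a Cinema 4D Python tag looks like, using the public C4D Python API:

```python
import c4d

def main():
    # Inside a Python tag, `op` is predefined by C4D as the tag itself.
    obj = op.GetObject()              # the object the tag is attached to ("Null")
    children = obj.GetChildren()
    if len(children) < 3:
        return                        # nothing in between to distribute

    first = children[0].GetAbsPos()
    last = children[-1].GetAbsPos()

    # Re-place every in-between child at an even fraction of the span, every
    # time the scene refreshes; that is what makes it fully dynamic.
    n = len(children) - 1
    for i, child in enumerate(children[1:-1], start=1):
        child.SetAbsPos(first + (last - first) * (i / n))
```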
Captura2.mp4
-
rui_mac got a reaction from BBG3 in Scripting
If someone on the scripting development team at Affinity could answer me this, it would be great.
In other applications I use that allow for scripting (mainly Cinema 4D and VizRT), besides being able to create scripts that can be executed from a menu listing (Cinema 4D only), I can also attach a kind of "tag" to an object that runs scripting code in real time, whenever the scene/document is changed/refreshed.
That way, I can perform scripting operations on the object that the "tag" is attached to, and also on its children.
Will Affinity's scripting abilities only allow for scripts that can be invoked from a menu, or will we get a new type of "tag" to attach to our "layers/objects" that allows us to add abilities/behaviours to any "layer/object"?
Something like my mockup...
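In code form, the idea might look something like this purely hypothetical sketch (every name here is invented; no such Affinity API exists), with a small stub standing in for a layer:

```python
class Layer:
    """Stub standing in for a hypothetical Affinity layer/object."""
    def __init__(self, name, x=0.0, children=()):
        self.name, self.x, self.children = name, x, list(children)

def on_document_refresh(layer):
    """Hypothetical 'tag' callback: runs whenever the document changes,
    and distributes the layer's children evenly along x."""
    kids = layer.children
    if len(kids) < 3:
        return
    first, last = kids[0].x, kids[-1].x
    n = len(kids) - 1
    for i, kid in enumerate(kids[1:-1], start=1):
        kid.x = first + (last - first) * (i / n)

group = Layer("Null", children=[Layer("a", 0), Layer("b", 99), Layer("c", 300)])
on_document_refresh(group)
print([k.x for k in group.children])   # [0, 150.0, 300]
```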
-
rui_mac got a reaction from GRAFKOM in Blend tool in Designer
Here is a video I recorded a few years ago, when Affinity Designer was still in its infancy, showing how great the Blend tool was in FreeHand.
I also recorded a few more videos about FreeHand, since it was such an amazing application, and I wish some of that stuff were implemented in Affinity, as the way it worked in FreeHand is still better than what Illustrator has now.
-
rui_mac got a reaction from Frozen Death Knight in Blend tool in Designer
Here is a video I recorded a few years ago, when Affinity Designer was still in its infancy, showing how great the Blend tool was in FreeHand.
I also recorded a few more videos about FreeHand, since it was such an amazing application, and I wish some of that stuff were implemented in Affinity, as the way it worked in FreeHand is still better than what Illustrator has now.
-
rui_mac reacted to krsdcbl in Scripting
Quick reminder to everyone that:
- The devs confirmed that the software underneath runs on C++ and is capable of precise calculations (obviously), so precision will likely be no concern for any scripting API layer on top.
- When people talk about "floating point precision" concerns in JS, they are talking about 0-1 calculations in data processing that need to be orders of magnitude more precise than anything we will require for graphic production.
- If scripts perform integer maths, "floating point precision" matters in terms of avoiding overflow, which is a non-issue, since results will be rounded to 3-4 decimal places in the software anyway.
- If scripts perform decimal maths, floating point precision is a concern for accuracy and rounding, but this only starts to be an issue at accuracies far beyond what the base software will render anyway.
- Actual floating point math comes into play for bezier handling and rasterization, but the inconsistencies in JS and other languages are of the likes of "0.2 + 0.1 resulting in 0.30000000000000004 instead of 0.3". Crudely oversimplified, this means that even on massive artboards in the range of kilometers, or millions of px across, we would be talking about inaccuracies in the range of micrometers to nanometers when positions are represented as floats. Additionally, such conversions for rasterization are very likely handled by the C++ software underneath, not by your script itself!
In short, it is very, very unlikely that this would ever affect the outcome of even bezier curve ratios in the realm of 2D print & digital graphics. Those are concerns for 3D rendering and illumination, data science, AI, quantum physics, thermodynamics, acoustics, etc., and they have little to do with the language a given API is implemented in anyway.
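The classic example named above, reproduced in Python (which uses the same IEEE-754 doubles as JavaScript), shows just how small the error is and why rounding to the 3-4 decimal places the software stores makes it vanish:

```python
print(0.2 + 0.1)               # 0.30000000000000004
print(round(0.2 + 0.1, 4))     # 0.3  (the app's stored precision hides it)
print(abs((0.2 + 0.1) - 0.3))  # 5.551115123125783e-17, i.e. utterly negligible
```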
I'm confident the devs have seriously considered any such concerns when devising a plan for building an API, let alone the software itself, and I would be very happy to have this thread discuss feature propositions and ideas for the actual API, rather than in-depth topics about their stack without any insight or information as a basis for such a discussion. 🙏