rui_mac

  1. rui_mac

    Scripting

    I arrived late to this thread, but the subject of adding scripting abilities to the Affinity suite started a long time ago. I can't wait to be able to write my own scripts inside any of the Affinity apps. And I think Python is, definitely, the best way to go: multiplatform, lots of libraries available, easy and fun to code, very efficient, etc. Having coded in BASIC, assembler, C, C++, JavaScript and Python, I think Python is the best one for scripting.
  2. The native format is very structured, I know. And that means it is very compact when saved and also when in memory. But, and this is a big but, if it is possible to paint on or blur the alpha data, then it is possible to Dodge and Burn it too. If the alpha were stored in such a way that it was not possible to darken or lighten its values (in fact, dodging and burning are simply the increasing or decreasing of values), then it would not be possible to paint on the alpha either.
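To make the point concrete, here is a minimal Python sketch (my own illustration, not Affinity code) of why dodging and burning an alpha channel is the same operation as lightening or darkening a greyscale image: both just raise or lower 8-bit values.

```python
def dodge(values, amount):
    """Lighten: increase each 8-bit value, clamped to 255."""
    return [min(255, v + amount) for v in values]

def burn(values, amount):
    """Darken: decrease each 8-bit value, clamped to 0."""
    return [max(0, v - amount) for v in values]

alpha = [0, 64, 128, 200, 255]   # one row of an alpha channel
print(dodge(alpha, 40))          # → [40, 104, 168, 240, 255]
print(burn(alpha, 40))           # → [0, 24, 88, 160, 215]
```

If you can add and subtract from the list (paint, blur), nothing in the data itself prevents dodge and burn.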
  3. I know all about storage methods. I teach my students how JPEG stores data, and how indexed colors (GIFs, for instance) or mipmaps work. But when a file is loaded, even if there is some sort of internal storage scheme, all that data has to be accessed as raw data when reading, writing or manipulating it. I know how that works because I code my own tools and deal with all types of data.
  4. I just created a render in Cinema 4D and asked for an alpha to be produced. I defined the file format as TIFF and asked for a separate alpha. I got two files: an RGB TIFF file and a greyscale TIFF file. So, alphas ARE simply greyscale images. And the same happens in any format. Sometimes, if the alpha is stored as a separate file and the file format does not support greyscales, the alpha is even stored as an RGB image, with all channels storing the same information (three equal greyscale images, one for each of the R, G and B channels). Of course, that is overkill, as the file gets much bigger.
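A minimal Python sketch of the "alpha stored as three equal RGB channels" case described above (illustrative only, not any application's actual exporter):

```python
def grey_to_rgb(grey):
    """Expand a greyscale (alpha) channel into three identical channels."""
    return [(v, v, v) for v in grey]

alpha = [0, 128, 255]
print(grey_to_rgb(alpha))  # → [(0, 0, 0), (128, 128, 128), (255, 255, 255)]
# Three equal channels: three times the data for the same information,
# which is why the resulting file gets much bigger.
```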
  5. CMYK images are a set of four greyscale channels, each one defining the amount of Cyan, Magenta, Yellow and Black ink. Internally, there is no difference in the way they are stored. So, a greyscale image is a list of values. An RGB image is a set of three lists of values. A CMYK image is a set of four lists of values. And an alpha is a list of values, just like a greyscale image. Exactly the same, stored in the same way. The way you interpret or use that set of values may be different, but they are all the same type of data internally. So, we should be able to use all the tools in the same way on all of them.
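The claim above can be sketched in Python (purely illustrative data, not any application's internal format): every image type is one or more lists of values, and only the interpretation differs.

```python
greyscale = [0, 128, 255]                              # one list of values
rgb       = ([255, 0, 0], [0, 255, 0], [0, 0, 255])    # three lists (R, G, B)
cmyk      = ([0] * 3, [0] * 3, [0] * 3, [255] * 3)     # four lists (C, M, Y, K)
alpha     = [0, 128, 255]                              # one list, like greyscale

# A greyscale channel and an alpha channel are the same kind of data:
print(type(greyscale) == type(alpha))  # → True
print(greyscale == alpha)              # → True
```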
  6. The main reason that I don't ditch Photoshop completely is the way Affinity Photo deals with alpha channels. It is so much more intuitive in Photoshop, since alphas are just greyscale images and you can do to them whatever you can do to a regular greyscale image.
  7. Exactly as all image files are. They are just lists of numbers. And those lists of numbers represent colors, greyscales and opacity values. And those opacity values are represented as greyscales. And, since you can perform operations on these lists of numbers (painting, blurring, smudging, dodging, burning, applying filters, etc.), it should all be possible in CMYK, RGB, greyscale and alpha channels. Please, tell me, R C-R, what is the difference, internally and numerically, between the list of values of a greyscale image and the list of values of an alpha?
  8. R C-R, let me try to explain this slowly and clearly. Internally, alpha channels are stored as a list of 8-bit or 16-bit values. Let us assume an 8-bit list, for the sake of this example. A value of 0 means transparent: that is the opacity applied to the RGB values, meaning they will simply not be shown. A value of 255 means fully opaque: that is the opacity applied to the RGB values, meaning they will be shown in full color. Any value between 0 and 255 will show the RGB values with a different opacity. But those alpha values (between 0 and 255) are shown as greyscale values because, in fact, they describe a greyscale image. And that is how you see an alpha channel: as a greyscale image. Since it is shown as a greyscale (because it is simply a list of 8-bit values), and you can perform painting operations on it, there is simply NO REASON whatsoever for not being able to perform ALL the operations that you could perform on a regular/official greyscale image. You just have to imagine that an alpha channel is simply an extra channel, besides the Red, Green and Blue channels. Since each of the Red, Green and Blue channels is simply a greyscale image by itself, the extra (alpha) channel is also a greyscale image. If you export a set of frames from a 3D or video application and ask for the alpha to be exported as an independent file (required when exporting frames in file formats that DON'T support extra channels besides RGB), guess what you get... an RGB file and a greyscale file. So, alphas ARE greyscales.
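The 0-means-hidden, 255-means-shown behaviour described above is the standard alpha-over blend; here is a toy Python sketch of it with 8-bit integer math (an illustration, not Affinity's actual compositor):

```python
def composite(fg, bg, alpha):
    """Blend one 8-bit channel value of fg over bg, using an 8-bit alpha."""
    return (fg * alpha + bg * (255 - alpha)) // 255

print(composite(200, 50, 0))    # → 50   (fully transparent: background shows)
print(composite(200, 50, 255))  # → 200  (fully opaque: foreground shows)
print(composite(200, 50, 128))  # → 125  (partial opacity: a blend)
```

The alpha value itself is just another 8-bit number in a list, exactly like a greyscale pixel.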
  9. I don't mind if the interpolation only occurs at the final stage (printing or exporting). But we should have a way to define individual interpolation methods, per image, and not a "one option serves all".
  10. The best and most versatile way would be:
      - Have a document DPI (as it already has) to define the global resolution of the document and the resolution at which images should be resampled when printing or exporting.
      - Have a global interpolation method, as a document default.
      - Allow each image to have an interpolation override.
      - Use the global or per-image interpolation method when a layer is Rasterized.
  11. I know that images should not be resampled destructively. It would be just a matter of display, printing and export. As it is now, it seems like a bilinear or bicubic method is being used for display/print. And I do know that I can set the resampling interpolation method when exporting. But it would be nice to be able to set a "bypass" method for individual images. For example, I write software manuals and I use many screen captures of dialogs and icons. Sometimes, I prefer to have those images enlarged with the Nearest Neighbour method, so that the image doesn't become "soft".
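As an illustration of why Nearest Neighbour suits screenshots, here is a toy Python sketch (one row of pixels, integer scale factor only):

```python
def nearest_neighbour_scale(row, factor):
    """Enlarge one row of pixels by an integer factor: each pixel is
    simply repeated, so hard edges stay hard."""
    return [v for v in row for _ in range(factor)]

row = [0, 255]                          # a hard black-to-white edge
print(nearest_neighbour_scale(row, 3))  # → [0, 0, 0, 255, 255, 255]
# A bilinear filter would instead introduce intermediate grey values
# across the edge, making the enlarged screenshot look "soft".
```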
  12. When bitmaps are placed and scaled (up or down) in Publisher, what is the interpolation method used? Also, what interpolation method is used when we Rasterize a bitmap layer? Sometimes, I would like to have a bitmap scaled using the Nearest Neighbour method; other times, Bilinear or Lanczos. It would be great to be able to define a default method for all bitmaps in a document and, also, an individual setting in the Resource Manager.
  13. Although my suggestion is not for that specific problem, I do think it could help somehow. Having more control over how each layer is displayed/rasterized/deformed is always good.
  14. Each pixel layer should have an individual interpolation parameter, so that each layer could be set to Nearest Neighbour, Bilinear or Bicubic (and, if possible, more advanced methods). This way, a different interpolation mode could be defined for each layer as it is resized or deformed. Sometimes, "softer" modes are required; other times, a more "pixelated" result is required. This should be a parameter for all pixel objects in Designer, Photo and Publisher.
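The suggested per-layer override could work like this trivial Python sketch (all names invented for illustration; this is not Affinity's API):

```python
# Document-wide default interpolation method (hypothetical name).
DOCUMENT_DEFAULT = "Bilinear"

def interpolation_for(layer_override=None):
    """Return the layer's own method if one is set, otherwise fall back
    to the document default."""
    return layer_override if layer_override is not None else DOCUMENT_DEFAULT

print(interpolation_for())                     # → Bilinear
print(interpolation_for("Nearest Neighbour"))  # → Nearest Neighbour
```

One default plus an optional per-layer override covers both the "one setting for the document" and the "this screenshot must stay pixelated" cases.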
  15. Instead of simply displaying a frame for the bleed, the area inside the bleed and outside the page should display as a dimmed version of the elements that spread out. Like this mockup I did.