Everything posted by kirk23

  1. It's actually 5 channels in Affinity; you forgot the alpha one, so it's rather CMYKA. What I am asking for is not related to printing. I couldn't care less about printing. It's all about extra channels, and gamut is irrelevant too; I can make my own LUT in Affinity that shows me whatever colors I need. CMYK is all just the same value numbers, and 16 bit gives us more of them. It would allow using the extra channel for impasto painting, special ID masks, keeping UV values in K and A, holding distortion values in an extra channel, and a gazillion other uses needed in 2D compositing. If they made 32-bit CMYK like in Photoshop, it could actually be 10 channels in a single layer, since 32 bit allows keeping two different images in the integer and fractional parts. In a word, I am asking for all the things people still use CMYK in Photoshop for, now that the printing industry has basically died.
  2. So we could have 5 channels for each layer instead of just 4. It would be so helpful for so many things.
  3. Like Photoshop. Not because I need CMYK itself; I couldn't care less about printing. I just need at least one more channel. I subscribed back to Photoshop just because of this, even though I generally prefer Affinity over PS.
  4. No, I absolutely don't want to use Photo for this. I just need a tool that can scatter both vector and raster objects along a vector spline, randomly or in a certain order, and still be ABLE to replace the objects, use symbols, and re-link bitmaps if I want, not immediately rasterize everything at document resolution, which is what Photo would most probably do. Like what Xara or the good old Creature House Expression could do decades ago. So a bitmap nozzle brush is not OK, and the text tool is such a pain in your a.. for that.
  5. Some other live filters can access the layer below, so it seems really weird that PT can't do the same.
  6. I am talking about the kind of thing 2D compositing software like Nuke does. It doesn't have to be full-scale ray tracing or ray marching or whatever. Just a depth channel you could use for deforming a logo on a flag in one click instead of mesh-deforming manually. Or adding some mist, compositing in particle effects, splashes, volume light, fog layers, etc. It's nothing complicated really. You could call it 2.5D, as ZBrush named it. Or what Corel Painter does for impasto. It's all perfectly re-creatable with the current layer system, just inconvenient as hell, since it requires a huge messy pile of groups, layers and in-between linking. What I propose is to make it easier on the UI side. That's all. Some phone cameras already include a depth channel in their photos; why not make it usable?
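The mist example from the post above really is this simple in pixel math. A minimal sketch, assuming a linear RGB image plus a depth AOV as NumPy arrays (the names `add_fog`, `rgb`, `depth` are illustrative, not any Affinity or Nuke API):

```python
import numpy as np

def add_fog(rgb, depth, fog_color=(0.7, 0.75, 0.8), density=0.5):
    """Blend a constant fog color over an RGB image by depth.

    rgb:   (H, W, 3) float array, linear light
    depth: (H, W) float array, distance from camera
    """
    # Classic exponential fog: the farther the pixel, the more fog it gets.
    fog_amount = 1.0 - np.exp(-density * depth)   # (H, W), in 0..1
    fog_amount = fog_amount[..., None]            # broadcast against RGB
    return rgb * (1.0 - fog_amount) + np.array(fog_color) * fog_amount
```

A pixel at depth 0 keeps its original color; distant pixels converge to `fog_color`. In a layer-based editor this would be one live filter driven by the depth channel instead of a stack of masked fill layers.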
  7. They have CMYK, right? It's basically 4 channels + alpha, so 5 channels. The only thing needed to make it work is to support 16-bit, and better 32-bit, in CMYK mode, like Photoshop does. The rest is just color management and support in brush nozzles. Well, the latter is optional, probably. In Photoshop the idea would be perfectly workable right now if it had the same live lighting filter to visualize the depth on the fly somehow. Again, I am not talking about turning Affinity into Nuke, just a few features like impasto painting, for example. People sometimes forget that there is no difference between 2D and 3D in computer graphics: 3D is also 2D on your screen surface or camera plane, plus a bit of 3rd-century geometry, same as any perspective drawing since antiquity.
  8. No need for actual 3D features IMO. It would be nice if they just added some 2D compositing features to work with CG images, to combine them with photos properly, etc.:
     1. A cryptomatte live filter with a mask on demand (not like the exr-IO plugin for Photoshop with a gazillion layers at once; just the one necessary mask at a time, on demand).
     2. An RGBDA channel mode with a depth channel. Just convert CMYK to this one, please. Or make CMYK support 32-bit floating point like Photoshop.
     3. A few special layer blending modes for depth combine. Like where depth uses lighter blending and the other channels use a mask produced on the fly by subtracting the underlying depth from the combined one, plus a threshold.
     4. A UV live filter that would sample a UV AOV or manually made UV gradients to remap pixels over them, so we could replace rendered materials in one click without messing with mesh deforms, etc.
     5. Proper live AO and shadow ray tracing from the depth channel as a live filter.
     6. Brush nozzles with a depth channel, and a vector way to scatter them in Designer.
     IMO it should all be easily possible, just a few small interface fixes here and there. Please don't waste your time on something like Adobe Stager; it's totally useless. Just make some basic 2D compositing abilities, preferably Blender-compatible, so we wouldn't have to buy Nuke just to make a nice one-frame CG picture. No need to make a mess of a node-based interface for that either (while it would still be nice indeed).
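The "depth combine" blend from point 3 above can be sketched in a few lines. This assumes a height-style channel where a larger value means the surface is on top (as in impasto painting); all names here are illustrative, not an Affinity feature:

```python
import numpy as np

def height_combine(rgb_a, h_a, rgb_b, h_b):
    """Composite two layers so the higher surface wins per pixel.

    rgb_*: (H, W, 3) color arrays; h_*: (H, W) height/depth channels.
    """
    h = np.maximum(h_a, h_b)              # lighter/max merge on the height channel
    on_top = (h_a >= h_b)[..., None]      # on-the-fly mask: where A is on top
    rgb = np.where(on_top, rgb_a, rgb_b)  # color follows whichever layer won
    return rgb, h
```

The mask the post describes (subtracting the underlying depth from the combined one, plus a threshold) reduces to exactly this comparison, which is why a dedicated blend mode would be cheap to add.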
  9. No, my version is no different in that regard. You can load a separate depth too, but it can work with just what's beneath, plus a "texture" slider that adjusts the normal map intensity/contrast. The only improvement I propose is the gamma fix for 8- and 16-bit-per-channel images; in 32-bit linear mode you don't need the gamma fix.
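The "gamma fix" mentioned above amounts to decoding sRGB before doing any lighting or height math, since 8/16-bit images are normally sRGB-encoded while 32-bit float images are already linear. A minimal sketch of the standard sRGB transfer functions:

```python
import numpy as np

def srgb_to_linear(c):
    """Decode sRGB-encoded values (0..1) to linear light."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Re-encode linear values (0..1) back to sRGB."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)
```

Run the filter math between these two calls and the 8/16-bit result matches the 32-bit linear one; skip the decode and mid-tones come out visibly wrong, which is the inaccuracy the post is describing.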
  10. Like Corel Painter has. Only please make it accessible and exportable in at least 16 bit, and give depth info to nozzles too. Plus special separate blending modes for the depth: add, sub, and depth combine/height blend, where the depth channel mixes using lighter (max) blending and at the same time provides a mask for the other channels (the math is simple, like 2x2, really). Plus an auto-masking feature for brushes, so they would paint only in cavities or on bumps, with curve-based tweaks. It could all almost work that way right now, well, except nozzles; it just requires a painful setup and an insane stack of layers. So please do it a proper and elegant way, as just an extra channel; ideally 8 bits in RGBA and 32 bit in the depth at the same time. You already support EXR files for nozzles, so just let the software read the depth from them too. With your nice live lighting filter it could all work super naturally. Well, maybe an extra feature of casting shadows from the bumps would be nice too. It could be a huge advantage over Photoshop and even Painter, where the depth is inaccessible and mostly useless; not sure why they haven't done anything right with it for decades. ps. I am not asking for another ZBrush here, just simple things really. It would be super helpful not only for impasto painting but for texture work too, focus stacking, etc.
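The cavity/bump auto-mask asked for above is also cheap math: compare the height channel to a blurred copy of itself, and pixels below the local average are cavities. A rough sketch with a simple box blur (the gain of 4.0 is an arbitrary stand-in for the curve-based tweak the post mentions):

```python
import numpy as np

def local_mean(height, radius=2):
    """Box blur: average of the (2r+1) x (2r+1) neighborhood, edge-padded."""
    pad = np.pad(height, radius, mode="edge")
    acc = np.zeros_like(height, dtype=np.float64)
    n = (2 * radius + 1) ** 2
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            acc += pad[dy : dy + height.shape[0], dx : dx + height.shape[1]]
    return acc / n

def cavity_mask(height, radius=2):
    """~1.0 in cavities (below local mean), ~0.0 on bumps, soft in between."""
    diff = local_mean(height, radius) - height    # positive where the pixel sits low
    return np.clip(diff * 4.0 + 0.5, 0.0, 1.0)    # gain + midpoint; tune per brush
```

Multiplying brush opacity by this mask confines strokes to recesses; inverting it confines them to ridges.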
  11. That workaround only works nicely in 32-bit linear mode. You can get a more or less OK result with a gamma fix in 16-bit integer mode, though. It's still not 100% mathematically accurate, and honestly I don't see why they can't do it. sphere.afphoto
  12. Any frequency separation technique is just a fancy name for blur/kernel math, down to what the software actually does. You already have plenty of ways to recreate it live, starting from the High Pass live filter, which is itself re-creatable with blur, blending and opacity sliders. People have done it since the ancient times of the first pixel processing software. Whatever some tutorials suggest, it's all basically the same, just with different levels of convenience. In one word: try the High Pass live filter, applying it separately to each of R, G and B, plus blend options curves.
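The claim above (frequency separation is just blur arithmetic) can be shown in a few lines: the low-frequency layer is a blur, the high-frequency layer is the difference, and adding them back is lossless. A sketch using a simple box blur as a stand-in for Gaussian:

```python
import numpy as np

def box_blur(img, radius=1):
    """Average of the (2r+1) x (2r+1) neighborhood, edge-padded."""
    pad = np.pad(img, radius, mode="edge")
    acc = np.zeros_like(img, dtype=np.float64)
    n = (2 * radius + 1) ** 2
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            acc += pad[dy : dy + img.shape[0], dx : dx + img.shape[1]]
    return acc / n

def frequency_separate(img, radius=1):
    low = box_blur(img, radius)   # color/tone layer
    high = img - low              # texture/detail layer (the "high pass")
    return low, high
```

Retouch the two layers independently, then `low + high` reconstructs the image exactly, which is all any frequency separation tutorial ultimately sets up.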
  13. Like accessing/sampling the image beneath to modify it, as some other live filters do? Like the displace one. Can I do my own displace with the procedural filter? I mean sampling not the pixels the filter is applied to, but what's beneath that layer?
  14. v2 is a nice step forward indeed, and in the right direction. The problem is that it's not a big enough step. At that pace we would have to wait our whole lifetime before we get a truly modern non-destructive image editor. Non-destructiveness is what I need first of all, and we still can't link pixel content without rasterizing the bitmap first, killing its link to its original source. The procedural filter is still at a baby stage: no sampling of other layers except the one beneath, nothing truly helpful beyond the noise texture. I just hope Affinity won't stop developing the program itself and do only AI filters, as Photoshop seems preoccupied with now. I am much more disappointed by the recent Photoshop release. For image compositing as simple as 2x2 we still have to buy Nuke and deal with its typical node puzzles, even if we don't need any animation at all.
  15. Interface changes are usually just an annoying pain in your a.. They never help a bit; I can't recall a single case for any software at all. On the contrary, they always set you new puzzles. I never understood why software devs insist on this. Software interfaces of 20 years ago were already perfectly fine with me. It's not like a typical market where you buy the same crap again because of a new shiny box and cool buttons.
  16. I recall I did it with a text-along-a-path approach, but honestly it was an absolute torture to work that way. Has anyone found a better workaround?
  17. I tested that vectorQ on iPhone. Haven't noticed anything special; it's perfectly the same as Inkscape. It can't recognize fonts or even simple shapes like a rectangle or an oval. Lots of redundant path nodes, no optimization. Is it better on the real thing, like a Mac? ps. I would love a kind of AI vectorizer, a smart one, but I doubt we will see one any time soon.
  18. Affinity Photo exports those files with the alpha channel just right, in V2 at least. In v1 it was also perfectly possible with a simple trick. The problem is not what Affinity exports. The problem is that when you open the file back in Affinity Photo, it multiplies black over the RGB values of pixels having exactly zero alpha, and only over those ones. And this needs to be fixed for sure. ps. V2 doesn't do it for EXR files, though; floating point images open perfectly right, without those black holes.
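What the bug above amounts to is premultiplication applied on import: multiplying RGB by alpha zeroes out the color of every pixel whose alpha is exactly 0, and no unpremultiply can bring it back. A minimal sketch (names are illustrative):

```python
import numpy as np

def premultiply(rgb, alpha):
    """Associate color with coverage: RGB scaled by alpha."""
    return rgb * alpha[..., None]

def unpremultiply(rgb_p, alpha):
    """Inverse where alpha > 0; zero-alpha pixels stay black for good."""
    safe = np.where(alpha == 0.0, 1.0, alpha)   # avoid divide-by-zero
    return rgb_p / safe[..., None]

rgb = np.array([[[0.8, 0.2, 0.1]]])   # meaningful color under zero coverage
alpha = np.array([[0.0]])
lost = unpremultiply(premultiply(rgb, alpha), alpha)
# `lost` is black: the 0.8/0.2/0.1 values are unrecoverable.
```

That destroyed color matters for game textures and mattes, where RGB in fully transparent areas is deliberate padding, which is why reading the file straight, as V2 does for EXR, is the correct behavior.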
  19. How is it vs Substance Painter or 3D Coat? It looks very basic from the YouTube videos.
  20. I meant not a 3D object but rather a 2D image, computer generated; basically a rendered picture. Why would I need to modify the 3D scene and render again, when it's simple and easy to just distort a logo to match something perfectly using the depth or UV AOV, if they have been included in your render? No guessing, as with a manual mesh warp. Try using that mesh warp to make a logo follow something complicated, a waving flag for example. Still, the mesh warp is a nice addition too, indeed.
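The UV-AOV trick described above is a single lookup per pixel: the render stores a (u, v) surface coordinate at each pixel, and the logo is sampled at that coordinate. A sketch with nearest-neighbor sampling to keep it short (a real filter would interpolate and mask by coverage):

```python
import numpy as np

def uv_remap(texture, uv):
    """Sample `texture` (th, tw, 3) at per-pixel coords `uv` (H, W, 2), u/v in 0..1."""
    th, tw = texture.shape[:2]
    u = np.clip((uv[..., 0] * (tw - 1)).round().astype(int), 0, tw - 1)
    v = np.clip((uv[..., 1] * (th - 1)).round().astype(int), 0, th - 1)
    return texture[v, u]   # the texture follows the rendered surface, no mesh warp
```

Swap the texture and the logo follows the waving flag exactly, frame after frame, with no manual deformation at all.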
  21. Yeah, but currently it can't even do the sky reliably, judging by Photoshop. Along a horizon line where the colors are already somehow earthy, a dusty haze of some sort, it doesn't work that well. People and cats are totally OK, and that's all; anything I ever needed for my job, no chance. The recent new Photoshop version with the better cloud AI option didn't make the slightest difference. I wish they would rather focus on their super outdated layer system, 32-bit support and slow-as-hell smart objects.
  22. Inkscape has it, an open source soft. It does approximately the same as this vectorQ, just not as quickly. www.graphicpowers.com, this one is the best I've tried, $120 a year. I just don't need it very often, so I let my sub lapse.
  23. Mesh warp is nice indeed, but I would prefer they did direction control (both manual and by texture input) in the live displace. Displace is a much more proper way to deform a 2D image. An even better idea, probably, is to use a UV coordinate AOV image to place a bitmap on a necessary shape; another live filter, probably. That's how everybody does it in 2D compositing: when you want to put a logo on a 3D rendered car in post, nobody would ever want to do it with a mesh deform, especially when you need it across 2k frames.
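The directional displace asked for above just means each pixel fetches its source from an offset given by a per-pixel vector, and that vector map can come from a texture. A sketch with nearest sampling, no filtering (names are illustrative):

```python
import numpy as np

def displace(img, direction, strength=1.0):
    """img: (H, W, C); direction: (H, W, 2) per-pixel (dx, dy) offsets in pixels."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]                  # destination pixel coordinates
    sx = np.clip((xx + direction[..., 0] * strength).round().astype(int), 0, w - 1)
    sy = np.clip((yy + direction[..., 1] * strength).round().astype(int), 0, h - 1)
    return img[sy, sx]                           # fetch each pixel from its offset
```

A scalar displace is the special case where `direction` points the same way everywhere; letting a texture drive it per pixel is the whole feature request.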