kirk23

Members
  • Content count

    180
About kirk23

  • Rank
    Advanced Member

Recent Profile Visitors

238 profile views
  1. I suggest taking a look at Substance Painter, a tool for texturing on fixed UVs. It has "anchors": a way for a mask to hold a reference/link to any other layer's alpha, color, or other property. IMO we need something like this in every image editor. Preferably something even more advanced: the ability to reference a mask to a certain combination of layers, outside images, or a calculated formula, updated on request or by some scripted condition, etc. It's just impossible to keep everything within the same mask stack. It would be a nice, elegant way to get a truly non-destructive workflow without a huge mess of nodes or the complicated, unreadable "clipping groups" from Photoshop. Photoshop can actually do this nowadays, but it's amazingly inconvenient.
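The formula-mask idea above can be sketched in a few lines. This is a hypothetical illustration (the `Layer` and `AnchorMask` names are made up, not any real app's API): the mask stores an expression over other layers instead of pixels, so it stays "live".

```python
import numpy as np

# Hypothetical sketch of an "anchor"/formula mask: instead of storing
# pixels, the mask stores an expression over other layers and is
# re-evaluated on demand, so edits to the referenced layers propagate
# automatically.  Layer/AnchorMask are made-up names, not a real API.

class Layer:
    def __init__(self, alpha):
        self.alpha = alpha            # float array in [0, 1]

class AnchorMask:
    def __init__(self, formula, layers):
        self.formula = formula        # callable: layer dict -> array
        self.layers = layers          # shared, live layer dictionary

    def evaluate(self):
        # Recomputed each call, so it always reflects current layers.
        return np.clip(self.formula(self.layers), 0.0, 1.0)

layers = {
    "fg": Layer(np.array([[1.0, 0.5], [0.0, 0.2]])),
    "bg": Layer(np.array([[0.3, 0.8], [0.6, 0.1]])),
}

# Mask = foreground alpha minus background alpha, clamped to [0, 1].
mask = AnchorMask(lambda L: L["fg"].alpha - L["bg"].alpha, layers)
before = mask.evaluate()

layers["bg"].alpha[:] = 0.0           # edit a referenced layer...
after = mask.evaluate()               # ...and the mask follows
```

The point of the design is that nothing is baked: the mask is just a stored formula, which is exactly what makes the workflow non-destructive.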
  3. I have Lightroom as a free Photoshop add-on but never really use it. As a DAM it's ridiculous: it can't handle huge databases and is slow and inconvenient, not even close to something like iMatch, and I hate having to "export" instead of just dragging and dropping things instantly when I need them. As a raw processor it can't handle pictures from the Foveon sensor in the Sigma cameras I love to shoot with, so it's of little help there too.
  4. Yeah, the idea is from After Effects, but I personally don't find AE especially convenient; this is IMO the only good thing about it. Photoshop can do it right now with the clipping-group idea and a stack of layers within such a group, where layers are chain-linked to clones of themselves in other clipping groups. What I am proposing is basically the same thing without the need for smart-object clones and chained transform links. For my taste it would be much more intuitive and easier to use. As for Substance Designer and Painter: while they are somewhat suited for textures and at least capable of doing the job, and they are what I actually have to use, I still can't call them anything close to convenient software. Many things there just amaze me, as if the developers had never seen Photoshop or any other image editor at all. For example, I have been regularly asking them for a proper 2D transform node in Substance Designer, where we could scale things around an arbitrary center, reset rotation, and see scale values relative to the original. It's been a decade already and they have never even responded. Their HSL node is so terribly inconvenient for adjusting colors that even Photoshop 4 would be a super cool thing compared to it. Substance Painter's brush system is not even up to the level of Affinity's, and I bet they never tried looking at Corel Painter's brushes. Its projection mode is hardly usable at all, since the transform gizmo works the same way as the 2D transform in Designer. Somehow they have a huge number of features I could perfectly live without, while at the same time lacking the very basics. I believe what I proposed is not just for 3D compositing or textures; it's a way to make Affinity flexible enough for a huge number of purposes, not solely photo retouching. I think the idea of "live" dynamic masks could be very useful even in simple photo collages: it would let you not care about masks at all, since they would adjust automatically every time you changed your mind and had to re-compose something.
  5. I would prefer it if every object had its own Z-depth info (channel), like an alpha channel, which we could influence directly by painting into that channel or just adjusting with the mouse wheel. That way any specific sorting would be irrelevant: half of an object could be in front of something and the other half behind, with its depth channel as a gradient, for example. No need for masks. I never understood why all those 2D image editors couldn't fake Z depth properly. It could open up an unbelievable number of possibilities for complex object compositing without any care for a specific layer order or masking, lasso selections, edge feathering, perspective emulation, etc. You just grab a thing on screen and wheel it in front or behind, with 100% correct object-to-object intersection based on their depth info. It's basically what ZBrush 2.5D pixols do, only vector and non-destructive. The math for this is so simple that I just can't understand why no image software can do it, vector- or pixel-based. It's usually already there among the layer blending modes; it just needs to be set to work automatically. This could be especially helpful with the new fashion iPhone photography is developing, where depth info is becoming an integral part of every image.
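The depth-based compositing described above is a standard Z-combine. A minimal NumPy sketch, assuming the convention that a smaller depth value means nearer to the viewer:

```python
import numpy as np

# Minimal sketch of per-pixel depth ("Z combine") compositing: each
# layer carries a depth channel, and the nearer pixel wins, so half
# of an object can sit in front and half behind with no masks at all.
# Layer order becomes irrelevant; only the depth values matter.

def z_combine(color_a, depth_a, color_b, depth_b):
    # Assumed convention: smaller depth value = nearer to the viewer.
    near_a = (depth_a <= depth_b)[..., None]   # broadcast over RGB
    return np.where(near_a, color_a, color_b)

h, w = 2, 2
red  = np.broadcast_to([1.0, 0.0, 0.0], (h, w, 3)).copy()
blue = np.broadcast_to([0.0, 0.0, 1.0], (h, w, 3)).copy()

# A depth gradient on the red layer: left column in front of the
# blue layer, right column behind it.
depth_red  = np.array([[0.2, 0.8], [0.2, 0.8]])
depth_blue = np.full((h, w), 0.5)

out = z_combine(red, depth_red, blue, depth_blue)
```

Because the test is per pixel, the intersection between the two layers follows the depth gradient exactly, which is the "half in front, half behind" behaviour the post asks for.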
  6. I used a cool piece of software before, "Imagaro Z". It was dirt cheap years ago; sadly my old version no longer works on current Windows, and the software became subscription-based. https://www.graphicpowers.com/ A bit pricey for vectorization only, but to be honest it did, and I am sure still does, wonderful things, including automatic font recognition and replacement. It's at a level Affinity would spend years and millions to rival. I would prefer they just focus on current things, which still leave a lot to be desired: feathered edges, chain links between objects/layers, alternating fragments in the curve/pattern brush (which was available in their old Serif Draw), layer comps or object tags, and so on. What Illustrator has is a useless toy really, so why even spend time on it?
  7. Filter Forge + Affinity, especially Designer, could be the best thing that ever happened in image editors: the blend between node-based and layer approaches we have always dreamt about. But it should work so that we could feed input from different Affinity layers/objects into Filter Forge at the same time, with FF somehow having access to the full stack of layers/objects, and the calculation result would stay "live" and tweakable, not just get rasterized into a static layer. Maybe not truly "live", but at least with a button to update the result after you scale/move something. It actually worked that "live" way in the vector editor Xara years ago, as did other Photoshop plugins, which Xara re-rasterized each time you edited them or re-scaled the object. Pretty slow, but it worked. Then it suddenly stopped working after an update; whether it was an FF or a Xara one I am not sure now.
  8. 3D

    Yes, exactly. It's necessary to, say, subtract Group 2's depth from the Max (lighter) combination of Group 1 + Group 2, a standard way to get a mask for one object properly intersecting another by their actual depth. So we have to put a layer from Group 1 inside Group 2 and still move it in sync with Group 1 and not its new host, Group 2. That's possible in Photoshop with chain links but not in Affinity. Affinity does have all the necessary blending modes working in 16 and 32 bit, just not that last step. In fact I would prefer simply a kind of "live" or "formula" mask where I could write the whole equation as a simple expression, something like "layerA - (layerA Max layerB)", with the involved layers blinking in the stack. That way I wouldn't have to move layers from one group to another in the actual layer stack, and Affinity would just know how I want a certain mask to be calculated. It would keep the layer stack simple and elegant. The Photoshop way with chain links works OK but is not very convenient either, since you end up with a huge, undecipherable mess of layers, clipping groups, and chain links. Or maybe an even simpler approach, like the one Corel Painter has; although it's too simple there for such tricks, I still like the general idea of a special depth channel for each layer and a special drop-down list of depth-related blending modes. The math behind ZBrush-style 2.5D compositing / Z combine is simple as 2x2, could easily work in real time, and could just be coded into layers as a layer effect. In fact Corel Painter is very close to ZBrush; they did it around the time ZBrush first appeared, i.e. decades ago, but for some reason never developed it into something useful, while still having the best brush system on the market. It could be way more advanced than ZBrush in the 2.5D canvas area. Weirdly, even to export depth info from a Painter document your only option is some obscure Blender add-on.
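The expression described above, Max(Group 1, Group 2) minus Group 2, can be checked numerically. A small sketch, assuming the "lighter = nearer" depth convention the post uses:

```python
import numpy as np

# The "formula mask" from the post, written out: subtract Group 2's
# depth from the Max (lighter) combine of both depths.  The result is
# non-zero exactly where Group 1 is nearer, i.e. the mask that shows
# Group 1 only where it wins the depth test against Group 2.
# (Assumed convention: lighter/higher depth value = nearer.)

def intersection_mask(depth_1, depth_2):
    return np.maximum(depth_1, depth_2) - depth_2

d1 = np.array([[0.9, 0.3], [0.6, 0.1]])   # Group 1 depth
d2 = np.array([[0.5, 0.5], [0.5, 0.5]])   # Group 2 depth

m = intersection_mask(d1, d2)
# Positive where Group 1 is in front, exactly 0 where it is behind.
```

In practice the grayscale result would be thresholded or used as-is as a soft mask; the key point is that it is pure per-pixel arithmetic on two depth channels, which is why it could easily run live as a layer effect.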
  9. 3D

    IMO Affinity doesn't need true 3D; there is open-source Blender, after all. What it really needs is a depth channel and depth-aware object/layer blending based on an object's black-and-white depth/height channel. It's the very same thing that is called the 2.5D mode/canvas in ZBrush, "impasto" in Painter and ArtRage, "deep pixels" in video compositors, etc. In fact, to work that way it only needs Photoshop-style transform "chain" links between layers/vector objects, independent of parents/groups; everything else is already there. P.S. Plus maybe a couple of "live" adjustment layers: one turning the grayscale depth into a normal map and another turning it into a cavity/curvature mask.
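The two proposed adjustment layers are both cheap per-pixel operations over the height channel: normals from image gradients, cavity from a Laplacian. A rough NumPy sketch (the `strength` parameter is illustrative, not an existing setting in any app):

```python
import numpy as np

# Sketch of the two "live" adjustments mentioned in the post: a
# grayscale depth/height channel turned into a tangent-space normal
# map (via image gradients) and into a simple cavity/curvature mask
# (via a Laplacian).  `strength` is an illustrative parameter.

def height_to_normals(height, strength=1.0):
    dy, dx = np.gradient(height.astype(float))
    nx, ny, nz = -dx * strength, -dy * strength, np.ones_like(height)
    length = np.sqrt(nx * nx + ny * ny + nz * nz)
    # Pack the unit normal into the usual [0, 1] RGB encoding.
    return np.stack([nx, ny, nz], axis=-1) / length[..., None] * 0.5 + 0.5

def curvature_mask(height):
    dy, dx = np.gradient(height.astype(float))
    dyy, _ = np.gradient(dy)
    _, dxx = np.gradient(dx)
    lap = dxx + dyy          # > 0 in cavities, < 0 on ridges
    return np.clip(lap * 0.5 + 0.5, 0.0, 1.0)

height = np.linspace(0.0, 1.0, 16).reshape(4, 4)  # a flat ramp
normals = height_to_normals(height)
cavity = curvature_mask(height)                   # ~0.5 (no curvature)
```

Both are just neighbourhood filters, so making them live adjustment layers would cost no more than any existing convolution-based filter.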
  10. photools.com iMatch is mostly single-user; it does exactly categorizing/keywording and does it on a whole other level than LR, managing huge image databases LR is incapable of and offering a lot of advanced tools for automatic categorization. Lightroom is not even in the same league and is rather toyish in its categorizing abilities. iMatch doesn't do raw developing, but paired with Darktable it works just fine for me. I would really prefer the Affinity team not spread into fields where efficient open-source or affordable alternatives already exist. Their core software, Photo and Designer, is not on Adobe's level yet, and I would like them to advance it way beyond Photoshop and Illustrator.
  11. There are plenty of alternatives that do the very same thing (raw processing), some of them open source, like Darktable or RawTherapee. There are also the quite cheap Luminar 2018 and Polarr, although I didn't try them. For my taste Darktable is enough. I have been paying for the Photoshop+Lightroom subscription for a few years and have hardly opened Lightroom at all. I didn't notice any real advantages, but I am not a photographer and perhaps just didn't look deep enough into Lightroom. I would still prefer Affinity focus on making Photo and Designer more innovative. P.S. As for the "DAM" aspect, Lightroom is way behind specialized DAM software, iMatch for example, which is also quite affordable and IMO the best DAM software available.
  12. 1. edge feathering; 2. alternating fragments in the curve brush, like in old Serif Draw; 3. transform links between certain objects, ignoring other group/parent relations; 4. an imported photo instantly becoming a bitmap fill.
  13. Please add the ability to explicitly specify transform links between certain layers/objects, independent of which group they belong to. Then we could subtract a certain object from one group's mask and at the same time have that same object linked to a totally different group, not the mask stack we subtract it from. Let it be a chain mark, like in Photoshop, for example: all other objects/layers follow their group parent/container, while the few selected, chain-marked ones move in sync with each other only, ignoring any other group/parent relations.
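The chain-link behaviour requested here can be sketched as a tiny data model: linked objects share a set, and a move is applied to the whole set regardless of each object's parent group. All names are illustrative, not any app's real API:

```python
# Hypothetical sketch of Photoshop-style transform "chain links":
# chained objects move in sync with each other regardless of which
# group/parent each one belongs to in the layer tree.

class Node:
    def __init__(self, name, x=0.0, y=0.0, parent=None):
        self.name, self.x, self.y = name, x, y
        self.parent = parent            # group membership (unrelated)
        self.chain = None               # shared set of linked nodes

def link(*nodes):
    chain = set(nodes)
    for n in nodes:
        n.chain = chain

def move(node, dx, dy):
    targets = node.chain if node.chain else {node}
    for n in targets:                   # parents are ignored on purpose
        n.x += dx
        n.y += dy

# The post's scenario: a cutout lives inside Group 2's mask stack but
# must follow its source object, which lives in Group 1.
mask_cutout = Node("cutout", parent="Group2")
source = Node("source", parent="Group1")
link(mask_cutout, source)

move(source, 10.0, 0.0)   # both move, despite different parents
```

The essential design point is that the chain is orthogonal to the parent hierarchy: grouping answers "where does this render?", while the chain answers "what moves together?".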
  14. While layer comps are super helpful in Photoshop, I would offer a kind of alternative. Xara has so-called "names". Basically it's a kind of tag you can assign to objects, and when you copy an object the new copy inherits all the same tags. You can give multiple tags to the same object. Then, from the tags ("names") list, you can select and make visible only the objects with a specific tag. It's the same as color codes in Photoshop but more flexible, since each thing can have several tags. Beyond replicating layer-comps behaviour, the system makes it easy to find and select the necessary things and all related objects, while sparing you the effort of keeping it consistent, since the tags are inherited automatically.
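The inheritable-tag scheme can be modelled in a few lines (illustrative classes only, not Xara's actual API): tags are copied along with the object, and a visibility query filters by tag.

```python
from dataclasses import dataclass, field

# Sketch of Xara-style "names": free-form tags on objects, inherited
# on copy, usable both as layer-comp-like visibility sets and as a
# search index.  The classes here are illustrative only.

@dataclass
class Obj:
    label: str
    tags: set = field(default_factory=set)

    def copy(self, label):
        # A copy inherits every tag of the original automatically.
        return Obj(label, set(self.tags))

def visible_with_tag(objects, tag):
    # Layer-comp-like query: show only objects carrying this tag.
    return [o.label for o in objects if tag in o.tags]

logo = Obj("logo", {"branding", "print"})
logo2 = logo.copy("logo-footer")        # inherits both tags
photo = Obj("photo", {"print"})

objects = [logo, logo2, photo]
shown = visible_with_tag(objects, "branding")
```

Because an object can carry several tags, one object can participate in several "comps" at once, which is exactly where this scheme is more flexible than Photoshop's single color code per layer.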
  15. Medical Officer Bones A regular layer-based image application, where you can actually draw something conveniently with the feel of brush and canvas under your fingertips, instantly pick a layer/object off the screen, etc., still has its own undisputed advantages. Those nodes easily turn into a kind of Gordian knot, and you feel as if you were holding your brushes with prosthetic or robotic arms. I don't need any animation either. I just dream about a normal, handy image editor with somewhat more modern functionality that isn't such slow, ancient junk as Photoshop. I can't tolerate Photoshop but still see no other real options. I also think the term "3D compositing" is somewhat misleading, so people consider it irrelevant to their tasks. In fact it's still regular 2D image editing, just with dynamic "live" masks calculated on the fly. Any image editor should be capable of this; the math behind it is simple as 2x2. Games and video cards do it as "deferred" post effects on 4K screens at the speed of light, populating each rendered frame, i.e. compositing gazillions of pre-rendered 3D objects, within a fraction of a second. So why is it so hard to do in a regular image editor? How many decades do we still have to wait until we get something similar? It was already needed 20 years ago.