kirk23

  1. 3D

    Yes, exactly. It's necessary to subtract Group 2's depth from the Max (lighter) combination of Group 1 + Group 2, a standard way to get a mask for one object intersecting another properly by their actual depth. So we have to put a layer from Group 1 inside Group 2 and still move it in sync with Group 1 and not its new host, Group 2. That's possible in Photoshop with chain links but not in Affinity. Affinity does have all the necessary blending modes working in 16 and 32 bit, but not that last step.

    In fact I would prefer a kind of "live" or "formula" mask where I could write the whole equation as a simple expression, something like "layerA - (layerA Max layerB)", referencing layers already in the stack. That way I wouldn't have to move layers from one group to another in the actual layer stack, and Affinity would just know how I want a certain mask to be calculated. It would keep the layer stack simple and elegant. The Photoshop way with chain links works OK but isn't very convenient either, since you end up with a huge undecipherable mess of layers, clipping groups and chain links.

    Or maybe an even simpler approach like the one Corel Painter has. It's too simple there to do such tricks, but I still like the general idea of having a special depth channel for each layer and a special drop-down list of depth-related blending. The math behind depth-aware, ZBrush-styled 2.5D compositing / Z-combine is trivial, could easily work in real time and could simply be coded into layers as a layer effect. In fact Corel Painter is very close to ZBrush; they did it around the time ZBrush first appeared, i.e. decades ago, but for some reason never developed it into something useful, while still having the best brush system on the market. It could be way more advanced than ZBrush in the area of a 2.5D canvas. Weirdly, even to export depth info from a Painter document your only option is some obscure Blender addon.
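    Below is a minimal numpy sketch of the max/subtract trick described above, just to make the math concrete. The function names, the white-equals-near depth convention and the example shapes are my own assumptions, not Affinity's or Photoshop's API.

```python
import numpy as np

def front_mask(depth_a: np.ndarray, depth_b: np.ndarray) -> np.ndarray:
    """max(A, B) - B: non-zero only where A is nearer than B (white = near)."""
    return np.clip(np.maximum(depth_a, depth_b) - depth_b, 0.0, 1.0)

# Hypothetical example: a flat plane (B) and a rounded bump (A) crossing it.
h, w = 256, 256
yy, xx = np.mgrid[0:h, 0:w]
depth_b = np.full((h, w), 0.5)                        # plane at constant depth
r = np.hypot(xx - w / 2, yy - h / 2) / (w / 2)
depth_a = np.clip(1.0 - r, 0.0, 1.0)                  # bump, nearest at the centre

mask = front_mask(depth_a, depth_b)   # use as the mask for group A over group B
```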
  2. 3D

    IMO Affinity doesn't need true 3D. There is open-source Blender, after all. What it really needs is a depth channel and depth-aware object/layer blending based on an object's depth/height black & white channel. The very same thing that is called a 2.5D mode/canvas in ZBrush, "impasto" in Painter and ArtRage, "deep pixel" in video compositors etc. In fact it only needs Photoshop-styled transform "chain" links between layers/vector objects, independent from parents/groups, to work that way. Everything else is already there. P.S. Plus maybe a couple of "live" adjustment layers, one turning a grayscale depth into a normal map and another turning it into a cavity/curvature mask.
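     For illustration, here is a rough numpy sketch of the two "live" adjustments suggested in the P.S.: grayscale height to a tangent-space normal map, and height to a cavity/curvature mask. The strength parameter, the Laplacian-based cavity estimate and the sign conventions are my own choices, not an existing Affinity feature.

```python
import numpy as np

def height_to_normals(height: np.ndarray, strength: float = 2.0) -> np.ndarray:
    """Grayscale height in [0, 1] -> normal map packed into [0, 1] RGB."""
    gy, gx = np.gradient(height.astype(np.float64))
    nx = -gx * strength
    ny = -gy * strength
    nz = np.ones_like(height, dtype=np.float64)
    length = np.sqrt(nx * nx + ny * ny + nz * nz)
    normals = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    return normals * 0.5 + 0.5

def height_to_cavity(height: np.ndarray, scale: float = 4.0) -> np.ndarray:
    """Laplacian of the height: brighter in pits/cavities, darker on ridges."""
    gy, gx = np.gradient(height.astype(np.float64))
    gyy = np.gradient(gy, axis=0)
    gxx = np.gradient(gx, axis=1)
    return np.clip(0.5 + (gxx + gyy) * scale, 0.0, 1.0)
```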
  3. photools.com iMatch is mostly single-user, does exactly that (categorizing/keywording) and does it on a whole different level than LR, managing huge image databases LR is incapable of and offering a lot of advanced tools to do the categorizing automatically. Lightroom is not even in the same league and is rather toy-like in its categorizing abilities. iMatch doesn't have raw developing, but paired with Darktable it works just fine for me. I would really prefer the Affinity team not to spread over fields where efficient open-source or affordable alternatives already exist. Their core software, Photo and Designer, is not on Adobe's level yet, and I would like them to advance it way beyond Photoshop and Illustrator.
  4. There are plenty of alternatives doing the very same thing (raw processing), some of them open source like Darktable or RawTherapee. There are also the quite cheap Luminar 2018 and Polarr, although I didn't try them. For my taste Darktable is enough. I have been paying for a Photoshop + Lightroom subscription for a few years and have hardly opened Lightroom at all. I didn't notice any real advantages, but I am not a photographer and perhaps just didn't look deep enough into Lightroom. I would still prefer Affinity to focus on making Photo and Designer more innovative. P.S. As for the "DAM" aspect, Lightroom is way behind specialised DAM software; iMatch for example, which is also quite affordable and IMO the best DAM software available.
  5. 1. Edge feather. 2. Alternating fragments in a curve brush, like in old Serif Draw. 3. Transform links between certain objects, ignoring other group/parent relations. 4. An imported photo instantly becoming a bitmap fill.
  6. Please add an ability to explicitly specify transform links between certain layers/objects, independent of what group they are parented to. So we could subtract a certain object from the mask of one group and at the same time have that same object linked to a totally different group, not the mask stack we subtract it from. Let it be a chain mark, like in Photoshop for example. All the other objects/layers would follow their group parent/container, while the few selected, chain-marked ones would move in sync with each other only, ignoring any other group/parent relations.
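     As a conceptual sketch only (my own toy data model, not Affinity's or Photoshop's internals), this is roughly what the chain-link behaviour would mean: a layer normally inherits its transform from its parent group, but a chain-linked layer takes it from the layer it is linked to instead.

```python
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    name: str
    local: tuple[float, float] = (0.0, 0.0)       # translation only, for brevity
    parent: Optional[Node] = None
    transform_link: Optional[Node] = None         # the "chain" link

    def world(self) -> tuple[float, float]:
        # A chain-linked node follows its link target, not its parent group.
        source = self.transform_link or self.parent
        if source is None:
            return self.local
        px, py = source.world()
        return (px + self.local[0], py + self.local[1])

# Hypothetical setup: depth_of_a sits inside group_b's mask stack,
# but moves with group_a because it is chain-linked to it.
group_a = Node("group_a", (100.0, 0.0))
group_b = Node("group_b", (0.0, 50.0))
depth_of_a = Node("depth_of_a", parent=group_b, transform_link=group_a)
print(depth_of_a.world())   # (100.0, 0.0): follows group_a, ignores group_b
```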
  7. While layer comps are super helpful in Photoshop, I would offer a kind of alternative. Xara has so-called "Names". Basically it's a kind of tag you can assign to objects, and when you copy an object the new copy inherits all the same tags. You can assign multiple tags to the same object. Then, from the tag (Names) list, you can select and make visible only the objects with a specific tag. Same as color codes in Photoshop but more flexible, since each thing can have several tags. Beyond replicating layer-comp behaviour, the system makes it easy to find and select the necessary things and all related objects, while you are spared the time of keeping it consistent, since the tags are inherited automatically.
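     A tiny sketch of how such a tag ("Names") mechanism could behave; the class and function names are hypothetical, not Xara's or Affinity's API: copies inherit tags, and visibility is driven by selecting a tag.

```python
import copy
from dataclasses import dataclass, field

@dataclass
class Obj:
    name: str
    tags: set = field(default_factory=set)
    visible: bool = True

    def duplicate(self, new_name: str) -> "Obj":
        dup = copy.deepcopy(self)       # the copy keeps every tag of the original
        dup.name = new_name
        return dup

def show_only(objects, tag: str) -> None:
    """Make visible only the objects carrying the given tag (a 'layer comp' by tag)."""
    for obj in objects:
        obj.visible = tag in obj.tags

button = Obj("button", {"ui", "variant-dark"})
button_copy = button.duplicate("button copy")     # inherits both tags automatically
background = Obj("background", {"variant-light"})
show_only([button, button_copy, background], "variant-dark")
# -> button and its copy stay visible, background is hidden
```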
  8. Medical Officer Bones Still, a regular layer-based image soft where you can actually draw something conveniently, with the feel of brush and canvas under your fingertips, instantly pick a layer/object off the screen etc., has its own undisputed advantages. Those nodes easily turn into a kind of Gordian knot, and you feel as if you hold your brushes in somewhat prosthetic or robotic arms. I don't need any animation either. I just dream about a normal, handy image editor having a bit more modern functionality and not being such slow ancient junk as Photoshop. I can't tolerate Photoshop but still see no other options really. I also think the term "3d compositing" is somewhat misleading, so people consider it irrelevant to their tasks. In fact it's still regular 2d image editing, just with dynamic "live" masks being calculated on the fly. Any image editor should be capable of that. The math behind it is trivial. Games/video cards do it as "deferred" "post effects" on 4k screens at the speed of light, populating each rendered frame, i.e. compositing gazillions of pre-rendered 3d objects, within a fraction of a second. So why is it so hard to do in a regular image editor? How many decades do we still have to wait until we get something similar? It was already necessary 20 years ago.
  9. Yeah, the OP didn't ask for transform links, and what he asked for could be done just with "embedded" layers, a bit less convenient than with Designer symbols; I still don't see that much of a difference. But it's actually not enough for the purpose he described, namely editing and compositing 3d renders: making a collage of rendered, photographed and hand-painted objects. To do so, the soft needs an extra option to link transforms between layers/objects from different groups. For example, all objects of a given group follow the transforms of that same group, while a certain object within the very same group does not (subtracting its pixel values from a certain mask within that group, for example) and should inherit its transform from a totally different group. Such complicated transform links are necessary to calculate dynamic masks on the fly when you compose several 3d-rendered objects whose masks are stacks of depth images, each related to its own object/group. Then we could set a certain 3d object to always be in front of another based on its true scene depth, put a photo somewhere in between, make parts go out of focus also based on true depth, have objects intersect each other and the ground/background plane properly without guessing, and at any moment re-compose the whole collage as if it were a 3d scene and not actually 2d. The very thing Nuke is capable of. It's possible in Photoshop (with a proper chain-link setup) and not in Affinity, where we have no means to link transforms beyond simple groups. And I bet depth will soon become an integral part of any photo, so the first image editor ready for that would definitely win the competition.
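     To make the depth-aware ("z-combine") compositing idea concrete, here is a bare-bones numpy sketch. The layer names, the white-equals-near convention and the nearest-pixel-wins rule are my assumptions; a real implementation would also handle alpha and soft edges.

```python
import numpy as np

def z_combine(layers):
    """layers: list of (rgb HxWx3, depth HxW) pairs; returns the composited rgb."""
    out_rgb = np.zeros_like(layers[0][0])
    out_depth = np.full(layers[0][1].shape, -np.inf)
    for rgb, depth in layers:
        nearer = depth > out_depth            # this element is in front here
        out_rgb[nearer] = rgb[nearer]
        out_depth[nearer] = depth[nearer]
    return out_rgb

# Hypothetical use: a rendered object, a photo plane and a painted element,
# each with its own depth, recomposed without any hand-drawn masks.
# result = z_combine([(render_rgb, render_depth),
#                     (photo_rgb, np.full(photo_rgb.shape[:2], 0.4)),
#                     (paint_rgb, paint_depth)])
```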
  10. Unfortunately, even being a step in the right direction, it doesn't work at all, at least for what the original poster meant. To be somehow usable, that symbol-linked mask should stay in the same position and keep the same scale as its original, i.e. its transform should be linked to the original and not to the symbol container, while for some blending operations another mask in the masking stack might still need to follow the symbol container's transforms. So in fact we need not just symbols but also transform links, like the "chain" links Photoshop has. The whole approach the OP asked about is totally possible in Photoshop if you use group clipping instead of layer masks. Neither Photoline nor Krita actually allows working in that manner, because of the same problem: no switchable chain/transform links between clones. But the whole Photoshop approach is so hard to manage, it turns the layer stack into such a huge mess of never-ending smart objects stacked in groups within a parent group with special properties, clipping other smart objects, all of them chain-linked across dozens of other groups/smart objects, that I often can't figure out anything in my own PSD files. And updating those smart objects takes forever.

      Still, Photoshop is totally ready for the so-called "deep pixel" compositing Nuke does. Affinity, Photoline and Krita aren't. And finally, Affinity Photo does have cloned/instanced layers; they call them "embedded". They could work almost the same way symbols do in Designer. You just always have to keep the original "embedded" file in the main layer stack to stay accessible; then you can put clones of that embedded file within masks of other layers and they keep the link (you sometimes have to scale the document a bit to nudge the update). But not the transform link, unfortunately. So it still doesn't work. In fact Affinity looks so close to being really useful and needs just a few extra steps. And it could be much better than what Photoshop has. I have no idea why they didn't do it, while it sounded promising somehow.
  11. For example, a mask which would be a live blending/calculation result of certain specified layers in the layer stack, based on an expression. Something like (layerA + layerB) * layerC max layerD. It would allow automating a lot of routine operations and would make the layer stack a lot simpler, more elegant and more readable. It would give a lot more of the flexibility so far seen only in node-based image editors. The current layer system, having no Photoshop-styled chain/transform links, doesn't allow this at all, while having all the necessary blending modes. (In Photoshop it's theoretically possible but so inconvenient it's useless.) Such self-adjusting masks, following some specific property (a canvas impasto depth/cavity or texture for example, composed scene depth, scene lighting intensity, certain color/material masks etc.), could provide a huge amount of creative possibilities never seen in Photoshop and could make the Affinity software truly innovative.
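     A small sketch of how such a "formula mask" could be evaluated over named layers. The eval-based toy evaluator and the operator set are my own simplification, not a proposed Affinity feature.

```python
import numpy as np

def formula_mask(expression: str, layers: dict) -> np.ndarray:
    """Evaluate a mask expression over named layers (toy evaluator, not a safe eval)."""
    env = {"max": np.maximum, "min": np.minimum, **layers}
    result = eval(expression, {"__builtins__": {}}, env)
    return np.clip(result, 0.0, 1.0)

# Hypothetical layers, all float arrays in [0, 1]:
a, b, c, d = (np.random.rand(4, 4) for _ in range(4))
mask = formula_mask("max((layerA + layerB) * layerC, layerD)",
                    {"layerA": a, "layerB": b, "layerC": c, "layerD": d})
```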
  12. Would be super cool if we could record an inpainting macro on one layer and then recreate it on another layer with its own content, matching the original layer 100%. For example, one layer is the color image where we record the original inpainting; the second layer is a depth image of the same subject, where the newly inpainted depth would be perfectly in sync with the original color inpainting.
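     A rough sketch of the idea using OpenCV's inpainting as a stand-in for Affinity's tool: the "recorded macro" is reduced to a single mask, replayed on both the color layer and the depth layer so the two stay in sync. The file names are placeholders.

```python
import cv2
import numpy as np

color = cv2.imread("color.png")                        # 8-bit BGR layer
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE)  # 8-bit depth/height layer

# The "recorded macro": one mask marking the pixels to reconstruct on both layers.
mask = np.zeros(color.shape[:2], dtype=np.uint8)
cv2.circle(mask, (120, 80), 30, 255, -1)               # e.g. remove one object

color_fixed = cv2.inpaint(color, mask, 3, cv2.INPAINT_TELEA)
depth_fixed = cv2.inpaint(depth, mask, 3, cv2.INPAINT_TELEA)

cv2.imwrite("color_fixed.png", color_fixed)
cv2.imwrite("depth_fixed.png", depth_fixed)
```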
  13. The brushes themselves need a lot of improvements. The old Serif Draw soft had a nice feature of alternating picture fragments along a curve brush, same as in Microsoft Expression Design / the old Expression soft. It was a very cool feature. And for pixel brushes, a super cool improvement over the Photoshop ones would be the ability to make a brush stroke on several specified layers at the same time, with a specified dab for each layer: one dab for the "color" layer, another dab for the "normal map" layer etc. Or maybe just the same kind of brushes Corel Painter and ArtRage both have, painting an extra depth channel beyond regular RGB + alpha. It would also be nice to have a few brushes doing GPU particle simulations on the fly, something like Rebelle or Expresii, also depth/surface aware. Brushes that are "painting" some real-world phenomena (rust, color drips, scratches in the proper places through automatic depth masking etc.) and not just dropping pixels on an artificial "layer". All this is extremely necessary nowadays to be considered an innovation and to be really competitive IMO. Then a cool brush manager would be nice too, of course.
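     A toy sketch of a brush stroke writing a separate dab into several layers at once (color and height/depth here). The dab shapes and the additive stamping are my own simplification of what such a multi-layer brush could do.

```python
import numpy as np

def gaussian_dab(size: int, sigma: float) -> np.ndarray:
    ax = np.linspace(-1.0, 1.0, size)
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))

def stamp(layer: np.ndarray, dab: np.ndarray, cx: int, cy: int, amount: float) -> None:
    s = dab.shape[0]
    y0, x0 = cy - s // 2, cx - s // 2
    layer[y0:y0 + s, x0:x0 + s] += dab * amount        # no bounds checks, toy code

color = np.zeros((256, 256))        # single-channel "color" layer for brevity
height = np.zeros((256, 256))       # depth/impasto layer painted by the same stroke

stroke = [(60 + i * 4, 128) for i in range(32)]        # sampled stroke path
soft, hard = gaussian_dab(31, 0.5), gaussian_dab(31, 0.25)
for x, y in stroke:
    stamp(color, soft, x, y, 0.2)   # one dab for the color layer...
    stamp(height, hard, x, y, 0.1)  # ...and a different dab for the height layer
```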
  14. IMO it's almost there, I mean the "Replace document" button. There should be a kind of macro that would reload same-named documents into embedded layers. I am very annoyed by Photoshop linked layers: they sounded great but proved to be almost useless, since you have to save them each time you need to edit something, and that takes ages in Photoshop. Another super annoying Photoshop thing is the inability of smart objects to keep the same size once you replace the content with more or fewer pixels. Thankfully Affinity keeps one (X) dimension intact at least, while I would prefer both actually.
  15. There is no such solution on the market; that's what I am talking about. There is a huge gap / market share nobody has filled yet. There are simple photographer tools like Photoshop and such, and very specialised programs like Substance Designer, Nuke, Fusion etc. They differ from Photoshop in the ability to work not only with an image represented by 3 RGB channels + alpha, but also with many extra properties, channels and masks representing surface reflectivity, surface geometry, depth etc. A complete physical material, not just RGB color. Still, they are the very same 2d image-editing programs at their core. They are all very flexible with their node-based approach and can do whatever you want, but at the same time they are inconvenient as hell when you need to prototype something quickly. My main tool is Substance Designer and it's as if it was created by aliens. I hate it but have no other choice. The layer approach is so much more instantly intuitive, simple and easy to work with for simple things.

      Photoshop and such usually have much more convenient color tweaks and brushes, but they are just not enough to do something useful. In fact, just a slightly more complex layer approach with a kind of expression system, dynamic masks and transform links to interconnect several images representing an object's material properties (roughness, normals and depth at least), doing the necessary blending math automatically based on some predefined "formula", After Effects style maybe, would be perfectly OK. I hoped for something like that when Affinity promised sophisticated layer linking, but it looks like it will never happen. I dream about a soft that could bring together the robotic flexibility of nodes and the true feel of a brush in your hand of the layer-based approach. Something in between Photoshop and Nuke. Modern CG painting is not just RGB + alpha. I need a brush that would paint over a rendered 3d scene or a photogrammetry-produced image and be depth and surface aware. It's just a matter of time before we see depth as a part of every photo. But it seems we will see it first on cell phones and "neural" something, with all the Apple innovations. Then maybe all the photographer masses will suddenly realise how necessary true layer linking/interconnection could be.