
  1. He. But yes, fill opacity. And the workaround works pretty well. I can apply effects individually. Just seems strange it doesn't work using copy + paste FX.
  2. Workaround - Select all objects (First, shift+last selection in layers panel), Quick FX > Fill Opacity.
  3. When pasting FX, fill opacity doesn't get copied. One method I'm using for water is to make hundreds of vector strokes, then copy and paste the 3D FX onto each one (applying it to the layer/group causes all objects to be merged for the 3D effect, which is then applied once, not per stroke). The look of water comes from lowering the fill opacity of the object color while keeping the specular at full opacity. Unfortunately, I have to do this per stroke.
  4. At the moment, the mirror feature only mirrors manual brushes and erasers. If I need to, say, move, rotate, and scale a hand, I cannot properly transform both hands at the same time across the mirrored axis. The process to fix both sides is tedious: delete the other half, copy the layer, flip it, realign it exactly, then merge both layers together. If the transformation doesn't look good, the whole process has to be repeated. Any rotation, scaling, or move suffers the same problem.
     Option #1 - Mirror all transformations to the other side. While more difficult to implement, this would let us see the changes being made in realtime and seamlessly accomplish our goals.
     Option #2 - Force the mirrored area to not accept any art or design work on the layer, and update the mirror in realtime. This would allow realtime transformations without the complexity of mirroring the transformations for each tool. But if an object needed to stay unmirrored (e.g. lettering), enabling this mode would eliminate that possibility.
     Option #3 - As a simpler alternative, being able to quickly hit a button and get a mirror copy on the other side would help the workflow tremendously. No realtime updates here, but the process would be quick enough. As before, anything unmirrored would be forcibly mirrored, so there would be drawbacks.
     It would be nice to see the mirror feature get some love. It's so useful, but some very real pain points make it difficult to use in serious practice.
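Mirroring a transformation to the other side (Option #1 above) boils down to conjugating the edit's transform matrix by a reflection across the mirror axis. A minimal numpy sketch of that math, purely illustrative and not any Affinity API (function names are my own):

```python
import numpy as np

def reflect_about_vertical(x_axis: float) -> np.ndarray:
    """3x3 homogeneous matrix reflecting across the vertical line x = x_axis."""
    return np.array([
        [-1.0, 0.0, 2.0 * x_axis],
        [ 0.0, 1.0, 0.0],
        [ 0.0, 0.0, 1.0],
    ])

def mirrored_transform(T: np.ndarray, x_axis: float) -> np.ndarray:
    """Given a transform T applied to one half, return the transform that
    performs the same edit on the mirrored half: M @ T @ M (M is its own
    inverse). A +15° rotation about a point becomes a -15° rotation about
    the mirrored point, which is exactly the realtime behavior requested."""
    M = reflect_about_vertical(x_axis)
    return M @ T @ M
```

With this, the editor would only need to multiply each interactive transform once per frame to drive both halves, which is why Option #1 is feasible even if it touches every tool.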
  5. I second this. It's so weird not being able to alt-drag and get a copy of some selected raster lines when manipulating linework. I do this for sketches and concepts all the time. The problem with Copy/Paste is that it puts the new copy on a separate layer. And if you have effects running, you have to copy the effect style, group the two layers, rasterize, then paste the effect style back in order to merge the duplicated objects together while retaining the same number of layers and the other effects. Alt+drag would be much easier.
  6. Any camera will introduce both lens distortion and perspective issues, and if the paper isn't perfectly flat, you won't get the same results as a high-res scan from a proper scanner. If your designs work, then they work, but this process is NOT ideal for everyone. Drawings that involve perspective, faces, or shapes sensitive to distortion benefit far more from a proper scan. I'm selling prints of graphite and ink/pen drawings with textural details: cell phones don't capture that level of detail as cleanly or accurately as I need. And working at 13x19 at 600dpi with many layers and a mix of vectors and bitmaps pretty much rules out working entirely on mobile. In a professional environment, everything that stands in the way of getting work done should be eliminated.
  7. I tried researching what export formats it supports, and nowhere on Adobe's page does it mention export to SVG. All it mentions is saving to libraries. Needless to say, the program is intended for cell phone cameras and passing the results to a separate program, not for performing a quality scan. Cameras introduce distortion and perspective errors; you need scans. And if the drawing originates in a digital raster program, it makes no sense to route it out to a cell phone to get the job done. Adobe Capture's vectorizing is VERY quick and dirty, not professional in any sense of the word. There are other features in that app more worthwhile, like identifying fonts on printed media. But if you want to vectorize a drawing in Adobe's suite, just scan it and use Illustrator. Personally, I just use Inkscape because it's free. But a single program that can do it all would be Godlike.
  8. 1 - Adobe Capture is a mobile app for Android and iOS. It's not available on PC. 2 - App doesn't support exporting 😕 Assets are saved directly to Creative Cloud libraries for use in other programs. Your solution to this one missing feature doesn't even work in a professional environment, and it's far from an all-in-one package. It costs $600 per year to properly use and you took what is currently a two-program problem and made it three. Meanwhile, there are totally free programs that vectorize in a pro environment with export to any software, not just Adobe. This workflow is already DOA across many programs. I will continue to use free external software, but having it integrated in Designer would open up a lot of doors to making workflows even better.
  9. To sum up what users want:
     - Transparency about currently supported/unsupported features, so current and potential users can make educated decisions about workflows and programs. Buying a program expecting better workflows and features, only to stumble over a feature that doesn't exist, is a huge cause of grievance.
     - A roadmap, so users know which features are being worked on instead of being left to wonder. If we could see something like Android/Chromebook development or performance improvements in the works, it might make sense that this feature was scheduled behind them. Without a roadmap, it just feels like nothing is happening, and every now and then a release drops without the features we want.
     - SOME kind of community interaction. A feature requested STRONGLY enough by the community should receive some kind of formal response. 17 pages of replies, over 7 years, for a feature found in most vector-editing programs and required for many workflows, but missing from Affinity, deserves a formal response a little more substantial than "we won't do it unless it's done well," followed by silence.
     - Plugins, so new features or filters can be added by third parties and the community. Not just brushes and textures, but actual code that lets Affinity do things it otherwise can't. That way the program can expand far beyond the core developers' capacity.
  10. Of course, if they publicly confirm they're not going to work on it for the foreseeable future, they'll lose a lot of potential buyers. People who rely on it as part of their workflow, designers who constantly get logos as bitmaps and need to vectorize them, etc. will be forced back into Adobe. Anyone working in a professional capacity is stuck with Adobe because they monopolized the pro market. But people deserve good tools, not just the 10% of designers who can afford the Adobe tax. If this hasn't been prioritized in 7 years, will it ever?
  11. So, features that flat out cause crashes wouldn't be released even in beta. Features would sit in an alpha/beta/experimental phase while they didn't yet support everything they needed to. Say we got black-and-white autotrace first. Then we got limited color palettes. Finally, the feature was radically improved to support multithreading and large color palettes with excellent results, and it moved to production-ready status with the beta label removed. Continued performance and quality improvements can still land as minor updates after the feature is considered production-ready. This kind of development path means we get to see and test new features and workflows before they reach 100%, which can improve workflows for some people while we wait for completion. As of now, I still have to resort to another program for this stuff whenever I need it.
  12. Not a graphic design program, but Unreal Engine releases features in alpha, beta, and experimental categories with the promise of getting them production-ready later. Raytracing shipped without support for foliage or instanced meshes and gained that support later on. They published a spreadsheet of what was supported in the latest version, and you could watch more and more features move into production-ready status. If a feature is in beta, you know it'll be production-ready soon; they're pretty consistent about that. If Affinity adopted the same model, highlighting beta features in orange or similar with a Beta tag and an option to include or exclude them, it would give us a chance to test new features before they're fully complete. For instance, people who need black-and-white tracings of solid-color ink splotches shouldn't have to wait years until the tool can produce a 256-color portrait of a photograph. A beta version might have limited support and not be as optimized as a production-ready release. Releasing features in beta would also push the devs to get them production-ready before moving on to other tasks. It just feels frustrating having to wait so long with zero support, and there are no better options at a reasonable price point.
  13. In the meantime, we have nothing. They can release features in an alpha or beta state and iterate on it with future releases, but at the moment we have to do this: This is entirely the problem. We need to use other programs to get this one feature that has become standard everywhere, including totally free programs. I hate Inkscape with a passion, but I have to use it for bitmap tracing. There are workflows where converting with color palettes could actually become useful: converting a base color layer from raster to vector and saving tons of space, for instance. But at the moment, this workflow is interrupted by needing to leverage another program. I'm just wondering what the dev team is doing if their time is not spent on features that have become industry standard. 7 years without a trace is kind of a lot.
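The "base color layer with a limited palette" idea above amounts to nearest-color quantization, which tracers typically run before extracting contours. A minimal numpy sketch, purely illustrative (the function name is hypothetical, not an Inkscape or Affinity API):

```python
import numpy as np

def quantize_to_palette(image: np.ndarray, palette: np.ndarray) -> np.ndarray:
    """Snap every pixel of an (H, W, 3) RGB image to its nearest palette color.

    'palette' is a (K, 3) array of RGB colors. Distance is plain Euclidean
    in RGB space, which is crude but enough to produce the flat color
    regions a bitmap tracer needs before it can fit vector outlines.
    """
    pixels = image.reshape(-1, 1, 3).astype(np.float64)       # (N, 1, 3)
    dists = np.linalg.norm(pixels - palette[None, :, :].astype(np.float64),
                           axis=2)                            # (N, K)
    nearest = np.argmin(dists, axis=1)                        # index per pixel
    return palette[nearest].reshape(image.shape)
```

With the image reduced to a handful of flat regions like this, each region can be traced as a single closed path, which is where the file-size savings over the raster original come from.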
  14. Really cool! But softer brushes mess up the effect... We'd need grayscale-to-normals conversion, or something more solid like nDo... which seems to have been discontinued.
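For reference, grayscale-to-normals conversion of the kind nDo-style tools do is essentially a gradient computation over the height map. A minimal sketch in Python with numpy, assuming the usual x→R, y→G, z→B encoding (the function name and `strength` parameter are my own, not from any real tool):

```python
import numpy as np

def height_to_normals(height: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Convert a 2-D grayscale height map (values in 0..1) to an RGB normal map.

    Gradients are taken with central differences; 'strength' scales how
    steep the resulting normals look. Returns an (H, W, 3) array in 0..1,
    ready to be saved as an image (x -> R, y -> G, z -> B).
    """
    # Central-difference gradients of the height field (rows = y, cols = x)
    gy, gx = np.gradient(height.astype(np.float64))
    # Surface normal of z = h(x, y) is proportional to (-dh/dx, -dh/dy, 1)
    nx, ny = -gx * strength, -gy * strength
    nz = np.ones_like(height, dtype=np.float64)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    normals = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    # Remap from [-1, 1] into [0, 1] for image storage
    return normals * 0.5 + 0.5
```

A flat height map comes out as the familiar uniform (0.5, 0.5, 1.0) lavender of a neutral normal map; soft brushes "mess up the effect" precisely because their gentle gradients produce weak normals unless `strength` compensates.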
  15. Photoshop is the WORST program to paint a 3D model. It paints right through to pieces on the other side. Not to mention the whole 3D thing is a horrible UI experience. They tried, they failed. But the normal generator is amazing. If Affinity Designer had a way to paint normal maps like nDo, that would be really cool!