Ben
Staff · 2,001 posts

Everything posted by Ben

  1. Not sure if it's possible - I'll ask the Windows guys. The main problem is how usable it would be. On the Mac keyboard those keys are nice and close together, and we spent considerable time choosing how the modifiers worked in combination to open up tool features.
  2. @Mithferion On Windows the modifiers might be different (or you have to use the right mouse button). Check the status bar when dragging to see what the options are. On Windows, does it work better starting with the left mouse button, then adding the right mouse button during the drag? If only you had that extra modifier key - life would be so much better.
  3. Then the issue has to do with your document, not Slices or pixel rounding.
  4. OK - looking at your PDF, I'd say you do have a workflow issue. If you were using artboards, with overlapping objects, it'd take care of this.
  5. @davidlower8 Here is our problem - that works for your workflow, but other people (historically) have used slices for pixel-based export. So, the pixel alignment matters - a lot. It also matters that our pixel preview mode shows what they will export. Out of curiosity - why use slices for exporting a brochure? Most people would use pages or artboards and export to PDF for this purpose. Arguably, for print, you should also be using bleed anyway to get around the issue of colour changes at edges.
  6. Two methods - 1) You can hold Ctrl to switch to translate mode and drag from a point. The point you drag is the one that will snap. 2) Or you can drag from inside the object (away from any point). In this method the snapping point is the rotation centre (crosshair). You can click on one of your points to set the crosshair there, to identify that as your snapping preference. Cmd will toggle a clone at any point while translating. Look at the status bar - we use these modifier keys: 1) Cmd - just rotate; 2) Shift - just scale; 3) Ctrl - just translate.
  7. @SrPx I think a lot of the issues come from people who work mainly in Photoshop, which is natively a pixel-space workspace, while thinking they are using real vectors (when they are not). Any scaling gets rounded to whole pixels when the transform is applied. Affinity uses a full-precision vector model: the rounding happens at the end (for example, when finding slice bounds). We don't want to round away precision early on, because that error gets compounded as you do subsequent transformations (see the first sketch after this list of posts).
  8. @ashf Antialiasing is not the problem. The problem you see is due to the slice rectangle being calculated from the bounds of an object which is not precisely on a pixel boundary. The antialiasing is due to rendering that off-pixel object (see the second sketch after this list of posts).
  9. @JGD I just had to read through all your comments to make sure that I wasn't missing any pertinent points of value. Turns out I wasn't... but it sure took me a lot of otherwise valuable time. Your videos illustrated enough of what we already knew anyway. Seriously - no more waffle. I'm getting pretty serious about the 100 word limit. My time is better spent writing code. If people want to wax lyrical and go completely off-topic, don't blame us when we fail to read your actual points related to this software.
  10. Oh, and here's another using the Point Transform Tool.... No ghost, but that would maybe save 0.5 seconds. Screen Recording 2019-07-25 at 11.14.46.mov
  11. I thought I'd do a video anyway.... How many seconds? So roughly 4x as fast as your demonstrated AI workflow? Screen Recording 2019-07-25 at 11.10.48.mov
  12. @JGD ok - you failed to understand "concise", and I think you are underestimating our ability to understand things. Anyway....

      1) This is a completely pointless usage example - exactly what I thought you were trying to describe anyway, and it illustrates entirely the point I was making. Why would you ever need to move an object relative to itself in complete isolation? If there are no other reference points in your document that you can also snap to, and you are not cloning, how will it ever matter where the object used to be? As I already said, I completely understand the mechanics of this, but totally disagree that there is ANY justification with this usage example alone.

      2) Have you not seen our Point Transform Tool? Minus the ghost position, it does exactly the thing you are trying to do - transforms an object with snapping referenced from geometry points. Bingo - turns out we already thought of this one. Incidentally, it also does exactly the kind of snapping you are attempting in example 1 (with cloning, no ghost). I'm not making videos - we have enough tutorials and examples elsewhere. I also know what a ghost is - but we put processing power into showing you the immediate effect of your changes, rather than the 1980s way of AI doing a delayed update (which I'd assume most non-elite users would find less useful). If I implemented any sort of "ghost" it'd only be to show where the object was, and only for the purpose of snapping to its original position - as a visual cue to show what snapped. Again, I'm still unconvinced of the need to compromise the general use of these tools to fit these use cases. It's also been shown that what you are demonstrating can already be achieved with other workflows. You insist that these workflows are so much slower - but how often do you actually do these things? Enough to justify compromising the majority use cases? I am not convinced. So, should I spend time catering for a use case that in reality will only get used 0.01% of the time? I asked you to prove to me that this use case is as "CRITICAL" (your word) as you are claiming.

      3) Again, this can all be done with our Point Transform Tool. I think you could have saved yourself a few hours there with one video of what you were asking for. Turns out we already have the tools.

      I think a lot of what you consider the "correct way that Illustrator does things" is mostly a throwback to the fact that it only updates the document when you release the mouse button. The fact it snaps to the original position is probably less a UI/UX choice than a long-standing legacy side effect of a limitation of their software, because the actual object is still where it was until the end of the drag. Not at all WYSIWYG, and not great in a lot of situations. Incidentally, your video example 1 is absolutely NOT WYSIWYG - it is the complete opposite! You are not "seeing what you will be getting" - you are seeing an outline which later resolves into "what you get". What Affinity does is WYSIWYG - you see live updates that show what the result will be as you edit.
  13. If you can't say it in 100 words, plus possibly one attached video to illustrate the issue....
  14. We have seen some corrupt files from users of GFS. So far we have not identified how that happens. The next version of Affinity will have better diagnosis and reporting of corrupted files. The issue is - if they are corrupted they still cannot be opened, regardless of how we report the issue.
  15. @ashf Not sure exactly what you mean? If you use snapping, objects will snap to each other with no gap (the issue of antialiased lines between edges is another issue). The issue in this thread is to do with the behaviour related to snapping to pixel positions - a completely separate issue to snapping to other objects in your document.
  16. If you have "Move by whole pixels" turned on, it will do exactly that - move things by whole pixels. If your object is off pixel at the start, it will remain off pixel at the end (the new position keeps the same sub-pixel offset). Try turning it off - you might find it magically 'fixes' a lot of your issues. Arguably "Force pixel alignment" is not the best choice of words. Essentially, it is just snap to pixel position - but it will only snap to a pixel boundary if there is no other real snap taking place for that axis.

      As far as scaling goes - if you do a constrained scale, you have the potential to put one of your edges off pixel. This is because we keep strict proportions, and the overall scale change will not always result in a new size that is whole pixels in both axes. If we instead forced both axes onto whole pixels, your shapes' proportions would drift (towards 1:1 in the extreme). Not a problem for large objects, but if you are working on icons at 32x32 pixels, you can quickly lose the aspect ratio of your objects. This is also true if you snap alignment to another object or guide which is off-pixel. (See the third sketch after this list of posts.)

      @SrPx The precision setting only sets how many decimal places are shown in the UI - it does NOT affect the internal precision of the actual values.
  17. Here is the problem. Our current parallel axonometric system allows for non-destructive transformations because the inverse is easy to determine. The transformation is just a simple shear-scale-rotation applied to the original object, and the underlying geometry itself remains unchanged. We also use affine transformations for our document model, allowing us to deduce an inverse.

      Converging perspective requires deformation of the original geometry. Anything nearing the vanishing point will tend towards a size of 0, and it is impossible to determine an inverse transform for anything that has been transformed in this way. We do not currently preserve any notion of "original geometry" as such - only a transform from which we can return to a 2D representation. So if we do converging perspective, it will have to use destructive deformations to stay in keeping with our current document model. (See the fourth sketch after this list of posts.)

      This is not a 3D application - it's a strictly 2D application. Any notion of 3D (or 2.5D) is purely illusion. I've reiterated this point many, many times. In order to provide a full perspective model, a full concept of 3D position is required. We are not going there.
  18. Just to be clear - our bounds are not just rectangles. If you have investigated our advanced grids system, you'll see that bounds are the limits of an object with respect to the axes of the current grid. We only present a rectangle for ease of common transformations. You'll notice it can become a parallelogram when transformed, or when selecting an object with shear. Snapping during translation always conforms to the current grid axes, not the visible selection box.

      As far as your WYSIWYG issues go - I think I'm losing your actual point in the avalanche of too many words. How about some concise, short examples of what you mean? I also don't understand your objection to being able to see the immediate effect of tools. This to me is far better than outlined changes with delayed real updates. Keeping visual clutter down to what is needed is often a fine balancing act, and it depends heavily on what discipline you are in when using the tools. For technical drawing, lots of information might be useful. For free illustration, not so much.

      Believe it or not, some of us on the development team here also have qualifications and experience, from a wide range of disciplines - cumulatively it must run well into centuries. Opinions on usability are affected by user experience and knowledge. We have to make our tools accommodate all levels of expertise - not just those who consider themselves elite in the field. Our choices aren't driven by coding considerations first - but limitations of interface and hardware are always going to be a consideration.

      I'm also afraid that, after explaining everything, a silver-bullet real-world example of what you are asking for is the only thing that is going to motivate us to put work into this. I think I've shown that there is only one specific use case that we are not covering, but I am not convinced of the weight of need for it without real evidence. You claimed it is "critical", so you must require it frequently enough that a real example should be easy to find? Beyond this, all the very verbose philosophising is really leading nowhere.

      This applies to all requests - give an actual example with a use case, and we can make some sense of it. And keep it concise - all the excessive noise, personal claims and soap-boxing in these threads is distracting from any relevant point that might be hidden in there.
  19. I don't think using guides achieves anything different to cloning an object and deleting the original. If anything the clone-delete method is potentially faster than using guides. Like I say - defining special behaviour for guides is not going to happen - it would require too much tool development for no real gain.
  20. How exactly do you associate guides with an object? And if the guides are part of the object, don't they still move with the object anyway (as it is being moved, I mean)? So what are you snapping to - the original position of the guide again, before you started moving it? Sounds like the same issue to me, just made more complicated. No - not sure this is a solution. And if it still only addresses this one very specific use case, it is not a justification for adding a whole lot of complicated functionality.
  21. I will just describe a counter use case to what you are requesting, one that I know is common. I have a document with a layout of fairly regular objects. I want to correct minor errors in the positioning of a couple of objects that are almost in line with the rest. I rely on bounding box snapping for this. I move an object and let snapping pull it into perfect alignment (this might be while also Shift-constraining to one axis of movement). If snapping also took into account the original position, I'd end up fighting between snapping to that position and the position I actually want. I'd then have to look carefully at the snapping overlays to ensure that the position is snapping to my intended target and not the original position. This operation then becomes more difficult due to having a rogue snapping target - in fact the very position I am trying to correct away from.
  22. This isn't a question of WYSIWYG tooling. This is primarily a question of predictable behaviour with clear visual cues. Snapping to invisible objects is a nightmare - they can easily interfere with expected behaviour. That is why I say snapping to the original position of an object without a clear visual cue is not good. This also has nothing to do with technological considerations - you are off the mark there. It is 100% down to the reason I give above.

      I'm also going to argue against your "50% faster operations". What you mean is a 50% faster operation for the one use case you describe - which is actually not a very common operation for most people (given that we haven't had many calls for this behaviour in the 5 years since release). Beyond constraining a translation to the original position of an object (which can already be achieved by holding Shift), the operation you require is to be able to snap to some demarcation of the object's original position that cannot be reached through constrained motion. Specifically, the main operation is snapping to the centre or opposite edge of the original position of an object while also not making a duplicate (else the original object is still available for snapping). These are the only target positions you'd be able to achieve with bounding box snapping.

      This is actually a very specific use case when considered against the existing functionality available, and I'd question (while you say it will be useful to you) whether it justifies compromising the 999 other use cases that people more regularly have. Specifically, your use case (as far as I can ascertain) is the need to snap to the bounds of an object's original position while having no other objects present providing the same alignment, in a way that cannot be achieved through constrained translation alone. I understand the mechanics of what you are asking for, but can you please elaborate on a real-world example for this use case - I mean one where you would use it many times, in order to justify not using the clone-delete method described by others.
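
The following sketches are referenced from posts 7, 8, 16 and 17 above. First, a minimal Python sketch of the rounding argument in post 7. The values and scale factors are hypothetical and this is not Affinity's code - it just shows why rounding coordinates to whole pixels after every transform compounds error, whereas keeping full precision and rounding once at the end does not.

    # Hypothetical sketch only - not Affinity's code. Compare rounding to whole
    # pixels after every transform with keeping full precision and rounding once.

    x = 10.0                                       # a coordinate, in pixels
    scale_steps = [1.37, 0.81, 1.19, 0.66, 1.52]   # arbitrary successive scales

    rounded_each_step = x
    full_precision = x
    for s in scale_steps:
        rounded_each_step = round(rounded_each_step * s)   # pixel-space workflow
        full_precision = full_precision * s                 # full-precision vector workflow

    print(round(full_precision))   # 13 - rounded once, at the end (e.g. when finding slice bounds)
    print(rounded_each_step)       # 14 - the early rounding has compounded into a one-pixel error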
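
The second sketch, for post 8, uses equally hypothetical numbers: an object that does not sit exactly on a pixel boundary forces the slice rectangle out to the surrounding whole pixels, and the partially covered edge pixels are where the antialiasing appears.

    # Hypothetical sketch: a 20px-wide object whose left edge sits at x = 10.3.
    import math

    left, width = 10.3, 20.0
    right = left + width                    # 30.3

    # A pixel export has to cover every pixel the object touches, so the slice
    # rectangle comes from the floor/ceiling of the object's bounds:
    slice_left = math.floor(left)           # 10
    slice_right = math.ceil(right)          # 31
    print(slice_right - slice_left)         # 21 - one pixel wider than the 20px object

    # The partially covered edge pixels are what the renderer antialiases:
    print(1 - (left - slice_left))          # ~0.7 coverage of the left edge pixel
    print(right - math.floor(right))        # ~0.3 coverage of the right edge pixel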
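
The third sketch, for post 16, shows the two behaviours described there with hypothetical numbers: "Move by whole pixels" preserves any existing sub-pixel offset, and a proportionally constrained scale usually cannot land both axes on whole pixels at once.

    # Hypothetical sketch of the two behaviours described in post 16.

    # "Move by whole pixels": the move delta is whole pixels, so an object that
    # starts off-pixel stays off-pixel by the same fractional amount.
    x = 10.25                    # already off-pixel
    x += 15                      # moved by a whole number of pixels
    print(x)                     # 25.25 - the 0.25 offset is preserved

    # Constrained (proportional) scale: keeping strict proportions means both
    # axes usually cannot land on whole pixels at the same time.
    w, h = 32.0, 20.0            # a small icon
    new_w = 27.0                 # width snapped to a whole pixel
    new_h = h * (new_w / w)      # 16.875 - the other axis ends up off-pixel
    print(new_h)
    # Forcing new_h to 17 as well would change the aspect ratio from 1.6 to
    # roughly 1.59 - a visible distortion at icon sizes.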
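
Finally, a sketch of the maths behind post 17. This is not Affinity's document model, just an illustration: an affine transform (shear, scale, rotation, translation) always has an inverse, so the original geometry is recoverable, whereas a toy converging-perspective transform collapses everything on the vanishing line to zero size, so distinct source points map to the same result and no inverse can recover them.

    # Hypothetical sketch - not Affinity's document model.

    def affine(pt, a, b, c, d, tx, ty):
        # x' = a*x + b*y + tx, y' = c*x + d*y + ty (shear, scale, rotate, translate)
        x, y = pt
        return (a * x + b * y + tx, c * x + d * y + ty)

    def affine_inverse(pt, a, b, c, d, tx, ty):
        x, y = pt
        det = a * d - b * c                 # non-zero for any shear/scale/rotation
        u, v = x - tx, y - ty
        return ((d * u - b * v) / det, (a * v - c * u) / det)

    p = (3.0, 4.0)
    q = affine(p, 1.2, 0.5, 0.0, 0.8, 10.0, 20.0)
    print(affine_inverse(q, 1.2, 0.5, 0.0, 0.8, 10.0, 20.0))   # recovers (3.0, 4.0), up to float error

    def toy_perspective(pt, horizon_y=100.0):
        # scale shrinks linearly to zero at the horizon (vanishing) line
        x, y = pt
        s = max(0.0, 1.0 - y / horizon_y)
        return (x * s, y)

    print(toy_perspective((3.0, 100.0)))   # (0.0, 100.0)
    print(toy_perspective((7.0, 100.0)))   # (0.0, 100.0) - same result, so no inverse exists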