Agent interfaces that support AI hardware and software


I use Designer a lot in my work (love it), and also increasingly use AI tools in my workflows. If you aren't already putting agent interfaces on your product roadmap, I think the time to do so is now. Here's why:

Hardware

After watching this demo of the new Rabbit r1 device, which can do one-shot learning in graphical user interfaces using a Large Action Model (LAM), I ordered one; I expect to receive it in June. I plan to train rabbits to perform task sequences in Designer for common workflows and geometric constructions. The Rabbit r1 can understand speech, drive mouse/touch interfaces on any device, see and understand images on screens, see objects in the real world, count and quantify things (numerosity), and explain its actions.

My test for driving Designer with the Rabbit will be to see whether I can describe, command by command, what I want to happen and watch it unfold on the screen. Here are some examples of verbal commands I'm hoping to ultimately teach the rabbits to carry out:

  • "Create a column of five arrows pointing to the left, along the right edge of the artboard", "evenly space the arrows vertically", "select all of the arrows", "geometry combine", "gradient fill", "orient gradient vertically", "change gradient, red to blue"
  • "Create a pie, centered in the image, 800 pixel diameter", "start angle zero degrees, end angle one hundred and twenty degrees"

That sort of thing. I think I could very quickly get used to voice commands like these, and be more productive with them than with clicking and typing. This is just an experiment on my part to see how far I can push the r1 in terms of automating commands and workflows.
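To make that concrete, here's a rough sketch of the kind of structured "action plan" I imagine an agent building behind the scenes for the arrow example above. This is purely illustrative Python on my part: Designer has no public automation API that I'm aware of, so every action name and parameter below is made up.

# Hypothetical sketch only: what a trained agent's plan for "create a column of
# five arrows pointing left along the right edge, space them evenly, combine,
# and apply a vertical red-to-blue gradient" might look like as discrete steps.
# None of these action names exist in Designer; they are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Action:
    """One discrete UI step the agent would carry out in the app."""
    name: str                       # invented action identifier
    params: dict = field(default_factory=dict)

def plan_arrow_column(count: int = 5) -> list[Action]:
    """Decompose the spoken arrow command into an ordered list of steps."""
    plan = [Action("create_shape", {"shape": "arrow", "direction": "left"})
            for _ in range(count)]
    plan += [
        Action("align", {"edge": "right", "target": "artboard"}),
        Action("distribute", {"axis": "vertical", "spacing": "even"}),
        Action("select_all", {"kind": "arrow"}),
        Action("geometry_combine"),
        Action("fill", {"type": "gradient", "orientation": "vertical",
                        "stops": ["red", "blue"]}),
    ]
    return plan

if __name__ == "__main__":
    for step in plan_arrow_column():
        print(step.name, step.params)

The point is only that each spoken phrase decomposes into a short, ordered list of discrete operations, which is exactly the kind of sequence a LAM is supposed to learn from being shown the workflow once.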

The reason I wanted to post this is simply to put it on your radar as something to keep an eye on, and perhaps experiment with as well. It's becoming clear to me that intelligent agent interfaces (of which Siri and Alexa are early examples) will increasingly replace the now 30-year-old GUI of buttons and menus in our apps. Interfaces won't switch overnight, and devices like the Rabbit r1 are really just a halfway step, but the nuts and bolts of an agent interface for products like Designer and Photo are pretty much all there from a technical perspective, and I expect it to come to most desktop apps over the next 2 to 3 years.

Software

There has been an explosion of AI vector image generators, like VectorArt or Redcraft, and of course Adobe is going all-in with Firefly. These tools make creating awesome vector-based artwork a snap, turning tasks that used to take days into prompts that take seconds. I love the Affinity products, and I hope you too are working on building AI tools into Affinity Designer 3. Just a heads-up that software as we know it is going to change radically over the next couple of years.
