Posts posted by drstreit
-
1 hour ago, pixelstuff said:
Which other tool SDKs have an AI implementation already running on it?
Photoshop https://github.com/AbdullahAlfaraj/Auto-Photoshop-StableDiffusion-Plugin
Krita https://github.com/sddebz/stable-diffusion-krita-plugin
Gimp https://github.com/blueturtleai/gimp-stable-diffusion
… masses of websites doing the same.
That's for open source; many commercial AIs are also available to integrate, IF you have an interface to do so.
Not releasing a plugin SDK is a strange decision. I would argue that Photoshop became a standard because of the power to extend it; today it is more an ecosystem than a single application.
-
12 minutes ago, Juan Garcia said:
SDKs are limited by the capacity and the quality of the software underneath. AI via an SDK would be a lot of work to make happen, and thus the plugin costly, which maybe wouldn't justify the effort to create it if potential customers realise they are better off paying for an Adobe subscription.
I don't mean creating an AI via an SDK; I just need one to integrate the existing ones…
-
1 minute ago, Juan Garcia said:
That's the point of my comment. Does the effort necessary to create this in Affinity justify the number of customers who would then not prefer to pay for a Photoshop subscription? That is why this needs to come from Affinity: it needs to provide some AI itself. On macOS that is easy; on Windows, not so much.
I agree, they have to embrace it. I'm just saying that for a first impression, an SDK would go a long way, seeing that all AI generators are accessible via API…
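To illustrate why an SDK alone would go a long way: a plugin only needs one stable interface to the host application, and the AI backend behind it stays interchangeable. A minimal sketch in Python; the names `ImageGenerator`, `DummyBackend` and `run_plugin` are hypothetical, not any real Serif API:

```python
from abc import ABC, abstractmethod

class ImageGenerator(ABC):
    """Hypothetical plugin interface: any AI backend behind one method."""
    @abstractmethod
    def generate(self, prompt: str, width: int, height: int) -> bytes: ...

class DummyBackend(ImageGenerator):
    """Stand-in backend returning solid-black RGB pixels (no real AI)."""
    def generate(self, prompt: str, width: int = 512, height: int = 512) -> bytes:
        # A real plugin would POST the prompt to Stable Diffusion, DALL-E, etc.
        return bytes([0, 0, 0]) * (width * height)

def run_plugin(backend: ImageGenerator, prompt: str) -> bytes:
    # The host application calls through the interface, unaware of the backend.
    return backend.generate(prompt, 64, 64)
```

Swapping `DummyBackend` for a class that calls Midjourney, Firefly or a local model would not change the host side at all; that is the whole point of publishing the interface.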
-
4 minutes ago, Juan Garcia said:
SDKs are limited by the capacity and the quality of the software underneath. AI via an SDK would be a lot of work to make happen, and thus the plugin costly, which maybe wouldn't justify the effort to create it if potential customers realise they are better off paying for an Adobe subscription.
No, only the most primitive ones.
-
2 hours ago, Fossil said:
Affinity needs to act here. Exactly how, they need to decide, but they need to do something.
Whether it's partner with Midjourney, or support Stable Diffusion on a local machine, or an API or SDK, or some service of their own, they need to act and give users a good experience and a functional AI workflow that is better than just 'using Affinity' to fix your AI images.
Photoshop is already there. Firefly is already a thing. The plugin is already working. Affinity is playing catch-up.
If they don't catch up fast, they'll be left with a tiny niche market.
In the end, that's why I started this thread: the ABSOLUTE minimum would be to open their SDK for us; someone would implement first results within a week, like it happened with all the other tools that offer an SDK.
Not doing anything, not even answering here or at least showing a roadmap, is quite strange behaviour...
-
On 5/26/2023 at 12:39 PM, deepz said:
Fact is that AI consumes a lot of energy and has challenging hardware requirements.
A server can easily handle 1000 simultaneous web requests, but it can't handle 30 simultaneous AI jobs. So we're going to see a lot of delays, credit systems, subscription services with monthly billing, ...
Goodbye one-time purchases and fixed prices.
---
Alternatively, it would be cool if Affinity just had some kind of "AI server software" to self-host on a server as an "agent".
For companies, it is becoming more economical to buy a couple of servers with a couple of high-end GPUs. (Maybe it sounds far-fetched. However, other companies are doing this already, e.g. JetBrains is exploring similar solutions with their IDE software: you can install an agent on a strong server and then connect to it from your laptop. It's all integrated in their software. The software looks as if it runs locally, but it's actually executing all commands on a remote server. E.g. my laptop with just 8GB RAM connects to a server with 128GB RAM and 12 CPU cores, and that server cost me just 2500€ and will easily last 10 years. The concept reminds me of the "mainframe systems" of the 90s; then again, that concept never really stopped making sense.)
That's not entirely true: to train a model, you need absurd hardware.
To use (run inference on) a model, a desktop GPU is enough: with an RTX 3080 (so already quite dated), it takes me ca. 15 seconds to generate a 1k picture locally.
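For reference, this is roughly what local generation looks like today: a short sketch assuming the AUTOMATIC1111 Stable Diffusion web UI is running locally with its API enabled (the endpoint and default port are that project's; `build_payload` and `txt2img` are my own helper names):

```python
import base64
import json
from urllib import request

# Default address of the AUTOMATIC1111 web UI's txt2img endpoint (assumed running).
API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

def build_payload(prompt: str, steps: int = 25, width: int = 1024, height: int = 1024) -> dict:
    """Assemble the JSON body the local web UI API expects."""
    return {"prompt": prompt, "steps": steps, "width": width, "height": height}

def txt2img(prompt: str) -> bytes:
    """POST the prompt and decode the first base64-encoded image in the response."""
    body = json.dumps(build_payload(prompt)).encode()
    req = request.Request(API_URL, data=body, headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return base64.b64decode(json.loads(resp.read())["images"][0])
```

On consumer hardware the call above returns in seconds, which is exactly why an SDK-level hook in Affinity would be enough for enthusiasts to wire this up themselves.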
-
Fun fact: the current generation of generative AI models is what the older among us remember as the first mobile phones: 3 kg with battery, sound worse than a walkie-talkie, and as expensive as a car.
You all argue as if this AI generation is the end result, but it's not even a full start, because these models are ONLY ONE YEAR OLD.
Computers WILL beat experts and artists in terms of creativity, invention and, always, price.
-
"It's not Art"? The intention of the human using it is the Art. Computers are dumb, but powerful.
Have a look at this video: https://s23.q4cdn.com/979560357/files/doc_news/videos/Adobe-Photoshop-x-Firefly_Sizzle.mp4
Art? No. Powerful, time-saving, soul-crushing for everybody using a tool that does not offer it soon? YES!
-
12 minutes ago, nickbatz said:
And there are others who realize what a bunch of toilet water this whole stupid AI "art" thing is.
Those techbros need to find partners and spend their time engaged in productive activity.
Sure. But who would want to live in a world without toilet water today? I expect it will soon be the same with generative AI…
That AI will exceed expert level in most activities within 10 years is already a settled conclusion, sorry.
-
3 hours ago, deepz said:
Midjourney runs on a discord server right now. 🤨
It's screaming for a decent UI.
Something like Affinity Photo should integrate it. It would lure all those Midjourney hobbyists towards Affinity and would make Affinity really big.
Having said that, we're obviously missing AIs that are able to correct themselves and make changes to previously generated images in an orchestrated way, e.g. to just regenerate a small portion of an image. But once those are in place, I want it embedded in Affinity. I won't be doing that on some Discord server.

I am with you there; Midjourney serves a certain niche. Much more important is Firefly (https://www.adobe.com/sensei/generative-ai/firefly.html): Adobe will soon serve content within Google Bard (like ChatGPT, only from Google). It might be that in a few years Adobe earns more with AI than with graphical tools.
There ARE managers who are not sleeping through industry revolutions, it seems…
-
1 hour ago, nickbatz said:
See, my reaction is that this crap is yet another reason to use Affinity products instead.
Check at least the one-pager where they showcase some of the things Firefly will deliver at launch. And then tell me that this will not make you more productive.
Or don't, and be out of a job because others ARE more productive.
My concern is that they will lose the small market slice they have if they keep sleeping 😴. That would be a shame.
-
Adobe just announced that they will integrate AI-generated content into all of their products as a standard tool: https://www.adobe.com/sensei/generative-ai/firefly.html
So I repeat my plea here: Serif, move it and be fast about it.
This "we have a timeline that is inflexible" is the Kodak way; please react to an industry trend that will be THE defining topic in content creation for the years ahead.
-
I honestly don't expect IP to be much of an issue soon: you can train a model to generate concepts with another model checking the result (a GAN). That way you create a model that produces photorealistic images of everything without being fed any real-world images.
More effort, but totally doable.
Better to concentrate on the fact that everybody WILL have access to such tools, and they WILL create and modify images without any technical skills needed.
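To make the adversarial idea concrete, here is a toy GAN loop on 1D data with hand-derived gradients: a linear generator tries to fool a logistic discriminator. One caveat to the argument above: the discriminator still needs some reference data to judge against; here that is a synthetic Gaussian standing in for "real" images, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sig(t):
    return 1.0 / (1.0 + np.exp(-t))

def real(n):
    # Stand-in "real" data the generator tries to imitate: N(4, 1).
    return rng.normal(4.0, 1.0, n)

# Generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c).
a, b, w, c = 1.0, 0.0, 0.1, 0.0
lr, n = 0.05, 64

for _ in range(2000):
    z = rng.normal(0.0, 1.0, n)
    x_real, x_fake = real(n), a * z + b
    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sig(w * x_real + c), sig(w * x_fake + c)
    w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * np.mean((1 - d_real) - d_fake)
    # Generator: gradient ascent on the non-saturating loss log D(fake).
    d_fake = sig(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.normal(0.0, 1.0, 200) + b  # generated stand-ins
```

Real image GANs replace the two linear maps with deep networks, but the training dynamic is exactly this push and pull.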
And hopefully these tools will also come from Serif.
-
Must-read article: https://www.marktechpost.com/2022/11/29/artificial-intelligence-ai-researchers-at-uc-berkeley-propose-a-method-to-edit-images-from-human-instructions/
And again: the current models have been in development for less than 2 years, so expect much more to come.
And to Serif: would it not be great to integrate something like that? It's all open source, so expect that everything that has an SDK will get it implemented by enthusiasts…
-
To pull the topic back on track:
- I think we all agree that AI by definition cannot produce art, as art always needs an intention
- On the other hand, 99% of graphic design is not art but producing nice stuff; here AI will make a huge difference
- Examples mentioned are real AI selections/replacements, being able to train your personal style for image variations, etc.
All of the above needs integration into Serif's product line. And that was why I started this topic: I see open APIs getting other products the integration they need, but I hear SDKs/APIs are not even on Serif's roadmap. That concerns me greatly, as I personally think the current AI models will become so powerful that they will replace a lot of traditional workflows.
-
On 11/9/2022 at 4:48 PM, pixelstuff said:
I think you made a mistake labeling this topic. A Plugin SDK is perhaps urgently required, but an AI Picture Generator is more of a curiosity at the moment.
I humbly disagree: you can train the models on any style or face desired (DreamBooth), you can create everything imaginable within seconds, and the quality is doubling every 4 weeks.
Compare it with programming in Notepad versus Visual Studio: in both you can get results, but there is a reason nobody is using Notepad anymore…
Maybe this is a read: https://arstechnica.com/information-technology/2022/11/midjourney-turns-heads-with-quality-leap-in-new-ai-image-generator-version/
"It's almost too easy", and "Considering the progress Midjourney has made over eight months of work, we wonder what next year's progress in image synthesis will bring."
Do not make the mistake of judging by the current stage; that is like the jokers who told everybody that electronic cameras would never make it because the first generation was worse than mechanical SLRs. The journey for cameras took 20 years; here we are talking software: it will not even take 20 more months.
-
Here is another graphics editor that integrates AI: https://www.theverge.com/2022/11/8/23447102/photoroom-app-magic-studio-ai-image-tool-generation
Time is wasting…
-
45 minutes ago, Bryan Rieger said:
It would make more sense for Affinity to focus on what makes them unique, and enable this functionality for users that want it via extensions.
Agreed, that is the point of this thread: give us an SDK to write plugins and extensions. Providing all that by Serif themselves will be hard, given that the knowledge needed is extremely scarce on the market.
-
Last entry from me on this topic; it really was a pleasure to discuss this with you all!
Microsoft just announced that they will integrate the leading model (OpenAI's DALL-E) into their Designer app, and will invest a lot to make the integration as smooth as possible!
It has started.
-
10 minutes ago, ffrm said:
The workflow will shake the foundations of all digital photo editors. Even Photoshop will have to do some big moves to accommodate a proper AI integration imo.
Yes, the models are there, but who will integrate them seamlessly, even invisibly, into the workflows? If I select a brush and start drawing hair, shouldn't the AI recognize that and automatically take guidance from my paint strokes, yet create the hair according to the model? No text prompts, no extra interaction needed at all.
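That kind of stroke-guided generation usually reduces to building an inpainting mask from the brush strokes and handing image, mask and prompt to the model. A small sketch of the first step; `strokes_to_mask` is a hypothetical helper, not any existing API:

```python
import numpy as np

def strokes_to_mask(strokes, height, width, radius=4):
    """Rasterize brush-stroke points into a binary inpainting mask.

    strokes: iterable of (row, col) points the user painted over.
    The masked region (value 1) is where an inpainting model would
    synthesize detail, e.g. hair, guided by the strokes and a prompt.
    """
    mask = np.zeros((height, width), dtype=np.uint8)
    yy, xx = np.mgrid[0:height, 0:width]
    for r, c in strokes:
        # Stamp a filled circle of the brush radius around each point.
        mask[(yy - r) ** 2 + (xx - c) ** 2 <= radius ** 2] = 1
    return mask
```

The host application already has the stroke data; the point is that deriving the model input from normal painting gestures is trivial, so the "no texting" experience is mostly UI plumbing.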
-
Agreed, usability needs to be much more than text prompts, e.g. saving prompts as reusable parts to build your own library. Also, the possibility to train the model with your own styles would be needed.
Outpainting and crude image-to-image are also definitely needed; the latter you can try out at https://huggingface.co/spaces/huggingface/diffuse-the-rest
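A reusable prompt library like that could be as simple as a named store of prompt fragments that compose into full prompts. A minimal sketch; the `PromptLibrary` class and its JSON file format are assumptions of mine, not an existing tool:

```python
import json
from pathlib import Path

class PromptLibrary:
    """Hypothetical store for reusable prompt fragments ("parts")."""

    def __init__(self, path="prompts.json"):
        self.path = Path(path)
        # Load previously saved parts if the library file already exists.
        self.parts = json.loads(self.path.read_text()) if self.path.exists() else {}

    def save_part(self, name: str, text: str) -> None:
        """Store a named fragment and persist the library to disk."""
        self.parts[name] = text
        self.path.write_text(json.dumps(self.parts, indent=2))

    def compose(self, *names: str) -> str:
        """Join saved fragments into one prompt string, in the given order."""
        return ", ".join(self.parts[n] for n in names)
```

Usage would look like `lib.save_part("style", "oil painting")` once, then `lib.compose("subject", "style")` for every new image, which is exactly the "reusable parts" workflow described above.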
-
No problem:
   .-""""""-.
  .'        '.
 /   O    O   \
:              :
|              |
: ',        ,' :
 \  '-......-' /
  '.        .'
    '-......-'
(Joan Stark; ASCII art archive)
-
Big 👍to DelN!
I got nostalgic when you mentioned CorelDraw: at that time we took their example eye graphic as a benchmark for new GPUs, timing how long it took to render all the elements there 🙂
And I think what you did in your career was an early example of what we see today: more and more people learn things like Python, even in school, and are therefore not afraid to, for example, set up a local Stable Diffusion instance (if you are really new to that, I found https://github.com/cmdr2/stable-diffusion-ui to be extremely simple, as it mostly hides the necessary Python prerequisites from you).
So for people with that skill set, AI-generated images can already be used in a production pipeline, as the APIs are all available and documented.
This post, however, is really about the many people who do not want to dive into these technologies and only want to concentrate on the artistic process and the well-known tools: here an integration is mandatory and necessary, and would actually be quite simple for a first minimum viable product, IF we get an SDK...
-
Let me try to be very specific; I got scolded for being too general here, and rightly so. Hi @Serif, here is my user story for your backlog!
"As a graphic designer, I want to train a Serif-integrated AI model on my own graphical style with a maximum of 10 training pictures. With that locally modified model, I want to go to a customer and create/modify any number of pictures in this newly trained style on the fly, according to specifications. Examples:
- Recreate the upper left corner of a given picture but seamlessly put a stylish car into it
- Seamlessly draw out (extend the image content of) that image by another 30%
- Generate brainstormed images on the topic "future cars" in the newly trained style
- ..."

AI picture generators urgently required
in Feedback for the Affinity V2 Suite of Products
Posted
I see the challenge in the fact that an AI model is not as straightforward as implementing a technical interface: there are HUGE quality differences, and closing them will take time and resources.
Adobe recognized that and went all in. Serif does not even show a roadmap…
Not sure they can close that gap.