AI picture generators urgently required



10 minutes ago, ffrm said:

The workflow will shake the foundations of all digital photo editors.  Even Photoshop will have to do some big moves to accommodate a proper AI integration imo.

Yes, the models are there, but who will integrate them seamlessly, even invisibly, into the workflows? If I select a brush and start drawing hair, shouldn't the AI recognize that and automatically take guidance from my paint strokes while creating the hair according to the model, with no text prompts and no extra interaction needed at all?
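To make that idea concrete: a rough painted sketch can already steer an open-source diffusion model through an image-to-image pipeline. Below is a minimal Python sketch using the Hugging Face diffusers library; the checkpoint name, file names, and strength value are illustrative assumptions, not anything Serif or Adobe ship today.

```python
# Minimal sketch: let rough paint strokes guide a diffusion model (img2img).
# Assumes the Hugging Face diffusers library and an illustrative SD 1.5 checkpoint;
# "hair_sketch.png" is a hypothetical file containing the user's brush strokes.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

strokes = Image.open("hair_sketch.png").convert("RGB").resize((512, 512))

# strength controls how far the model may depart from the strokes:
# low values stay close to the painting, high values follow the prompt more.
result = pipe(
    prompt="realistic flowing brown hair, studio lighting",
    image=strokes,
    strength=0.6,
    guidance_scale=7.5,
).images[0]

result.save("hair_generated.png")
```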


  • 2 weeks later...

Given that the primary focus of Affinity Photo is on manipulating photos, rather than painting, I would argue that the best aspect of AI technology to integrate into the product would be in the form of selection tools.

Serif has indicated in the Publisher section of the forums that an (unfortunately ECMAScript) scripting API and a C-based plugin API will be available at some point in the future. When they are, most of the more painting-oriented features being discussed here are probably best handled in the form of plugins.


This appears to be more of a threat to existing stock image libraries than to design tools. Canva, Adobe Express, Microsoft Designer, etc. provide basic (yet adequate for many) design tools and publishing workflows linked to a vast library of keyword-searchable stock imagery. Tools such as DALL-E and Midjourney offer a different approach to stock art in that they take the keyword search and generate images based on those criteria. I suspect much of the heavy lifting for these tools will largely remain on the server, with the actual generation being a relatively simple service accessible via HTTPS. If that's the case, and Serif do add plug-in/extensibility support in a future release, I don't see anything preventing someone from providing an interface within the Affinity apps to these services (much like Christian Cantrell's Photoshop plugin).
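To illustrate how thin such a bridge could be, here is a hedged Python sketch of a plugin-side call to a hypothetical hosted generation service. The endpoint URL, payload fields, credential, and response format are invented for the example and do not correspond to any particular vendor's real API.

```python
# Hypothetical client for a hosted text-to-image service reachable over HTTPS.
# The URL, payload fields, and response format are assumptions for illustration;
# a real plugin would substitute the vendor's documented API.
import requests

API_URL = "https://api.example-image-service.com/v1/generate"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential

payload = {
    "prompt": "isometric illustration of a lighthouse at dusk",
    "width": 1024,
    "height": 1024,
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=120,
)
response.raise_for_status()

# Assume the service returns raw PNG bytes; write them out for the host app to place.
with open("generated.png", "wb") as f:
    f.write(response.content)
```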

Personally, I don't see this functionality as critical to the Affinity applications as it's extremely likely there will be many competing image generation services vying for new customers. It would make more sense for Affinity to focus on what makes them unique, and enable this functionality for users that want it via extensions.


45 minutes ago, Bryan Rieger said:

It would make more sense for Affinity to focus on what makes them unique, and enable this functionality for users that want it via extensions.

Agreed, that is the point of this thread: give us an SDK to write plugins and extensions. Providing all of that in-house will be hard for Serif, given that the expertise needed is extremely scarce on the market.


  • 3 weeks later...
On 11/9/2022 at 4:48 PM, pixelstuff said:

I think you made a mistake labeling this topic. A Plugin SDK is perhaps urgently required, but an AI Picture Generator is more of a curiosity at the moment.

 

I humbly disagree: you can train the models on any style or face you like (Dreambooth), you can create anything imaginable within seconds, and the quality is doubling every 4 weeks.

Compare it with programming in Notepad versus Visual Studio: you can get results in both, but there is a reason nobody codes in Notepad anymore…

Maybe this is worth a read: https://arstechnica.com/information-technology/2022/11/midjourney-turns-heads-with-quality-leap-in-new-ai-image-generator-version/

"It's almost too easy", and "Considering the progress Midjourney has made over eight months of work, we wonder what next year's progress in image synthesis will bring."

Do not make the mistake of judging by the current stage; that is like the jokers who told everybody that electronic cameras would never make it because the first generation was worse than mechanical SLRs. The journey for cameras took 20 years. Here we are talking about software: it will not even take 20 more months.


On 11/12/2022 at 12:57 AM, drstreit said:

the quality is doubling every 4 weeks

I probably use different AI tools, but I have seen no quality changes for years already. They are all as ridiculous as the AI assistants at a bank. I am still waiting for something AI-based to appear that makes life easier for artists in the video-game industry. Everything I have ever seen for UV unwrapping, 3D modelling, and texture creation is so stupid and so bad that I stopped even caring. No hope at all. Even Photoshop's AI selection, which could theoretically be used to select certain photographic features to build a proper roughness texture from the same photo, doesn't work AT ALL. No use whatsoever.


On 11/11/2022 at 1:57 PM, drstreit said:

 

Do not make the mistake of judging by the current stage; that is like the jokers who told everybody that electronic cameras would never make it because the first generation was worse than mechanical SLRs. The journey for cameras took 20 years. Here we are talking about software: it will not even take 20 more months.

It will get better, but I consider that irrelevant.

You can come up with 50 billion analogies about other inventions, and they all collapse at the same point: humans have a need for the arts (both creating and enjoying them).

Having a machine create "art" is totally unclear on the concept, even if it can produce some interesting things at times.

I have less utter disdain for the whole concept than I did at first, because there are creative uses for it (for example creating backgrounds). But it will never be art, no matter how hard you struggle to compare the process.

Shorter version: I don't give a FF about it. No AI can do what I do, and it's not something I'll ever use - whether it's visual art, music, or anything else.


You keep saying these tools have only been around for about four weeks and are progressing rapidly, but they have been worked on for years; they have only been publicly available for weeks. We have been working on AI since the 1950s, and it is only in recent years that it has even become usable. Think about how many times you have to repeat a question to Siri or Alexa and then just give up and look the information up yourself.

AI works in very narrow fields and has a long way to go to replace artists and designers in broad areas. Especially if you want to create something new and unique and not just smash stuff together. 


 

2 minutes ago, Bigwillt said:

AI works in very narrow fields and has a long way to go to replace artists and designers in broad areas. Especially if you want to create something new and unique and not just smash stuff together. 

 

I'm afraid you're underestimating the impact programs like Midjourney and Stable Diffusion are already having.

But as for the second sentence of yours I quoted here - yes, in fact it's a tautology to say that unique art is always going to be unique!


Does anyone know if there is an AI selection tool that can quickly and easily select, say, mossy areas in an image of rocks, rusty spots on metal where there is little colour difference, or a layer of peeled paint sitting on another layer of the same colour? That would be a huge AI help.

Or something that can remove lighting and shadows and keep just the diffuse colour?


1 hour ago, kirk23 said:

an AI selection tool

This is really where artificial intelligence can perhaps make a big difference: object detection where, as you move the mouse around, it highlights objects it recognizes, such as people, animals, trees, sky, grass, houses, cars, fruit, vegetables, hamburgers, furniture, etc., along with some fine-tuning features that help the human guide it, like using the mouse wheel to make the selection more or less specific. For example, if you have selected a person you could narrow it down to just the shirt, just the face, or just the hair; alternatively, you could expand it out to multiple people while skipping the disconnected space between them.
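To sketch what could sit behind such a tool, here is a rough Python example using torchvision's off-the-shelf Mask R-CNN as a stand-in for the object detector; the confidence threshold and file name are arbitrary, and the mouse-driven refinement described above would have to be built on top of these masks.

```python
# Sketch of object-aware selection: run an off-the-shelf instance-segmentation
# model and keep one binary mask per detected object. The UI layer would then
# highlight whichever mask is under the cursor and let the wheel widen/narrow it.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("photo.jpg").convert("RGB")  # hypothetical input file

with torch.no_grad():
    pred = model([to_tensor(image)])[0]

keep = pred["scores"] > 0.7                # arbitrary confidence threshold
masks = pred["masks"][keep, 0] > 0.5       # (N, H, W) boolean selection masks
labels = pred["labels"][keep]              # COCO class ids (person, car, ...)

print(f"{masks.shape[0]} candidate selections found")
```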


11 minutes ago, pixelstuff said:

This is really where artificial intelligence can perhaps make a big difference

Yeah, but currently it can't even do the sky reliably, judging by Photoshop. Along a horizon line where the colours are already somewhat earthy, or where there is a dusty haze of some sort, it doesn't work that well. People and cats are totally fine, and that's all. For anything I have ever needed for my job, no chance.

The recent Photoshop version with the improved cloud AI option didn't make the slightest difference. I wish they would rather focus on their badly outdated layer system, 32-bit support, and painfully slow smart objects.


To steer back to the topic:

  • I think we all agree that AI by definition cannot produce art, as art always needs an intention
  • On the other hand, 99% of graphic design is not art but the production of nice-looking material; here AI will make a huge difference
  • Examples mentioned are real AI selections/replacements, being able to train your personal style for image variations, etc.

All of the above needs integration into Serif's product line, and that was why I started this topic: I see open APIs in other products getting them the integration they need, but I hear that SDKs/APIs are not even on Serif's roadmap. That concerns me greatly, as I personally think the current AI models will become so powerful that they will replace a lot of traditional workflows.
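As a concrete illustration of the "AI selections/replacements" example, here is a hedged Python sketch of text-guided inpainting with the open-source diffusers library, where a selection mask marks the region the model repaints. The checkpoint and file names are illustrative; this is not an Affinity API, just the kind of routine a plugin SDK would need to feed images and masks into.

```python
# Sketch of text-guided replacement (inpainting): white pixels in the mask
# mark the selected region to be regenerated from the prompt.
# Uses the open-source diffusers library; checkpoint and file names are illustrative.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("portrait.png").convert("RGB").resize((512, 512))
mask = Image.open("selection_mask.png").convert("L").resize((512, 512))

result = pipe(
    prompt="dense green forest background, soft bokeh",
    image=image,
    mask_image=mask,
).images[0]

result.save("portrait_replaced_background.png")
```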


9 hours ago, drstreit said:

To steer back to the topic:

  • I think we all agree that AI by definition cannot produce art, as art always needs an intention

Actually, I'd debate the semantics of that point (not that I disagree entirely). Apologies in advance for being so argumentative - this is something I've been thinking about a lot recently, and no doubt I'm far from alone in that!

Anyway, text-to-picture certainly has an intention, and whether or not you call what it does art isn't really important (although I don't).

To me the main issue is what I already wrote: we are humans with a deep need to create and enjoy art. AI has the potential to take a lot of that away.

Or will it force real artists to come up with things that are beyond a machine?

9 hours ago, drstreit said:
  • On the other hand, 99% of graphic design is not art but the production of nice-looking material; here AI will make a huge difference

Remember, programs like Midjourney rely on a database of images created by artists.

9 hours ago, drstreit said:
  • Examples mentioned are real AI selections/replacements, being able to train your personal style for image variations, etc.

All of the above needs integration into Serif's product line, and that was why I started this topic: I see open APIs in other products getting them the integration they need, but I hear that SDKs/APIs are not even on Serif's roadmap. That concerns me greatly, as I personally think the current AI models will become so powerful that they will replace a lot of traditional workflows.

Machine learning is a totally different thing from AI "art." Whether Serif *needs* to incorporate it into the Affinity programs, who knows.

Anyway, my hunch is that a lot of this will end up as... I'm not a programmer, but it'll be in libraries of routines that are part of the SDK.

You're right that it's the next wave, though. Apple's current Macs even have dedicated Neural Engine cores.


  • 3 weeks later...

Must-read article: https://www.marktechpost.com/2022/11/29/artificial-intelligence-ai-researchers-at-uc-berkeley-propose-a-method-to-edit-images-from-human-instructions/

And again: the current models have been in development for less than two years, so expect much more to come.

And to Serif: would it not be great to integrate something like that? It's all open source, so expect that everything that has an SDK will have it implemented by enthusiasts…
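For anyone wondering what "editing from instructions" looks like in code, here is a minimal Python sketch using the diffusers library with the openly released InstructPix2Pix checkpoint, which appears to be the work the article describes; the input file and guidance values are illustrative.

```python
# Sketch of instruction-based editing with the openly released InstructPix2Pix
# weights via the diffusers library; input file and guidance values are illustrative.
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from PIL import Image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

image = Image.open("street_photo.png").convert("RGB")

# The prompt is a plain-language instruction rather than a scene description.
edited = pipe(
    prompt="make it look like it was taken in winter, add snow",
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,
).images[0]

edited.save("street_photo_winter.png")
```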

 

 


2 hours ago, drstreit said:

Must-read article: https://www.marktechpost.com/2022/11/29/artificial-intelligence-ai-researchers-at-uc-berkeley-propose-a-method-to-edit-images-from-human-instructions/

And again: the current models have been in development for less than two years, so expect much more to come.

And to Serif: would it not be great to integrate something like that? It's all open source, so expect that everything that has an SDK will have it implemented by enthusiasts…

 

 

 

Utter revulsion aside, I just don't understand the point.

We already have humans. Minor flaws aside, I think we're pretty remarkable.


8 minutes ago, nickbatz said:

So there is hope for humanity after all?

Well, they already pushed the envelope by introducing a fee to receive program updates for a version people had already bought... they need less controversy, not more.


