
AI picture generators urgently required



45 minutes ago, kirk23 said:

Perhaps they are still in their infancy, yes, but I have been waiting for 10 years already and nothing is really helpful yet. Just toys.

Disclaimer: I am a complete amateur at creating seamless patterns, so a question to your expertise: Would you mind having a look at https://barium.ai/dashboard , maybe registering and trying out some patterns, and telling me whether that is something you would find useful?

It uses the Stable Diffusion model (the only one, to my knowledge, that you can easily deploy locally), so it's quite accessible and extensible.
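Since the question is about seamless patterns: if you try such a generator, you can check a result's seamlessness programmatically rather than by eye. A minimal sketch with NumPy (the `tile_error` function is my own invention, not part of any tool mentioned here) that measures the wrap-around mismatch between opposite edges of a texture:

```python
import numpy as np

def tile_error(tex):
    """Mean absolute mismatch between opposite edges of a texture.

    When the texture is tiled, its last column sits next to the first
    column of the neighbouring copy (likewise for rows), so a seamless
    tile should score near zero here.
    """
    tex = np.asarray(tex, dtype=float)
    horizontal = np.abs(tex[:, 0] - tex[:, -1]).mean()  # left vs right edge
    vertical = np.abs(tex[0, :] - tex[-1, :]).mean()    # top vs bottom edge
    return horizontal + vertical
```

A flat texture scores 0; a texture split into a dark left half and a bright right half scores the full brightness difference at the vertical seam.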


I'm not sure if this could actually be a problem, but I notice that some of the images are of living people. I wonder if, depending on how the final images are used, there could be legal problems regarding defamation or libel for someone using this software?

Acer XC-895 : Core i5-10400 Hexa-core 2.90 GHz :  32GB RAM : Intel UHD Graphics 630 : Windows 10 Home
Affinity Publisher 2 : Affinity Photo 2 : Affinity Designer 2 : (latest release versions) on desktop and iPad


4 minutes ago, PaulEC said:

I'm not sure if this could actually be a problem, but I notice that some of the images are of living people. I wonder if, depending on how the final images are used, there could be legal problems regarding defamation or libel for someone using this software?

Absolutely, that is already a problem: just think about the deep-fake images/videos that are available, plus of course - humans being what they are - deep-nude fakes...

A couple of countries outlaw the creation of certain pictures, but that is quite difficult to define in the end: should a caricature be banned in principle because it shows a living person? It's about the usage scenario and the intention, I guess.

In the end, as with every new technology, there is a need for a broad discussion - at the moment the main drivers are AI specialists and technicians...


10 minutes ago, N.P.M. said:

I take vegetables and herbs and oil and meat and put them in a pan with the right recipe (prompt), and it is called food.
Now let AI do the same thing with the same ingredients.
Looks like it, but tastes like, eh, well, compost.

Sorry - too late with the cooking: https://researcher.watson.ibm.com/researcher/view_group.php?id=5077

But in earnest: printing books destroyed the scribe class of the Middle Ages; photography destroyed most of the painter jobs, where painters were once called into homes every year to paint the family; and around the corner from where I live, the last photographer of my small town just closed down: I guess smartphones and easily available full-frame cameras killed his business...

Yes, a lot of designers' jobs WILL be destroyed by the AI revolution we have been seeing for a few years now. As will jobs as diverse as technical writers, lawyers, etc. In the end, EVERY job with a huge amount of routine work WILL be replaced by AI - no question about it.

What will stay are 

  • jobs that are simply not worth automating because the work is too cheap
  • jobs that involve a high degree of creativity and personal involvement

After each of the revolutions mentioned above, the total number of jobs actually grew - even exploded. It will be the same with this one.

If at some point there are actually not enough meaningful things left for all the humans in the world to do, things might get tough. But honestly, I don't see that situation arising in my or my kids' lifetime.


  • Staff

I appreciate that AI extends beyond the boundaries of Affinity & photo editing, but please try to keep this thread on-topic to Affinity & AI implementation within our apps, many thanks :)

Please Note: I am now out of the office until Tuesday 2nd April on annual leave.

If you require urgent assistance, please create a new thread and a member of our team will be sure to assist asap.

Many thanks :)


2 minutes ago, N.P.M. said:

This is even more frightening than you think.
Sure, disease could be detected early and treatments could be better adjusted.
But DNA data could also be retrieved by AI and make you pay more for healthcare based on location, possible addictions, obesity, blood type, or what your forefathers did.
You could be restricted by an AI in where you want to live, or in what you may or may not eat or drink, based on health problems or your DNA.
Combined with location data and preferences in art, movie style or the like, any government or big company like Apple/MS/Google/Amazon can exclude or include you.
Not being cynical, but I hope that big tech gets split up and that AI is restricted by all means.

Tricky: currently the trend is more about understanding your personal DNA to design medicine tailored to your problem - e.g. what we call "cancer" is actually extremely specific to the host.

And the advantages are immense - have a look at AlphaFold. I am not being too dramatic in calling this the holy grail of medicine: understanding how proteins fold is the key to developing treatments for everything they are involved in.

There is a reason I am a firm believer in AI - but I also believe in strict control, as AI usage can be dangerous and will grow more dangerous with the growing power of the models. If you want to read an actually quite realistic SkyNet scenario, just read https://www.washingtonpost.com/opinions/2022/08/31/artificial-intelligence-worst-case-scenario-extinction/ ...

Having said that, generating images with AI is not too dangerous - but endless fun: I am running an anime contest with my daughter at the moment over who can create the best short story with the generated pictures - hmm, actually I am losing ;-) ...


3 minutes ago, Dan C said:

I appreciate that AI extends beyond the boundaries of Affinity & photo editing, but please try to keep this thread on-topic to Affinity & AI implementation within our apps, many thanks :)

Will do - sorry for my enthusiasm.

Coming back to the topic: I am experimenting with https://barium.ai/dashboard at the moment, as I need seamless patterns for some 3D work - I would love to hear your thoughts. And exactly this application would be a great first addition to Serif's toolbox, as models for patterns might be less controversial in terms of copyright and are actually needed quite often!


34 minutes ago, N.P.M. said:

just one note(last one I promise)

The way you validate your arguments is that people creating art are to be made obsolete by the AI. Nice point in a forum for creatives.

If I turned the tables on you and, without a medical education, used AI to doctor my family, how would that make you feel?

Don't answer that but enjoy the game.

No, I don't validate anything with the simple fact that tasks will be replaced by AI. That is a no-brainer - or do you still use paper maps when driving around, or do you rely on a navigation system?

Things will simply change: with AI integrated into great tools - such as Serif's - you can work faster, produce more, and automate simple tasks.

Just imagine training an AI on your unique style and producing your own graphical story within days.

The whole point of this forum post is that I believe AI will greatly help designers - and I don't want Serif to miss that train, because people will simply leave.


You don't use anyone's images when you use DALL-E 2, Midjourney, Dream Studio, Deep AI, Stable Diffusion or Disco Diffusion. You simply type in a 'prompt' of what you want to see, e.g. 'purple cat singing on a rooftop at night wearing gold earrings, a red fur coat and silver boots, the moon is high in the sky and yellow. The stars are all red'... and you get a selection of images from which you can choose to create other images, upscale them or discard them...


4 hours ago, drstreit said:

Disclaimer: I am a complete amateur at creating seamless patterns, so a question to your expertise: Would you mind having a look at https://barium.ai/dashboard , maybe registering and trying out some patterns, and telling me whether that is something you would find useful?

It uses the Stable Diffusion model (the only one, to my knowledge, that you can easily deploy locally), so it's quite accessible and extensible.

Well, that's new to me. I have tried it. Nope, still a useless toy, but at least it tries to do a height channel. The problem is that it's blurry as hell, the diffuse textures too, while in video games textures should be like pixel art - not a single pixel wasted for nothing. The results are repetitive as hell, too, and look super surreal.

In a word, even Adobe's Sampler AI is way better; at least it can do a limited number of subjects pretty well - sand with small pebbles, for example. For some uncertain reason that's the only subject matter AI texture generators do well.

My guess is that if AI did well in this field, we would hear a huge buzz about it from every game developer and CG artist. But all the buzz is mostly about Substance Designer and how cool it is with its procedural pipeline. I wish we could have a taste of that in Affinity - just a few touches to its procedural live filters, maybe, so we could really use them that way, not just in name. IMO that should go ahead of all those AI toys.

I recall there was a buzz a few years ago about some game that used AI-assisted animation, where AI did 80% of the game's animation. It looked so crappy that players complained non-stop. Now Nvidia is trying to do something like that. Perhaps we will see something good from this field.

 


No, that is plain wrong: AI models work a bit like our brain, building billions of weighted connections based on learning.

With this, they are capable of deducing new solutions to problems they were not initially trained on. As they lack context, it is the task of the data science engineers to steer them; the constantly improving models are the result.

Say I train a model on 90 different species of dogs: it will be able to identify all 300 or so of them, deducing the shared characteristics.

It gets better: reinforcement learning models constantly create new ways to solve a problem or draw a picture, rate the result, and learn from it.

The Alpha models from DeepMind beat every chess and every Go player on the planet - but they were never actually fed any strategy books; they derived their strategy themselves from billions of matches against themselves…

Better believe that well-trained AI models are creative in the everyday sense of the word; we are there to give them context.

 

 


And back to Affinity Photo...

Many of its users would really benefit from having access to an AI text to image generator for inspiration (even if some of them can't see it yet)...

It would stir the dust motes, awaken what lies sleeping (and unreachable) in the dark depths of creative minds and yield untold riches...

drstreit, I applaud you for mentioning the unmentionable... 🙂


8 hours ago, DelN said:

drstreit, I applaud you for mentioning the unmentionable... 🙂

Nah - I am just lazy; I only dabble in the arts as a hobby and thus need easy tools ;-)

But my professional life very much focuses on AI - I lead global projects building AI models for the life-science industry - and I see a frightening change in the people coming out of university: they simply DO NOT ACCEPT that there is a long, manual way that gives you 100% of the desired result after many hours. Instead, there should be an app-like way to get your Pareto 80% with two clicks: good enough is the agile way.

And with that approach, a huge ecosystem of graphic design tools has been created on the various mobile devices - with functionality that is astonishingly good nowadays, achieved with only seconds of effort.

The ease with which these kids ("kids" in my eyes - I am over 50 now) accept the wonders that software such as today's AI models deliver - only to request more of it, and faster - has led to the disruptive push of AI into every industry.

Mark my words: either all the software giants (and mini-giants like Serif) adapt - or they will die out, with only us dinosaurs doing things "the right" way.

I myself am very impressed by creative minds such as Kevin Hess, who shows us that an author with self-proclaimed small graphical capabilities can produce a full graphic novel on his own with the new AI tools: https://beincrypto.com/ai-art-worlds-first-bot-generated-graphic-novel-hits-the-market/

And sure, one can complain about the style and limitations - but Midjourney (the AI he used) was only released in mid-July this year. Just imagine what these tools will do 2, 5, or 10 years from now, given that AI complexity has grown exponentially over the last decade.

I would dream of Serif trying to spearhead that development with a maximally aggressive move: in my professional opinion (admittedly one based only on experience from other industries), Serif does not have the time to wait another year to move forward.

 


Let me try to be very specific - I got scolded for being too general here, and rightly so: Hi @Serif - here is my user story for your backlog!

"As a graphic designer, I want to be able to train a Serif-integrated AI model on my own graphical style with a maximum of 10 training pictures. With that locally modified model, I want to be able to go to a customer and create/modify any number of pictures in this newly trained style on the fly, according to specifications. Examples:

  • Recreate the upper-left corner of a given picture, but put a stylized car into it seamlessly
  • Outpaint (extend the image content of) that image by another 30% seamlessly
  • Generate brainstormed images on the topic "future cars" in the newly trained style
  • ...
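Outside Affinity, this kind of few-shot style training already exists for Stable Diffusion under the names DreamBooth and textual inversion. Purely as an illustration of the user story above - every name here is hypothetical, not a real Serif API - a sketch of how a plugin might validate such a training request before submitting it:

```python
def make_style_job(style_name, training_images, max_images=10):
    """Validate and package a hypothetical few-shot style-training request.

    Mirrors the user story above: a named style, trained on at most
    `max_images` example pictures.
    """
    if not training_images:
        raise ValueError("at least one training image is required")
    if len(training_images) > max_images:
        raise ValueError(f"at most {max_images} training images are allowed")
    return {
        "style": style_name,
        "images": list(training_images),
        "method": "dreambooth",  # or "textual_inversion" - both are few-shot
    }
```

The point of the hard cap is exactly the user story's promise: training stays cheap enough to run locally, on the designer's own machine.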

 


I entirely agree with you, drstreit... especially about 'dinosaurs'. When I worked for BT many years ago, taking telegrams down over the phone, typing them up, sending them off abroad...

Telex was coming...

Fear! Worry! I could see the death of telegrams. I could see the future... and it was Telex.

And then computers came out. Wow! What are those? I experimented, learnt how to use them...

And then Word, WordPerfect and Multimate (?) came out in DOS not long after and I saw the future again: Word Processing! I leapt on the bandwagon, taught myself those...

Then there came... CorelDraw. Wow! Colour! Being able to draw on the computer... with a (what is that thing? A mouse?) So I purchased CorelDraw and taught myself. Although having learnt Word and (the hated) WordImperfect (with all its codes), when I opened CorelDraw, I just used to stare at the blank page for about 6 weeks wondering how to use it. But I read the manual, found the Help files and taught myself.

Then, when Telex did die (oh, what a death!), I attended the funeral. By then I was so proficient in CorelDraw and other design software that I went to work in the Presentations industry in London (where I lived then), and instantly they dropped all the CorelDraw trainers' work onto my desk and I was promoted - to CorelDraw trainer for the whole team at Deutsche Bank.

It was mad, but I did well, and when a Graphic Designer post came up, I got the job - over other 'trained' graphic designers (outsiders). Again, I was stunned, but the management really were great and encouraged me. The Information Memorandums were where I excelled... because of my imagination, I guess, and my technical skills (self-taught), because by then I knew how to use Photoshop, Painter, Illustrator and Corel Ventura.

And then along came... InDesign. So I purchased that and again taught myself... because I saw the future there of layout for printing brochures, posters, et al immediately. Goodbye, Corel Ventura...

I used to create information memorandum covers and internet and intranet banners (animated in Flash and After Effects by me), but the Information Memorandums I created from scratch. The bankers came up with wild ideas, then said 'Can you do it?' I said 'Yes' (even though I had no idea how I was going to create it).

Ha-ha! What a challenge....

They call it 'photo-bashing' now, but then I don't think it had a name. It was just my style.

Also I have always written my dreams down since I was a teenager and I swear that dreams are an untapped resource of inspiration and creativity. My imagination aided me. I write too, and I have taken situations, places, and conversations and 'snippets' remembered and used them in my pen and ink illustrations and my writing.

And suddenly, a few weeks ago, I was awakened (like you) to AI generated text-to-image creation. And again, I just saw the future. Big, big change. Wake up, everyone! Or you'll join the zombies...

AI images remind me very much of my dreams and nightmares. Because I wrote many of them down, I can recall them just by reading a few in one of my dream books...

The newcomers will have to work hard in this new, fast-evolving world of design, gaming, video, the film industry, etc. Adapt to change, recognise it as the future evolving swiftly before your eyes, and embrace it. Or they will, like many of the people I met along the way, join the bones in the Natural History Museum in South Kensington... as dinosaur bones.

Affinity Photo and Designer are SO COOL! Affordable too (unlike Adobe software). I know Serif are watching the coming of AI art... and they're smart. I don't think they will ignore it. I know they won't.

Watch what happens... I know you are. Your eyes are open, observing its growth. Its influence on the industry. Its potential.

it's so exciting... What's gonna happen, as you say above, in the next 10 years? Serif will be there, I am sure of it...

[Attached image: "The Face of Nature"]


Big 👍to DelN!

I got nostalgic when you mentioned CorelDraw - back then we used their example eye graphic as a test for new GPUs, timing how long it took to render all the elements 🙂

And I think what you did in your career was an early example of what we see today: more and more people learning things like Python, even in school, and therefore not being afraid to, for example, set up a local Stable Diffusion instance (if you are really new to that, I found https://github.com/cmdr2/stable-diffusion-ui extremely simple, as it mostly hides all the necessary Python prerequisites from you).

So for people with that skill set, AI-generated images can already be used in a production pipeline, as the APIs are all available and documented.
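As a concrete example of those documented APIs: the popular AUTOMATIC1111 Stable Diffusion web UI exposes a local REST endpoint, `/sdapi/v1/txt2img`, when launched with `--api`. A minimal sketch of building a request for a seamless pattern - the field names follow that project's API as I know it, but verify them against your own instance's `/docs` page:

```python
import json

def txt2img_payload(prompt, steps=20, size=512, seamless=True):
    """JSON body for a local txt2img call.

    `tiling` asks the sampler to produce a seamlessly tiling texture,
    which is exactly what pattern work needs.
    """
    return {
        "prompt": prompt,
        "steps": steps,
        "width": size,
        "height": size,
        "tiling": seamless,
    }

# With a web UI running locally, sending it is a single POST:
# import requests
# r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img",
#                   json=txt2img_payload("rusty metal plate, seamless texture"))
# images = r.json()["images"]  # base64-encoded PNG results
```

That single round trip is all a first Affinity integration would need to wire up.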

This post is really aimed at the many people who do not want to dive into these technologies - who only want to concentrate on the artistic process and the well-known tools: here an integration is mandatory - and it would actually be quite simple for a first minimum viable product, IF we can have an SDK...


Focusing a little more on the actual mechanics of integration, I'm realizing more and more that AI assistance is really more of a painter's tool. I've been generating incredible cityscapes with vehicles and noticing I would love to add elements.

 

For example,

1. add a new visual feature (e.g. generate a city background via  prompt, or maybe select from an initial list?)

2. draw a crude car on a new layer, or paste an example one in

3. run the AI to upgrade the car on that layer

4. refine via AI and back to step 1

You can argue you can skip all that with the right prompt, but I believe this is where the modern digital artist will want to spend their time.  Not texting your photo editor.

The workflow will shake the foundations of all digital photo editors.  Even Photoshop will have to do some big moves to accommodate a proper AI integration imo.
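The four steps above can be sketched as a loop. `ai_refine` here stands in for any image-to-image model call - a hypothetical callable, not a real API. The idea worth noting is that lowering the strength each round makes later passes preserve more of what was already generated:

```python
def refine_layer(layer, ai_refine, rounds=3, strength=0.8):
    """Iteratively upgrade a crude layer via an img2img-style model call.

    `strength` is halved every round, so early passes reinvent the layer
    freely while later passes only polish details (step 4 looping back
    to step 1 in the workflow above).
    """
    for _ in range(rounds):
        layer = ai_refine(layer, strength=strength)
        strength /= 2
    return layer
```

In a real editor the loop body would also pause for the artist to accept, redraw, or mask before the next pass.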


Agreed, usability needs to be much better than bare text prompts - for example, saving prompts as reusable parts to build your own library. The ability to train the model on your own styles would also be needed.

Outpainting and crude image-to-image are also definitely needed: the latter you can try out at https://huggingface.co/spaces/huggingface/diffuse-the-rest
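For anyone trying diffuse-the-rest or a local image-to-image pipeline: the key parameter is `strength` in [0, 1], which controls how much of the denoising schedule actually runs on top of your crude input. To my knowledge, Hugging Face diffusers computes the effective step count roughly like this - a simplified sketch of the idea, not the library's exact code:

```python
def effective_img2img_steps(num_inference_steps, strength):
    """How many denoising steps an img2img call really runs.

    strength = 0.0 keeps the input image essentially untouched;
    strength = 1.0 ignores it and runs the full schedule, as in
    plain text-to-image generation.
    """
    return min(int(num_inference_steps * strength), num_inference_steps)
```

So a crude sketch refined at strength 0.5 with 50 scheduled steps only pays for 25 denoising steps - which is also why low-strength passes are both faster and more faithful to the input.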

 


10 minutes ago, ffrm said:

The workflow will shake the foundations of all digital photo editors.  Even Photoshop will have to do some big moves to accommodate a proper AI integration imo.

Yes, the models are there - but who will integrate them seamlessly, even invisibly, into the workflows? If I select a brush and start drawing hair, shouldn't the AI recognize that and automatically take guidance from my paint strokes, but create the hair according to the model - no texting, no interaction needed at all?


  • 2 weeks later...

Given that the primary focus of Affinity Photo is on manipulating photos, rather than painting, I would argue that the best aspect of AI technology to integrate into the product would be in the form of selection tools.

Serif has indicated in the Publisher section of the forums that an (unfortunately ECMAscript) scripting API and a C-based plugin API will be available at some point in the future.  When they are, most of the more painting-oriented features being discussed here are probably best handled in the form of plugins.


This appears to be more of a threat to existing stock image libraries than to design tools. Canva, Adobe Express, Microsoft Designer, etc. provide basic (yet adequate for many) design tools and publishing workflows linked to a vast library of keyword-searchable stock imagery. Tools such as DALL-E and Midjourney offer a different approach to stock art in that they take the keyword search and generate images based on those criteria. I suspect much of the heavy lifting for these tools will largely remain on the server, with the actual generation being a relatively simple service accessible via HTTPS. If that's the case, and Serif do add plug-in/extensibility support in a future release, I don't see anything preventing someone from providing an interface to these services within the Affinity apps (much like Christian Cantrell's Photoshop plugin).

Personally, I don't see this functionality as critical to the Affinity applications as it's extremely likely there will be many competing image generation services vying for new customers. It would make more sense for Affinity to focus on what makes them unique, and enable this functionality for users that want it via extensions.


45 minutes ago, Bryan Rieger said:

It would make more sense for Affinity to focus on what makes them unique, and enable this functionality for users that want it via extensions.

Agreed - that is the point of this thread: give us an SDK to write plugins and extensions. Providing all of that from Serif itself will be hard, given that the knowledge needed is extremely scarce on the market.

