
We need to talk about artificial intelligence.



On 8/26/2023 at 8:31 PM, Steve Redmond said:

The book 'Who stole my cheese' seems appropriate here, especially for those who are pooh-poohing Generative AI. Missed opportunities.

Your interpretation of the book title is a textbook Freudian slip, and very fitting, especially given Adobe's legal/PR fiasco(s).
The correct title of that book is "Who Moved My Cheese?"

Sketchbook (with Affinity Suite usage) | timurariman.com | artstation store

Windows 11 Pro - 23H2 | Ryzen 5800X3D | RTX 3090 - 24GB | 128GB |
Main SSD with 1TB | SSD 4TB | PCIe SSD 256GB (configured as Scratch disk) |

 


I thought the inbreeding and artifact generation aspect of generative AI might be of interest to others in this thread. Will it become important to know and understand the genetic background of any images or text being crossbred with your own images or text to produce a final product? 

From
https://pjmedia.com/vodkapundit/2023/08/28/gross-ai-is-turning-into-the-human-centipede-n1722597

“Garbage in, garbage out” (GIGO) was one of the very first computer terms I ever learned, a way of saying that the results you get out of a computer are only as good as the data you put into it. But because of the way AI works — “large language models” is much more accurate than “AI” — the systems are actually creating, out of thin air, the bad data that generates garbage outputs.

The AI/LLM version of GIGO is “generative inbreeding,” according to Louis Rosenberg in a new Venture Beat report. That’s what happens — just like with people and animals — when “members of a population reproduce with other members who are too genetically similar.” Rosenberg writes that “recent studies suggest that generative inbreeding could break AI systems, causing them to produce worse and worse artifacts over time, like making a photocopy of a photocopy of a photocopy.”

Worse:

Generative inbreeding could introduce progressively larger “deformities” into our collective artifacts until our culture is influenced more by AI systems than human creators. And, because a recent U.S. federal court ruling determined that AI-generated content cannot be copyrighted, it paves the way for AI artifacts to be more widely used, copied and shared than human content with legal restrictions.
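The "photocopy of a photocopy" effect Rosenberg describes can be illustrated with a toy experiment (a hypothetical sketch, not his actual study): fit a one-parameter "model" to data, sample from it, refit on those samples, and repeat. Statistical noise compounds each generation, and the spread of what the model can produce collapses:

```python
import random
import statistics

def inbreed(generations=3000, samples=100, seed=42):
    """Repeatedly refit a Gaussian 'model' on its own samples and
    return the spread of the data it can still produce."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: "trained" on real data
    for _ in range(generations):
        # sample from the current model, then refit on those samples
        data = [rng.gauss(mu, sigma) for _ in range(samples)]
        mu, sigma = statistics.mean(data), statistics.stdev(data)
    return sigma

final_sigma = inbreed()
# After many generations the spread is a small fraction of the original 1.0:
# the model has "inbred" its way toward producing near-identical outputs.
```

Each refit loses a little information to sampling error, and with no fresh real data the loss is never recovered — the numerical analogue of worse and worse artifacts over time.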

Affinity Photo 2.4.2 (MSI) and 1.10.6; Affinity Publisher 2.4.2 (MSI) and 1.10.6. Windows 10 Home x64 version 22H2.
Dell XPS 8940, 16 GB Ram, Intel Core i7-11700K @ 3.60 GHz, NVIDIA GeForce RTX 3060


I think the threat of invisible data being inserted into images created and/or modified by generative AI might be of interest to others in this thread. 

When you use generative AI to alter one of your own images, how can you be sure that the resulting image will not be damaging to yourself in one way or another? How do you know if the AI has not inserted invisible data into your generated image, even data deeply buried in the AI's image library through generative inbreeding?

It turns out that careful visual inspection will not be sufficient to protect your interests.

Companies have developed ways of embedding data into an image that is invisible to the eye but detectable by suitable software. 

This particular article is about invisible digital watermarking and deepfakes. I imagine many other kinds of data, both benign and malicious, might be invisibly embedded in any image you create for yourself using generative AI.

https://spectator.org/saving-ourselves-from-ai-deepfakes/
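As a concrete (and deliberately crude) illustration of the idea — not any vendor's actual scheme — here is a toy least-significant-bit watermark. The message rides in the lowest bit of each pixel value, so no pixel changes by more than 1, far below what the eye can notice, yet software can read it back exactly:

```python
def embed(pixels, message):
    """Hide message bytes in the least significant bit of each pixel."""
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # each pixel value shifts by at most 1
    return out

def extract(pixels, length):
    """Read `length` hidden bytes back out of the low bits."""
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8))
                 for i in range(length)).decode()

gray = [128] * 64              # a flat 8x8 grayscale "image"
marked = embed(gray, "id:42")  # visually indistinguishable from the original
```

Real watermarking schemes are far more robust (surviving resizing, compression, and cropping), but the principle is the same: careful visual inspection cannot reveal them.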

Affinity Photo 2.4.2 (MSI) and 1.10.6; Affinity Publisher 2.4.2 (MSI) and 1.10.6. Windows 10 Home x64 version 22H2.
Dell XPS 8940, 16 GB Ram, Intel Core i7-11700K @ 3.60 GHz, NVIDIA GeForce RTX 3060


As apps across the art and design spectrum integrate more AI-based tools, Affinity is going to be left in the dust if they don't adapt. That's just the hard truth.

I'm not pro-AI by any means, but I can read the writing on the wall.

2019 MacBook Pro 16” | Affinity Designer 2 | Affinity Photo 2 | Affinity Publisher 2

2018 iPad Pro 12.9” | Apple Pencil 2 | Affinity Designer for iPad 2 | Affinity Photo for iPad 2 | Affinity Publisher for iPad 2

Years with Affinity: 5 ❤️ https://www.instagram.com/cealcrest/

FEATURE WISH LIST  Vector Mesh Tool    Shape Builder Tool   🥚True Vector Brushes   🥚Vector Pattern Fill  🥚Studio Link in All Apps

APP WISH LIST  Publisher   🥚2D Animation/Video


10 hours ago, Granddaddy said:

… the threat of invisible data being inserted into images created and/or modified by generative AI …

This reminds me that every PDF gets a unique code, which Adobe products send home when the file is opened on your computer, even outside any cloud¹. That code can be used, for example, to map personal or business connections by their intensity. Why not do it with all files? (I don't like that at all, and I use the Little Snitch app.) More than a quarter of Adobe's revenue comes from Adobe Experience Cloud, for analytics, targeting, marketing… https://en.wikipedia.org/wiki/Adobe_Experience_Cloud

___
1) This information is around 10 years old, and I only assume that Adobe still does it. Unfortunately, I can't find the page from the security expert who described the behavior in detail.
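For what it's worth, the PDF format itself (ISO 32000) defines a unique /ID pair stored in the file trailer, independent of whatever telemetry a given application adds on top. A quick way to inspect yours is to scan the raw bytes; the trailer fragment below is fabricated for illustration:

```python
import re

def pdf_file_ids(raw: bytes):
    """Find /ID [<hex> <hex>] pairs in raw PDF bytes (the spec-level
    file identifiers, typically written into the trailer dictionary)."""
    pattern = rb"/ID\s*\[\s*<([0-9A-Fa-f]+)>\s*<([0-9A-Fa-f]+)>\s*\]"
    return re.findall(pattern, raw)

# A made-up trailer fragment, for illustration only:
trailer = b"trailer\n<< /Size 7 /Root 1 0 R /ID [<DEADBEEF01> <CAFEF00D02>] >>"
ids = pdf_file_ids(trailer)
```

The first ID is set when the file is created and the second changes on each save, so the pair already uniquely fingerprints a document — no cloud required.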

Advertising designer - Austria —  Photo - Publisher - Designer — CS6 d&wP — Mac Pro 5,1 (4,1 2009) 48GB 2x X5690 - RX580 - 970EVO - OS X 10.14.6 - NEC2690wuxi2 - CD20"—  iPad Pro 12.9" gen1 128 GB - Pencil


Some people cannot see, or do not want to accept, that AI is the future.
@Johannes I really hope Affinity will take advantage of the opportunity; better late than never. In my opinion, time is really running out.
I don't know how things will turn out for a software company of this type that won't rely on AI. If it persists, I think it will become a niche product without AI.

If no features are coming in this regard in the next few months, I expect to leave the Affinity Suite, as it simply takes me more time to get things done.
I will get hate again for this view on AI here, but that's just my view.


Here's a little bit of a business perspective.

Be there when it's usable and legally settled, not when the bubble bursts.

There's no point in diverting dev assets to AI stuff.

Most of the development of AI tools is open source. Companies that try to turn a profit on AI generators are rarely innovative in their technologies, with the notable exception of the data giants, who also have the resources to tackle the lawsuits coming their way.

That brings us to another issue: regulation. While Adobe may offer lawsuit cost coverage to its users, I think Affinity would prefer to focus on something else, and artists would prefer to have a safe haven while things settle down.

AI is very problematic for artists and creators right now. Whatever camp you're in, this is a fact and it needs to be recognized. Regulation is coming, and it seems it will be region-specific, with differences between the EU and US already visible. Depending on the outcomes, you might have to phase out the solution, region-lock it, or do other PR-damaging things.

A digital asset management tool, as per @Stephen Coyle's post from the other thread, would be a much better allocation of resources.

And Linux support. There's no competition and no coverage there.

On 7/17/2023 at 12:00 PM, LCamachoDesign said:

I'm just going to add what I think is an important technical point, rather than discussing thread-locking vagaries.

Great post, great analysis, on point. One bit of misinfo: you can deploy generative image tools on consumer GPUs very easily, and you don't need cloud services for fast generation; Nvidia RTX 20xx/30xx series cards are already capable. Getting a good Stable Diffusion pipeline via plugins sets you miles apart from Adobe quality-wise, although it demands more technically from the user and dodges/dilutes legal support.

Language models are becoming deployable locally as well.

Which is, sadly, reflected in the number of trolls spewing FUD on this forum 😑


7 hours ago, michalmph said:

One bit of misinfo: you can deploy generative image tools on consumer GPUs very easily, and you don't need cloud services for fast generation; Nvidia RTX 20xx/30xx series cards are already capable. Getting a good Stable Diffusion pipeline via plugins sets you miles apart from Adobe quality-wise, although it demands more technically from the user and dodges/dilutes legal support.

That speaks exactly to what I was saying: the field changes too fast to waste any time on custom solutions. When I posted, the only thing you could run locally was Stable Diffusion 1.5. While that worked, the quality was pretty substandard, but you only needed a GPU with 4 GB of VRAM. Yes, you could use custom checkpoints and LoRAs for waifus, but... that's not the scope here.

Jump forward to today, and now you can run SDXL locally. The quality is much higher than 1.5; vanilla SDXL is ballpark Midjourney 4, I'd say. With ControlNet and LoRA / LyCORIS, I'd say you get Midjourney 5 levels of generation. But you can no longer run it with 4 GB of VRAM; you need 8 GB at minimum, with 12 GB being the recommended value. Basically, you need a 3070 or 4070 and up. Yes, you can use quantization and other fiddly trickery to run below that. I can run SDXL locally on my M1 iPad Pro, but you're looking at 5 minutes per image; it's just not realistically usable.

But like you said, it doesn't actually change my initial point: it's still better to have a plugin connect to these generators. Again, looking back to when I first posted, Automatic1111 was the way to run SD locally. Fast forward to now, and ComfyUI is the way to do it. Even as A1111 tries to catch up, with the 1.6 release a few days ago, it still can't catch Comfy; it doesn't have support for Control-LoRA, and that's a massive downside IMHO. If Serif had, back then, tried porting A1111 into the Affinity suite, that time would have been wasted; they'd need to scrap everything and start again with ComfyUI to be competitive.

So yeah, have a plugin system and let others do that job. This isn't speculation either. Just look at Krita: people spent time building A1111 plugins for it last year... fast forward to today, and both top plugins have gone belly up. Development now happens on the ComfyUI plugin hosted at CivitAI.
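The plugin approach these posts converge on amounts to a thin HTTP client in the host app, with the generator running as a separate, replaceable local server. A minimal sketch, assuming an Automatic1111-style `/sdapi/v1/txt2img` endpoint (the URL, fields, and response shape are from memory of A1111's public API and should be checked against the running server):

```python
import json
import urllib.request

# Assumed default address of a local A1111-style server; not guaranteed.
API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

def txt2img_payload(prompt, steps=20, width=1024, height=1024):
    """Build the JSON request body; kept separate from the network call
    so the host app can construct and inspect requests offline."""
    return json.dumps({
        "prompt": prompt,
        "steps": steps,
        "width": width,
        "height": height,
    }).encode()

def generate(prompt, **kwargs):
    """POST the prompt to the local generator; A1111-style servers
    return base64-encoded images under the "images" key."""
    req = urllib.request.Request(
        API_URL,
        data=txt2img_payload(prompt, **kwargs),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["images"]
```

The point of the split is exactly the one made above: when the backend of the month changes from A1111 to ComfyUI to whatever comes next, only this thin client needs rewriting, not the host application.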

