Everything posted by drstreit

  1. I see the challenge in the fact that an AI model is not as straightforward as implementing a technical interface: there are HUGE quality differences that will take time and resources to close. Adobe recognized that and went all in. Serif does not even show a roadmap… Not sure they can close that gap.
  2. Photoshop: https://github.com/AbdullahAlfaraj/Auto-Photoshop-StableDiffusion-Plugin Krita: https://github.com/sddebz/stable-diffusion-krita-plugin Gimp: https://github.com/blueturtleai/gimp-stable-diffusion … plus masses of websites doing the same. That's just the open-source side; many commercial AIs are also available to integrate, IF you have an interface to do so. Not releasing a plugin SDK is a strange decision. I would argue that PS became a standard because of the power to extend it; today it's more an ecosystem than a single application.
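To make the point concrete: the plugins listed above mostly just talk HTTP to a locally running Stable Diffusion instance. Below is a minimal sketch of that pattern against the AUTOMATIC1111 web UI's `/sdapi/v1/txt2img` endpoint (assuming the UI was started with its `--api` flag and the default host/port); the helper names are my own, not from any of those plugins.

```python
import base64
import json
import urllib.request

# Assumption: a local AUTOMATIC1111 Stable Diffusion web UI started with --api.
API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

def build_txt2img_payload(prompt, width=512, height=512, steps=20):
    """Assemble the JSON body for a txt2img request."""
    return {
        "prompt": prompt,
        "width": width,
        "height": height,
        "steps": steps,
    }

def decode_images(response_body):
    """The API returns generated images as base64 strings under 'images'."""
    data = json.loads(response_body)
    return [base64.b64decode(img) for img in data.get("images", [])]

def generate(prompt):
    """POST the payload and return the decoded PNG bytes of each result."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_txt2img_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return decode_images(resp.read())
```

A plugin host would only need to expose "send bytes, receive pixels" hooks like this; everything model-specific stays on the server side, which is exactly why an SDK is the minimum viable step.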
  3. I don't mean creating an AI via an SDK; we just need one to integrate the existing ones…
  4. I agree, they have to embrace it. Just saying that for some first impressions, an SDK would go a long way, seeing that all AI generators are accessible via API…
  5. In the end, that's why I started this thread: the ABSOLUTE minimum would be to open their SDK to us. Someone would deliver first results within a week, as happened with all the other tools that offer an SDK. Doing nothing, not even answering here or at least showing a roadmap, is quite strange behaviour...
  6. That's not fully true: to train the model, you need absurd hardware. To use the model, a desktop GPU is enough. With an RTX 3080 (already somewhat outdated), it takes me about 15 seconds to generate a 1k picture locally.
  7. Fun fact: the current generation of generative AI models is what the older among us remember as the first mobile phones: 3 kg with battery, sound worse than a walkie-talkie, and as expensive as a car. You all argue as if this AI generation were the end result, but it is not even a full start, because these models are ONLY ONE YEAR OLD. Computers WILL beat experts and artists in terms of creativity, invention and, always, price.
  8. "It's not Art"? The intention of the human using it is the art; computers are dumb, but powerful. Have a look at this video: https://s23.q4cdn.com/979560357/files/doc_news/videos/Adobe-Photoshop-x-Firefly_Sizzle.mp4 Art? No. Powerful, time-saving, soul-crushing for everybody using a tool that does not offer it soon? YES!
  9. Sure. But who would want to live in a world without flush toilets today? I expect it will soon be the same with generative AI… That AI will exceed expert level in most activities within 10 years is already a foregone conclusion, sorry.
  10. I am with you there, Midjourney serves a certain niche. Much more important is Firefly (https://www.adobe.com/sensei/generative-ai/firefly.html): Adobe will soon serve content within Google Bard (like ChatGPT, only from Google). It might be that in a few years Adobe earns more with AI than with graphical tools. There ARE managers who are not sleeping through industry revolutions, it seems…
  11. Check at least the one-pager where they showcase some of the things Firefly will deliver at launch. And then tell me that this will not make you more productive. Or don't, and be out of a job because others ARE more productive. My concern is that they will lose the small market slice they have if they keep sleeping 😴. Would be a shame.
  12. Adobe just announced that they will integrate AI-generated content into all of their products as a standard tool: https://www.adobe.com/sensei/generative-ai/firefly.html So I repeat my plea here: Serif, move, and be fast about it. This "we have a timeline that is inflexible" is the Kodak way; please react to an industry trend that will be THE defining topic in content creation for years to come.
  13. I honestly don't expect IP to be much of an issue for long: you can train a model to generate concepts with another model checking the result (a GAN). That way you create a model that will produce photorealistic images of everything without being fed any real-world images. More effort, but totally doable. Better to concentrate on the fact that everybody WILL have access to such tools, and they WILL create and modify images without any technical skills needed. And hopefully these tools will also come from Serif.
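For anyone who has not met the GAN idea before, here is a deliberately toy sketch of the generator-vs-critic loop on a 1D distribution: the generator only ever sees the critic's judgement, never the "real" data directly. Everything here (1D affine generator, logistic critic, hand-derived gradients) is a pedagogical assumption; real image GANs use deep networks, but the adversarial feedback loop is the same.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Clamp to avoid overflow in exp for extreme inputs.
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x))))

# Generator g(z) = w_g*z + b_g tries to mimic "real" data ~ N(4, 1).
# Discriminator d(x) = sigmoid(w_d*x + b_d) tries to tell them apart.
w_g, b_g = 1.0, 0.0
w_d, b_d = 0.1, 0.0
lr, batch = 0.05, 32

for step in range(3000):
    zs = [random.gauss(0, 1) for _ in range(batch)]
    fakes = [w_g * z + b_g for z in zs]
    reals = [random.gauss(4, 1) for _ in range(batch)]

    # Discriminator ascent on mean[log d(real)] + mean[log(1 - d(fake))].
    gw = gb = 0.0
    for x in reals:
        p = sigmoid(w_d * x + b_d); gw += (1 - p) * x; gb += (1 - p)
    for x in fakes:
        p = sigmoid(w_d * x + b_d); gw += -p * x; gb += -p
    w_d += lr * gw / batch; b_d += lr * gb / batch

    # Generator ascent on mean[log d(fake)] (non-saturating objective).
    gw = gb = 0.0
    for z in zs:
        p = sigmoid(w_d * (w_g * z + b_g) + b_d)
        gw += (1 - p) * w_d * z; gb += (1 - p) * w_d
    w_g += lr * gw / batch; b_g += lr * gb / batch

# E[g(z)] = b_g since E[z] = 0: the fake mean has drifted toward the real 4.0.
```

The point for the IP debate: the generator's weights end up encoding the target distribution purely through the critic's scores, not through any stored copies of the training samples.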
  14. Must-read article: https://www.marktechpost.com/2022/11/29/artificial-intelligence-ai-researchers-at-uc-berkeley-propose-a-method-to-edit-images-from-human-instructions/ And again: the current models have been in development for less than 2 years, so expect much more to come. And to Serif: would it not be great to integrate something like that? It's all open source, so expect that everything that has an SDK will have it implemented by enthusiasts…
  15. To steer this topic back on track: I think we all agree that AI by definition cannot produce art, as art always needs an intention. On the other hand, 99% of graphic design is not art but producing nice stuff, and here AI will make a huge difference. Examples mentioned are real AI selections/replacements, being able to train your personal style for image variations, etc. All of the above needs integration into Serif's product line, and that was why I started this topic: I see open APIs getting other products the integration they need, but I hear SDKs/APIs are not even on Serif's roadmap. That concerns me greatly, as I personally think the current AI models will become so powerful that they will replace a lot of traditional workflows.
  16. I humbly disagree: you can train the models on any style or face desired (Dreambooth), you can create everything imaginable within seconds, and the quality is doubling every 4 weeks. Compare it with programming in Notepad versus Visual Studio: in both you can get results, but there is a reason nobody uses Notepad anymore… Maybe this is a good read: https://arstechnica.com/information-technology/2022/11/midjourney-turns-heads-with-quality-leap-in-new-ai-image-generator-version/ "It's almost too easy", and "Considering the progress Midjourney has made over eight months of work, we wonder what next year's progress in image synthesis will bring." Do not make the mistake of judging by the current stage; that is like the jokers who told everybody that electronic cameras would never make it because the first generation was worse than mechanical SLRs. The journey for cameras took 20 years; here we are talking software: it will not even take 20 more months.
  17. Here is another graphics editor that integrates AI: https://www.theverge.com/2022/11/8/23447102/photoroom-app-magic-studio-ai-image-tool-generation Time is wasting…
  18. Agreed, that is the point of this thread: give us an SDK to write plugins and extensions. Providing all of that from Serif alone would be hard, given that the needed expertise is extremely scarce on the market.
  19. Last entry from me on this topic; it really was a pleasure to discuss this with you all! Microsoft just announced that they will integrate the leading model (OpenAI's DALL-E) into their Designer app, and will invest a lot to make the integration as smooth as possible. It has started.
  20. Yes, the models are there, but who will integrate them seamlessly, even invisibly, into the workflows? If I select a brush and start drawing hair, should the AI not recognize that and automatically take guidance from my paint strokes, creating the hair according to the model? No text prompts, no extra interaction needed at all.
  21. Agreed, usability needs to be much better than text prompts, for example saving prompts as reusable parts to build your own library. The possibility to train the model with your own styles would also be needed. Outpainting and crude image-to-image are definitely needed as well: the latter you can try out at https://huggingface.co/spaces/huggingface/diffuse-the-rest
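The "save prompts as reusable parts" idea above can be sketched in a few lines; the class and method names here are hypothetical, nothing like this exists in Serif's apps today.

```python
# Minimal sketch of a reusable prompt-part library, as suggested above.
# All names are hypothetical illustrations, not any existing product API.

class PromptLibrary:
    """Store named, reusable prompt fragments and compose full prompts."""

    def __init__(self):
        self._parts = {}

    def save(self, name, fragment):
        """Register a reusable fragment under a short name."""
        self._parts[name] = fragment

    def compose(self, *names, subject=""):
        """Join a subject with saved fragments into one prompt string."""
        fragments = [self._parts[n] for n in names]
        parts = ([subject] if subject else []) + fragments
        return ", ".join(parts)

lib = PromptLibrary()
lib.save("style", "oil painting, thick brush strokes")
lib.save("light", "golden hour, soft shadows")
prompt = lib.compose("style", "light", subject="a lighthouse")
```

A host application could persist such a library per project, so a designer picks "style" and "light" chips instead of retyping prompt boilerplate.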
  22. No problem: [round ASCII-art face by Joan Stark, ASCII Art Archive]
  23. Big 👍 to DelN! I got nostalgic when you mentioned CorelDraw: back then we used their example eye graphic as a test for new GPUs, timing how long it took to render all the elements 🙂 And I think what you did in your career was an early example of what we see today: more and more people learning things like Python even in school, and therefore not being afraid, for example, to set up a local Stable Diffusion instance (if you are really new to that, I found https://github.com/cmdr2/stable-diffusion-ui to be extremely simple, as it mostly hides the Python prerequisites from you). For people with that skillset, AI-generated images can already be used in a production pipeline, as the APIs are all available and documented. This post is really aimed at the many people who do not want to dive into these technologies and only want to concentrate on the artistic process and the well-known tools: here an integration is mandatory and necessary, and it would actually be quite simple for a first minimal viable product, IF we can have an SDK...
  24. Let me try to be very specific (I got scolded for being too general here, and rightly so): Hi @Serif, here is my user story for your backlog! "As a graphic designer, I want to be able to train a Serif-integrated AI model with my own graphical style from a maximum of 10 training pictures. With that locally modified model, I want to be able to go to a customer and create/modify any number of pictures in this newly trained style on the fly according to specifications, for example: recreate the upper left corner of a given picture but put a stylish car into it seamlessly; draw out (extend the image content of) that image by another 30% seamlessly; generate brainstormed images on the topic "future cars" in the newly trained style; ...
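The "draw out another 30%" step in the user story hides a small but real piece of arithmetic: the extended canvas has to be rounded up to the tile size many diffusion pipelines expect (64 pixels is a common constraint for SD-family models; the function name is my own illustration).

```python
import math

def outpaint_size(width, height, extend_fraction=0.30, multiple=64):
    """New canvas size after extending the image content by a fraction,
    rounded up to the tile multiple many diffusion pipelines require."""
    def round_up(x):
        return math.ceil(x / multiple) * multiple
    return (round_up(width * (1 + extend_fraction)),
            round_up(height * (1 + extend_fraction)))
```

So a 1024x1024 source extended by 30% is not generated at 1331x1331 but at the next valid size up, and the host application would crop or keep the overshoot; hiding exactly this kind of detail is what an integrated tool is for.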