
  1. I honestly don't expect IP will be much of an issue for long: you can train a model to generate concepts while a second model checks the results (the GAN approach). That way you end up with a model that creates photorealistic images of anything without ever being fed real-world images. More effort, but entirely doable. Better to concentrate on the fact that everybody WILL have access to such tools, and they WILL create and modify images without needing any technical skills. And hopefully these tools will also come from Serif.
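To make that GAN idea concrete, here is a minimal toy sketch in pure NumPy; everything in it, from the one-parameter "networks" to the 1-D data, is illustrative only and resembles no real product's API. A tiny generator learns to mimic samples from a target distribution purely via the gradient signal of a logistic discriminator, without ever being fed the real data itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # "Real" data: samples from N(4, 1); the generator never sees these directly.
    return rng.normal(4.0, 1.0, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: an affine map from 1-D noise to a 1-D "sample".
g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)
# Discriminator: logistic regression, outputs P(input is real).
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)

def generate(z):
    return z @ g_w + g_b

def discriminate(x):
    return sigmoid(x @ d_w + d_b)

lr, batch = 0.02, 64
for _ in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real_batch(batch), 1.0),
                     (generate(rng.normal(size=(batch, 1))), 0.0)):
        grad = discriminate(x) - label        # dBCE/dlogit
        d_w -= lr * (x.T @ grad) / batch
        d_b -= lr * grad.mean(axis=0)

    # Generator step: push D(G(z)) toward 1, i.e. fool the discriminator.
    # Note the generator only ever receives gradients through D.
    z = rng.normal(size=(batch, 1))
    grad_fake = (discriminate(generate(z)) - 1.0) @ d_w.T
    g_w -= lr * (z.T @ grad_fake) / batch
    g_b -= lr * grad_fake.mean(axis=0)

samples = generate(rng.normal(size=(1000, 1)))
print(f"generated mean: {samples.mean():.2f}, real mean: 4.0")
```

Real image GANs do exactly this with deep convolutional networks instead of single affine maps, but the adversarial feedback loop is the same.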
  2. Must-read article: https://www.marktechpost.com/2022/11/29/artificial-intelligence-ai-researchers-at-uc-berkeley-propose-a-method-to-edit-images-from-human-instructions/ And again: the current models have been in development for less than two years, so expect much more to come. And to Serif: would it not be great to integrate something like that? It is all open source, so expect that everything with an SDK will have it implemented by enthusiasts…
  3. To steer back to the topic: I think we all agree that AI by definition cannot produce art, as art always needs an intention. On the other hand, 99% of graphic design is not art but producing nice things, and here AI will make a huge difference. Examples mentioned include real AI selections/replacements, being able to train your personal style for image variations, etc. All of the above needs integration into Serif's product line, and that is why I started this topic: I see open APIs in other products getting them the integration they need, but I hear SDKs/APIs are not even on Serif's roadmap. That concerns me greatly, as I personally think the current AI models will become so powerful that they will replace a lot of traditional workflows.
  4. I humbly disagree: you can train the models on any style or face desired (Dreambooth), you can create anything imaginable within seconds, and the quality is doubling every four weeks. Compare it with programming in Notepad versus Visual Studio: in both you can get results, but there is a reason nobody uses Notepad anymore… Maybe this is worth a read: https://arstechnica.com/information-technology/2022/11/midjourney-turns-heads-with-quality-leap-in-new-ai-image-generator-version/ "It's almost too easy", and "Considering the progress Midjourney has made over eight months of work, we wonder what next year's progress in image synthesis will bring." Do not make the mistake of judging by the current stage; that is like the jokers who told everybody electronic cameras would never make it because the first generation was worse than mechanical DSLRs. The journey for cameras took 20 years. Here we are talking software: it will not even take 20 more months.
  5. Here is another graphics editor that integrates AI: https://www.theverge.com/2022/11/8/23447102/photoroom-app-magic-studio-ai-image-tool-generation Time is wasting…
  6. Agree, that is the point of this thread: give us an SDK to write plugins and extensions. Providing all of that through Serif alone will be hard, given that the knowledge needed is extremely scarce on the market.
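Purely as a hypothetical illustration of what this thread is asking for (Serif publishes no such SDK today, and every name below is invented), a minimal plugin surface could be as small as a host that lets third-party code register named filters over an image buffer:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Pixels = List[List[float]]  # stand-in for a real image buffer type


@dataclass
class PluginHost:
    """Hypothetical host: maps filter names to third-party callables."""
    filters: Dict[str, Callable[[Pixels], Pixels]] = field(default_factory=dict)

    def register_filter(self, name: str, fn: Callable[[Pixels], Pixels]) -> None:
        self.filters[name] = fn

    def apply(self, name: str, image: Pixels) -> Pixels:
        return self.filters[name](image)


def invert(image: Pixels) -> Pixels:
    # Example third-party plugin: invert brightness (values assumed in 0..1).
    return [[1.0 - v for v in row] for row in image]


host = PluginHost()
host.register_filter("invert", invert)
print(host.apply("invert", [[0.0, 0.25], [1.0, 0.5]]))
# → [[1.0, 0.75], [0.0, 0.5]]
```

An AI inpainting or style-transfer plugin would plug into the same slot: it only needs pixels in and pixels out, which is exactly why an SDK would let enthusiasts wire up the open-source models themselves.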
  7. Last entry from me on this topic; it really was a pleasure to discuss this with you all! Microsoft just announced that they will integrate the leading model (OpenAI's DALL-E) into their Designer app, and will invest a lot to make the integration as smooth as possible! It has started.
  8. Yes, the models are there, but who will integrate them seamlessly, even invisibly, into the workflows? If I select a brush and start drawing hair, should the AI not recognize that, take guidance from my paint strokes automatically, and create the hair according to the model? No text prompts, no extra interaction needed at all.
  9. Agree, usability needs to be much better than text prompts, for example saving prompts as reusable parts to build your own library. The possibility to train the model with your own styles would also be needed. Outpainting and crude image-to-image are definitely needed as well; the latter you can try out at https://huggingface.co/spaces/huggingface/diffuse-the-rest
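As a hypothetical sketch of the "save prompts as reusable parts" idea (the class and method names are invented, not any real tool's API): named fragments that compose into a full prompt, so a style you like once can be reapplied with one call.

```python
class PromptLibrary:
    """Toy prompt library: store named fragments, compose them on demand."""

    def __init__(self):
        self._parts = {}

    def save(self, name, fragment):
        # Store a reusable fragment under a short name.
        self._parts[name] = fragment.strip()

    def compose(self, subject, *part_names):
        # Join the subject with the chosen saved fragments.
        pieces = [subject] + [self._parts[n] for n in part_names]
        return ", ".join(pieces)


lib = PromptLibrary()
lib.save("style", "ink sketch, heavy cross-hatching")
lib.save("light", "soft rim lighting")
prompt = lib.compose("portrait of an old sailor", "style", "light")
print(prompt)
# → portrait of an old sailor, ink sketch, heavy cross-hatching, soft rim lighting
```

A real integration would of course persist the library to disk and feed `prompt` to the image model, but the point is that the user never retypes a style they already refined.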
  10. No problem:

          .-""""""-.
         .'        '.
        /   O    O   \
       :              :
       |              |
       :  ',      ,'  :
        \  '-......-' /
         '.        .'
           '-......-'

      Joan Stark; ASCII art archive
  11. Big 👍 to DelN! I got nostalgic when you mentioned CorelDraw; back then we used their example eye graphic as a test for new GPUs, timing how long it took to render all the elements 🙂 And I think what you did in your career was an early example of what we see today: more and more people learn things like Python even in school and therefore are not afraid, for example, to set up a local Stable Diffusion instance (if you are really new to that, I found https://github.com/cmdr2/stable-diffusion-ui to be extremely simple, as it mostly hides the necessary Python prerequisites from you). So for people with that skill set, AI-generated images can already be used in a production pipeline, as the APIs are all available and documented. This post is really aimed at the many people who do not want to dive into these technologies and only want to concentrate on the artistic process and the well-known tools. For them an integration is mandatory, is necessary, and would actually be quite simple for a first minimum viable product, IF we can have an SDK...
  12. Let me try to be very specific; I got scolded for being too general here, and rightly so. Hi @Serif, here is my user story for your backlog! "As a graphic designer, I want to be able to train a Serif-integrated AI model on my own graphical style from a maximum of 10 training pictures. With that locally modified model, I want to be able to go to a customer and create/modify any number of pictures in this newly trained style on the fly according to their specifications." Examples: recreate the upper left corner of a given picture but put a stylish car into it seamlessly; draw out (extend the image content of) that image by another 30% seamlessly; generate brainstormed images on the topic "future cars" in the newly trained style ...
  13. Nah, I am just lazy; I only dabble in the arts as a hobby and thus need easy tools. But my professional life very much focuses on AI: I lead global projects building AI models for the life science industry, and I see a frightening change in the people coming out of university. They simply DO NOT ACCEPT that there is a long, manual way that gives you 100% of the desired result in many hours of time. Instead there should be an app-like way to get your Pareto 80% with two clicks: good enough is the way to go agile. And with that approach, a huge ecosystem of graphic design tools was created on the various mobile devices, with functionality that is astonishingly good nowadays at only seconds of effort. The ease with which these kids ("kids" in my eyes, I am over 50 now) accept the wonders of software such as today's AI models, only to request more of it, and faster, has led to the disruptive push of AI into every industry. Mark my words: either all the software giants (and mini-giants like Serif) adapt, or they will die out with only us dinosaurs doing things "the right" way. I myself am very impressed by creative minds such as Kevin Hess, who shows us that an author with self-proclaimed small graphical capabilities can produce a full graphic novel on his own with the new AI tools: https://beincrypto.com/ai-art-worlds-first-bot-generated-graphic-novel-hits-the-market/ And sure, one can complain about the style and limitations, but Midjourney (the AI he used) was only released in mid-July this year. Just imagine what these tools will do 2, 5, or 10 years from now, given that the growth in AI capability has been exponential over the last decade. I would dream of Serif trying to spearhead that development with a maximally aggressive move. In my professional opinion (admittedly one with experience only from other industries), Serif does not have the time to wait another year to move forward.
  14. No, that is plain wrong: AI models work a bit like our brain, building billions of weighted connections through learning. With these, they are capable of deriving new solutions to problems they were never explicitly trained on. Since they lack context, it is the task of the data science engineers to steer them; the constantly improving models are the result. Say I train a model on 90 different dog breeds: it will be able to identify all 300 or so of them by deducing the shared characteristics. It gets better: reinforcement learning models constantly invent new ways to solve a problem or draw a picture, rate the result, and learn from it. The Alpha models from DeepMind beat every chess and Go player on the planet, yet they were never fed any strategy books; they derived their strategies themselves from billions of matches against themselves… Better believe that well-trained AI models are creative in the everyday meaning of the word; we are there to give them context.
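As a toy illustration of that "derive solutions it was never trained on" point (nothing here resembles a production model): a plain perceptron trained on two small clusters correctly labels points it has never seen, because it learned the separating rule rather than memorizing the examples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data: 20 points per class around (-2, -2) and (+2, +2).
a = rng.normal(-2, 0.5, size=(20, 2))
b = rng.normal(+2, 0.5, size=(20, 2))
X = np.vstack([a, b])
y = np.array([0] * 20 + [1] * 20)

# Plain perceptron training: nudge the weights on every mistake.
w, bias = np.zeros(2), 0.0
for _ in range(50):
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + bias > 0 else 0
        w += (yi - pred) * xi
        bias += (yi - pred)

# Points the model has never seen; it still classifies them correctly
# because it learned the separating direction, not the examples.
unseen = np.array([[3.0, 2.5], [-1.5, -3.0]])
labels = [1 if p @ w + bias > 0 else 0 for p in unseen]
print(labels)  # → [1, 0]
```

Scale the same principle up by many orders of magnitude (billions of weights instead of three) and you get models that recognize dog breeds, or board positions, they were never explicitly shown.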
  15. No, I don't validate anything with the simple fact that tasks will be replaced by AI. That is a no-brainer; or do you still use paper maps when driving around, or do you rely on a navigation system? Things will simply change: with AI integrated into great tools such as Serif's, you can work fast, produce more, and automate simple tasks. Just imagine training an AI on your unique style and producing your own graphic story within days. The whole point of this forum post is that I believe AI will greatly help designers, and I don't want Serif to miss that train, because people will simply leave.