AI Art Weekly #12
Hello there my fellow dreamers, welcome to issue #12 of AI Art Weekly 👋
If you enjoy my weekly ramblings, please consider buying me a coffee so I can stay awake and keep the good stuff coming 😉
Here are some of this week's highlights:
- Stable Diffusion v2.1, Audio-Driven Co-Speech Gesture Video Generation, “Snapchat” Filters on Steroids, and Paint by Example.
- Interview with _MemoryMod_.
- Super simple (almost no setup required) Dreambooth and Stable Diffusion web interfaces on HuggingFace
Let’s jump in.
Cover Challenge 🎨
This week's “dreamscape” challenge got 55 submissions from 31 artists and the community decided on the final winner. Congratulations to @_DrFetus_ for making the final round 🥳 And as always, a big thank you to everyone who contributed!
Today's the first day of snowfall in Switzerland, so what better theme for the next cover challenge than “snowfall”? Think tranquility, calmness, solitude. The prize is another $50. The rulebook can be found here and images can be submitted here.
I’m looking forward to all of your submissions 🙏
If you want to support the newsletter, this week's cover is available for collection on objkt as a limited edition of 10 for 2.75ꜩ a piece. Thank you for your support 🙏😘
Reflection: News & Gems
Stable Diffusion v2.1
Just two weeks after releasing Stable Diffusion v2.0, Stability AI has released v2.1. The new model has been trained on a more diverse and wide-ranging dataset with a less aggressive filter for adult content. This results in improved image quality for architecture, interior design, wildlife, and landscape scenes, as well as better anatomy and hands and more diverse art styles. The model also supports non-standard resolutions for wider aspect ratios, which is pretty neat. v2 also supports a new prompting style, and Stability released a Prompt Book to teach you how to create different types of images using v2.
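A quick note on why “non-standard resolutions” is more than a checkbox: Stable Diffusion's VAE downsamples images by a factor of 8, and the UNet prefers latent sizes that stay divisible by 8, so image dimensions are typically snapped to multiples of 64. A minimal sketch of that snapping logic (the helper name and round-down policy are my own illustration, not anything from Stability AI):

```python
def snap_sd_resolution(width: int, height: int, multiple: int = 64) -> tuple[int, int]:
    """Round a requested resolution down to the nearest valid SD dimensions.

    The VAE downsamples by 8x and the UNet wants latent dimensions divisible
    by 8, so image dimensions are usually kept at multiples of 64.
    """
    snap = lambda v: max(multiple, (v // multiple) * multiple)
    return snap(width), snap(height)

# A 21:9-ish ultrawide request gets snapped to the nearest valid size:
print(snap_sd_resolution(1088, 466))  # -> (1088, 448)
print(snap_sd_resolution(768, 768))   # -> (768, 768), the v2 square default
```

In practice, tools like the diffusers pipelines and most web UIs do this clamping for you, but it explains why a requested 466-pixel height silently becomes 448.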
More ChatGPT examples
This week, my personal bubble was bursting with excitement as I saw the incredible ways people were using ChatGPT. From generating captivating stories to creating thought-provoking art, ChatGPT has proven itself to be a valuable tool for anyone looking to tap into their creative potential. Here are my top five favourite examples of how creatives have been using ChatGPT this week:
Overall, ChatGPT has proven to be an invaluable tool for anyone looking to unleash their creativity and push the boundaries of their artistic abilities. Whether you’re a writer, artist, musician, designer, or filmmaker, ChatGPT has something to offer you. Give it a try and see how it can inspire and enhance your creative process!
Score Jacobian Chaining
You might have come across some spinning hamburger or action figure this week (if not, take a look below). These were created with a newly open-sourced model called Score Jacobian Chaining (SJC) that utilizes pretrained 2D diffusion models to generate 3D renderings. That means you’re not bound to one specific model like Stable Diffusion, but it also means you can use custom Dreambooth models to generate customized results. There is a HuggingFace demo and a Google Colab notebook in case you want to try it out.
ANGIE: Audio-Driven Co-Speech Gesture Video Generation
With all these amazing new papers coming out every week, it's hard not to build up a resistance to inventions that would have blown my mind 3 months ago. ANGIE is a novel framework that can generate high-fidelity co-speech gesture video sequences from audio input and a single image. How is this relevant for art, you might ask? Well, imagine creating a three-quarter portrait of a character in MidJourney, generating some voice lines with GPT, voicing them, and then watching your character come to life, speaking to you with matching gestures based on what they're saying. That's pretty mind-blowing to me. We can only hope that they release their code.
Stitch it in Time: Snapchat Filters on Steroids
Now imagine you've created your gesturing, speaking AI character from above and you want to fine-tune its facial appearance. Stitch it in Time makes it possible to change the age, expression, and gender of a person in a video. Suddenly your character doesn't have a soulless expression anymore but is smiling and looks 20 years younger or older.
ObjectStitch and Paint by Example
Say goodbye to Photoshop™️. ObjectStitch lets you composite objects into an image scene by transforming the object's viewpoint, geometry, color, and shadow, all without manual labeling.
Paint by Example does something similar by allowing you to draw a mask onto the source image and feed it a reference image, which then gets painted into the masked area, taking the surrounding context into account. There is a HuggingFace demo in case you want to try it out.
In The Dream Tapestry installation, museum guests visualize their dreams using text-to-image AI. The AI then combines those dreams together. It’s the first interactive DALL-E experience at a museum. By @CitizenPlain.
@ThoseSixFaces created a cool AR target tracking demo using a Pokémon card. Tilting the card morphs Ditto into various types of Pokémon which got diffused using Stable Diffusion.
It’s always cool to see AI art in the wild. @lerchekatze produced a video input animation for a DJ set at an @ArtBasel party this week.
Imagination: Interview & Inspiration
Today we talk to AI artist _MemoryMod_. Mod caught my eye a few weeks ago with their beautiful Twitter profile picture and stunning Cyberpunk art. After I found out how those images got created, I knew I wanted to publish an interview with Mod. Enjoy!
[AI Art Weekly] Hey Mod, what’s your background and how did you get into AI art?
I am a CGI artist with a background in classical painting and sculpture. I have done everything: from 3D renderings for products, to building VR environments, to digital sculpture for Marvel movies and Netflix shows. Currently I manage projects for an artist doing large-scale sculpture installations around the world. I got into AI art by dabbling with early Google Colab notebooks and then got hooked when the local install of Stable Diffusion was released.
[AI Art Weekly] Do you have a specific project you’re currently working on? What is it?
I am currently working on my Genesis series which I describe as an AI fever dream envisioning the merge of humanity and machines. The idea is inspired from the movies and books I grew up with such as Akira, Ghost in the Shell, and Snowcrash. The fact that the images are created by AI perfectly plays into the aesthetic of the world building I’m exploring.
In addition, I have a few projects on the back burner that have gained recognition via Claire Silver's second AI art contest. One piece won 3rd place, and another was a finalist.
[AI Art Weekly] What does your workflow look like?
My current workflow uses Stable Diffusion 1.4 and focuses on custom model training. Experimentation and iteration are the obvious keys here. I sometimes end up training models on 5-7 different image sets and then experiment with checkpoint merging, combining the results with other trained models.
Usually, I have a very basic prompt and use it during the training phase so that I can see how the model is changing from version to version. Then I use the x/y plot in Automatic1111's webui to iterate and attempt to find outputs that fit the aesthetic I am going for. Once I find a model and prompt that is doing what I want (or something unexpected that I like), I then iterate further using the x/y plot with smaller, more specific changes.
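For readers who haven't used it: the x/y plot script essentially renders one image for every combination of two parameter lists and arranges them in a grid. A hypothetical sketch of the enumeration it performs (the `generate` callable here is a stand-in that just returns a label, not Automatic1111's actual API):

```python
def xy_grid(x_values, y_values, generate):
    """Build a grid by calling `generate` for every (x, y) combination,
    the way Automatic1111's x/y plot script sweeps two parameters.

    Rows correspond to y_values, columns to x_values.
    """
    return [[generate(x, y) for x in x_values] for y in y_values]

# Example sweep: vary CFG scale along x and sampling steps along y.
grid = xy_grid(
    x_values=[7.0, 9.0, 11.0],
    y_values=[20, 50],
    generate=lambda cfg, steps: f"cfg={cfg}, steps={steps}",
)
for row in grid:
    print(row)
```

Swapping one of the axes for a list of prompt variations gives you the “smaller, more specific changes” iteration Mod describes, with every variant visible side by side.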
[AI Art Weekly] What is your favourite prompt when creating art?
Cyberpunk, haha! This was a prompt I used for my Genesis series.
I've found prompting on trained models to be different than prompting on the vanilla model. Image training in Dreambooth seems to evolve the entire model's response to prompts and, as a result, my prompts are very simple. For instance — “portrait of a girl as cyberpunk” — and then I let the training take the reins. Sometimes, the prompts are weird broken sentences that get interesting results.
[AI Art Weekly] How do you imagine AI will be impacting society in the near future?
Artists throughout history have employed others to create parts of or their entire artwork for them. During the Renaissance apprentices would paint parts of the masters’ paintings in order to assist in commissions. In more recent decades, some of the biggest artists in the world don’t even touch a brush or define a curve of a sculpture with their own hands. Instead, some have become more akin to art directors having tens to hundreds of other people create the work for them. In some ways, this can be seen as an advanced form of prompting! AI art, I think, can be seen as a form of this — with the peculiarity that this particular form is accessible to the general population. AI plays the role of the hired hand executing the art itself. The impact will be widespread, especially because the technology continues to advance. It is easy to foresee AI art changing a few art fields in the near future (concept art, for instance) and changing the way people generally make art and employ references.
[AI Art Weekly] Who is your favourite artist?
This past year I’ve been fascinated with the paintings of Sainer Etam, especially after seeing his show in Rouen, France over the summer. There are so many interesting aspects to his work, from composition and color, to the blending of abstract elements into representational, that are so well executed and inspiring.
[AI Art Weekly] Anything else you would like to share?
Needless to say, I’m looking forward to the future of AI art and technology. It is an exciting time to be exploring and creating with all these tools and there are a lot of people doing amazing things, and more to come I’m sure.
It's getting cold outside. Each week we share a style that produces some cool results when used in your prompts. This week's style is based upon a finetuned Stable Diffusion model that generates neat crocheted wool images. The prompt “3d crocheted wool” also creates a cool style in MidJourney (example below and here). Trained with Dreambooth by @plasm0.
Creation: Tools & Tutorials
These are some of the most interesting resources I’ve come across this week.
ChatGPT plugins are popping up like wildfire. This one is my favourite since it lets me send prompts to ChatGPT through the browser's Omnibar or by highlighting text.
@multimodalart set up a @huggingface space to train Dreambooth models. This is likely the easiest way to train your custom SDv1 and SDv2 models. Simply duplicate the Space and you're ready to go.
@camenduru published the Automatic1111 WebUI on a @huggingface space. No need to own a local GPU or set up a cloud hosting service. Simply duplicate the space to run it privately and to use your own preferred checkpoints or Dreambooth models.
Another Prompt Book by @learn_prompting, but this one is not exclusively for Stable Diffusion but in general for LLMs like ChatGPT. If you’re having trouble getting the results you want, this might do the trick for you.
And that, my fellow dreamers, concludes yet another AI Art Weekly issue. Please consider supporting this newsletter by:
- Sharing it 🙏❤️
- Buying me a coffee to stay awake for the next issue
- Following me on Twitter: @dreamingtulpa
- Leaving a Review on Product Hunt
- Using one of our affiliate links at https://aiartweekly.com/support
Reply to this email if you have any feedback or ideas for this newsletter.
Thanks for reading and talk to you next week!