AI Art Weekly #5
Welcome to issue #5 of AI Art Weekly, a newsletter by me (dreamingtulpa) covering some of the latest happenings in the AI art world.
Each issue contains three main sections: Reflection, Imagination and Creation (check out one of the earlier issues for the meaning behind those).
Before we begin, I want to thank the AI art community for reading and engaging with my newsletter ❤️. We are currently at 280+ subscribers and slowly but steadily increasing in numbers. I've been doing this for over a month now, I enjoy the process, and I will continue to do so. But it is very time consuming, taking me roughly 8 hours a week to put together.
So I’m looking for ways to finance some of the work that goes into this newsletter. If you know anyone or a company who’s willing to sponsor future issues, please send them my way.
Aside from sponsorships, I'm also looking into ways for the community to financially support this newsletter directly – if they wish to do so. If you can't or don't want to, that's totally fine: the newsletter is free and will remain free for as long as I'm capable of doing this. You can also support me by sharing it with your friends and on social media.
But for those who are willing, one experiment I want to try is minting the weekly AI Art Weekly covers and offering them to the community. In fact, I already minted this week's cover on the Tezos blockchain as a collection of 100 for 5ꜩ a piece. So for the price of a large coffee, you'll be able to help me and this newsletter keep growing, and in return collect some hopefully cool art. Win-win.
If you have any other ideas, I'm open to suggestions. But that's it for now. Thanks again for sticking with me, and talk to you next week ✌️
– your dreaming Tulpa
If you want to support the newsletter and collect some cool art, this week's cover is available for collection on objkt as a limited edition of 100 for 5ꜩ a piece. Thank you for your support 🙏😘
Let's start the week off with some controversy and get this out of the way: RunwayML (pre-)released the Stable Diffusion v1-5 checkpoint to the public. It seems legit, as the release is from the same authors as the v1-4 checkpoint. But apparently it wasn't theirs to give away, so Stability AI issued a takedown request (which, as of the time of writing, has been withdrawn again). Shortly after that, Daniel Jeffries, the CIO of Stability AI, published a blog post about why v1-5 hasn't officially been released yet. I quote:
We are forming an open source committee to decide on major issues like cleaning data, NSFW policies and formal guidelines for model release. […] So when Stability AI says we have to slow down just a little it’s because if we don’t deal with very reasonable feedback from society and our own communities then there is a chance open source AI simply won’t exist and nobody will be able to release powerful models.
It's still not exactly clear what happened, and I won't jump to any conclusions here, so let's leave it at that. Statements like this do feel a bit weird to me though:
We are committed to open source at our very core.
Open source isn't just about open source models; it's also about open communication and an open process of building the software. But that (apparently) only happens behind closed doors at the moment. Meanwhile, they have no issue making v1-5 available within their closed-source DreamStudio interface, to which they added CLIP guidance this week. It's an exciting new feature that improves image coherence, inpainting/outpainting, and overall quality for complex prompts. But to me it seems they think the community can't be trusted with it just yet. A bummer, but as quickly as things are moving, hiccups should be expected, and I think things will clear up soonish.
Let’s switch topics. Let’s talk about music.
Mubert released an MVP to test text-to-audio generation. I played around with it yesterday and it produces some cool tracks. But don't be confused: sounds are not generated directly from the text prompt. The prompt is first translated into tags, and Mubert then generates sounds related to those tags.
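That tag step can be pictured with a small sketch. Everything here – the tag vocabulary, the matching logic, and the function names – is my own illustration of the idea, not Mubert's actual API:

```python
# Illustrative prompt-to-tags pipeline (hypothetical vocabulary and helpers,
# not Mubert's real system): match known tags against the prompt text,
# then "generate" a sound per matched tag.

KNOWN_TAGS = {"ambient", "techno", "rain", "calm", "piano", "dark"}

def prompt_to_tags(prompt: str) -> list[str]:
    """Map a free-text prompt to the subset of known tags it mentions."""
    words = {w.strip(".,!?").lower() for w in prompt.split()}
    return sorted(KNOWN_TAGS & words)

def generate_audio(tags: list[str]) -> str:
    """Stand-in for the generation step: return one loop label per tag."""
    return " + ".join(f"<{t}-loop>" for t in tags)

tags = prompt_to_tags("Calm ambient rain on a rooftop")
print(tags)                  # ['ambient', 'calm', 'rain']
print(generate_audio(tags))  # <ambient-loop> + <calm-loop> + <rain-loop>
```

The point is simply that the free-text prompt is lossy-compressed into tags first, so two very different prompts that map to the same tags will sound alike.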
Microsoft is also researching AI music. They released a new paper this week called MuseFormer. I didn't have time to take a closer look, but a quick glance showed me that their model can generate MIDI files, which is pretty cool.
And to finish things off, there were not one, not two, but three research papers on image editing with pure prompts announced this week. Google released prompt-to-prompt, then there was one called Imagic, and another one called UniTune. Except for the one from Google, there is no code yet. But the results look super cool.
In this week's interview of AI Art Weekly we talk to Artificial Bob, an AI artist who "photographs" madness with words. He was featured in the Culture3 web magazine and runs his own AI Art newsletter where he shares some beautiful art findings he comes across during the week.
[AI Art Weekly] Bob, what’s your background and how did you get into AI art?
I've been a writer since I was able to write, so I've always loved creating worlds with my words. During a period of writer's block about three and a half years ago, I turned to visual art.
I began painting with acrylics, then oil paints, watercolors and charcoal. It wasn't long before I had a room full of canvases; that's when I decided to try digital art. I started with a scene and a short animation in Unreal Engine and then spent most of my time in Blender. It was like making collages in 3D space. I also really enjoyed digital sculpting, especially making masks.
I tried a lot of things. Glitch art, for example, has a special place in my heart because it's actually a kind of protest against the pathetic beauty of perfection.
Then I got into AI art almost two years ago (that’s like forever in this world) thanks to Artbreeder. Before the age of text2image, it was the most well-known AI tool. Even then I was fascinated by it, but there was still something missing because it was practically just “remixes” of existing images. It wasn’t until I opened the first VQGAN+CLIP colab notebook that I really felt the power of this new tool in the art world.
[AI Art Weekly] How do you think AI animation will evolve in the near future?
There will definitely be an increase in resolution and therefore an increase in the amount of detail in each artwork.
AI tools will penetrate more areas of the art form. I'm especially looking forward to them reaching the music world. There are already music generating tools, and they are often hideously overpriced. I can't wait to see what Emad has in store for us in this regard. Hopefully it will be a similar success story to the one Stable Diffusion had in the AI art world.
In general though, it’s hard to say where this will go in the near future. Just look at how much progress we’ve seen since the first text2image tool.
[AI Art Weekly] Which projects are you currently keeping an eye on?
Usually whatever @StLaurentJr and @RedruMxNFT are working on.
Aside from those two, I don't follow any particular projects, only individual works that make me stop for a while.
[AI Art Weekly] What is your favourite prompt when creating art?
I love to work with two things – emotions and muscles. In both cases it’s amazing to see how AI will interpret my idea.
You can see a lot of desperation in my work because I like to describe the state of society this way. These are the actual words I use to get some deeper emotion inside of portraits.
[AI Art Weekly] What does your workflow look like?
Most of my art is a critique of society/elites/“superior men”/toxic culture. So my inspiration comes mostly from myself and all the ideas I want to express and the issues I want to highlight. If you start shouting around, you’ll only be considered insane. But if you show them, they’ll watch.
As for the process of creating my art, I try to do most of it directly in the AI tools. The Colab notebooks are amazing for this task, and these days I use PlaygroundAI a lot. You can play endlessly with one image, a fixed seed, and small prompt changes.
“This character would look amazing with a hat – I’ll change the prompt to add a hat to this character”, and so on. I play a lot with expressions to deliver the idea I have in my head.
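Bob's fixed-seed approach can be sketched in code. This is a conceptual illustration with a hypothetical `render` stand-in, not a real generator or PlaygroundAI's API:

```python
import random

# Toy model of the fixed-seed workflow: in real diffusion tools the seed
# fixes the initial noise, so small prompt edits change only what the edit
# touches instead of producing an entirely new picture.

def render(prompt: str, seed: int) -> tuple[float, int]:
    """Hypothetical generator: the seed fixes the 'noise', the prompt adds its own effect."""
    noise = random.Random(seed).random()  # identical for identical seeds
    return (noise, len(prompt))

base     = render("portrait of a character", seed=42)
with_hat = render("portrait of a character, wearing a hat", seed=42)
reroll   = render("portrait of a character", seed=7)

print(base[0] == with_hat[0])  # True: same seed, same starting noise
print(base[0] == reroll[0])    # False: new seed, entirely new image
```

With a real diffusion pipeline the same idea applies: seed the noise generator once, then iterate on the prompt.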
I usually end up with something I already like, and then play around with color, grain, and mood in Affinity Photo.
Otherwise I have a file full of different versions that I mix together – this one has a nice hand, this expression is exactly what I envisioned – I think you know what I mean.
When it comes to animations, I've changed my workflow over time based on my experience. At first, I had everything ready to go – like 6 different prompts with the frames they would start on specified in advance. But I dropped that, because for me it's much better to stay flexible. I enter the first prompt and, if I like the first frame, I let it run. At some point I stop the rendering and see what the animation looks like and where it's going. If I don't like it, I delete a few frames and repeat the process. Then I add a second prompt and modify the scene. And I do that all the way through, so I have a lot more control over what's actually happening.
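The render/review/rewind cycle Bob describes can be sketched as a tiny loop. The helper names here are hypothetical, purely to illustrate the idea; keyframed tools instead take a full prompt schedule like `{0: "first prompt", 120: "second prompt"}` up front:

```python
# Conceptual sketch of the iterative animation workflow: let a prompt run,
# inspect the result, delete frames that went wrong, then continue with the
# next prompt. All helper names are illustrative.

frames: list[str] = []    # rendered frames, newest last
prompts: list[str] = []   # prompts in the order they were applied

def render_frames(prompt: str, count: int) -> None:
    """Let the current prompt run for `count` frames."""
    prompts.append(prompt)
    frames.extend(f"{prompt} (frame {len(frames) + i})" for i in range(count))

def rewind(n: int) -> None:
    """Didn't like where it was going: delete the last n frames and retry."""
    del frames[-n:]

render_frames("a calm forest", 10)    # first prompt, let it run
rewind(3)                             # inspect, discard the bad frames
render_frames("a burning forest", 5)  # second prompt modifies the scene

print(len(frames))  # 12
print(prompts)      # ['a calm forest', 'a burning forest']
```

The trade-off versus a pre-specified schedule is exactly the one Bob names: less up-front planning, much more control at every step.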
[AI Art Weekly] Who is your favourite artist?
My favourite artists have created their own style with AI tools and every time I see their artwork it’s immediately clear who the artist is.
@DinBurns, @StLaurentJr, @stephanvasement, @phosphor, @RedruMxNFT, @nvnot_
[AI Art Weekly] Anything else you would like to share?
I can see a lot of despair around me, and it's in fact part of the AI hate too. People tend to "attack" when they feel despair. If they spent years mastering something and can now see that everyone is able to express themselves – that there's no longer a "skill barrier" – they feel the despair of possibly being replaced if they actually have nothing new to give to the art world (doing the same thing over and over).
And on the other side of the fence, there's the desperation of those who cannot create something even with the complete freedom of AI tools. They want to talk, but they have nothing to tell.
So I would like to say to all artists that they shouldn’t let anyone interfere with their work and simply ignore criticism that is not constructive.
We are part of something new and we have the freedom to create like never before in the history of mankind, isn’t that something amazing?
This week's featured style was provided by @weirdmomma, who was so kind as to share her entire prompt with us:
A scarecrow in a field at night. Block print, letterpress. Poster art by Mary Blair, cubo-futurism, poster art, somber color palette, concert poster, hatch show print.

I played around with some variations of the prompt and found that I quite like the style of "mary blair" by itself. Have fun!
These are some of the most interesting resources I’ve come across this week.
If you are into pixel art, I've got you covered. @PublicPrompts released an SD model trained on pixel art with DreamBooth. The results look pretty cool and usable as game assets, for instance.
Speaking of custom models: @proximasan trained an SD model with DreamBooth to make some cute icons. The results look pretty cool as well!
@pharmapsychotic published CLIP-Interrogator on Hugging Face, where it got assigned a free T4 GPU. Ever wondered how AI would describe your selfie? Now you can find out for free.
An interesting article about how it’s possible to generate 8 images in 8 seconds using Google Colab and a TPU runtime.
Two weeks ago I shared a video from Computerphile on how AI image generators work. This week we go a bit deeper and take a look at AI Image Generation with Stable Diffusion.
And that, my friends, concludes this week's AI Art Weekly newsletter. Please consider subscribing and sharing if you liked it, and let me know on Twitter if you have any feedback. The more people see this, the longer I can keep this up and the more resources I can put into it.
Thanks for reading and see you next week!