AI Art Weekly #16
Happy new year my fellow dreamers 🎉🥳🎊 and welcome to issue #16 of AI Art Weekly 👋
I hope you had a good start to the new year. Let’s jump right in; here are some of this week’s highlights:
- Muse, a new text-to-image transformer model by Google
- Depth to Video models
- Interview with developer and AI artist Stephen Young
- CivitAI, a hub for custom Stable Diffusion models and embeddings
Cover Challenge 🎨
This week’s “utopia” challenge got 61 submissions from 38 artists, and the community decided on the final winner. Congratulations to @spinnybuns for making the final round and creating such a beautiful cover 🥳 And as always, a big thank you to everyone who contributed!
The theme for the next challenge is “emotions”. Happiness, sadness, anger, fear, surprise, disgust, love, nostalgia, anything goes. The prize is another $50. The rulebook can be found here and images can be submitted here.
I’m looking forward to all of your submissions 🙏
If you want to support the newsletter, this week’s cover is available for collection on objkt as a limited edition of 10 for 3ꜩ apiece. Thank you for your support 🙏😘
Reflection: News & Gems
Muse – a new text-to-image Transformer model
We haven’t heard or seen much from Google since DALL·E 2, Stable Diffusion and ChatGPT took the world by storm last year. Although their Imagen models (text-to-image and text-to-video) look impressive, we can’t actually use them yet. Unfortunately, that doesn’t change with Muse, but the results look interesting. Unlike MidJourney, DALL·E and Stable Diffusion, Muse is not a diffusion model but a transformer model. It supports inpainting and outpainting as well as mask-free image editing, and it can generate a 512x512 image in as little as 1.3 seconds. The most impressive feature for me personally is its ability to generate coherent text. Just take a look at the examples.
Depth to Video
Adding depth to images is pretty straightforward these days, but what about adding depth to videos? Well, two new papers, RoDynRF and HyperReel, have you covered. Adding depth to videos not only lets you rotate the camera, but also pan it up and down and move closer into or further out of a scene (which basically lets you add a dolly zoom to all of your footage). HyperReel in particular looks amazing, as it is able to render 1024x1024 videos at 18 fps while letting you move the camera around in real time 🤯
We finished last year with Point·E by OpenAI and we start this year with Dream3D, yet another text-to-3D model. But compared to other text-to-3D models this one first generates a high-quality 3D shape from the input text, the shape is then used as input for a NeRF (neural radiance field) before it gets textured by a text-to-image diffusion model.
@enpitsu fine-tuned Stable Diffusion on 10,000 images of Japanese Kanji characters and their meanings. The model came up with “fake Kanji” for novel concepts like Linux Skyscraper, Pikachu, Elon Musk, Deep Learning, YouTube, Gundam and Singularity, and they kind of make sense.
@brockwebb created a short explanatory video about how “super connectors” in a network tie several things together, and thus why certain words and names like “Greg Rutkowski” are becoming shortcuts for producing better and more coherent concepts.
@somewheresy shared a 16 bar jungle loop which was partially created using Riffusion generated samples. I’m pretty excited for generative music to create an ongoing soundtrack of my daily life.
I don’t usually like to promote drama, but this screenshot by @reddit_lies perfectly portrays how silly anti-AI-art gatekeeping is. We’ve come full circle 🤡
Imagination: Interview & Inspiration
In this week’s issue of AI Art Weekly, we interview developer and AI artist Stephen Young, also known as @KyrickYoung. I first discovered Stephen while exploring @proximasan’s Parrot Zone Notion notebook. He is the creator of Prompt Parrot, a custom-trained GPT-2 model that can generate new prompts based on the ones it was trained on, using either Stephen’s prompts or your own.
[AI Art Weekly] Hey Stephen, what’s your background and how did you get into AI art?
I’m a software engineer and amateur photographer. I’ve also dabbled in classic generative art and game development before. Nothing super fancy, just basic simulations and fractals. I got into AI art through the big-sleep repo back in July 2021. It was fun, but it didn’t resonate with me at the time. I just couldn’t see the potential. I underestimated it because I didn’t understand what it was and it ran poorly on my PC (because I didn’t know what I was doing).
Then later on in December, a friend told me about Wombo. The speed of that app allowed me to experiment quickly and it really blew my mind. It opened my eyes to the potential of AI. It’s funny to say that now because the VQGAN images are so incoherent, but it was my gateway into AI art. So from there I joined Twitter for AI news, got into the Colab notebooks, Disco Diffusion and so on. It was an awesome time to join right before the scene blew up with MidJourney, Dalle-2 and Stable Diffusion.
[AI Art Weekly] Do you have a specific project you’re currently working on? What is it?
I don’t have any big projects right now. For me, art is a relaxing hobby, and I like to keep it low key. So I don’t put project labels on anything I do. It would feel too pressuring! But I’ve been revisiting Disco Diffusion and incorporating outputs with MidJourney. It’s been very nostalgic (for a time 10 months ago lol).
[AI Art Weekly] How and why did you come up with Prompt Parrot?
One day back in March of last year I was looking for new things to try. And the idea to fine-tune a language model for prompts came to me. A language model reflecting variations on my prompts back to me to feed into another AI seemed so cool in a sci-fi kind of way. I wasn’t sure it was feasible on such a small dataset (maybe 50-200 lines of text). So of course I had to try it and it worked pretty well!
Editorial note: The Prompt Parrot colab is an excellent example of training a GPT-2 language model in case you’re interested.
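For the curious, that kind of workflow can be sketched in a few lines with the Hugging Face transformers library. This is a minimal, hypothetical sketch, not Stephen’s actual notebook code: each prompt is terminated with GPT-2’s `<|endoftext|>` token so the model learns prompt boundaries, then the stock `gpt2` checkpoint is fine-tuned with the standard `Trainer` API. Function names and hyperparameters here are illustrative assumptions.

```python
def format_prompts(prompts, eos="<|endoftext|>"):
    """Terminate each non-empty prompt with GPT-2's end-of-text token
    so the model learns where one prompt ends and the next begins."""
    return [p.strip() + eos for p in prompts if p.strip()]


def finetune_gpt2(prompts, output_dir="prompt-model", epochs=10):
    """Fine-tune stock GPT-2 on a small prompt list. Defined but not
    called here, since running it downloads the ~500 MB GPT-2 checkpoint."""
    from transformers import (
        DataCollatorForLanguageModeling,
        GPT2LMHeadModel,
        GPT2TokenizerFast,
        Trainer,
        TrainingArguments,
    )

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    encodings = tokenizer(format_prompts(prompts), truncation=True, max_length=128)
    dataset = [{"input_ids": ids} for ids in encodings["input_ids"]]

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir=output_dir,
            num_train_epochs=epochs,  # small datasets need many passes
            per_device_train_batch_size=1,
        ),
        train_dataset=dataset,
        # mlm=False selects causal language modeling, which is GPT-2's objective
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    trainer.save_model(output_dir)
```

After training, seeding `model.generate()` with the first few words of a prompt reflects variations of the training prompts back at you, which is the effect Stephen describes.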
But it needed a catchy name to release it into the wild. Parrots are known to repeat phrases, and AIs are often called stochastic parrots. So the name “Prompt Parrot” is a cheeky reference to both (also parrots are amazing birds).
I originally released it back in March, but it didn’t really catch on until Stable Diffusion was released in the summer. SD is fast and light enough for both models to be in memory at the same time. So that increased the fun factor. From there, I expanded it with built-in prompts. Later I trained it on a much larger dataset provided by several community members and hosted it on Replicate, so anyone can use it with or without colab. Prompt Parrot started as a fun little science project and grew into a proof-of-concept prompt generator. I’m pretty tickled that the community likes it so much. It has over 50,000 runs on Replicate, which absolutely blows my mind. I never thought anyone would use it, and it’s humbling that something I created is so widely used by the community.
[AI Art Weekly] What does your workflow look like?
My workflow is very loose. I start with an idea or aesthetic I want to explore. These days I work almost exclusively with MidJourney for generations. I’ll typically iterate on an initial idea for a while and take my time exploring it. Then I select my favorite images and finalize them with Lightroom and/or Procreate. I used to do more combinations between multiple outputs. But these days, my post process is more akin to finishing touches on a photograph.
Speaking of photography, I’m also quite fond of incorporating my photographs into the final work as an init image or image prompts with MidJourney V4!
[AI Art Weekly] What is your favourite prompt when creating art?
This question inspired an instagram post! My favorite themes are abandoned, crumbling structures contrasted against new vibrant flowers. Decay leading to new life. The basic prompt structure is “a beautiful painting of an abandoned ruined house in a field of flowers landscape, cinematic lighting, magical realism, vibrant palette”. And of course I add and remove elements to suit the particular mood I’m going for.
[AI Art Weekly] How do you imagine AI (art) will be impacting society in the near future?
The impact on society will be broad. But what I continue to be excited about is the democratization of art! AI has introduced artistic expression to people who felt locked out of it previously. It’s enabled a whole new form of visual expression for a lot of people. And I think we lose sight of how big and fundamental that shift is going to be. Similar to the invention of the camera, it ushers in a new era of visual expression. I’m excited about the potential for people to express themselves in new ways. We’ve already seen it this year and the trend will only increase through 2023 as more AI art products are introduced.
[AI Art Weekly] Who is your favourite artist?
Some of my favorite artists in no particular order: Caspar David Friedrich, Jean-Pierre Ugarte, John Atkinson Grimshaw, Hubert Robert and RHADS!
[AI Art Weekly] You’re a member of the Parrot Zone, can you tell me more about that?
Parrot Zone is our AI study group, and the name is derived from Prompt Parrot. The project is headed by @proximasan. It actually started as a Twitter thread where proxima was studying styles in Disco Diffusion and tweeting results. The rest of us joined in, and we organized our efforts into the group that became Parrot Zone! Our group conducted the most in-depth prompt studies on Disco Diffusion and Stable Diffusion. There are limited studies on NovelAI and MidJourney as well. It’s quite an impressive database! Parrot Zone is a great resource for discovering words for your prompts.
Each week we share a style that produces some cool results when used in your prompts. This week’s style is based upon a fine-tuned Stable Diffusion model trained on pulp art artists (“pulp art by Glen Orbik, Earle Bergey, Robert Maguire or Gil Elvgren” also creates a cool style in MidJourney).
Creation: Tools & Tutorials
These are some of the most interesting resources I’ve come across this week.
If you’re looking for fine-tuned Stable Diffusion models and embeddings, civitai.com is the place. There are over 1,000 models and some pretty neat gems (among all the unholy stuff), with example images and prompts for you to play around with.
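If you want to try one of these downloaded checkpoints locally, one typical route is the Hugging Face diffusers library. A minimal sketch, assuming a recent diffusers version, a single-file `.safetensors` checkpoint (the file name below is a placeholder) and a CUDA GPU:

```python
def load_custom_checkpoint(checkpoint_path):
    """Build a text-to-image pipeline from a single-file Stable Diffusion
    checkpoint, e.g. one downloaded from Civitai. Defined but not called
    here, since it needs the checkpoint file and a GPU."""
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_single_file(
        checkpoint_path,
        torch_dtype=torch.float16,  # half precision to save VRAM
    )
    return pipe.to("cuda")


# Usage (not run here):
# pipe = load_custom_checkpoint("some-model.safetensors")
# image = pipe("a lighthouse at dusk, cinematic lighting").images[0]
# image.save("out.png")
```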
Not all models are on Civitai yet, though. Cool Japan Diffusion is one of them. As the name suggests, the model was fine-tuned for generating Cool Japan-themed anime and manga images.
Then there is Dreamlike Photoreal 2.0, a stunning photorealistic model based on Stable Diffusion 1.5, created by @dreamlike_art.
And last but not least @_jonghochoi created a @Gradio demo on HuggingFace for pop2piano which lets you convert pop audio to a piano cover and download the result as a MIDI file for further processing.
And that, my fellow dreamers, concludes yet another AI Art Weekly issue. Please consider supporting this newsletter by:
- Sharing it 🙏❤️
- Following me on Twitter: @dreamingtulpa
- Leaving a Review on Product Hunt
- Using one of our affiliate links at https://aiartweekly.com/support
Reply to this email if you have any feedback or ideas for this newsletter.
Thanks for reading and talk to you next week!