AI Art Weekly #3
Welcome to the third issue of AI Art Weekly, a newsletter by me (dreamingtulpa) covering some of the latest happenings in the AI Art world.
Each issue contains three main sections: Reflection, Imagination and Creation (check out one of the earlier issues for the meaning behind those).
Before we dive into this week's issue, let me share an update on how this week's cover art was created. I wanted to try something new this week and set up a poll to let the community decide on the final cover art. It was a fun experiment, and I'd love to involve the community even more in the art direction of future AI Art Weekly issues.
So for the fourth issue, I'm announcing a community challenge to create the cover art for the next issue. These are the rules:
- Everyone can participate
- There is no theme for this first challenge, anything can be submitted, except NSFW content
- Only one image per participant
- Images must be vertical. The cover art has roughly an A4 aspect ratio, so anything between 1:1.4 and 1:1.5 will result in the least amount of cropping
- You can use any AI model to create the image (SD, MJ, DALL·E etc.) and feel free to post-process and enhance with Photoshop or other tools
- Submissions end on Thursday 07:00 UTC
- After that I’ll pick 4 favourites and create a poll to let the community decide on the final cover art
- The winner will be featured in the next issue
- All submitted images will be compiled into a gallery and put onto a website alongside your social profile (I'll create the website after the challenge is completed)
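If you want to check your image against the aspect-ratio rule before submitting, the fraction lost to center-cropping can be estimated with a few lines of Python (a quick illustrative helper of my own, not an official part of the rules):

```python
def crop_fraction(width, height, target_ratio=1.414):
    """Fraction of a vertical image (height > width) lost when
    center-cropping it to the target height:width ratio (~A4)."""
    ratio = height / width
    if ratio > target_ratio:
        # image is too tall: height gets cropped
        kept = (width * target_ratio) / height
    else:
        # image is too wide: width gets cropped
        kept = (height / target_ratio) / width
    return 1 - kept

# A 1:1.45 image loses only about 2.5% to cropping:
print(round(crop_fraction(1000, 1450), 3))  # -> 0.025
```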
If you want to participate, you can either submit your image publicly on Twitter by tagging me @dreamingtulpa and using the hashtag #aiartweekly, or send me a DM directly.
It would be great if we could get a few participants together, so please share. The more the merrier.
So with this out of the way, let's dive into this week's announcements.
There have been no major updates this week for the big three (MidJourney, Stable Diffusion and DALL·E), and things seem to have slowed down a bit compared to last week (which isn't a bad thing). For some it got so quiet that they jokingly started to question the work ethic of our dear AI researchers. Nevertheless, there have been some updates and announcements.
Google announced their own text-to-video paper called Imagen Video, which, like Meta's Make-A-Video, looks incredible. Apparently it only takes 22 seconds to create a 30-second 8 fps video at 128x128 resolution, and it can create videos up to 1280x768 pixels at 24 fps. I wonder what the hardware requirements look like, though.
The Deforum team released version 0.5 of their Stable Diffusion notebook, which now supports dynamic keyframing using math formulas, prompt weighting, video input masking and more. For the latter I created a video tutorial, which you can find in the Creation section below.
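To give an idea of what "keyframing using math formulas" means, here's a simplified sketch (my own illustration, not Deforum's actual code): a schedule string maps frame numbers to values or formulas in the frame counter `t`, and values are interpolated between keyframes.

```python
import math
import re

def parse_schedule(schedule, max_frames):
    """Evaluate a Deforum-style keyframe schedule string such as
    "0:(1.0), 60:(1.0+0.5*sin(2*3.14*t/30))" into one value per frame.
    Simplified sketch: expressions may reference the frame number `t`,
    and values are linearly interpolated between keyframes."""
    keyframes = {}
    # split entries on commas that start a new "frame:(...)" pair
    for entry in re.split(r",\s*(?=\d+\s*:)", schedule):
        match = re.match(r"(\d+)\s*:\s*\((.*)\)\s*$", entry.strip())
        keyframes[int(match.group(1))] = match.group(2)
    frames = sorted(keyframes)
    env = {"sin": math.sin, "cos": math.cos, "abs": abs}
    values = []
    for t in range(max_frames):
        prev = max((f for f in frames if f <= t), default=frames[0])
        nxt = min((f for f in frames if f >= t), default=frames[-1])
        v_prev = eval(keyframes[prev], {"__builtins__": {}}, {**env, "t": t})
        v_next = eval(keyframes[nxt], {"__builtins__": {}}, {**env, "t": t})
        if nxt == prev:
            values.append(v_prev)
        else:
            w = (t - prev) / (nxt - prev)  # linear interpolation weight
            values.append((1 - w) * v_prev + w * v_next)
    return values
```

For example, `parse_schedule("0:(1.0), 10:(1.1)", 11)` yields a smooth zoom ramp from 1.0 to 1.1 over 11 frames.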
But aside from that, I haven’t stumbled upon any other interesting announcements. Let me know on Twitter if I’ve missed something important.
In this week's interview of AI Art Weekly we'll talk to TomLikesRobots, a Scotland-based web developer and coder who started exploring AI art, 3D, animation and digital art. His latest Dreambooth adventures are interesting to follow and give us a glimpse into an alternative universe where he might have been an Oscar-winning actor starring in the Alien movie franchise.
[AI Art Weekly] Tom, what is your favourite prompt when creating art?
This is one of my favourite results and was one of my first successful audio-reactive animations:
geometric portrait, Red Fox, symmetry, identical eyes, fantasy, medium shot, pastel color illustration, ornate, super detailed, cinematic lighting, detailed, intricate (+ artist of your choice). I think the imagery is really strong throughout the animation and works well with the music.
If I'm stuck for inspiration I'll come back to Norman Rockwell or Edward Hopper. Both painted scenes of American life, but Rockwell portrays warmth and happiness whereas Hopper portrays loneliness and realism. Including these artists in a prompt makes it easier to build a scene that focuses on people.
Back in July I posted some of my early Stable Diffusion images and you might be able to see the influence of these artists.
[AI Art Weekly] Who is your favourite artist?
It's almost a cliché as he's so popular, but Leonardo da Vinci was someone who wasn't limited by the thinking of his time.
I love the way he incorporated the natural world into his paintings and drawings. His use of light and shadow to create a sense of depth and movement in his work is still inspiring today.
His work represents the perfect balance between art and science.
[AI Art Weekly] How do you think AI art tools will evolve in the future? What possibilities can you imagine?
There are so many great tools and workflows. DALL·E and MidJourney make it easy to generate great images with a simple prompt, but Stable Diffusion's open-source release has resulted in an explosion of tools and creativity.
My favourite tool for Stable Diffusion is the Deforum notebook, which feels very familiar to anyone who has ever used Disco Diffusion.
As a very brief overview, for an audio reactive animation I will:
- Select a soundtrack (or generate one with a tool like Mubert)
- Generate the keyframes using a tool like Audio Keyframe Generator
- Create frames using the Deforum notebook on Google Colab
- Download the video and if necessary upscale it with Topaz Video Enhance AI
- Interpolate Frames with FlowFrames
If anything's not working, just start again. It usually doesn't take long to spot that something is wrong.
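The workflow above essentially boils down to mapping audio loudness onto animation parameters. As a toy illustration (the function and mapping are my own, not taken from any of the tools mentioned), per-frame amplitudes can be turned into a Deforum-style zoom schedule string:

```python
def amplitudes_to_schedule(amplitudes, base=1.0, strength=0.05):
    """Turn per-frame audio amplitudes (0..1, e.g. RMS values sampled
    from the soundtrack at the video frame rate) into a Deforum-style
    zoom schedule: louder audio -> stronger zoom."""
    pairs = []
    for frame, amp in enumerate(amplitudes):
        zoom = base + strength * amp
        pairs.append(f"{frame}:({zoom:.3f})")
    return ", ".join(pairs)

print(amplitudes_to_schedule([0.0, 0.5, 1.0]))
# -> 0:(1.000), 1:(1.025), 2:(1.050)
```

The same idea works for any schedulable parameter (strength, angle, translation), not just zoom.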
[AI Art Weekly] Anything else you want to share?
We’re at a point where anyone can create great looking images with minimum effort. There are millions of users on MidJourney and DreamStudio. What do we do now? How do we distinguish ourselves from the crowd and create something original?
I spent four years at art school and love drawing, sculpting and painting. I don’t think AI will or should replace that. I’m constantly looking at what can be achieved with generative AI that can’t be done in any other way.
Social media is full of fascinating AI generated tech demos but there must be more we can do. I have stories to tell and I’m constantly exploring ways to tell those stories.
The future of Generative AI is bright.
Each week we share a style that produces some cool results when used in your prompts. This week's featured style is
These are some of the most interesting tools I’ve come across this week.
If you want to expand your stylistic horizon, the Stable Diffusion Artist Style Studies is a great resource. The database includes over 1500 artists, their recognition status (whether the model recognized the name) and example images synthesized by SD.
In this tutorial you'll learn how to generate video input masks so that only the parts of a video frame you want get diffused. Example.
If you've created portrait images of humans with AI, you've certainly come across weird-looking facial features. GFPGAN is a blind face restoration algorithm for real-world face images that aims to fix that. There is also a Colab notebook.
There are already tons of different color palette generators, some of which use ML behind the scenes. I played around with Huemint this week to pick the AI Art Weekly cover title color for this week's issue.
AI image generators are massively popular, but how do they create such interesting images? Here's a more technical explanation of how image diffusion models work.
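In short, diffusion models are trained to reverse a process that gradually destroys an image with Gaussian noise. A minimal numpy sketch of that forward (noising) step, assuming the standard DDPM closed form (parameter values are illustrative):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in one shot using the closed form
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise,
    where alpha_bar_t is the cumulative product of (1 - beta)."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))        # stand-in for a tiny image
betas = np.linspace(1e-4, 0.02, 1000)   # linear noise schedule (DDPM-style)
x_noisy = forward_diffuse(x0, 999, betas, rng)
# at the final step alpha_bar is tiny, so x_t is almost pure noise;
# the network learns to undo this, one small step at a time
```

Sampling an image then amounts to running the learned reverse process from pure noise back to a clean image.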
And that, my friends, concludes this week's AI Art Weekly newsletter. Please consider subscribing and sharing if you liked it, and let me know on Twitter if you have any feedback. The more people who get to see this, the longer I can keep it up and the more resources I can put into it.
Thanks for reading and see you next week!