AI Art Weekly #7
Welcome to issue #7 of AI Art Weekly, a newsletter by me (dreamingtulpa) covering some of the latest happenings in the AI art world.
We've reached the 400 subscriber milestone! Before we begin, I just want to say thank you for subscribing and reading my newsletter.
If you have any feedback about the newsletter, shoot me an email and I'll get back to you.
2nd and 3rd Cover Challenge
The 2nd cover challenge was a full success. 25 artists submitted 43 pieces to this challenge, from which I picked 4 finalists. It wasn't an easy decision, I can tell you that. But the community decided on the final winner. Congratulations @amli_art, and a big thank you to everyone who contributed!
With the success of this week's challenge, I want to try to make this a weekly thing for a while. I stumbled across a cool cyberpunk anime model this week (see the Creation section below), so the theme for the 3rd challenge is "cyberpunk". Think Akira, Cyberpunk 2077, Blade Runner and so on. Your submission doesn't have to be in the style of that model, but that could be a fun challenge. Submit your pieces for the 3rd challenge here. The prize is again $50. The rulebook can be found on the website.
I'm looking forward to all of your submissions!
If you want to support the newsletter, this week's cover is available for collection on objkt as a limited edition of 10 for 2.50ꜩ a piece. Thank you for your support!
Reflection: News & Gems
Image Diffusion
MidJourney opened up another V4 image rating round and sheesh, some of those images were mind-blowing. I've compiled some of the best images I've rated into a thread for your enjoyment here. If you want to know when V4 is dropping, I can't tell you. The current joke on MJ's Discord server is "soon™".
DALL·E just announced their public API.
And there is a new player on the block called eDiffi, which, compared to Stable Diffusion and DALL·E, creates more coherent images from a given prompt, can transfer styles from reference images, and can generate images from paint maps.
DPM-Solver++
In the research department we've had the proposal of DPM-Solver++. If you're familiar with image diffusion, you've certainly come across the different samplers like klms, euler, dpm, ddim etc. Those samplers are responsible for converting noise into images. Regular DPM requires around 100-250 steps to produce high-quality output. DPM-Solver++ can achieve the same with 10-20 steps. A 10x improvement. Wow.
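If you want to play with this yourself, here's a minimal sketch of what swapping samplers looks like in code, assuming a recent version of the Hugging Face diffusers library (its DPMSolverMultistepScheduler implements DPM-Solver++):

```python
# Minimal sketch: swapping a pipeline's default sampler for DPM-Solver++.
# Assumes a recent Hugging Face `diffusers` with DPMSolverMultistepScheduler.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Replace the default scheduler; it reuses the pipeline's existing config.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# 20 steps instead of the 100+ a slower sampler would need.
image = pipe("a cyberpunk city at night", num_inference_steps=20).images[0]
image.save("cyberpunk.png")
```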

Video-to-4D
Let's talk about NeRF. I've seen some crazy stuff this week, like this video from @SirWrender where he took a 3D scene created with LumaLabs and combined it with camera movements from another video.
Then I stumbled upon NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed Neural Radiance Fields. I'm gonna be honest, I'm not entirely sure what this is about, but it looks like it is able to create animated 3D scenes from single-camera video footage, hence the title Video-to-4D.
And another one I've found, similar to the two above, is called Monocular Dynamic View Synthesis: A Reality Check.
Music
In the music department this week, we've got Pop2Piano by @_JonghoChoi, a model that creates piano covers from pop songs.
Then there is Jukebox-WebUI by @vovahimself, a web UI for OpenAI's music generation model Jukebox. A Google Colab notebook can be found here.
And last but not least, SDMuse by @sivil_taram, a music generation framework which can not only compose a whole musical piece from scratch, but also modify existing pieces in different ways, such as combination, continuation, inpainting and style transfer.
The (maybe) first AI-generated Twitter game: every hour, Prompto! tweets an image based on a GPT-3-generated clue. The goal is to guess the correct word that was used to generate those clues.
An insightful Twitter thread by @sergeyglkn on how he used different (AI) tools to create an animated AR character.
This feels like it could be from a sci-fi movie. "Prompt Battle" is a rap-battle-inspired competition, but with keyboards and AI image generation instead of words. Shared by @alexabruck.
Imagination: Interview & Inspiration
In this week's interview of AI Art Weekly we talk to Roope Rainisto. Roope continuously grabs my attention with the interesting results he's sharing on Twitter. Especially his recent Dreambooth experiments were quite stunning. Let's dive in.
[AI Art Weekly] Hey Roope, what's your background and how did you get into AI art?
I've been working as a designer for the past 25 years, focusing on concepting and UX. Since "UX" is a very broad topic, I've also done things very broadly in relation to it: UI design, visual design, prototyping, photography etc.
Photography has been my main hobby for nearly 30 years. What originally drew me into AI-based creation was this somewhat fanciful idea of a "virtual camera". I can go into the real world and point my camera at things in order to be able to tell stories - I should be able to do the same with this virtual camera.
I've been doing AI-based creations now for about 15 months, and it's quite stunning to look at how quickly things have evolved during this time. I have a strong professional interest in this, as in being able to utilize various methods to assist me in my commercial work.

"Dream Attack" by Roope Rainisto
[AI Art Weekly] Do you have a specific project you're currently working on? What is it?
My time is split between commercial work (working with several ad agencies, filmmakers, bands, directors etc.) and then the work that I can publish online. For the public stuff, there's currently no big project…
Well, there is. My big project ultimately is to build up the capabilities to be able to create short stories. Small movies, animations, comics, using every trick in the book: create images, videos, animations, characters, voiceovers, music - everything I do is kind of in service of this meta-project. Once all the building blocks are in shape, I'll start putting them together.
But that'll happen next year.
Creating movies with AI is still kind of a mess. But things are evolving. Check out this example by @coqui_ai where they use their own platform to create the voices, Stable Diffusion for the images, Google's AudioLM model for the music and @AiEleuther's GPT-J model to write the script.
[AI Art Weekly] What does your workflow look like?
The workflow is constantly evolving. I get bored of doing the same thing, using the same methods, so I try to challenge myself and learn new tricks by evolving, changing something constantly.
But in a rough sense it's usually a funnel. I have an idea, then I create lots around it, then I look at results, evolve my inputs, look at results again, filter down, edit, publish, rinse, repeat. I create about 100x more content than what I ever publish.
It's not much different from how I do photography: I shoot lots. The "film" here doesn't cost much.
[AI Art Weekly] What is your favourite prompt when creating art?
I'm not a "big prompter", really. I don't do these complicated chapter-long winding prompts. To each their own, of course. Not my personal style. I find that they narrow down the results too much.
Perhaps the biggest repeating prompt elements come up when I go for a photographic style. I add some photographic things to the prompts, like "fuji velvia" or "sigma lens" or "nikon dslr" - things you would find in photographic descriptions.
In general I try to vary my prompts as much as possible. I get bored seeing the same thing, so I don't like using the same words. That's also why I've been very much into Dreambooth training recently. Running your own model does more to the output style than almost anything I've been able to achieve with prompting alone.

"Magic Hours" by Roope Rainisto
[AI Art Weekly] Can you tell us more about your Dreambooth approach?
Sure, I'm using the JoePenna notebook.
I run it myself through Visions of Chaos (a wonderful Windows app for running AI code locally); it's integrated into its Stable Diffusion code.
My recent training has been using the v1.5 checkpoint release, training that with custom material, either using the "person" or the "style" classes (nothing too surprising!). I have a hunch that there are lots of undiscovered classes there to train; we're only scratching the surface.
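Editor's note: for readers curious what "person" vs "style" class training looks like in practice, here's a minimal sketch of the two setups, written as Python dicts that mirror the command-line flags of the Hugging Face diffusers train_dreambooth.py script (a different implementation than the JoePenna notebook Roope uses). The paths and the rare identifier token "sks" are placeholder assumptions:

```python
# Sketch: "person" vs "style" DreamBooth runs. The keys mirror the flags of
# diffusers' train_dreambooth.py (e.g. --instance_prompt, --class_prompt).
# Paths and the rare identifier token "sks" are placeholder assumptions.

person_run = {
    "pretrained_model_name_or_path": "runwayml/stable-diffusion-v1-5",  # the v1.5 checkpoint
    "instance_data_dir": "./photos_of_subject",   # 10-20 photos of the subject
    "instance_prompt": "a photo of sks person",   # rare token + class noun
    "class_prompt": "a photo of a person",        # prior-preservation prompt
    "with_prior_preservation": True,              # keeps the generic class intact
    "max_train_steps": 1000,
}

style_run = {
    "pretrained_model_name_or_path": "runwayml/stable-diffusion-v1-5",
    "instance_data_dir": "./style_samples",       # images in the target style
    "instance_prompt": "an illustration in sks style",
    "class_prompt": "an illustration",            # broader class noun for styles
    "with_prior_preservation": True,
    "max_train_steps": 2000,
}
```

The key difference is just the class prompt: "person" anchors the new token to human subjects, while a style run anchors it to an aesthetic. Roope's hunch is that many other class nouns remain unexplored.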
Now with the Colab pricing change, I'm fortunate enough to have computers at home, so I can run almost anything locally. I haven't actually used any Colab for the past few weeks.
Shameless editor plug: If you're like me and don't own a fast enough GPU yet, I've put together a tutorial on how to set up Automatic1111's WebUI on Paperspace. It's the most affordable cloud GPU solution I've found and tried so far.
A thread by Roope showcasing Emma Stone as Gollum, in Pirates of the Caribbean, Terminator 2, Blade Runner, Titanic, Tron, The Matrix, as Princess Leia, in Gone with the Wind, Alien, Harry Potter, Amélie and as a Teletubby.
[AI Art Weekly] How do you feel AI (art) will impact society?
AI will have a huge impact. Just about every person whose job nowadays involves sitting in front of a computer will get AI to help them get things done faster and easier.
"AI art" vs. "art" is an endless discussion which perhaps isn't the best use of anyone's time. "Is this art? Is this not art?" - that's ultimately not a meaningful question. What's the point of asking it? What would happen if that question did "get settled" one way or another?
Look at photography: masses of photos are created every day. It's safe to say that 99% of the photos that are created "are not art", but that doesn't mean these photos aren't valuable. There are tons of reasons why people create and send photos to each other. I believe the same will be true of AI creations. Most creations are not art. Some will be. It's really not for the artist to judge.
[AI Art Weekly] Who is your favourite artist?
I'm drawn to storytellers: @ClaireSilver12, @GlennIsZen, @wizardhead, @paultrillo, @remi_molettee, @PasanenJenni, @vince_fraser, @singlezer0
It's easy to create nice-looking images that tell nothing about anything, really. Telling something personal, or trying to tell a story, trying to say something with your creations, trying to make a statement takes more guts. Some people will hate it, laugh at it or ridicule it. Someone might actually be touched by it. Yin and Yang.
As for non-AI artists, I'm heavily into music, with a sweet spot for 90s alt and indie rock. Hundreds of bands. Movies, television, books, illustrations. I don't really have a shortlist of favourites - everyone influences each other. But if pushed, I'll say David Lynch and Franz Kafka.

"Siren Song" by @PasanenJenni
[AI Art Weekly] Anything else you would like to share?
I think a good question for each of us to ask ourselves is about style over substance. Are you spending most of your time focusing on the style or on the substance?
The AI methods - at least currently! - don't yet give us the secrets to substance. Great substance works even with poor style, and great style can try to hide poor substance. But since great style will be accessible to anyone in the near future - MidJourney in 12 months will create an amazing-looking artwork out of anything - it comes back down to great substance.
What's your own substance? What do you want to say to the world?
Each week we share a style that produces some cool results when used in your prompts. This week's featured style is "in the style of shotaro ishinomori and michael ancher".
Creation: Tools & Tutorials
These are some of the most interesting resources I've come across this week.
@DGSpitzer released a stunning Cyberpunk Anime model to recreate the style of the popular Cyberpunk 2077: Edgerunners anime. There are HuggingFace and Google Colab versions (see the sketch below if you want to run it on your own machine). Submit your creations to this week's cover art challenge and get a chance to win $50.
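For those running Stable Diffusion locally, here's a minimal sketch of loading a community checkpoint like this one with diffusers; the model id below is my assumption based on the HuggingFace page, so check the model card for the exact id and any recommended trigger words:

```python
# Minimal sketch: running a community checkpoint locally with diffusers.
# The model id below is an assumption; confirm it (and the trigger words
# the author recommends) on the HuggingFace model card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "DGSpitzer/Cyberpunk-Anime-Diffusion", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait of a girl in a neon-lit cyberpunk city, anime style"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("cyberpunk_anime.png")
```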
@kylebrussel created a Stable Diffusion checkpoint which allows you to generate pixel art sprite sheets from four different angles.
With all these fine-tunes getting released, I got curious about how this works, and I stumbled across this tutorial on how to do it using the automatic1111 web-ui. I also found a thread about yet another fine-tune (Naruto anime) by @eoluscvka, in which he shared @Buntworthy's GitHub guide on fine-tuning Stable Diffusion to create your own style transfer.
My last Deforum music video took me almost 500 computing units on Google Colab, so I've decided to make the switch to paperspace.com and recorded a video tutorial to help you do the same.
A HuggingFace Space demo by @fffiloni that sends an image to CLIP Interrogator to generate a text prompt, which is then run through Mubert's text-to-music to generate music from the input image!
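If you're curious how the first half of that pipeline works, the image-to-prompt step can be reproduced with the open-source clip-interrogator package; here's a minimal sketch (the Mubert step is left out, as it goes through their own API):

```python
# Minimal sketch: the image-to-prompt half of the image-to-music pipeline,
# using the open-source clip-interrogator package by @pharmapsychotic.
# The Mubert text-to-music step is omitted; it requires Mubert's API.
from PIL import Image
from clip_interrogator import Config, Interrogator

ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
prompt = ci.interrogate(Image.open("input.jpg").convert("RGB"))
print(prompt)  # a caption-like text prompt describing the image

# The demo then sends `prompt` to Mubert text-to-music to get audio back.
```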

"substance over style, shotaro ishinomori, michael ancher" by me
And that, my fellow dreamers, concludes this week's AI Art Weekly issue. Please consider supporting this newsletter by subscribing and sharing. Let me know on Twitter if you have any feedback or ideas.
Thanks for reading and see you next week!
– dreamingtulpa