AI Art Weekly #4

Welcome to issue #4 of AI Art Weekly, a newsletter by me (dreamingtulpa) covering some of the latest happenings in the AI art world.

Each issue contains three main sections: Reflection, Imagination and Creation (check out one of the earlier issues for the meaning behind those).

Last week I wanted to try something new and set up a community challenge to create this week’s cover art, which gets shared alongside the issue’s announcement on Twitter. Fourteen AI artists submitted a piece, and from those fourteen, @GanWeaving and I chose four finalists and let the community decide on the final winner. And they did. Congratulations to @dancevatar!

Thank you to everyone who contributed. It was a fun experiment and one I certainly want to repeat in the near future, maybe once a month? What do you think about the cover art contest? Should we repeat it? What could we do differently next time? Let me know on Twitter.


Reflection

This week, real-time diffusion just got more real. Twitter user @chenlin_meng shared a paper in which they managed to get diffusion sampling down to 1-4 steps while still generating high-quality samples.

To quote Stable Diffusion godfather Emad:

To put this in context we usually use 28-50 sampling steps for image creation, this also works for video diffusion. Everything we've seen in the last few months in generative AI really is just a amuse-bouche for what is going to hit next year. Exponentials be crazy. 🚀✨ https://t.co/yCMJQDA6cO
October 10, 2022 10:48

Talk about hype, huh?
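For context on where that number lives: in a typical Stable Diffusion workflow, the step count is just a sampler parameter. Here’s a minimal sketch using Hugging Face’s diffusers library; note that the paper’s 1-4 step results rely on a distilled model, so simply lowering the step count on a stock checkpoint like the one below would degrade quality rather than reproduce them.

```python
# Minimal sketch (Hugging Face diffusers) of where the sampling step
# count enters a standard Stable Diffusion pipeline. The paper's 1-4
# step results rely on a distilled model; dropping num_inference_steps
# on a stock checkpoint like this one just degrades image quality.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# Today's typical setting: 28-50 denoising steps per image.
image = pipe("a desert market in a steampunk city",
             num_inference_steps=50).images[0]
image.save("steampunk_market.png")
```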

The most unexpected announcement this week goes to Microsoft. They’re adding DALL·E to their Office lineup in the form of a new product called Microsoft Designer. As a Unix user, I’m not a Microsoft/Office aficionado, but that is a pretty cool addition and will introduce AI to an entirely new audience!

MidJourney introduced a new feature called Remix, which lets you change the prompt text and/or model (demaster from --test to --v 3) to ‘nudge’ the scene towards a different style when creating variants.

Remember Dreambooth SD? What started with a whopping 24GB VRAM requirement has now been reduced to 8GB thanks to GitHub user Tti, making Dreambooth available on even more consumer GPUs.
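From what I understand, the savings come from stacking standard memory tricks: gradient checkpointing, 8-bit optimizer states, and (for the full 8GB figure) offloading optimizer state to CPU via DeepSpeed. Here’s a rough sketch of the first two, using the diffusers and bitsandbytes APIs; the training loop itself is omitted.

```python
# Hedged sketch of two memory savers used by low-VRAM Dreambooth forks:
# gradient checkpointing and 8-bit optimizer states. The full 8GB setup
# additionally offloads optimizer state to CPU via DeepSpeed.
import bitsandbytes as bnb
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet"
)
# Recompute activations during the backward pass instead of storing
# them: saves VRAM at the cost of extra compute.
unet.enable_gradient_checkpointing()

# 8-bit Adam keeps optimizer state quantized, roughly quartering its footprint.
optimizer = bnb.optim.AdamW8bit(unet.parameters(), lr=5e-6)
```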

And last but not least, Meta AI published a paper called AudioGen, a textually guided audio generation model. It is, I have to say, pretty impressive, and in combination with other mediums it could lead to exciting new generative art. For me, AI art isn’t just about pretty pictures, but about exploring new ways to merge the old with the new. There is no code yet, though, and it’s unlikely Meta is going to open source it. But one can still hope; OpenAI at least managed to do it with Whisper. So come on Zuck, we need more open source models!


Imagination

In this week’s interview of AI Art Weekly we talk to one of the Deforum developers: ScottieFox. Deforum is a Stable Diffusion notebook that lets you create 2D, 3D, video input and interpolation animations. If you’ve followed this newsletter and my work, you know I’m a bit of a Deforum nerd, so I wanted to dig a little deeper. Enjoy this lengthier interview with what one could consider a pioneer of our time!

[AI Art Weekly] Scottie, how did you get into being a part of the Deforum team?

Coming from the field of live visual effects and real-time video editing, I was part of a community that eventually exposed me to pytti.

I was witnessing some wild content displayed on social media that involved looping animations and videos that were to be used by musicians during shows and concerts (whether in person or virtual). When I asked the creators about the tools they were using - they introduced me to AI-generated graphics.

At the time, it took days to weeks to produce a string of frames that could be considered usable. The technology wasn’t open source, nor really tested. It was fresh, new and unpredictable. I saw the opportunity for a new tool I could add to my ever-growing set of software - so naturally, I did everything I could to absorb the material and read up on it.

As new breakthroughs arrived, I started meeting fellow developers along the way, who in turn showed me different approaches to creating the same wild content I had been exposed to. Things moved fast, and soon enough resources became publicly available. With this tech suddenly accessible to all, a community formed and gathered around this new frontier of art. I found myself quickly becoming the go-to for new users asking the golden question “how is that done!?” - the very same reason I came to AI in the first place.

At the time, I didn’t have any leaders or mentors to guide me - so as a service to the ever-growing community surrounding Deforum, I took on the informal position of helper. I found comfort in being an advocate between the user and the developer - connecting community members to the development, while freeing the dev team to focus on the behind-the-scenes work. I’ve always felt that the success of a product or piece of software is directly related to how engaged the user feels with its features and functionality.

As the community grew and Deforum became successful, the development team welcomed me aboard in hopes that I could organize the community’s needs and questions and help better the overall experience of Deforum as a powerful tool in their workflow. Having spent so much time with each user allowed me to create the documentation and quick guides that are available, so members can further their enrichment at their own pace. The experience has been greatly positive!

[AI Art Weekly] I saw you recently shared some AR/VR demos on your Twitter feed. Can you tell us something about what you’re currently working on?

As a hobbyist, I have always enjoyed repurposing and plugging in tools to see how they react. The same way a guitarist tries out a chain of pedals, amplifiers and effects to get “that sound” - I too experiment in the same fashion.

I have an interesting background in real-time visual dynamics, including 3D rendering and particle simulations. I’ve used my base software TouchDesigner to add different elements of dance, movement arts and visual spectacle in my art.

The variable that changes is specifically the “what” that I plug into my workflow. When this opportunity of AI came into play, I soon thought “what would it look like in real time?”, followed by “is anyone else doing this? There must be! Right?”. It turns out that all of these tools are available - yet no one had thought to chain them together in such a fashion to create an immersive experience of AI in real time. Sure, people have projected videos onto spheres/domes at festivals. And of course, there have been improvements in AI becoming fast enough to generate content in such a way that real-time playback is almost achieved.

At the time of this article, we’re on the frontier of that becoming a normal feature. I tried a different approach, diffusing content into a scene while avoiding re-rendering the entire projection. We perceive that as real time even though the background is constantly queuing and blending new content on demand.

[AI Art Weekly] How do you think AI animation will evolve in the near future?

AI is driven by trends. Like all technology, it follows current demands. When we entered the pandemic, classrooms became virtual, and webcams were absorbed by the educational field - something we wouldn’t have predicted.

When motion capture/skeleton tracking devices and software became commonplace, the medical field utilized the technology for posture and orthopedic studies. What once was in the game room, found its place in greater occupations. Once those industries adopted that technology, they breathed life into it, and forced its evolution, backed by immense funding and even higher demand.

Currently, AI mostly exists in dataset analysis - though one might guess it’s all about the arts. It’s not! The art is just what we see posted on social media outlets, so we assume that must be the extent of it. The presentation of AI as an art tool has grabbed more attention than any other use case - so we’re led to believe that AI is meant for it and that all future endeavors will be in the art field. However, I feel AI will soon trend into more social and societal applications. Currently, dataset training is being automated by AI to train AI. This speeds up development while removing the human bottleneck. We will probably see AI move into the medical and food service industries next. We already see gimmicks and cartoons about a “robot that serves a glass of wine” - but it goes deeper than that. I’m eager to see the evolution, wherever it may take us.

As for the future of Deforum, we’re looking into even more ease of use, as well as focusing on running the environment on local machines as a response to the inaccessibility of cloud-based services. Some users literally want a “one-click-art” solution, so we have our sights set on that, as well as a possible UI for users to easily navigate.

Human-like ‘Bot Handy’ robot can pour you a glass of wine, do dishes, and help with laundry

[AI Art Weekly] What do you think is the most important technical topic to learn, if, let’s say, someone wanted to contribute to AI in a meaningful way?

The best way, I feel, for people to learn in a field at such a technological frontier is to share with others and discuss experiences.

Very often when I make something for the first time, the reaction I receive from the community is “where’s the tutorial?” or “can I download this?”. The nature of pioneering tech and software is that the experiments come first - then the beta release, then the supporting documentation. A lot of users are used to just “clicking a link in the description”, however the advancements in AI move faster than our ability to publish papers on them.

[AI Art Weekly] What other AI projects do you think are interesting / worth keeping an eye on?

I’ve really been into the recent developments of nVidia’s SDK. I’ve seen some wild content produced from an iPhone and some NeRF tests. I’ve previously worked with the ZED 2i’s AI capture features, which build meshes and 3D elements from stereo camera input and depth maps. I’m almost certain that as AI art ebbs and flows, people will seek more AR solutions - even guided marketing that adapts to the user viewing the advertised content. We see that in commercials now, but perhaps it will arrive in a more streamlined way. It’s rather hard to predict where AI will take us. History has shown us some strange outcomes. The telephone was almost trashed as an idea.

[AI Art Weekly] What is one of your favourite prompts when creating images/animations?

I’ll always have a soft spot for the first prompt I ever used to make an image.

A desert market in a steampunk city

At the time, I didn’t really understand how that would come into play, or that the dataset would, unexpectedly, assume “steampunk” to be a predominantly purple and yellow theme.

I also like what “golden hour” does to a scene. In photography, the hour before the sun sets usually casts an amber glow and low shadows across the subject, creating a very dynamically lit scene. I like to build a prompt and then, as a last measure, include “golden hour” to see how it enhances the overall feel of the image output.

“A desert market in a steampunk city” by ScottieFox

[AI Art Weekly] Who is your favourite artist?

Interestingly enough, I tend not to use artists in my prompts. I know there’s a lot of buzz in the community about the ethics of using the style of another artist, but personally, I feel adding an artist’s name brings way too much of a character to a scene. Once that heavy element has been added, it’s very difficult to tame and dial back.

“Lisa Frank”, for instance, will make everything overly rainbow-like - and it just doesn’t fit my vision in art. If I wanted rainbows, I’d type in “rainbow color scheme”.

As for art prior to AI, I’ve been fond of Alex Grey, having seen his works on music album covers.

I think through AI, I’ve learned more about artists that I would have never encountered beforehand. But I just choose to not use their work in my output.

“New Man New Woman” by Alex Grey

[AI Art Weekly] Anything else you would like to share?

I hope as the AI community grows, that people find it in themselves to add to the enrichment of art by helping each other. A lot of us are pioneers. We get all the credit of being the first to do something, at the cost of having no peers to bounce ideas off of. The birthplace of invention is usually an island. Hopefully in the future, we will find better ways to connect. I wish I had a guide when going in new directions and trying different aspects of AI - but such is the nature of what we do. 


Creation

These are some of the most interesting resources I’ve come across this week.

“The birthplace of invention is usually an island, golden hour, surreal” by me making use of MJ’s new Remix feature

And that, my friends, concludes this week’s AI Art Weekly newsletter. Please consider subscribing and sharing if you liked it, and let me know on Twitter if you have any feedback. The more people who get to see this, the longer I can keep this up and the more resources I can put into it.

Thanks for reading and see you next week!

– dreamingtulpa
