ScottieFox
@ScottieFoxTTV Published October 14, 2022

[AI Art Weekly] Scottie, how did you get into being a part of the Deforum team?

Coming from the field of live visual effects and real-time video editing, I was part of a community that eventually exposed me to pytti.

I was witnessing some wild content on social media: looping animations and videos meant for musicians to use during shows and concerts (whether in person or virtual). When I asked the creators what tools they were using, they introduced me to AI-generated graphics.

At the time, it took days to weeks to produce a string of frames that could be considered usable. The technology wasn’t open source, nor really tested. It was fresh, new and unpredictable. I saw the opportunity for a new tool I could add to my ever-growing set of software - so naturally, I did everything I could to absorb and read up on the material.

As new breakthroughs arrived, I started meeting fellow developers along the way who, in turn, showed me different approaches to creating the same wild content I had been exposed to. Things moved fast, and soon enough resources became publicly available. With this tech suddenly accessible to all, a community formed around this new frontier of art. I quickly found myself becoming the go-to for new users asking the golden question “how is that done!?” - the very same question that brought me to AI in the first place.

At the time, I didn’t have any leaders or mentors to guide me - so as a service to the ever-growing community surrounding Deforum, I took on the informal position of helper. I found comfort in being an advocate between the user and the developer - connecting community members to the development work while freeing the dev team to focus on the behind-the-scenes. I’ve always felt that the success of a product or piece of software is directly related to how engaged the user feels with its features and functionality.

As the community grew and Deforum became successful, the development team welcomed me aboard in the hope that I could organize the community’s needs and questions and help improve the overall experience of Deforum as a powerful tool in their workflow. Having spent so much time with each user allowed me to create the documentation and quick-guides that are now available, so members can further their enrichment at their own pace. The experience has been greatly positive!

[AI Art Weekly] I saw you recently shared some AR/VR demos on your Twitter feed. Can you tell us something about what you’re currently working on?

As a hobbyist, I have always enjoyed repurposing and plugging in tools to see how they react. Just as a guitarist tries out a chain of pedals, amplifiers and effects to get “that sound”, I experiment in the same fashion.

I have an interesting background in real-time visual dynamics, including 3D rendering and particle simulations. I’ve used my base software, TouchDesigner, to add elements of dance, movement arts and visual spectacle to my art.

The variable that changes is what, specifically, I plug into my workflow. When this opportunity of AI came into play, I soon thought “what would it look like in real time?” followed by “is anyone else doing this? There must be! Right?”. It turns out that all of these tools are available - yet no one had thought to chain them together in such a fashion to create an immersive, real-time AI experience. Sure, people have projected videos onto spheres and domes at festivals. And of course, there have been improvements in AI becoming fast enough to generate content in a way that almost achieves real-time playback.

At the time of this article, we’re on the frontier of that being a normal feature. I tried a different approach: diffusing content into a scene while avoiding re-rendering the entire projection. We perceive it as real time, even though the background is constantly queuing and blending new content on demand.
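A minimal sketch of that queue-and-blend idea, assuming a generic Python/NumPy setup rather than the actual TouchDesigner network; `diffuse_region()` is a hypothetical stand-in for the generation backend:

```python
# Sketch of the queue-and-blend approach described above (assumed setup,
# not the actual pipeline): a worker thread keeps generating new content
# while the render loop stays real-time and crossfades it in, so the full
# projection is never re-rendered.
import queue
import threading
import time

import numpy as np

frame_queue: "queue.Queue[np.ndarray]" = queue.Queue(maxsize=2)

def diffuse_region(seed: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a slow diffusion call (seconds per frame)."""
    time.sleep(2.0)                                   # simulate generation latency
    return np.random.rand(*seed.shape).astype(np.float32)

def diffusion_worker(scene: np.ndarray) -> None:
    """Continuously queue fresh content without blocking the render loop."""
    while True:
        content = diffuse_region(scene)
        try:
            frame_queue.put(content, timeout=1.0)
        except queue.Full:
            pass                                      # drop a frame rather than stall

def render_loop(scene: np.ndarray, fps: int = 60) -> None:
    """Real-time loop: blend toward the most recently diffused frame each tick."""
    target = scene.copy()
    while True:
        try:
            target = frame_queue.get_nowait()         # swap in new content on demand
        except queue.Empty:
            pass
        scene[:] = 0.95 * scene + 0.05 * target       # gradual crossfade reads as seamless
        time.sleep(1.0 / fps)                         # here the frame would go to the projector

if __name__ == "__main__":
    canvas = np.zeros((512, 512, 3), dtype=np.float32)
    threading.Thread(target=diffusion_worker, args=(canvas,), daemon=True).start()
    render_loop(canvas)
```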

[AI Art Weekly] How do you think AI animation will evolve in the near future?

AI is driven by trends. Like all technology, it follows current demands. When we entered the pandemic, classrooms became virtual, and webcams were absorbed by the educational field - something we wouldn’t have predicted.

When motion capture and skeleton-tracking devices and software became commonplace, the medical field utilized the technology for posture and orthopedic studies. What was once in the game room found its place in greater occupations. Once those industries adopted that technology, they breathed life into it and forced its evolution, backed by immense funding and even higher demand.

Currently, AI mostly exists in dataset analysis - though one might guess it’s all about the arts. It’s not! The art is just what we see posted on social media outlets, so we assume that must be the extent of it. The presentation of AI as an art tool has grabbed more attention than any other use case - so we’re led to believe that AI is meant for art and that all future endeavors will be in that field. However, I feel AI will soon trend toward more social and societal applications. Currently, dataset training is being automated by AI to train AI. This accelerates development while removing the human bottleneck. We will probably see AI move into the medical and food service industries. We already see gimmicks and cartoons about a “robot that serves a glass of wine” - but it goes deeper than that. I’m eager to see the evolution, wherever it may take us.

As for the future of Deforum, we’re looking into even more ease of use, as well as focusing on running the environment on local machines in response to the inaccessibility of cloud-based services. Some users literally want a “one-click-art” solution, so we have our sights set on that, as well as a possible UI for users to navigate easily.

Human-like ‘Bot Handy’ robot can pour you a glass of wine, do dishes, and help with laundry

[AI Art Weekly] What do you think is the most important technical topic to learn, if let’s say, someone wanted to contribute to AI in a meaningful way?

The best way for people to learn in a field at such a frontier of technology is, I feel, to share with others and discuss experiences.

Very often when making something for the first time, the reaction I receive from the community is “where’s the tutorial?” or “can I download this?”. The nature of pioneering tech and software is that the experiments come first - then the beta release, then supporting documentation. A lot of users are used to just “clicking a link in the description”, but AI advances faster than our ability to publish papers on it.

[AI Art Weekly] What other AI projects do you think are interesting / worth keeping an eye on?

I’ve really been into the recent developments in NVIDIA’s SDK. I’ve seen some wild content produced from an iPhone and some NeRF tests. I’ve previously worked with the ZED 2i’s AI capture features, which build meshes and 3D elements from stereo camera input and depth maps. I’m almost certain that as AI art ebbs and flows, people will seek more AR solutions - even guided marketing that adapts to the user viewing the advertised content. We see that in commercials now, but perhaps it will become more streamlined. It’s rather hard to predict where AI will take us. History has shown us some strange outcomes. The telephone was almost trashed as an idea.

[AI Art Weekly] What is one of your favourite prompts when creating images/animations?

I’ll always have a soft spot for the first prompt I ever used to make an image.

A desert market in a steampunk city

At the time, I didn’t really understand how that would come into play, or that the dataset unexpectedly assumes “steampunk” to be a predominantly purple and yellow theme.

I also like what “golden hour” does to a scene. In photography, an hour before the sun sets usually casts an amber glow and low shadow across the subject. It creates a very dynamically lit scene. I like to build a prompt, and then as a last measure, include “golden hour” to see how it enhances the overall feel of the image output.

“A desert market in a steampunk city” by ScottieFox
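As a rough illustration of that last-step “golden hour” trick, here is a minimal sketch using the open-source diffusers library; the model ID and settings are assumptions for illustration, not the exact setup used for the image above:

```python
# Sketch (assumed setup) of building a base prompt and appending "golden hour"
# as a final lighting modifier, using Hugging Face diffusers.
import torch
from diffusers import StableDiffusionPipeline

# Model choice is an assumption for illustration only.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

base_prompt = "a desert market in a steampunk city"

# Render the same scene with and without the lighting modifier to compare.
for name, suffix in [("plain", ""), ("golden_hour", ", golden hour")]:
    image = pipe(base_prompt + suffix, num_inference_steps=30).images[0]
    image.save(f"market_{name}.png")
```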

[AI Art Weekly] Who is your favourite artist?

Interestingly enough, I tend not to use artists in my prompts. I know there’s a lot of buzz in the community about the ethics of using the style of another artist, but personally, I feel adding an artist’s name adds way too much character to a scene. Once that heavy element has been added, it’s very difficult to tame and dial back.

“Lisa Frank”, for instance, will make everything overly rainbow-like - and it just doesn’t fit my vision in art. If I wanted rainbows, I’d type in “rainbow color scheme”.

As for art prior to AI, I’ve been fond of Alex Grey, having seen his work on music album covers.

I think through AI, I’ve learned more about artists that I would have never encountered beforehand. But I just choose to not use their work in my output.

“New Man New Woman” by Alex Grey

[AI Art Weekly] Anything else you would like to share?

I hope as the AI community grows, that people find it in themselves to add to the enrichment of art by helping each other. A lot of us are pioneers. We get all the credit of being the first to do something, at the cost of having no peers to bounce ideas off of. The birthplace of invention is usually an island. Hopefully in the future, we will find better ways to connect. I wish I had a guide when going in new directions and trying different aspects of AI - but such is the nature of what we do. 

by @dreamingtulpa