AI Art Weekly #19

Hi fellow dreamers and welcome to Issue #19 of AI Art Weekly! 👋

This week was pretty intense in terms of new paper and model releases. We didn’t just get one, two, or three new audio models, but SEVEN! Excuse my language, but what the heck! 😄

And if that’s not enough, there were a lot of other mind-blowing things that came out this week. This is the most packed AI Art Weekly issue to date. So, without further ado, let’s dive in. This week is full of new and exciting things:

  • Absolutely mind-blowing audio and music models
  • Consistent video-editing and video-generation models
  • New aspect ratios for Midjourney v4
  • Interview with MetaMushrooms

Cover Challenge 🎨

The challenge for this week’s cover was “patterns” and we received 35 submissions from 21 artists. As usual, the community decided on the final winner with a stunning 340 votes. And these are the results:

  1. @arvizu_la 🏆
  2. @mertcobanov 🥈
  3. @spiritform 🥉
  4. @weird_momma_x ❤️

Congratulations to @arvizu_la for winning the cover art challenge for a THIRD time 🤯🎉 I’ve no idea how you do it, but you do it well 🥳. And as always a big thank you to everyone who contributed!

Inspired by today’s interview, the next challenge is all about exploring the fascinating kingdom of “fungus”. From mushrooms to molds, anything related to fungi is up for grabs. Get creative! The reward is another $50. The rulebook can be found here and images can be submitted here.

I’m looking forward to all of your trippy submissions 🙏


Reflection: News & Gems

The Week of Audio models

As mentioned in the introduction, this week was crazy when it comes to audio model releases. We got seven new shiny papers or models with actual code released. Writing a synopsis for each and every one of them would take me forever, so I’m just going to list them and summarize what they could be used for in one sentence each, with the ones I find most interesting at the top.

  • MusicLM is a text-to-music model by Google which is able to generate long-form as well as chained-sequence clips. The accompanying MusicCaps dataset contains 5,521 music examples, each labeled with an English aspect list and a free-text caption written by musicians.
  • AudioLDM is able to generate both audio and music from text prompts. The examples are mind-blowing.
  • SingSong is a model that is able to add instrumental music to accompany input vocals. You sing, the model adds the instrumentals. Super crazy 🤯
  • Make-An-Audio is a model that is able to generate audio, like a horse galloping, a cat meowing, fireworks popping. If that’s not enough, the model can also inpaint corrupted audio data and generate audio clips from images and videos.
  • Noise2Music is able to generate high-quality 30-second music clips from text prompts.
  • RAVE2 – A variational autoencoder for fast and high-quality neural audio synthesis. I have no energy left to research this one, but there are some demos at the bottom of the GitHub README 😅
  • audio-diffusion-pytorch – A fully featured audio diffusion library for PyTorch. Includes models for unconditional audio generation, text-conditional audio generation, diffusion autoencoding, upsampling, and vocoding (see the usage sketch right after this list).
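For a rough idea of how a library like audio-diffusion-pytorch is used in practice, here is a minimal training-and-sampling sketch based on my reading of the project README. The class name and arguments are assumptions and may differ between versions, so treat this as an illustration rather than a copy-paste recipe.

```python
# Rough sketch of unconditional audio diffusion with audio-diffusion-pytorch.
# The AudioDiffusionModel interface below follows the project README as I
# understand it; exact class names and defaults may differ between versions.
import torch
from audio_diffusion_pytorch import AudioDiffusionModel

model = AudioDiffusionModel(in_channels=1)  # mono waveforms

# Training step: the model learns to denoise raw audio directly.
x = torch.randn(2, 1, 2**18)  # [batch, channels, samples], roughly 12s at 22.05 kHz
loss = model(x)
loss.backward()  # repeat over a real waveform dataset

# Sampling: start from pure noise and iteratively denoise it into audio.
noise = torch.randn(2, 1, 2**18)
audio = model.sample(noise=noise, num_steps=50)  # [2, 1, 2**18]
```

The same library also exposes text-conditional, upsampling, and vocoder models, which follow a similar train-then-sample pattern.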

MAV3D: Text-To-4D Dynamic Scene Generation

As if all those impressive audio and music models above weren’t enough, MAV3D comes along. The model by @ShellySheynin and the Meta AI team is able to generate animated 3D scenes from a pure text or image prompt. I listened to an Emad Mostaque interview this week in which he states that the world (and he himself) is not ready to comprehend the technological progress that lies ahead, and I think he most certainly is right. Please check out the examples and tell me I’m not dreaming.

Make-A-Video3D example

GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis

Remember ANGIE? The model that is able to take a single input image and audio file and generate a video of a talking and gesturing person? GeneFace is similar (minus the gestures), but looks extremely impressive as well.

GeneFace comparison with other methods

Text-Video-Edit: Shape-aware Text-driven Layered Video Editing

We’re probably all longing for more consistency when it comes to generating animations. Text-Video-Edit isn’t able to generate videos from text prompts alone, but it can edit the shapes of objects within existing video frames. The results look like they’re from the PlayStation 2 era, but the consistency is amazing 😙🤌. We’re definitely getting closer.

Text-Video-Edit example

Dreamix: Video Diffusion Models are General Video Editors

Because one video-editing method is not enough, here is another mind-blowing model called Dreamix. Given an input video and a text prompt, Dreamix is able to edit the video while maintaining fidelity to color, posture, object size, and camera pose, resulting in a temporally consistent video (minus the PlayStation 2 aesthetics).

Dreamix turtle example

SceneScape: Text-Driven Consistent Scene Generation

Speaking of consistency, take a look at SceneScape. This method is able to synthesize long videos of arbitrary scenes solely from an input text describing the scene and camera poses – with great consistency. I love these zoom-out effects and I can’t wait to get my hands on this.

SceneScape example

SceneDreamer: Unbounded 3D Scene Generation from 2D Image Collections

But what if we were not only able to generate an endlessly zooming-out corridor, but an endless 3D landscape? You know, similar to what Minecraft does when generating a new world, but based on a few input images. Enter SceneDreamer. The model is able to generate unbounded 3D scenes from in-the-wild 2D image collections. Our personal 3D worlds just got a step closer.

SceneDreamer styles and scenes example

New Midjourney aspect ratios

And if all of the above wasn’t enough, Midjourney finally added more dynamic aspect ratios to their v4 and Niji models. The aspect ratio limit has been raised to 2:1 for landscape and 1:2 for portrait. All aspect ratios from square up to 2:1 are supported, but the output will be rounded to a resolution divisible by 32 pixels for improved composition and quality. And the results look amazing. Now can we please get an outpainting feature as well? 😅
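To make that rounding concrete, here is a tiny illustrative helper that snaps a requested aspect ratio to dimensions divisible by 32. The 1024 px short side is a made-up default for the sake of the example; Midjourney’s actual internal resolutions and rounding rule aren’t documented in detail.

```python
# Illustrative only: one way to snap a requested aspect ratio to output
# dimensions divisible by 32. The 1024px short side is a made-up default,
# not Midjourney's documented internal resolution.
def snap_to_multiple_of_32(aspect_w: int, aspect_h: int, short_side: int = 1024) -> tuple[int, int]:
    # Scale the long side according to the ratio, then round each dimension
    # down to the nearest multiple of 32.
    if aspect_w >= aspect_h:
        w, h = short_side * aspect_w / aspect_h, float(short_side)
    else:
        w, h = float(short_side), short_side * aspect_h / aspect_w
    return int(w) // 32 * 32, int(h) // 32 * 32

print(snap_to_multiple_of_32(2, 1))   # (2048, 1024)
print(snap_to_multiple_of_32(16, 9))  # (1792, 1024) after rounding down to /32
```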

“idyllic forest cliff, made in abyss, by hayao miyazaki, concept art –ar 2:1” using MJv4


Imagination: Interview & Inspiration

In this week’s edition of AI Art Weekly, we talk to @MetaMushrooms, a multimedia artist famous for his 1970s-inspired trippy art style that focuses mainly on mushrooms. The cover art challenge for this week was inspired by his work and I’m thrilled that he agreed to be interviewed for this issue. Enjoy!

[AI Art Weekly] What’s your background and how did you get into AI art?

I have a background in music and computers and have been interested in AI art for a few years now. I began by experimenting with various AI algorithms, playing with their generated output. The potential of AI and the possibilities it presents for creating beautiful and expressive works of art fascinated me. After several experiments, I decided to delve deeper into the subject and began researching and studying AI art more seriously. My current focus is creating AI art using Generative Adversarial Networks (GANs) and exploring AI’s creative potential.

“THE ARTIST’S STUDIO” by @MetaMushrooms. Watch the full version with sound.

[AI Art Weekly] Do you have a specific project you’re currently working on? What is it?

The Magic Bus is a rare, limited edition Digital Art Collectible Series that features digital artwork of vintage camper vans and buses. Everyone was “on the road” at some point in their lives, and this is a reminiscence of that time. The series will include both static and animated versions, as well as collaborations with known artists in the Digital Collectible Art Space. Collectors will also enjoy non-exclusive rights to the IP and are allowed to sell the image as physical merchandise.

“Magic Bus” by @MetaMushrooms

[AI Art Weekly] What does your workflow look like?

I am an improviser. I jam on my equipment until I get into the flow and then allow my creativity to freely explore and see where it takes me.

I use a variety of tools like Photoshop, cameras, After Effects, and AI to create a unique look each time.

[AI Art Weekly] What is your favourite prompt when creating art?

Mushrooms are my favorite prompts, and I enjoy trying every possible combination with them, because I’m “MetaMushrooms”, it’s my brand! Sometimes my pieces do not contain mushrooms, but that is rare.

“KANDINSKY’S DREAM” by @MetaMushrooms. Watch the full version with sound.

[AI Art Weekly] Why mushrooms?

I have a holistic approach towards life, and this extends to my views on mushrooms. I see them as an integral part of who we are, as we all have fungal elements present within us, from our origins in the universe to our eventual passing. Fungi should not be overlooked or disregarded, as they have their own unique kingdom. Of course, I also acknowledge and appreciate the role of mushrooms as a powerful tool for opening the mind and inducing spiritual experiences through their use as medicine. They are all around us, constantly present in the air, our food, and even on our skin, though often so tiny that we cannot see them. But despite their small size, they have a profound impact on our lives. I understand that I may be preaching to the choir here, but I cannot stress enough the importance and sacredness of these fascinating organisms.

[AI Art Weekly] Can you recommend any resources around that topic?

Sure:

If you’re not familiar with any of that, it’s a good place to start for where I’m coming from. I was on the road with Ken in the mid 1990s.

“tiny bus tickets” by @MetaMushrooms

[AI Art Weekly] How do you imagine AI (art) will be impacting society in the near future?

AI has the power to tap into the innate artistic ability of those who have never received formal training, unlocking new avenues of self-expression which might bring them immense joy and satisfaction.

[AI Art Weekly] Who is your favourite artist?

Jerry Garcia was a visionary in the realm of sound and digital art during the mid-1990s, constantly pushing the boundaries of what was possible. He was at the forefront of experimentation, using technology to create unique and captivating works of art. It’s well worth checking out Garcia’s work and exploring his contribution to the field.

“New York at Night” by Jerry Garcia

[AI Art Weekly] Anything else you would like to share?

Let’s have fun and spread joy, love, and art! We’re not performing brain surgery, so let’s appreciate the opportunity to create and connect through art. To quote the great Bill and Ted: “Be EXCELLENT to each other”. This is a chance for us to come together and build a supportive community through the power of creativity.


Creation: Tools & Tutorials

These are some of the most interesting resources I’ve come across this week.

“from our origins in the universe to our eventual passing, ethereal mushroom, trippy painting by jerry garcia” by me

And that, my fellow dreamers, concludes yet another AI Art Weekly issue. Please consider supporting this newsletter by:

Reply to this email if you have any feedback or ideas for this newsletter.

Thanks for reading and talk to you next week!

– dreamingtulpa

by @dreamingtulpa