AI Art Weekly #17
How’s it going, my fellow dreamers, and welcome to issue #17 of AI Art Weekly 👋
I was busy this week migrating the newsletter from Revue to my own distribution solution. Didn’t want to go down this road at first, but I wasn’t satisfied with any of the existing products. So, here we are 🤷‍♂️
Things might look a little wonky, as I didn’t have the time to test each and every email client. Please tell me if that’s the case.
I’m also sending from a new address, so please make sure to add email@example.com to your contacts list so my emails don’t land in your spam folder.
And just a quick heads up that there will be no issue next week. I’m taking a week off and will be back on the 27th of January with all the latest AI art gossip.
With that out of the way, let’s jump in. Highlights of the week are:
- VALL-E text-to-speech
- Diffused Heads: first animated faces with image diffusion
- Interview with AI artist illustrata
- First Tune-A-Video implementation
- State of the Art Pixel Art models
Cover Challenge 🎨
This week’s “emotions” challenge got 61 submissions from 37 artists, and the community decided on the final winner. Congratulations to second-time winner @arvizu_la for creating another beautiful cover 🥳 And as always, a big thank you to everyone who contributed!
The theme for the next challenge is “dualism”. Think mind-body, light-dark, good-evil, nature-culture, self-other, idealism-materialism, free-will-determinism and so on. Because I’m taking next week off, this challenge will run for two weeks instead of one and the prize is doubled to $100. The rulebook can be found here and images can be submitted here.
I’m looking forward to all of your submissions 🙏
If you want to support the newsletter, this week’s cover is available for collection on objkt as a limited edition of 10 for 3ꜩ a piece. Thank you for your support 🙏😘
Reflection: News & Gems
Microsoft introduced a new text-to-speech model called VALL-E this week. The model is capable of mimicking voices and synthesizing speech from text with only 3 seconds of audio input. That’s a pretty crazy advancement, as previous models like tortoise-tts required an audio clip of at least around 60 seconds to achieve consistent results. Granted, we haven’t seen how the model performs on long-form content yet. VALL-E is capable of maintaining a speaker’s emotion and environmental acoustics from the input recording, as well as generating different takes when prompted with a different seed.
My favourite find of the week.
Diffused Heads is the first method successfully using a diffusion model to generate talking faces. Given a single identity frame and an audio clip containing speech, the model samples consecutive frames in an autoregressive manner, preserving the identity, and modeling lip and head movement to match the audio input. Contrary to other methods, no additional guidance is required.
Although the model is somewhat limited to short sequences, as facial and pose animation cannot be maintained over a longer period of time without additional input, the results are still very impressive.
We’ve already seen the first image-to-3D avatar methods last year, so this isn’t something completely new. But apart from higher-quality avatars, the exciting part about 3DAvatarGAN is its ability to modify different facial features of generated avatars on the fly, like hair, eyebrows, cheeks, smile and head shape. Scroll down to the bottom of the project page to find some interesting examples.
@daveranan is working on a short film titled Daedalus. All visuals for this scene have been generated using Midjourney. Can’t wait to see the final cut.
@nptacek is creating 3D environments with Stable Diffusion by prompting terms like “3d panorama|stereoscopic Stereo Equirectangular”. Check out the tutorial thread for more details.
@rainisto is killing it lately with his WarpFusion renderings of popular music videos. WarpFusion is definitely on my list of things to experiment with in 2023.
Imagination: Interview & Inspiration
[AI Art Weekly] Hey illustrata, what’s your background and how did you get into AI art?
Although not professionally trained, I’ve always been drawn to the world of art, and I’ve enjoyed experimenting with different mediums over the years. In 2021, while browsing TikTok, I came across a stunning image created with AI. I was completely enthralled and knew I had to give it a try. I spent some time researching different tools and eventually stumbled upon Neural Blender, Night Cafe, and Artbreeder. As I delved deeper into this new world, I was struck by the strange, abstract creations that emerged from my prompts. It was as if I had finally found my true calling. Since then, I’ve been fully immersed in the world of AI art, constantly seeking out new ways to challenge and expand my abilities.
[AI Art Weekly] Do you have a specific project you’re currently working on? What is it?
Currently, I am working on a project for an upcoming Nifty Gateway drop, curated by Ivona Tau (@ivonatau). The theme is post-photography, and I am thrilled to be part of such a talented group of artists. In addition to this, I am also considering my next solo drop and have a few ideas swirling around in my head, though nothing has been set in stone just yet.
[AI Art Weekly] Do you have any tips on creating one’s first art drop?
When starting out, I recommend launching a small collection on objkt as it tends to be less intimidating for beginners. Keep in mind that it’s rare for a first collection to sell out immediately, even if the artist is well-known in their community. Instead, focus on building relationships and creating high-quality pieces that are cohesive and well-done. Don’t get bogged down in trying to make everything perfect, as it’s better to get something good out there than to delay indefinitely. When it comes to determining what your first collection will be, keep in mind social media likes can be fickle - in my experience they are not the best indicators of what will sell. It’s more important that you are proud and excited about what you are putting out.
[AI Art Weekly] What does your workflow look like?
As an artist, my inspiration comes from a variety of sources - lyrics, poetry, and lines from books are just a few examples. When I first started exploring the world of AI art, these were the primary influences for my work. In fact, my name is a nod to this, as I was essentially “illustrating” my favorite words. However, as I’ve grown and evolved as an artist, I’ve come to find inspiration in the results of the AI itself. I enjoy the randomness and unpredictability of the AI, and use it as a springboard for my own creativity.
In terms of tools, I’ve tried my hand at many different ones over the years. Lately, I’ve been particularly drawn to Midjourney. Version 4 has really impressed me with its quality. While I still create animations in SD from time to time, I also have custom fine-tuned models that I plan to utilize more in the future.
My workflow is generally a process of exploration and experimentation. I like to brainstorm and play around with ideas, and often come across something magical that I then develop further. As for post-processing, I usually stick to minimal edits such as color correction in Lightroom and occasionally some minor tweaks in Photoshop. When it comes to upscaling, I rely on Topaz Labs Photo AI. Overall, my approach to creating AI art is all about embracing the unknown and letting the process guide me.
[AI Art Weekly] What is your favourite prompt when creating art?
One of my all-time favorite prompts is “expired film”. I’ve used this term in a number of my pieces, even in works that don’t necessarily resemble photographs. I find that it gives my art a certain something that I just can’t get enough of.
[AI Art Weekly] How do you imagine AI (art) will be impacting society in the near future?
In the near future, I believe AI art will become more and more mainstream. TikTok’s popular filters are certainly helping to make this happen. However, with the growing popularity of AI art comes a certain level of misunderstanding and fear. I’ve seen some disturbing conversations online about people using AI filters to detect “evil spirits” in their homes based on the results the AI produces. It’s not uncommon for people to be fearful of what they don’t understand, but it’s important that we work to educate and demystify these technologies in an accessible way. As AI art continues to evolve and become more prevalent in society, it’s crucial that we do our best to foster a deeper understanding of these tools and their capabilities.
[AI Art Weekly] Who is your favourite artist?
I have a deep appreciation for the work of so many talented individuals. When it comes to traditional artists, I am particularly drawn to the work of John William Waterhouse, Zinaida Serebriakova, Dora Maar, Tom Bagshaw, and Mark Ryden. Each of these artists has a unique style and vision, and I am constantly inspired by their work.
In the world of AI art, there are so many talented artists that it’s hard to pick just a few favorites. However, I have to give a shoutout to the talented individuals in the AIIA (AI Infused Art) and Tez Girls communities. These artists are constantly pushing the boundaries of what’s possible with AI art, and I am always in awe of their creations. Overall, I am grateful to be part of such a vibrant and talented community of artists.
[AI Art Weekly] Anything else you would like to share?
Thank you for giving me the opportunity to share my thoughts and experiences as an AI artist. I am constantly inspired by the endless creative possibilities that this medium offers, and I am always seeking new ways to challenge and expand my abilities. AI art allows me to bring my wildest creative visions to life from the strange and surreal to the familiar and nostalgic. It’s an honor to be able to share my work and my perspective with your audience, and I am grateful for the chance to connect with others who share my passion for this exciting and evolving field. Thank you for your interest in my work and for the opportunity to share my thoughts!
Each week we share a style that produces some cool results when used in your prompts.
@weird_momma_x shared an interesting find this week: adding the words “art brut” and/or “outsider art” to your prompts.
Technically not a style, but still a fun experiment to explore with your traditional prompts.
Creation: Tools & Tutorials
These are some of the most interesting resources I’ve come across this week.
A few weeks ago I wrote about the Tune-A-Video paper, a method that is able to generate a video from a video-text pair as input. bryandlee on GitHub has now released a first, unofficial but promising implementation of the paper.
Reddit user u/UnavailableUsername_ put together a visual Automatic1111 WebUI guide for stepping up your Stable Diffusion art generation game. The guide covers model merging, prompt weighting, matrices, prompt transformation and more.
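To give a feel for what prompt weighting means in practice: the WebUI lets you scale a phrase’s influence on the image with the `(token:weight)` syntax. As a rough illustration only (this is not the WebUI’s actual parser, which also handles nesting and the `(token)`/`[token]` shorthands), here is a minimal sketch of splitting such a prompt into weighted chunks:

```python
import re

def parse_weights(prompt):
    """Split an A1111-style prompt into (text, weight) chunks.

    Handles only the flat `(token:weight)` form plus plain text;
    unweighted text defaults to a weight of 1.0.
    """
    chunks = []
    pos = 0
    for m in re.finditer(r"\(([^():]+):([0-9.]+)\)", prompt):
        # Plain text before the weighted group keeps the default weight
        text = prompt[pos:m.start()].strip(", ")
        if text:
            chunks.append((text, 1.0))
        chunks.append((m.group(1), float(m.group(2))))
        pos = m.end()
    tail = prompt[pos:].strip(", ")
    if tail:
        chunks.append((tail, 1.0))
    return chunks

print(parse_weights("a portrait, (expired film:1.3), soft light"))
# → [('a portrait', 1.0), ('expired film', 1.3), ('soft light', 1.0)]
```

The WebUI then scales each chunk’s embedding by its weight before conditioning the diffusion process, which is why `(expired film:1.3)` pushes the image harder toward that aesthetic than the plain phrase would.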
If you’re into Pixel Art, this is by far the best model I’ve found so far. But it comes with some caveats. First of all, it only works on Windows as an Aseprite extension, as there are some additional features built on top of it that make it this good. Second of all, it costs $65. Now, if you’re serious about creating Pixel Art, that price tag shouldn’t be an issue and is certainly a good investment in my opinion.
Now if you don’t want to shell out money for a fine-tuned Pixel Art model, there is PXL8. The generated examples look fantastic as well and PXL8 comes with an extension for the Automatic1111 WebUI.
@ErtoemeArt stumbled upon a useful MJv4 artist reference spreadsheet which covers a lot of different topics like characters, landscapes, paintings, anime and more. Best used after duplicating the sheet to your own GDrive.
And that, my fellow dreamers, concludes yet another AI Art Weekly issue. Please consider supporting this newsletter by:
- Sharing it 🙏❤️
- Following me on Twitter: @dreamingtulpa
- Leaving a Review on Product Hunt
- Using one of our affiliate links at https://aiartweekly.com/support
Reply to this email if you have any feedback or ideas for this newsletter.
Thanks for reading and talk to you next week!