AI Art Weekly #37
Hello there, my fellow dreamers, and welcome to issue #37 of AI Art Weekly! 👋
After last week's chock-full issue, which caused Apple's mail servers to flat-out ISP block me (sorry about that 🙈), we finally got a bit of a quieter week. This also got me thinking about trimming down the news section to a maximum of five main items per week to keep things more focused. So here are the highlights of the week:
- Gen-2 is now publicly available
- StabilityAI released Uncrop
- VideoComposer brings next level controllability to text-to-video
- ColorDiffuser lets you colorize videos
- Neuralangelo lets you turn videos into 3D objects
- AI Surrealism interviews with Jules Design & Kaeli VanFossen
- New prompt generator, Stable Diffusion training guides & more
Cover Challenge 🎨
The theme for next week's cover is summer vibes. The reward is another $50. The rulebook can be found here and images can be submitted here. Come join our Discord to talk challenges. I'm looking forward to your submissions 🙏
This week's style prompt was inspired by the theme of the cover challenge above: new poster of <subject>, alex katz, hypercolorful, mural painting, óscar domínguez, kay sage
News & Papers
Gen-2 is now publicly available
Runway released Gen-2 on their platform, which lets you create text-to-video directly from within their website. Apparently everyone gets 90 seconds' worth of credits for free to test it out. Generation is quite fast, and it's definitely the easiest way to experience the magic of text-to-video at the moment. The release also lets you use image prompts, which wasn't possible during the closed Discord beta phase.
Clipdrop releases “Uncrop”
After the success of Adobe Photoshop's Generative Fill feature, StabilityAI released a new Clipdrop feature called Uncrop. The feature lets you outpaint images directly within the browser. I still prefer Photoshop right now, because it lets me inpaint as well. But if Clipdrop added a Unified Canvas feature similar to what InvokeAI offers, they would win me over.
VideoComposer: Compositional Video Synthesis with Motion Controllability
We've seen a few attempts at video controllability over the last few months. VideoComposer hits different. The model lets you combine multiple modalities like text, sketches, style, and even motion to drive video generation. The results look amazing.
ColorDiffuser: Video Colorization with Pre-trained Text-to-Image Diffusion Models
ColorDiffuser brings the power of text-to-image models to video colorization. This not only makes it possible to colorize black-and-white footage, but also to recolorize videos. I wonder how it compares to the tech Peter Jackson used for his movie “They Shall Not Grow Old”, in which he restored and colorized WWI footage shot between 1914 and 1918.
Neuralangelo: High-Fidelity Neural Surface Reconstruction
Creating 3D content is definitely much harder than creating 2D. But it certainly will get a lot easier with Neuralangelo, a novel framework by NVIDIA that lets you turn videos into high-fidelity 3D objects and scenes. With the announcement of Apple's Vision Pro AR/VR goggles, a lot of their success, apart from a version us normies can or want to afford, will depend on the amount of content available. Generative AI might be the tipping point that finally makes this a viable product segment.
More gems
- HeadSculpt: Crafting 3D Head Avatars with Text
- ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image Collections
- USCD: Unsupervised Compositional Concepts Discovery with Text-to-Image Generative Models
- OmniMotion: Tracking Everything Everywhere All at Once
- SyncDiffusion: Coherent Montage via Synchronized Joint Diffusions
- ReliableSwap: Boosting General Face Swapping Via Reliable Supervision
@0xFramer shared a thread in which he goes through the process of bringing AI pictures to life.
@alon_farchy made a Unity plugin to generate UI for his game. The tool lets him build UI with ChatGPT-like conversations.
@MartinNebelong showcased how he used Dreams for PS5 to sculpt and animate characters and then used the scene as video input to transform it with Gen-1. Looks like fun!
@paultrillo created the most fun Gen-2 short I’ve seen all week. Feels like a TV ad that could play in a @panoscosmatos movie in the background.
Interviews
In today's issue I have not one, but two interviews for you. So much for cutting down, I know 😅 But because the AI Surrealism exhibition is ongoing and got extended until the 24th of June, Anna Dart, one of the curators of the exhibition, and I thought it would be a great opportunity to highlight a few of the 100 AI artists through AI Art Weekly interviews. So here we go!
Tools & Tutorials
These are some of the most interesting resources I’ve come across this week.
@fofrAI built a prompt generator that lets you generate a lot of prompts at once based on precompiled lists.
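To give a rough idea of what such a tool does under the hood, here's a minimal Python sketch. This is my own toy version, not @fofrAI's actual code, and the word lists are placeholder examples:

```python
import random

# Precompiled word lists (placeholder examples, not from the actual tool).
subjects = ["a lighthouse at dusk", "a street market", "a koi pond"]
artists = ["alex katz", "óscar domínguez", "kay sage"]
styles = ["hypercolorful", "mural painting", "surrealism"]

def generate_prompts(n: int = 5) -> list[str]:
    """Sample n prompts by combining one random entry from each list."""
    return [
        f"new poster of {random.choice(subjects)}, "
        f"{random.choice(artists)}, {random.choice(styles)}"
        for _ in range(n)
    ]

for prompt in generate_prompts():
    print(prompt)
```

The real tool is of course more elaborate, but the core idea is the same: a few curated lists multiply into thousands of prompt combinations.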
I’m a big fan of @spiritform’s custom embeddings and he was so kind as to write a great community post on how to train your own using the Stable Diffusion WebUI.
@romero_erzede shared a LoRA training guide for Stable Diffusion 1.5 and 2.1. Although it focuses on training passages, the guide gives a great basic overview of what's important if you want to train a LoRA with your own concepts.
After coming across ColorDiffuser above, I went looking for a solution that's already available, and I found @DeOldify. The project lets you colorize and restore old images and film footage.
@ArtVenture_ created an Automatic1111/Vladmandic plugin that lets you queue multiple tasks, tweak prompts & models on the fly, monitor tasks in real time, and re-prioritize, stop, or delete them.
And that, my fellow dreamers, concludes yet another AI Art Weekly issue. Please consider supporting this newsletter by:
- Sharing it 🙏❤️
- Following me on Twitter: @dreamingtulpa
- Buying me a coffee (I could seriously use it, putting these issues together takes me 8-12 hours every Friday 😅)
Reply to this email if you have any feedback or ideas for this newsletter.
Thanks for reading and talk to you next week!
– dreamingtulpa