Hello there, my fellow dreamers, and welcome to issue #37 of AI Art Weekly! 👋
After last week's chock-full issue, which caused Apple's mail servers to flat-out block my ISP (sorry about that 🙈), we finally got a bit of a quieter week. This also got me thinking about trimming down the news section to a maximum of five main items per week to keep things more focused. So here are the highlights of the week:
- Gen-2 is now publicly available
- StabilityAI released Uncrop
- VideoComposer brings next level controllability to text-to-video
- ColorDiffuser lets you colorize videos
- Neuralangelo lets you turn videos into 3D objects
- AI Surrealism interviews with Jules Design & Kaeli VanFossen
- New Prompt generator, Stable Diffusion training guides & more
Cover Challenge 🎨
News & Papers
Gen-2 is now publicly available
Runway released Gen-2 on their platform, which lets you create text-to-video directly from within their website. Apparently everyone gets 90 seconds' worth of credits for free to test it out. Generation is quite fast, and it's definitely the easiest way to experience the magic of text-to-video at the moment. The release also lets you use image prompts, which wasn't possible during the closed Discord beta phase.
Clipdrop releases “Uncrop”
After the success of Adobe Photoshop's Generative Fill feature, StabilityAI released a new Clipdrop feature called Uncrop. The feature lets you outpaint images directly in the browser. I still prefer Photoshop right now, because it lets me inpaint as well. But if Clipdrop added a Unified Canvas feature similar to what InvokeAI offers, they would win me over.
VideoComposer: Compositional Video Synthesis with Motion Controllability
We've seen a few attempts at video controllability over the last few months. VideoComposer hits different. The model can combine multiple modalities such as text, sketch, style, and even motion to drive video generation. The results look amazing.
ColorDiffuser: Video Colorization with Pre-trained Text-to-Image Diffusion Models
ColorDiffuser brings the power of text-to-image models to video colorization. This not only makes it possible to colorize black-and-white footage, but also to recolorize videos. I wonder how this compares to the tech Peter Jackson used for his movie "They Shall Not Grow Old", in which he restored and colorized WWI footage shot between 1914 and 1918.
Neuralangelo: High-Fidelity Neural Surface Reconstruction
Creating 3D content is definitely much harder than creating 2D. But it will certainly get a lot easier with Neuralangelo, a novel framework by NVIDIA that lets you turn videos into high-fidelity 3D objects and scenes. With the announcement of Apple's Vision Pro AR/VR goggles, a lot of their success, apart from a version that us normies can or want to afford, will depend on the amount of content available. Generative AI might be the tipping point that finally makes this a viable product segment.
- HeadSculpt: Crafting 3D Head Avatars with Text
- ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image Collections
- USCD: Unsupervised Compositional Concepts Discovery with Text-to-Image Generative Models
- OmniMotion: Tracking Everything Everywhere All at Once
- SyncDiffusion: Coherent Montage via Synchronized Joint Diffusions
- ReliableSwap: Boosting General Face Swapping Via Reliable Supervision
In today's issue I have not one, but two interviews for you. So much for cutting down, I know 😅 But because the AI Surrealism exhibition is ongoing and has been extended until the 24th of June, Anna Dart, one of the curators of the exhibition, and I thought it would be a great opportunity to highlight a few of the 100 AI artists through AI Art Weekly interviews. So here we go!
Tools & Tutorials
These are some of the most interesting resources I’ve come across this week.
And that, my fellow dreamers, concludes yet another AI Art Weekly issue. Please consider supporting this newsletter by:
- Sharing it 🙏❤️
- Following me on Twitter: @dreamingtulpa
- Buying me a coffee (I could seriously use it, putting these issues together takes me 8-12 hours every Friday 😅)
Reply to this email if you have any feedback or ideas for this newsletter.
Thanks for reading and talk to you next week!