Hello there my fellow dreamers and welcome to issue #27 of AI Art Weekly! 👋
Things “slowed down” a bit this week without any major announcements, but we’ve still got a lot of great stuff to cover. So let’s keep this short and straight to the point. The highlights of this week are:
- HyperDiffusion can generate 3D and 4D shapes using a unified diffusion model
- Make-It-3D can create high-fidelity 3D content from just a single image
- Interview with artist @PurzBeats
- ModelScope Text-To-Video fine tuning
- Want to become a rock star? SoftVC VITS fork lets you train your own singing voice conversion model
Cover Challenge 🎨
Reflection: News & Gems
HyperDiffusion: Generating Implicit Neural Fields with Weight-Space Diffusion
We’ve seen a lot in the last 6 months, so it’s hard to get excited about new groundbreaking techniques. What we haven’t seen yet is a model that can generate 3D shapes and animate them simultaneously. Enter HyperDiffusion, a novel method that can generate 3D and 4D shapes using a unified diffusion model. Combined with the AvatarCraft paper below, this brings us a step closer to generating and animating 3D characters with just a few simple words 🤯
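The core trick in HyperDiffusion is treating an entire MLP (the implicit neural field) as a single sample: its weights are flattened into one vector that a diffusion model can denoise. A minimal sketch of that flatten/unflatten step, with hypothetical function names (the paper’s actual pipeline is of course far more involved):

```python
import numpy as np

def flatten_mlp(weights):
    """Flatten a list of weight matrices into one vector, remembering
    their shapes, so a diffusion model can treat the whole MLP as a
    single point in weight space."""
    shapes = [w.shape for w in weights]
    flat = np.concatenate([w.ravel() for w in weights])
    return flat, shapes

def unflatten_mlp(flat, shapes):
    """Invert flatten_mlp: slice the vector back into weight matrices."""
    out, i = [], 0
    for s in shapes:
        n = int(np.prod(s))
        out.append(flat[i:i + n].reshape(s))
        i += n
    return out

# toy two-layer "field": a denoised weight vector would be unflattened
# the same way before querying the MLP for the 3D/4D shape
weights = [np.ones((2, 3)), np.arange(4.0).reshape(4, 1)]
flat, shapes = flatten_mlp(weights)
restored = unflatten_mlp(flat, shapes)
```

The diffusion model then never sees voxels or meshes at all, only these weight vectors, which is what lets one unified model cover both 3D and animated 4D shapes.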
Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior
Make-It-3D is a new technique that creates high-fidelity 3D content from just a single image. The process involves estimating 3D geometry and hallucinating unseen textures. By leveraging a well-trained 2D diffusion model as 3D-aware supervision, Make-It-3D is able to generate textured point clouds with impressive visual quality that surpasses previous work and opens up new possibilities for text-to-3D creation and texture editing.
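Using a 2D diffusion model as “3D-aware supervision” generally means a score-distillation-style loop: render the 3D representation, add noise, and ask the 2D prior what noise it sees, then push the 3D parameters in the direction that makes renders look more plausible. A toy numpy sketch of that loop, with a mock denoiser and an identity “renderer” standing in for the real components (all names here are hypothetical, not Make-It-3D’s actual API):

```python
import numpy as np

rng = np.random.default_rng(0)

def mock_denoiser(noisy, t):
    # Stand-in for a pretrained 2D diffusion model's noise prediction.
    # This toy version nudges renders toward a flat gray "target" image.
    target = np.full_like(noisy, 0.5)
    return noisy - target

def sds_step(params, render_fn, lr=0.1):
    """One score-distillation update: render, add noise, query the 2D
    prior, and step the 3D parameters along the resulting gradient."""
    img = render_fn(params)
    t = rng.uniform(0.1, 0.9)              # random noise level
    eps = rng.normal(size=img.shape)
    noisy = img + t * eps
    eps_hat = mock_denoiser(noisy, t)
    grad = eps_hat - eps                   # SDS-style gradient on the render
    return params - lr * grad              # identity renderer: passes through

# toy "3D" parameters that render to themselves
params = rng.normal(size=(8, 8))
for _ in range(200):
    params = sds_step(params, lambda p: p)
# params drift toward the prior's preferred image (≈ 0.5 everywhere)
```

In the real method the renderer is differentiable 3D rendering and the denoiser is a full text-conditioned diffusion model, but the feedback loop has this shape.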
PAniC-3D: Stylized Single-view 3D Reconstruction from Portraits of Anime Characters
Last week I failed to turn my avatar into a 3D image, but the future looks bright. PAniC-3D is a system that reconstructs stylized 3D character heads directly from illustrated portraits of anime characters, using a line-filling model and a volumetric radiance field to bridge the gap between the illustration and 3D domains. The results look stunning, and although my PFP isn’t an anime illustration, I wonder how it would perform on it.
HQ3DAvatar: High Quality Controllable 3D Head Avatar
Speaking of 3D avatars, HQ3DAvatar is a new approach to creating highly photorealistic digital head avatars that can capture complex geometric details like mouth interiors, hair, and dynamic changes from a single video. The method outperforms existing approaches, can handle tricky facial expressions, and is able to render an avatar from different angles in real time. Pretty neat!
AvatarCraft: Transforming Text into Neural Human Avatars with Parameterized Shape and Pose Control
But we’re not done with 3D avatars yet: AvatarCraft is a novel method that can turn a single text prompt into a 3D human avatar with a specific identity and artistic style that’s easy to animate. Using diffusion models and an optimization framework, it generates high-quality geometry and texture for the avatar, and makes animating and reshaping it simple via an explicit warping field that controls pose and shape parameters. With impressive results, AvatarCraft shows great potential for creating and animating unique human avatars from just a text description. As Netflix is involved in this, I can see it being used in future TV shows and movie productions.
DiffCollage: Parallel Generation of Large Content with Diffusion Models
But enough avatars for this week; last but not least, there was DiffCollage. The method can generate arbitrarily large images and 360° panoramas, as well as extend human motion animations, all from diffusion models. DiffCollage uses a factor graph representation, allowing parallel generation of content of any size or shape without relying on an autoregressive procedure. Can’t wait to try combining this with the duct-taped collage algorithm I used for the cover of issue #23.
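The factor-graph idea, as I understand it, is to compose the score of a huge image from the scores of overlapping tiles, subtracting the doubly counted overlaps so every pixel is counted exactly once — which is what makes the tiles denoisable in parallel. A 1-D toy sketch with a generic `score_fn` (my own simplification, not the paper’s code; it assumes the tiling divides the signal evenly):

```python
import numpy as np

def collage_score(x, score_fn, tile, overlap):
    """Compose a global score for a long 1-D signal from tile scores:
    sum the scores of overlapping tiles, then subtract the scores of
    the overlap regions so each position contributes exactly once."""
    n = len(x)
    step = tile - overlap
    total = np.zeros(n)
    starts = list(range(0, n - tile + 1, step))
    for s in starts:                       # each tile is one factor
        total[s:s + tile] += score_fn(x[s:s + tile])
    for s in starts[1:]:                   # remove doubly counted overlaps
        total[s:s + overlap] -= score_fn(x[s:s + overlap])
    return total

# sanity check with a Gaussian score (score_fn = -x): the composed
# score should match the global score exactly
x = np.arange(10.0)
composed = collage_score(x, lambda v: -v, tile=4, overlap=2)
```

In the real method each tile score comes from a diffusion model evaluated in parallel, so a panorama or an extra-long motion sequence costs roughly one tile’s worth of wall-clock time per denoising step.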
Other interesting papers
- Grid-NeRF: Grid-guided Neural Radiance Fields for Large Urban Scenes. Basically Google or Apple 3D Maps on steroids.
- VIVE3D: Viewpoint-Independent Video Editing using 3D-Aware GANs. Similar to StyleGANEX, but I feel like results here are less convincing.
- GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents. A new method that can generate 3D gestures from speech and style descriptions. YouTube demo.
Imagination: Interview & Inspiration
In this week’s issue of AI Art Weekly, we talk to Purz.xyz, an artist known for his mesmerizing AI and procedurally generated animations. Not only a visual artist, Purz is also a musician, producing spellbinding music videos for both himself and other bands. Let’s plunge in!
[AI Art Weekly] Purz, what’s your background and how did you get into AI art?
With 25 years of experience as a drummer and music producer, I have consistently integrated visuals into live shows and served as a VJ for various gigs. Prior to the pandemic, I delved into projection installations, which subsequently led me to explore 3D design and generative art. My work has always heavily relied on style transfer techniques, so immersing myself in emerging AI technologies was a captivating progression. As a veteran AI animator, I have created thousands of animations using an array of AI animation systems, both released and unreleased. Having beta-tested for major AI companies, I have generated over 20,000 images and consistently remained at the forefront of AI technologies.
[AI Art Weekly] Do you have a specific project you’re currently working on? What is it?
At present, I am producing videos for several confidential projects, while also developing collections for my crypto art. Additionally, I am collaborating on new music with a few of my bands, eagerly anticipating multimedia releases, complete with full videos, in the near future.
[AI Art Weekly] What drives you to create?
TikTok suggests that I might have ADHD, which could explain my inability to sit still. I have found that channeling my energy into creation is the most productive way to utilize my restlessness. I am passionate about starting with a blank canvas and bringing something entirely new into existence. This drive likely stems from years of jamming with fellow musicians and producing songs, as the sensation of creating something unique is truly incomparable.
[AI Art Weekly] What does your workflow look like?
Due to my aphantasia, I generally don’t plan my work in advance. Instead, I dive right in by opening up an app and experimenting, or if I lack a clear idea, I’ll hone my skills by following a tutorial or exploring geometry or shader node methods in Blender. As an AI animator and AI audio enthusiast, I dedicate a significant amount of time to teaching and creating educational content, focusing on demystifying the technology for others and providing entry points for those who may not have access to the necessary resources.
Once a project takes shape, I iterate on it until it reaches a point where I no longer dislike it. I prefer to render and post-process within the same session, if possible. I have an extensive graveyard of unfinished projects, so I make an effort to render at least a loop or a meaningful element that can be repurposed later. This approach often results in my work being a blend of past samples combined with fresh ideas.
[AI Art Weekly] What is your favourite prompt when creating art?
I am particularly inspired by themes such as astral plane traveling, 1970s fashion, tech retrofuturism, and the illustrative style of ’70s sci-fi book cover art. Many of my prompts are derived from these areas, especially when I am trying to familiarize myself with new technologies, models, or checkpoints.
[AI Art Weekly] How do you imagine AI (art) will be impacting society in the near future?
The potential impact of AI art on our everyday lives is likely beyond our current comprehension. I am incredibly excited about the prospect of everyone having access to tools that can bring their ideas to life, regardless of their financial means. As someone deeply invested in education and community engagement, I look forward to witnessing the creative explosion that will undoubtedly result from increased accessibility to AI technologies.
[AI Art Weekly] Who is your favourite artist?
Miles Davis possessed a remarkable talent for recognizing the chemistry between individuals before they ever performed together. This intuitive ability is something I have long admired and sought to emulate, both within and beyond the realm of music. By carefully selecting the right samples, loops, and sounds, I strive to harmoniously blend these elements in service of the song. This approach extends to my visual creations, as I endeavor to craft environments and worlds that embody the same spirit of synergy and collaborative harmony that Miles so masterfully achieved.
[AI Art Weekly] Anything else you would like to share?
My fascination with computer graphics and on-screen illustrations has always been deeply rooted in my appreciation for old-school video games, television shows, and movies on VHS, as well as the charmingly inefficient and oversized technology of the past. Collaborating with machines has been an incredibly enjoyable experience for me, which is why I am so drawn to procedural generation systems and AI technologies. This dynamic partnership feels like a true team effort, where both human and computer work together, iterating and refining to produce something that neither could have achieved independently. The experience is akin to playing in bands, where each member contributes their unique talents to create a harmonious whole.
Creation: Tools & Tutorials
These are some of the most interesting resources I’ve come across this week.
And that, my fellow dreamers, concludes yet another AI Art Weekly issue. Please consider supporting this newsletter by:
- Sharing it 🙏❤️
- Following me on Twitter: @dreamingtulpa
- Buying me a coffee (I could seriously use it, putting these issues together takes me 8-12 hours every Friday 😅)
- Leaving a Review on Product Hunt
- Using one of our affiliate links at https://aiartweekly.com/support
Reply to this email if you have any feedback or ideas for this newsletter.
Thanks for reading and talk to you next week!