3D Avatar Generation
Free AI tools for 3D avatar generation: create personalized characters for games, animation, and virtual experiences.
DreamWaltz-G can generate high-quality 3D avatars from text and animate them using SMPL-X motion sequences. It improves avatar consistency with Skeleton-guided Score Distillation and is useful for human video reenactment and creating scenes with multiple subjects.
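The guidance idea named here, Skeleton-guided Score Distillation, builds on standard Score Distillation Sampling (SDS) with a pose-conditioned denoiser. Below is a minimal sketch of that general pattern; the `diffusion` interface (`add_noise`, `predict_noise`, `alphas_cumprod`) and the other names are illustrative assumptions, not DreamWaltz-G's actual code.

```python
# Sketch of one skeleton-guided SDS step, assuming a generic diffusion wrapper.
import torch

def skeleton_guided_sds_loss(render_fn, avatar_params, diffusion, text_emb,
                             skeleton_map, num_train_timesteps=1000):
    """Render the avatar, noise the render, and score it with a
    pose-conditioned diffusion model against the text prompt."""
    image = render_fn(avatar_params)                     # (1, 3, H, W), differentiable render
    t = torch.randint(20, num_train_timesteps, (1,), device=image.device)
    noise = torch.randn_like(image)
    noisy = diffusion.add_noise(image, noise, t)         # forward diffusion q(x_t | x_0)

    with torch.no_grad():
        # Denoiser conditioned on a rendered skeleton map (ControlNet-style pose guidance).
        noise_pred = diffusion.predict_noise(noisy, t, text_emb, control=skeleton_map)

    w = (1 - diffusion.alphas_cumprod[t]).view(-1, 1, 1, 1)  # standard SDS weighting
    grad = w * (noise_pred - noise)
    # Stop-gradient trick: this loss's gradient w.r.t. `image` equals `grad`.
    return (grad.detach() * image).sum()
```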
RodinHD can generate high-fidelity 3D avatars from a portrait image. The method is able to capture intricate details such as hairstyles and can generalize to in-the-wild portrait input.
MeshAvatar can generate high-quality triangular human avatars from multi-view videos. The avatars can be edited, manipulated, and relit.
InstructHumans can edit the textures of existing 3D human avatars using text prompts. It maintains avatar consistency and keeps the edited avatars easy to animate.
AiOS can estimate human poses and shapes in one step, combining body, hand, and facial expression recovery.
SplattingAvatar can generate photorealistic real-time human avatars using a mix of Gaussian Splatting and triangle mesh geometry. It achieves over 300 FPS on modern GPUs and 30 FPS on mobile devices, allowing for detailed appearance modeling and various animation techniques.
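The core trick in this kind of hybrid representation is embedding each Gaussian on the triangle mesh, so mesh deformation (e.g. skinning) carries the splats along. The sketch below shows one plausible way to compute Gaussian centers from a triangle id, barycentric coordinates, and a normal offset; the tensor names and surrounding pipeline are assumptions, not SplattingAvatar's API.

```python
# Sketch: place Gaussian centers on a (possibly deformed) triangle mesh.
import torch

def gaussian_centers_on_mesh(verts, faces, tri_ids, barys, offsets):
    """
    verts:   (V, 3) posed mesh vertices
    faces:   (F, 3) triangle vertex indices
    tri_ids: (N,)   triangle each Gaussian is attached to
    barys:   (N, 3) barycentric coordinates (rows sum to 1)
    offsets: (N, 1) signed displacement along the face normal
    returns  (N, 3) Gaussian centers in world space
    """
    tri = verts[faces[tri_ids]]                      # (N, 3, 3) triangle corners
    base = (barys.unsqueeze(-1) * tri).sum(dim=1)    # barycentric interpolation
    n = torch.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0], dim=-1)
    n = torch.nn.functional.normalize(n, dim=-1)     # face normal
    return base + offsets * n
```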
GALA can take a single-layer clothed 3D human mesh and decompose it into complete multi-layered 3D assets. The resulting layers can then be combined with other assets to create new clothed human avatars in any pose.
En3D can generate high-quality 3D human avatars from 2D images without needing existing assets.
RelightableAvatar can create relightable and animatable neural avatars from monocular video.
ASH can render photorealistic and animatable 3D human avatars in real time.
Relightable Gaussian Codec Avatars can generate high-quality, relightable 3D head avatars that show fine details like hair strands and pores. They work well in real-time under different lighting conditions and are optimized for consumer VR headsets.
Head360 can generate a parametric 3D full-head model that can be viewed from any angle. It works from a single portrait image and lets you change expressions and hairstyles quickly.
TECA can generate realistic 3D avatars from text descriptions. It combines traditional 3D meshes for faces and bodies with neural radiance fields (NeRF) for hair and clothing, allowing for high-quality, editable avatars and easy feature transfer between them.
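Hybrid pipelines like this typically render the NeRF layer (hair, clothing) and alpha-composite it over the rasterized mesh. Here is a minimal sketch of that compositing step under standard volume-rendering assumptions; the function names and shapes are illustrative, not TECA's code.

```python
# Sketch: accumulate a NeRF layer along rays, then composite it over a mesh render.
import torch

def accumulate_nerf(sigmas, rgbs, deltas):
    """
    sigmas: (R, S) densities per sample, rgbs: (R, S, 3), deltas: (R, S) step sizes.
    Returns per-ray color and opacity via the standard volume rendering quadrature.
    """
    alphas = 1.0 - torch.exp(-sigmas * deltas)
    trans = torch.cumprod(torch.cat([torch.ones_like(alphas[:, :1]),
                                     1.0 - alphas + 1e-10], dim=1), dim=1)[:, :-1]
    weights = alphas * trans
    rgb = (weights.unsqueeze(-1) * rgbs).sum(dim=1)      # (R, 3)
    alpha = weights.sum(dim=1, keepdim=True)             # (R, 1)
    return rgb, alpha

def composite_nerf_over_mesh(nerf_rgb, nerf_alpha, mesh_rgb):
    """Over-compositing: the NeRF layer (hair, clothing) sits in front of the
    textured face/body mesh. Shapes: (..., 3), (..., 1), (..., 3)."""
    return nerf_rgb + (1.0 - nerf_alpha) * mesh_rgb
```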
X-Avatar can capture the full expressiveness of digital humans for lifelike experiences in telepresence and AR/VR. It works from full 3D scans or RGB-D data, outperforms other methods in animation tasks, and comes with a new dataset of 35,500 high-quality frames.
EVA3D can generate high-quality 3D human models from 2D image collections. It uses a method called compositional NeRF for detailed shapes and textures, and it improves learning with pose-guided sampling.
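A compositional body NeRF of this kind typically splits the body into parts, each modeled by a small MLP in a canonical local frame, with posed-space samples mapped back by inverse bone transforms and drawn near the SMPL surface (the pose-guided sampling mentioned above). The sketch below illustrates that structure with placeholder names; it is not EVA3D's implementation.

```python
# Sketch of a part-based, compositional NeRF for a posed human body.
import torch
import torch.nn as nn

class PartNeRF(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 4))   # outputs (r, g, b, sigma)

    def forward(self, x_local):                          # (N, 3) canonical coords
        return self.mlp(x_local)

class CompositionalBodyNeRF(nn.Module):
    def __init__(self, num_parts=16):
        super().__init__()
        self.parts = nn.ModuleList(PartNeRF() for _ in range(num_parts))

    def forward(self, x_posed, bone_T_inv, part_w):
        """
        x_posed:    (N, 3) sample points in posed space (sampled near the body surface)
        bone_T_inv: (P, 4, 4) inverse bone transforms into each part's local frame
        part_w:     (N, P) soft assignment of samples to body parts
        """
        xh = torch.cat([x_posed, torch.ones_like(x_posed[:, :1])], dim=-1)  # (N, 4)
        out = 0.0
        for p, part in enumerate(self.parts):
            x_local = (bone_T_inv[p] @ xh.T).T[:, :3]     # map sample into part frame
            out = out + part_w[:, p:p+1] * part(x_local)  # blend per-part (rgb, sigma)
        return out
```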