3D Avatar Generation
Free AI tools for 3D avatar generation: create personalized characters for games, animation, and virtual experiences with minimal effort.
D3-Human can reconstruct detailed 3D human figures from single videos. It separates clothing and body shapes, handles occlusions well, and is useful for clothing transfer and animation.
Disco4D can generate and animate 4D human models from a single image by separating clothing from the body. It uses diffusion models for detailed 3D representations and can model parts that are not visible in the input image.
SOAP can generate rigged 3D avatars from a single portrait image.
Textoon can generate diverse 2D cartoon characters in the Live2D format from text descriptions. It allows for real-time editing and controllable appearance generation, making it easy for users to create interactive characters.
StdGEN can generate high-quality 3D characters from a single image in just three minutes. It decomposes characters into semantic parts such as body, clothes, and hair, and uses a transformer-based model that performs particularly well on 3D anime character generation.
DressRecon can create 3D human body models from single videos. It handles loose clothing and held objects well, achieving high-quality results by combining general human body priors with video-specific deformations.
LHM can generate high-quality, animatable 3D human avatars from a single image in seconds. It preserves details like clothing geometry and texture without needing extra processing for faces and hands.
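Animatable avatars like these are usually driven with linear blend skinning (LBS) over a SMPL-style skeleton. The sketch below shows plain LBS in NumPy as an illustration of that driving step; all names and shapes are made up for the example and are not taken from LHM.

```python
import numpy as np

def linear_blend_skinning(verts, weights, joint_transforms):
    """Pose rest-pose vertices with linear blend skinning (LBS).

    verts:            (V, 3) rest-pose vertex positions
    weights:          (V, J) skinning weights, rows sum to 1
    joint_transforms: (J, 4, 4) transforms mapping each joint's rest pose
                      to the target pose
    returns:          (V, 3) posed vertex positions
    """
    V = verts.shape[0]
    verts_h = np.concatenate([verts, np.ones((V, 1))], axis=1)   # (V, 4)
    # Blend the per-joint transforms for every vertex: (V, 4, 4)
    blended = np.einsum("vj,jab->vab", weights, joint_transforms)
    posed_h = np.einsum("vab,vb->va", blended, verts_h)          # (V, 4)
    return posed_h[:, :3]

# Toy example: three vertices skinned to two joints
verts = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 2.0, 0.0]])
weights = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
T = np.stack([np.eye(4), np.eye(4)])
T[1, :3, 3] = [0.1, 0.0, 0.0]   # translate the second joint slightly
print(linear_blend_skinning(verts, weights, T))
```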
DreamWaltz-G can generate high-quality 3D avatars from text and animate them using SMPL-X motion sequences. It improves avatar consistency with Skeleton-guided Score Distillation and is useful for human video reenactment and creating scenes with multiple subjects.
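Score Distillation Sampling (SDS) is the optimization trick behind most text-to-3D avatar methods in this family: a frozen 2D diffusion model scores noisy renders of the avatar, and its predicted noise is turned into a gradient on the 3D parameters. Below is a hedged PyTorch sketch of vanilla SDS; the `denoiser` and `cond` arguments (which would carry the text and skeleton conditioning) are placeholders for illustration, not DreamWaltz-G's actual interface.

```python
import torch

def sds_loss(render, denoiser, cond, alphas_cumprod, t):
    """One step of Score Distillation Sampling on a rendered image.

    render:         (B, C, H, W) differentiable render of the 3D avatar
    denoiser:       frozen diffusion model eps(x_t, t, cond) -> noise estimate
    cond:           conditioning (e.g. text embedding, skeleton/pose map)
    alphas_cumprod: (T,) cumulative noise schedule of the diffusion model
    t:              (B,) sampled timesteps
    """
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(render)
    x_t = a.sqrt() * render + (1 - a).sqrt() * noise       # forward diffusion
    with torch.no_grad():
        eps_pred = denoiser(x_t, t, cond)                  # frozen score network
    w = 1 - a                                              # common SDS weighting
    grad = w * (eps_pred - noise)
    # Surrogate loss whose gradient w.r.t. the render equals `grad`
    return (grad.detach() * render).sum()
```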
RodinHD can generate high-fidelity 3D avatars from a portrait image. The method is able to capture intricate details such as hairstyles and can generalize to in-the-wild portrait input.
MeshAvatar can generate high-quality triangular human avatars from multi-view videos. The avatars can be edited, manipulated, and relit.
InstructHumans can edit existing 3D human textures using text prompts. It maintains avatar consistency during editing and keeps the edited avatars easy to animate.
AiOS can estimate human poses and shapes in one step, combining body, hand, and facial expression recovery.
SplattingAvatar can generate photorealistic real-time human avatars using a mix of Gaussian Splatting and triangle mesh geometry. It achieves over 300 FPS on modern GPUs and 30 FPS on mobile devices, allowing for detailed appearance modeling and various animation techniques.
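The mesh-embedded splatting idea can be pictured as anchoring each Gaussian to a triangle by barycentric coordinates plus a normal offset, so the splats follow the mesh as it animates. A rough NumPy sketch of that anchoring, assuming this simple parameterization rather than the authors' exact one:

```python
import numpy as np

def splat_positions(verts, faces, face_ids, barycentric, offsets):
    """Place Gaussian splat centers on a (possibly animated) triangle mesh.

    verts:       (V, 3) current vertex positions
    faces:       (F, 3) vertex indices per triangle
    face_ids:    (N,)   triangle each splat is attached to
    barycentric: (N, 3) barycentric coordinates of each splat in its triangle
    offsets:     (N,)   signed displacement along the triangle normal
    returns:     (N, 3) splat centers in world space
    """
    tris = verts[faces[face_ids]]                         # (N, 3, 3)
    base = np.einsum("nk,nkd->nd", barycentric, tris)     # point inside triangle
    normals = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-8
    return base + offsets[:, None] * normals
```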
GALA can take a single-layer clothed 3D human mesh and decompose it into complete multi-layered 3D assets. The outputs can then be combined with other assets to create new clothed human avatars in any pose.
En3D can generate high-quality 3D human avatars from 2D images without needing existing assets.
RelightableAvatar can create relightable and animatable neural avatars from monocular video.
ASH can render photorealistic and animatable 3D human avatars in real time.
Relightable Gaussian Codec Avatars can generate high-quality, relightable 3D head avatars that show fine details like hair strands and pores. They work well in real-time under different lighting conditions and are optimized for consumer VR headsets.
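One common way such avatars are made relightable is learned radiance transfer in a spherical-harmonics (SH) basis, where diffuse shading reduces to a dot product between per-primitive transfer coefficients and the environment's SH lighting coefficients. A minimal NumPy sketch of that shading step, as a generic illustration rather than the paper's actual appearance model:

```python
import numpy as np

def sh_basis_l1(dirs):
    """First-order real SH basis (K=4) at unit directions dirs: (M, 3) -> (M, 4)."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    return np.stack([np.full_like(x, 0.282095),
                     0.488603 * y, 0.488603 * z, 0.488603 * x], axis=1)

def sh_relight(transfer, env_sh):
    """Diffuse relighting via radiance transfer.

    transfer: (N, K) learned per-Gaussian SH transfer coefficients
    env_sh:   (K, 3) RGB SH coefficients of the environment lighting
    returns:  (N, 3) outgoing radiance per Gaussian
    """
    return np.clip(transfer @ env_sh, 0.0, None)
```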
Head360 can generate a parametric 3D full-head model viewable from any angle. It works from a single portrait image and allows quick changes to expressions and hairstyles.
TECA can generate realistic 3D avatars from text descriptions. It combines traditional 3D meshes for faces and bodies with neural radiance fields (NeRF) for hair and clothing, allowing for high-quality, editable avatars and easy feature transfer between them.
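Rendering such a hybrid amounts to volume-rendering the NeRF samples (hair, clothing) along each ray and compositing any remaining transmittance onto the textured mesh surface (face, body) behind them. A hedged NumPy sketch of that per-ray compositing, with all argument names invented for the example:

```python
import numpy as np

def composite_nerf_over_mesh(sigmas, colors, deltas, sample_depths,
                             mesh_depth, mesh_color):
    """Composite NeRF samples in front of an opaque mesh surface for one ray.

    sigmas:        (S,)   volume densities along the ray
    colors:        (S, 3) emitted colors of the samples
    deltas:        (S,)   distances between consecutive samples
    sample_depths: (S,)   depth of each sample along the ray
    mesh_depth:    scalar depth of the mesh hit (np.inf if the ray misses)
    mesh_color:    (3,)   mesh (or background) color behind the volume
    """
    # Ignore volume samples behind the opaque mesh surface
    in_front = sample_depths < mesh_depth
    alpha = 1.0 - np.exp(-sigmas * deltas)
    alpha = np.where(in_front, alpha, 0.0)
    # Transmittance before each sample (standard volume rendering)
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    weights = trans * alpha
    volume_rgb = (weights[:, None] * colors).sum(axis=0)
    # Remaining transmittance lands on the mesh (or background)
    residual = 1.0 - weights.sum()
    return volume_rgb + residual * np.asarray(mesh_color)
```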