3D AI Tools
Free 3D AI tools for creating, optimizing, and manipulating 3D assets for games, films, and design projects, boosting your creative process.
MagicArticulate can rig static 3D models and make them ready for animation. It works on both humanoid and non-humanoid objects.
MEGASAM can estimate camera parameters and depth maps from casual monocular videos.
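Outputs like MegaSAM's (per-frame camera intrinsics plus depth maps) are typically consumed by back-projecting pixels into 3D. The minimal numpy sketch below shows that standard pinhole back-projection step; the intrinsic values and depth map are hypothetical placeholders, and this is an illustration of the concept rather than MegaSAM's own code.

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Lift a depth map (H, W) into a 3D point cloud (H*W, 3) using
    pinhole intrinsics. Illustrative only, not MegaSAM's pipeline."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Hypothetical values standing in for estimated intrinsics and depth.
depth = np.random.uniform(1.0, 5.0, size=(480, 640)).astype(np.float32)
points = backproject_depth(depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(points.shape)  # (307200, 3)
```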
Cycle3D can generate high-quality and consistent 3D content from a single unposed image. This approach enhances texture consistency and multi-view coherence, significantly improving the quality of the final 3D reconstruction.
LIFe-GoM can create animatable 3D human avatars from sparse multi-view images in under 1 second. It renders high-quality images at 95.1 frames per second.
DressRecon can create 3D human body models from single videos. It handles loose clothing and objects well, achieving high-quality results by combining general human shapes with specific video movements.
Dora can generate 3D assets from images that are ready for diffusion-based character control in modern 3D engines, such as Unity 3D, in real time.
LayerPano3D can generate immersive 3D scenes from a single text prompt by breaking a 2D panorama into depth layers.
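To make the layering idea concrete, here is a toy numpy sketch that partitions a panorama into depth bands using quantile thresholds. It is only an illustration of the general concept with made-up inputs, not LayerPano3D's actual decomposition.

```python
import numpy as np

def split_into_depth_layers(rgb, depth, num_layers=4):
    """Partition a panorama into depth layers using quantile thresholds.
    A toy illustration of the layering idea, not LayerPano3D's method."""
    edges = np.quantile(depth, np.linspace(0.0, 1.0, num_layers + 1))
    layers = []
    for near, far in zip(edges[:-1], edges[1:]):
        mask = (depth >= near) & (depth <= far)
        layer = np.zeros_like(rgb)
        layer[mask] = rgb[mask]  # keep only pixels in this depth band
        layers.append((layer, mask))
    return layers

# Hypothetical equirectangular panorama (2:1 aspect ratio) and depth map.
rgb = np.random.randint(0, 255, size=(512, 1024, 3), dtype=np.uint8)
depth = np.random.uniform(0.5, 10.0, size=(512, 1024)).astype(np.float32)
layers = split_into_depth_layers(rgb, depth, num_layers=4)
print(len(layers))  # 4
```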
TeSMo is a method for text-controlled, scene-aware motion generation. It can produce realistic and diverse human-object interactions, such as navigation and sitting, across different scenes with varied object shapes, orientations, initial body positions, and poses.
MotionLab can generate and edit human motion and supports text-based and trajectory-based motion creation.
OmniPhysGS can generate realistic 3D dynamic scenes by modeling objects with Constitutive 3D Gaussians.
GestureLSM can generate real-time co-speech gestures by modeling how different body parts interact.
Wonderland can generate high-quality 3D scenes from a single image using a camera-guided video diffusion model. It allows for easy navigation and exploration of 3D spaces, performing better than other methods, especially with images it hasn’t seen before.
DiffSplat can generate 3D Gaussian splats from text prompts and single-view images in 1-2 seconds.
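For readers new to the representation, a 3D Gaussian splat scene is essentially a set of per-splat parameters: center, scale, orientation, opacity, and color. The dataclass below sketches that generic layout with random values; it is an assumption for illustration, not DiffSplat's internal format.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class GaussianSplats:
    """Per-splat parameters of a 3D Gaussian splat scene
    (illustrative layout, not DiffSplat's internal format)."""
    means: np.ndarray      # (N, 3) centers in world space
    scales: np.ndarray     # (N, 3) per-axis standard deviations
    rotations: np.ndarray  # (N, 4) unit quaternions orienting each Gaussian
    opacities: np.ndarray  # (N,)  alpha values in [0, 1]
    colors: np.ndarray     # (N, 3) RGB (full pipelines often use SH coefficients)

def random_splats(n=1000):
    q = np.random.randn(n, 4)
    q /= np.linalg.norm(q, axis=1, keepdims=True)  # normalize quaternions
    return GaussianSplats(
        means=np.random.uniform(-1, 1, (n, 3)),
        scales=np.full((n, 3), 0.01),
        rotations=q,
        opacities=np.random.uniform(0.5, 1.0, n),
        colors=np.random.uniform(0, 1, (n, 3)),
    )

splats = random_splats()
print(splats.means.shape, splats.rotations.shape)  # (1000, 3) (1000, 4)
```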
DELTA can track dense 3D motion from single-camera videos with high accuracy, running over 8 times faster than earlier methods while maintaining pixel-level precision.
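As a classical point of reference for dense motion estimation, the OpenCV snippet below computes Farneback optical flow between consecutive frames. Unlike DELTA, this only gives 2D pixel motion between neighboring frames, and the video path is a placeholder.

```python
import cv2
import numpy as np

# Classical dense 2D optical flow (Farneback) as a baseline for comparison;
# DELTA itself predicts dense 3D trajectories with a learned model.
cap = cv2.VideoCapture("video.mp4")  # hypothetical input path
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # flow[y, x] = (dx, dy) motion of each pixel between consecutive frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    print("mean pixel motion:", magnitude.mean())
    prev_gray = gray

cap.release()
```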
MoRAG can generate and retrieve human motion from text, using retrieval augmentation to improve motion diffusion models.
Hunyuan3D 2.0 can generate high-resolution textured 3D assets. It allows users to create and animate detailed 3D models efficiently, with improved geometry detail and texture quality compared to previous models.
GaussianDreamerPro can generate 3D Gaussian assets from text that can be seamlessly integrated into downstream manipulation pipelines, such as animation, composition, and simulation.
Coin3D can generate and edit 3D assets from a basic input shape. Similar to ControlNet, it enables precise part editing and responsive 3D object previews within a few seconds.
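For comparison, ControlNet-style conditioning in 2D looks like the diffusers sketch below, where an edge map extracted from a guide image steers generation, much as Coin3D's coarse shape proxy steers its 3D generator. The checkpoint IDs are the commonly used public ones and the file paths are placeholders; this is an analogy, not Coin3D's own pipeline.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

# Standard 2D ControlNet conditioning: an edge map guides the diffusion model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16
).to("cuda")  # requires a GPU

guide = cv2.imread("rough_shape.png")   # hypothetical guide image
edges = cv2.Canny(guide, 100, 200)      # edge map used as the control signal
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

result = pipe("a wooden armchair, studio lighting",
              image=control, num_inference_steps=30).images[0]
result.save("controlled_output.png")
```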
FabricDiffusion can transfer high-quality fabric textures from a 2D clothing image to 3D garments of any shape.
DAS3R can decompose scenes and rebuild static backgrounds from videos.
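A classical way to recover a static background from video is the per-pixel temporal median over many frames. The OpenCV sketch below shows that simple baseline with a placeholder video path; DAS3R instead learns a dynamics-aware decomposition, but the goal of producing a clean static plate is the same.

```python
import cv2
import numpy as np

# Per-pixel temporal median over sampled frames as a static-background baseline.
cap = cv2.VideoCapture("video.mp4")  # hypothetical input path
frames = []
while len(frames) < 100:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

# Moving objects rarely occupy the same pixel in most frames, so the median
# suppresses them and keeps the static scene.
background = np.median(np.stack(frames), axis=0).astype(np.uint8)
cv2.imwrite("static_background.png", background)
```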