3D AI Tools
Free 3D AI tools for creating, optimizing, and manipulating 3D assets for games, films, and design projects, boosting your creative process.
Director3D can generate real-world 3D scenes and adaptive camera trajectories from text prompts. The method generates pixel-aligned 3D Gaussians as an intermediate 3D scene representation, which keeps denoising consistent across views.
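To get a feel for what a "camera trajectory" means here, below is a generic orbit-trajectory builder using a standard look-at rotation. This is a minimal sketch of the concept only; the function names and parameters are made up for illustration and have nothing to do with Director3D's actual code or output format.

```python
import numpy as np

def look_at(eye, target=np.zeros(3), up=np.array([0.0, 1.0, 0.0])):
    """Build a world-to-camera rotation whose view axis points from `eye` to `target`."""
    fwd = target - eye
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, fwd)
    # Rows are the camera's right/up/backward axes expressed in world coordinates
    return np.stack([right, true_up, -fwd])

def orbit_trajectory(n=8, radius=3.0, height=1.0):
    """n camera poses (position, rotation) circling the scene origin."""
    poses = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n, endpoint=False):
        eye = np.array([radius * np.cos(theta), height, radius * np.sin(theta)])
        poses.append((eye, look_at(eye)))
    return poses

poses = orbit_trajectory()
```

A learned trajectory would adapt these poses to the scene content instead of following a fixed circle, but the pose representation (position plus rotation per step) is the same.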
Portrait3D can generate high-quality 3D heads with accurate geometry and texture from a single in-the-wild portrait image.
LiveScene can identify and control multiple objects in complex scenes. It is able to locate individual objects in different states and enables control of them using natural language.
MeshAnything can convert 3D assets in any 3D representation into meshes. This can be used to enhance various 3D asset production methods and significantly improve storage, rendering, and simulation efficiencies.
GradeADreamer is yet another text-to-3D method. This one is capable of producing high-quality assets with a total generation time of under 30 minutes using only a single RTX 3090 GPU.
MagicPose4D can generate 3D objects from text or images and transfer precise motions and trajectories from objects and characters in a video or mesh sequence.
RemoCap can reconstruct 3D human bodies from motion sequences. It’s able to capture occluded body parts with greater fidelity, resulting in less model penetration and less distorted motion.
NOVA-3D can generate 3D anime characters from non-overlapped front and back views.
CondMDI can generate precise and diverse motions that conform to flexible user-specified spatial constraints and text descriptions. This enables the creation of high-quality animations from text prompts alone, as well as inpainting between keyframes.
Toon3D can generate 3D scenes from two or more cartoon drawings. It’s far from perfect, but still pretty cool!
Dual3D is yet another text-to-3D method that can generate high-quality 3D assets from text prompts in only 1 minute.
StableMoFusion is a method for human motion generation that is able to eliminate foot-skating and create stable and efficient animations. The method is based on diffusion models and can be used for real-time scenarios such as virtual characters and humanoid robots.
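Foot-skating is typically quantified as horizontal foot movement during frames where the foot should be planted. Here is a minimal sketch of such a metric (my own illustration, not StableMoFusion's evaluation code):

```python
import numpy as np

def foot_skate(foot_pos, contact, fps=30):
    """Mean horizontal foot speed (units/s) over frames labeled as ground contact.

    foot_pos: (T, 3) foot joint positions; contact: (T,) bool, True when planted.
    """
    vel = np.diff(foot_pos[:, [0, 2]], axis=0) * fps  # horizontal velocity per frame
    planted = contact[1:] & contact[:-1]              # frame pairs fully in contact
    if not planted.any():
        return 0.0
    return float(np.linalg.norm(vel[planted], axis=1).mean())

# A foot that slides 0.1 units per frame while supposedly planted
pos = np.zeros((4, 3))
pos[:, 0] = np.arange(4) * 0.1
skate = foot_skate(pos, np.ones(4, dtype=bool))  # 3.0 units/s of skating
```

A perfectly planted foot scores zero; eliminating foot-skating means driving this number toward zero during contact phases.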
DreamScene4D can generate dynamic 4D scenes from single videos. It tracks object motion and handles complex movements, and its recovered 3D trajectories can be projected back to 2D for accurate point tracking.
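Projecting a 3D path back to 2D is just pinhole camera projection. As a rough illustration (a generic sketch with made-up camera parameters, not DreamScene4D's actual pipeline):

```python
import numpy as np

def project_points(points_3d, K, R=np.eye(3), t=np.zeros(3)):
    """Project (N, 3) world points to (N, 2) pixels with a pinhole camera."""
    cam = points_3d @ R.T + t          # world -> camera coordinates
    proj = cam @ K.T                   # apply intrinsics
    return proj[:, :2] / proj[:, 2:3]  # perspective divide

# A 3D track moving along x at constant depth z = 2
track_3d = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0], [1.0, 0.0, 2.0]])
K = np.array([[500.0, 0.0, 320.0],    # focal length 500 px, principal point (320, 240)
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
track_2d = project_points(track_3d, K)  # first point lands at (320, 240)
```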
X-Oscar can generate high-quality 3D avatars from text prompts. It uses a step-by-step process for geometry, texture, and animation, while addressing issues like low quality and oversaturation through advanced techniques.
Invisible Stitch can inpaint missing depth information in a 3D scene, resulting in improved geometric coherence and smoother transitions between frames.
DGE is a Gaussian Splatting method that can be used to edit 3D objects and scenes based on text prompts.
Make-it-Real can recognize and describe materials using GPT-4V, helping to build a detailed material library. It aligns materials with 3D object parts and creates SVBRDF materials from albedo maps, improving the realism of 3D assets.
And on the pose reconstruction front we have had TokenHMR, which can extract human poses and shapes from a single image.
PhysDreamer is a physics-based approach that lets you poke, push, pull, and throw objects in a virtual 3D environment, and they will react in a physically plausible manner.
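The simplest version of "react plausibly to a poke" is a damped spring pulling an object back to rest. The toy simulation below (my own sketch, far simpler than PhysDreamer's actual physics model) shows the basic pattern:

```python
def simulate_poke(x0, v0, k=40.0, c=4.0, m=1.0, dt=0.01, steps=1000):
    """Damped spring via semi-implicit Euler: returns (displacement, velocity)."""
    x, v = x0, v0
    for _ in range(steps):
        a = (-k * x - c * v) / m  # restoring force plus damping
        v += a * dt               # update velocity first (semi-implicit)
        x += v * dt
    return x, v

# "Poke" an object at rest with an instantaneous velocity kick;
# after 10 simulated seconds it has oscillated and settled back near rest.
x, v = simulate_poke(x0=0.0, v0=1.0)
```

Real systems replace the single spring with a full material model over the whole object, but the loop structure (forces, then velocity, then position, every timestep) is the same.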
DG-Mesh is able to reconstruct high-quality and time-consistent 3D meshes from a single video. The method is also able to track the mesh vertices over time, which enables texture editing on dynamic objects.