3D AI Tools
Free 3D AI tools for creating, optimizing, and manipulating 3D assets for games, films, and design projects, boosting your creative process.
RemoCap can reconstruct 3D human bodies from motion sequences. It captures occluded body parts with greater fidelity, reducing mesh penetration and motion distortion.
NOVA-3D can generate 3D anime characters from non-overlapped front and back views.
CondMDI can generate precise and diverse motions that conform to flexible user-specified spatial constraints and text descriptions. This enables the creation of high-quality animations from just text prompts and inpainting between keyframes.
Toon3D can generate 3D scenes from two or more cartoon drawings. It’s far from perfect, but still pretty cool!
Dual3D is yet another text-to-3D method that can generate high-quality 3D assets from text prompts in only 1 minute.
StableMoFusion is a human motion generation method that eliminates foot skating and produces stable, efficient animations. It is based on diffusion models and can be used in real-time scenarios such as virtual characters and humanoid robots.
DreamScene4D can generate dynamic 4D scenes from single videos. It tracks object motion through complex movements and enables accurate 2D point tracking by projecting the recovered 3D trajectories back to 2D.
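That last bit, turning a 3D trajectory into a 2D track, is just standard camera projection. Here's a minimal sketch using a pinhole camera model; the function name and intrinsics (`fx`, `fy`, `cx`, `cy`) are illustrative assumptions, not DreamScene4D's actual API:

```python
import numpy as np

def project_to_2d(points_3d, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Project Nx3 camera-space points to Nx2 pixel coordinates
    with a simple pinhole camera model (hypothetical intrinsics)."""
    points_3d = np.asarray(points_3d, dtype=float)
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = fx * x / z + cx  # horizontal pixel coordinate
    v = fy * y / z + cy  # vertical pixel coordinate
    return np.stack([u, v], axis=1)

# A short 3D trajectory moving right and away from the camera:
traj_3d = np.array([[0.0, 0.0, 2.0],
                    [0.1, 0.0, 2.5],
                    [0.2, 0.0, 3.0]])
traj_2d = project_to_2d(traj_3d)  # 2D track across the image plane
```

Note how the point's 2D displacement shrinks as it recedes (larger `z`), which is exactly why tracking in 3D first and projecting afterward is more robust than tracking in 2D directly.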
X-Oscar can generate high-quality 3D avatars from text prompts. It uses a step-by-step process for geometry, texture, and animation, while addressing issues like low quality and oversaturation through advanced techniques.
Invisible Stitch can inpaint missing depth information in a 3D scene, resulting in improved geometric coherence and smoother transitions between frames.
DGE is a Gaussian Splatting method that can be used to edit 3D objects and scenes based on text prompts.
Make-it-Real can recognize and describe materials using GPT-4V, helping to build a detailed material library. It aligns materials with 3D object parts and creates SVBRDF materials from albedo maps, improving the realism of 3D assets.
And on the pose reconstruction front we have TokenHMR, which can extract human poses and shapes from a single image.
PhysDreamer is a physics-based approach that lets you poke, push, pull, and throw objects in a virtual 3D environment, and they will react in a physically plausible manner.
DG-Mesh is able to reconstruct high-quality and time-consistent 3D meshes from a single video. The method is also able to track the mesh vertices over time, which enables texture editing on dynamic objects.
InFusion can inpaint 3D Gaussian point clouds to restore missing 3D points for better visuals. It lets users change textures and add new objects, achieving high quality and efficiency.
in2IN is a motion generation model that factors in both the overall interaction’s textual description and individual action descriptions of each person involved. This enhances motion diversity and enables better control over each person’s actions while preserving interaction coherence.
Video2Game can turn real-world videos into interactive game environments. It uses a neural radiance field (NeRF) module for capturing scenes, a mesh module for faster rendering, and a physics module for realistic object interactions.
LoopGaussian can convert multi-view images of a stationary scene into authentic 3D cinemagraphs. The 3D cinemagraphs can be rendered from a novel viewpoint to obtain a natural seamless loopable video.
InstantMesh can generate high-quality 3D meshes from a single image in under 10 seconds. It uses advanced methods like multiview diffusion and sparse-view reconstruction, and it significantly outperforms other tools in both quality and speed.
Speaking of reconstruction, Key2Mesh is yet another model that takes on 3D human mesh reconstruction, this time using 2D human pose keypoints as input instead of visual data, sidestepping the scarcity of image datasets with 3D labels.