3D AI Tools
Free 3D AI tools for creating, optimizing, and manipulating 3D assets for games, films, and design projects, boosting your creative process.
GaussianDreamerPro can generate 3D Gaussian assets from text prompts; the resulting assets can be integrated into downstream manipulation pipelines such as animation, composition, and simulation.
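If you want to inspect or post-process a generated splat outside the original pipeline, the Gaussian parameters can be read straight from the exported file. Below is a minimal Python sketch, assuming the asset is saved in the widely used 3D Gaussian splatting PLY layout; the filename and the activation conventions are assumptions, not GaussianDreamerPro's documented API.

```python
import numpy as np
from plyfile import PlyData

# Placeholder filename; assumes the common 3D Gaussian splatting PLY layout.
ply = PlyData.read("gaussian_asset.ply")
v = ply["vertex"]

positions = np.stack([v["x"], v["y"], v["z"]], axis=-1)
# Opacity is typically stored as a logit and scales as logs; apply the usual activations.
opacities = 1.0 / (1.0 + np.exp(-np.asarray(v["opacity"])))
scales = np.exp(np.stack([v["scale_0"], v["scale_1"], v["scale_2"]], axis=-1))
rotations = np.stack([v["rot_0"], v["rot_1"], v["rot_2"], v["rot_3"]], axis=-1)  # quaternions

print(f"{positions.shape[0]} Gaussians loaded")
```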
Coin3D can generate and edit 3D assets guided by a basic input shape, much as ControlNet conditions 2D image generation. This enables precise part editing and responsive 3D object previews within a few seconds.
FabricDiffusion can transfer high-quality fabric textures from a 2D clothing image to 3D garments of any shape.
DAS3R can decompose dynamic scenes from videos and reconstruct their static backgrounds.
REACTO can reconstruct articulated 3D objects from a single video, capturing the motion and shape of objects with flexible deformation.
TEXGen can generate high-resolution UV texture maps in texture space using a 700 million parameter diffusion model. It supports text-guided texture inpainting and sparse-view texture completion, making it versatile for creating textures for 3D assets.
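Once a UV texture map has been generated, applying it to the mesh is a standard step. Here is a small sketch using trimesh, assuming the mesh already carries UV coordinates and loads as a single mesh; asset.obj and texture.png are placeholder filenames, and this is generic glTF export rather than TEXGen's own tooling.

```python
import trimesh
from PIL import Image

# Placeholder filenames; the mesh is assumed to already have UV coordinates.
mesh = trimesh.load("asset.obj", process=False)
texture = Image.open("texture.png")

# Wrap the generated UV map as the base-color texture of a PBR material.
material = trimesh.visual.material.PBRMaterial(baseColorTexture=texture)
mesh.visual = trimesh.visual.TextureVisuals(uv=mesh.visual.uv, material=material)

mesh.export("textured_asset.glb")
```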
YouDream can generate high-quality 3D animals from a single image and a text prompt. It preserves anatomical consistency and can generate new animals by combining traits of commonly found ones.
PRM can create high-quality 3D meshes from a single image using photometric stereo techniques. It improves detail and handles changes in lighting and materials, allowing for features like relighting and material editing.
Tactile DreamFusion can improve 3D asset generation by combining high-resolution tactile sensing with diffusion-based image priors. It supports both text-to-3D and image-to-3D generation.
Trellis 3D generates high-quality 3D assets in formats like Radiance Fields, 3D Gaussians, and meshes. It supports text and image conditioning, offering flexible output format selection and local 3D editing capabilities.
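The project ships a Python pipeline for image-conditioned generation; the sketch below paraphrases its published example, but the module path, checkpoint name, and output keys are assumptions and may differ from the current release.

```python
from PIL import Image
from trellis.pipelines import TrellisImageTo3DPipeline  # assumed module path

# Assumed checkpoint id; running the pipeline requires a CUDA-capable GPU.
pipeline = TrellisImageTo3DPipeline.from_pretrained("JeffreyXiang/TRELLIS-image-large")
pipeline.cuda()

image = Image.open("input.png")  # placeholder filename
outputs = pipeline.run(image, seed=1)

# One entry per output representation: 3D Gaussians, a radiance field, and a mesh.
gaussians = outputs["gaussian"][0]
radiance_field = outputs["radiance_field"][0]
mesh = outputs["mesh"][0]
```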
Dessie can estimate the 3D shape and pose of horses from single images. It also works with other large animals like zebras and cows.
L4GM is a 4D Large Reconstruction Model that can turn a single-view video into an animated 3D object.
D3GA is the first 3D controllable model for human bodies rendered with Gaussian splats in real time. Using a multi-camera setup, it can turn a person into an animatable Gaussian splat avatar and even decompose that avatar into its separate clothing layers.
Material Anything can generate realistic materials for 3D objects, including those without textures. It adapts to different lighting and uses confidence masks to improve material quality, ensuring outputs are ready for UV mapping.
CAT4D can create dynamic 4D scenes from single videos. It uses a multi-view video diffusion model to generate videos from different viewpoints, enabling robust 4D reconstruction and high-quality renderings.
SuperMat can quickly decompose images of materials into three key PBR maps: albedo, metallic, and roughness. It does this in about 3 seconds while maintaining high quality, making it efficient for 3D object material estimation.
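To use the decomposed maps in a standard glTF/PBR workflow, the metallic and roughness channels are usually packed into a single texture (roughness in green, metallic in blue). A minimal sketch with NumPy and Pillow; the filenames are placeholders, not SuperMat's actual output names.

```python
import numpy as np
from PIL import Image

# Placeholder filenames for the three decomposed maps.
albedo = Image.open("albedo.png").convert("RGB")
metallic = np.array(Image.open("metallic.png").convert("L"))
roughness = np.array(Image.open("roughness.png").convert("L"))

# glTF metallic-roughness convention: roughness in the green channel,
# metallic in the blue channel (red is reserved for occlusion).
mr = np.zeros((*roughness.shape, 3), dtype=np.uint8)
mr[..., 1] = roughness
mr[..., 2] = metallic
Image.fromarray(mr).save("metallic_roughness.png")

# albedo.png can then be used directly as the baseColor texture.
```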
SelfSplat can create 3D models from multiple images without requiring known camera poses. It uses self-supervised depth and pose estimation, producing high-quality appearance and geometry from real-world data.
GarVerseLOD can generate high-quality 3D garment meshes from a single image. It handles complex cloth movements and poses well, using a large dataset of 6,000 garment models to improve accuracy.
UniHair can create 3D hair models from single-view portraits, handling both braided and un-braided styles. It uses a large dataset and advanced techniques to accurately capture complex hairstyles and generalize well to real images.
GaussianAnything can generate high-quality 3D objects from single images or text prompts. It uses a Variational Autoencoder and a cascaded latent diffusion model for effective 3D editing.