3D AI Tools
Free 3D AI tools for creating, optimizing, and manipulating 3D assets for games, films, and design projects, boosting your creative process.
DreamCraft3D can create high-quality 3D objects from a single prompt. It uses a 2D reference image to guide the geometry sculpting and then improves texture fidelity by running the result through a fine-tuned DreamBooth model.
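To make the optimization idea concrete: methods in this family fit a 3D representation by backpropagating a score-distillation gradient from a frozen 2D diffusion prior through a differentiable renderer. Below is a minimal, self-contained sketch of that gradient; the diffusion model and all values here are toy stand-ins, not DreamCraft3D's actual prior.

```python
import torch

def sds_grad(diffusion, rendered_rgb, text_emb, t, alphas_cumprod):
    """Score-distillation gradient (simplified).

    diffusion: any epsilon-prediction model; here a toy stand-in callable.
    rendered_rgb: differentiable render of the current 3D scene, (B, C, H, W).
    """
    noise = torch.randn_like(rendered_rgb)
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1)
    noisy = a_t.sqrt() * rendered_rgb + (1 - a_t).sqrt() * noise
    with torch.no_grad():
        eps_pred = diffusion(noisy, t, text_emb)   # frozen 2D prior
    w = 1 - a_t                                    # a common weighting choice
    return w * (eps_pred - noise)                  # pushed into the renderer

# Toy usage with a stand-in "diffusion model" that predicts zero noise.
diffusion = lambda x, t, emb: torch.zeros_like(x)
alphas_cumprod = torch.linspace(0.999, 0.01, 1000)
render = torch.rand(1, 3, 64, 64, requires_grad=True)
t = torch.randint(0, 1000, (1,))
grad = sds_grad(diffusion, render, None, t, alphas_cumprod)
render.backward(gradient=grad)   # chain rule through the renderer
print(render.grad.shape)
```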
Zero123++ can generate high-quality, 3D-consistent multi-view images from a single input image using an image-conditioned diffusion model. It fixes common problems like blurry textures and misaligned shapes, and includes a ControlNet for better control over the image creation process.
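Zero123++ ships as a diffusers custom pipeline. The snippet below reflects the usage published with the v1.1 release; model and pipeline IDs may have changed since, so treat it as a sketch and check the repo.

```python
import torch
from PIL import Image
from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler

# Model/pipeline IDs as published with the v1.1 release.
pipe = DiffusionPipeline.from_pretrained(
    "sudo-ai/zero123plus-v1.1",
    custom_pipeline="sudo-ai/zero123plus-pipeline",
    torch_dtype=torch.float16,
)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)
pipe.to("cuda")

cond = Image.open("input.png")             # the single input view
result = pipe(cond, num_inference_steps=75).images[0]
result.save("multiview_grid.png")          # grid of consistent novel views
```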
Wonder3D can convert a single image into a high-fidelity textured 3D mesh with full color. The entire process takes only 2 to 3 minutes.
Head360 can generate a parametric 3D full-head model you can view from any angle! It works from just one picture, letting you change expressions and hairstyles quickly.
Progressive3D can generate detailed 3D content from complex prompts by breaking generation into a series of local editing steps. It lets users constrain each edit to a specific region and improves results by focusing on the semantic differences between prompts.
DREAM can reconstruct images seen by a person from their brain activity using an fMRI-to-image method. It decodes key details like color and depth, and it outperforms comparable models at keeping appearance and structure consistent.
HumanNorm is a novel approach to high-quality, realistic 3D human generation that leverages normal maps to enhance the model's 2D perception of 3D geometry. The results are quite impressive, comparable to PS3-era game characters.
DreamGaussian can generate high-quality textured meshes from a single-view image in just 2 minutes. It uses a 3D Gaussian Splatting model for fast mesh extraction and texture refinement.
Generative Repainting can paint 3D assets using text prompts. It uses pretrained 2D diffusion models and 3D neural radiance fields to create high-quality textures for various 3D shapes.
TECA can generate realistic 3D avatars from text descriptions. It combines traditional 3D meshes for faces and bodies with neural radiance fields (NeRF) for hair and clothing, allowing for high-quality, editable avatars and easy feature transfer between them.
Semantics2Hands can retarget realistic hand motions between different avatars while preserving the fine details of the movement. It uses an anatomy-based semantic matrix and a semantics reconstruction network to achieve high-quality hand motion transfer.
PlankAssembly can turn 2D line drawings from three views into 3D CAD models. It effectively handles noisy or incomplete inputs and improves accuracy using shape programs.
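As a rough illustration of the shape-program idea: the model emits a sequence of parameterized plank operations that deterministically assemble into a 3D model, which makes noisy inputs easier to regularize. The mini-language below is hypothetical, not PlankAssembly's actual output format.

```python
from dataclasses import dataclass

# Hypothetical mini shape program: a sequence of plank (cuboid) ops that
# deterministically assembles a 3D model.
@dataclass
class Plank:
    origin: tuple      # (x, y, z) of the min corner, in meters
    size: tuple        # (width, depth, thickness)

def evaluate(program):
    """Turn the program into axis-aligned boxes (min corner, max corner)."""
    boxes = []
    for p in program:
        mn = p.origin
        mx = tuple(o + s for o, s in zip(p.origin, p.size))
        boxes.append((mn, mx))
    return boxes

# A trivial two-plank "table top + leg".
program = [
    Plank(origin=(0.0, 0.0, 0.7), size=(1.0, 0.6, 0.03)),       # top
    Plank(origin=(0.05, 0.05, 0.0), size=(0.05, 0.05, 0.7)),    # leg
]
for mn, mx in evaluate(program):
    print(mn, "->", mx)
```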
3D Gaussian Splatting can render high-quality 3D scenes in real time at 1080p and over 30 frames per second. It represents scenes with 3D Gaussians and uses a fast visibility-aware rasterizer, achieving competitive training times while maintaining great visual quality.
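For intuition, here is a deliberately slow CPU sketch of the splatting step: each 3D Gaussian is projected to a 2D Gaussian via the perspective Jacobian (EWA splatting) and alpha-composited front to back. The real renderer does this with tiled, depth-sorted GPU rasterization; all values below are toy data.

```python
import numpy as np

def splat_gaussians(means, covs, colors, opacities, K, img_hw):
    """Project 3D Gaussians to the image plane and composite front to back."""
    H, W = img_hw
    img = np.zeros((H, W, 3))
    T = np.ones((H, W))                      # remaining transmittance
    order = np.argsort(means[:, 2])          # nearest Gaussians first
    for i in order:
        x, y, z = means[i]
        if z <= 0:
            continue
        u = K[0, 0] * x / z + K[0, 2]        # pinhole projection of the mean
        v = K[1, 1] * y / z + K[1, 2]
        # Perspective Jacobian maps the 3D covariance to 2D (EWA splatting).
        J = np.array([[K[0, 0] / z, 0, -K[0, 0] * x / z**2],
                      [0, K[1, 1] / z, -K[1, 1] * y / z**2]])
        cov2d = J @ covs[i] @ J.T + 0.3 * np.eye(2)   # small low-pass term
        inv = np.linalg.inv(cov2d)
        ys, xs = np.mgrid[0:H, 0:W]
        d = np.stack([xs - u, ys - v], axis=-1)
        power = -0.5 * np.einsum('hwi,ij,hwj->hw', d, inv, d)
        alpha = np.clip(opacities[i] * np.exp(power), 0, 0.99)
        img += (T * alpha)[..., None] * colors[i]
        T *= 1 - alpha
    return img

# Toy scene: two Gaussians in front of a pinhole camera.
K = np.array([[100.0, 0, 32], [0, 100.0, 32], [0, 0, 1]])
means = np.array([[0.0, 0.0, 2.0], [0.3, 0.1, 3.0]])
covs = np.stack([np.eye(3) * 0.01] * 2)
colors = np.array([[1.0, 0.2, 0.2], [0.2, 0.4, 1.0]])
opacities = np.array([0.8, 0.9])
print(splat_gaussians(means, covs, colors, opacities, K, (64, 64)).shape)
```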
Similar to ControlNet scribble for images, SketchMetaFace brings sketch guidance to the 3D realm, making it possible to turn a sketch into a 3D face model. Pretty excited about progress like this, as it will bring controllability to 3D generation and make creating 3D content way more accessible.
NIS-SLAM can reconstruct high-fidelity surfaces and geometry from RGB-D frames. It also learns 3D consistent semantic representations during this process.
MotionGPT can generate, caption, and predict human motion by treating it like a language. It achieves top performance in these tasks, making it useful for various motion-related applications.
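The "motion as language" trick boils down to quantizing continuous pose features into discrete tokens via a learned codebook (a VQ-VAE), so a language model can consume and emit them like words. Here is a toy sketch of that quantization step, with random stand-in weights rather than MotionGPT's:

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))    # 512 motion "words", 64-dim each
poses = rng.normal(size=(30, 64))        # 30 frames of pose features

def tokenize(frames, codebook):
    """Nearest codebook entry per frame (the VQ quantization step)."""
    d = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

tokens = tokenize(poses, codebook)
print(tokens[:10])   # integer ids, fed to the language model like word ids
```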
It’s said that our eyes hold the universe. According to the paper Seeing the World through Your Eyes, they at least hold a 3D scene: the proposed method reconstructs 3D scenes beyond the camera’s line of sight from portrait images containing eye reflections.
Neuralangelo can reconstruct detailed 3D surfaces from RGB video captures. It uses multi-resolution 3D hash grids and neural surface rendering, achieving high fidelity without needing extra depth inputs.
Now motion capture is cool. But what if you want your 3D characters to move in new and unique ways? GenMM can generate a variety of movements from just one or a few example sequences. Unlike other methods, it doesn’t need exhaustive training and can create new motions with complex skeletons in fractions of a second. It’s also a whiz at jobs motion matching alone can’t handle, like motion completion, guided generation from keyframes, infinite looping, and motion reassembly.
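The underlying idea is example-based synthesis via patch nearest neighbors: start from noise and repeatedly replace temporal patches with their closest matches from the example motion, blending the overlaps. The sketch below uses random stand-in features rather than real pose data.

```python
import numpy as np

rng = np.random.default_rng(1)
example = rng.normal(size=(120, 32))   # 120 frames, 32-dim pose features
P = 8                                  # temporal patch length

def nearest_patch(query, example, P):
    """Return the example patch closest to `query` (L2 over P frames)."""
    best, best_d = None, np.inf
    for s in range(len(example) - P + 1):
        d = np.sum((example[s:s + P] - query) ** 2)
        if d < best_d:
            best, best_d = example[s:s + P], d
    return best

# A few matching passes pull the output onto the example's "patch
# manifold" while still allowing novel recombinations of its pieces.
out = rng.normal(size=(60, 32))
for _ in range(3):
    new = np.zeros_like(out)
    weight = np.zeros((len(out), 1))
    for s in range(0, len(out) - P + 1, P // 2):  # overlapping patches
        new[s:s + P] += nearest_patch(out[s:s + P], example, P)
        weight[s:s + P] += 1
    out = new / np.maximum(weight, 1)             # blend the overlaps
print(out.shape)
```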
Humans in 4D can track and reconstruct humans in 3D from a single video. It handles unusual poses and poor visibility well, using a transformer-based network called HMR 2.0 to improve action recognition.