3D AI Tools
Free 3D AI tools for creating, optimizing, and manipulating 3D assets for games, films, and design projects, boosting your creative process.
Doodle Your 3D can turn abstract sketches into precise 3D shapes. The method can even edit shapes by simply editing the sketch. Super cool. Sketch-to-3D-print isn’t that far away now.
WonderJourney lets you wander through your favourite paintings, poems and haikus. The method can generate a sequence of diverse yet coherently connected 3D scenes from a single image or text prompt.
Relightable Gaussian Codec Avatars can generate high-quality, relightable 3D head avatars that show fine details like hair strands and pores. They work well in real-time under different lighting conditions and are optimized for consumer VR headsets.
4D-fy can generate high-quality 4D scenes from text prompts. It combines the strengths of text-to-image and text-to-video models to create dynamic scenes with great visual quality and realistic motion.
LucidDreamer can generate navigable 3D Gaussian Splat scenes from a single text prompt or a single image. Text prompts can also be chained for more control over the output. Can’t wait until the scenes can also be animated.
PhysGaussian is a simulation-rendering pipeline that can simulate the physics of 3D Gaussian Splats while simultaneously rendering photorealistic results. The method supports flexible dynamics, a diverse range of materials, and collisions.
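The core idea of driving splat positions with simulated dynamics can be illustrated with a toy point-mass integrator. To be clear, this is a hand-rolled conceptual sketch, not PhysGaussian's actual MPM-based solver or API:

```python
# Conceptual sketch: advancing 3D Gaussian centers with a toy
# point-mass integrator (gravity plus a ground-plane collision).
# PhysGaussian itself uses a Material Point Method solver; this
# just shows splat positions being updated by simulated physics.

def step(positions, velocities, dt=0.01, gravity=-9.81, ground=0.0, restitution=0.5):
    """One symplectic-Euler step for a list of (x, y, z) Gaussian centers."""
    new_pos, new_vel = [], []
    for (x, y, z), (vx, vy, vz) in zip(positions, velocities):
        vz += gravity * dt                    # gravity acts along z
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
        if z < ground:                        # damped bounce at the ground plane
            z = ground
            vz = -vz * restitution
        new_pos.append((x, y, z))
        new_vel.append((vx, vy, vz))
    return new_pos, new_vel

# Drop a single Gaussian center from z = 1.0 and simulate 2 seconds.
positions = [(0.0, 0.0, 1.0)]
velocities = [(0.0, 0.0, 0.0)]
for _ in range(200):
    positions, velocities = step(positions, velocities)
```

After each step, the updated centers would be handed to the splat renderer, which is what lets the pipeline simulate and render in the same representation.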
LucidDreamer is a text-to-3D generation framework that is able to generate 3D models with high-quality textures and shapes. Higher quality means longer inference. This one takes 35 minutes on an A100 GPU.
3D Paintbrush can automatically add textures to specific areas on 3D models using text descriptions. It produces detailed localization and texture maps, enhancing the quality of graphics in various projects.
Consistent4D is an approach for generating 4D dynamic objects from uncalibrated monocular videos. At the speed we’re progressing, it looks like dynamic 3D scenes from single-cam videos will be here sooner than I expected just a few weeks ago.
Mesh Neural Cellular Automata (MeshNCA) is a method for directly synthesizing dynamic textures on 3D meshes without requiring any UV maps. The model can be trained using different targets such as images, text prompts, and motion vector fields. Additionally, MeshNCA allows several user interactions including texture density/orientation control, a grafting brush, and motion speed/direction control.
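The cellular-automaton idea behind MeshNCA is that each mesh vertex carries a state that is repeatedly updated from its neighbours, with no UV map involved. The real method learns the update rule with a neural network; the toy sketch below hand-codes a simple smoothing rule purely to show the local, graph-based structure:

```python
# Toy illustration of a cellular-automaton step on a mesh graph:
# each vertex state moves toward the mean of its neighbours' states.
# MeshNCA learns this update rule instead of hard-coding it.

def nca_step(states, neighbors, rate=0.5):
    """One update over all vertices; `neighbors[i]` lists vertex i's adjacency."""
    out = []
    for i, s in enumerate(states):
        nbrs = neighbors[i]
        mean = sum(states[j] for j in nbrs) / len(nbrs)
        out.append(s + rate * (mean - s))
    return out

# A tetrahedron: every vertex is adjacent to the other three.
neighbors = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
states = [1.0, 0.0, 0.0, 0.0]   # a scalar "texture" value per vertex
for _ in range(20):
    states = nca_step(states, neighbors)
# repeated local updates diffuse the value across the mesh
```

Because every update only looks at a vertex's immediate neighbourhood, the same rule runs on any mesh topology, which is what makes the UV-free texture synthesis and brush-style interactions possible.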
ZeroNVS is a 3D-aware diffusion model that is able to generate novel 360-degree views of in-the-wild scenes from a single real image.
DreamCraft3D can create high-quality 3D objects from a single prompt. It uses a 2D reference image to guide the sculpting of the 3D object and then improves texture fidelity by running it through a fine-tuned Dreambooth model.
Zero123++ can generate high-quality, 3D-consistent multi-view images from a single input image using an image-conditioned diffusion model. It fixes common problems like blurry textures and misaligned shapes, and includes a ControlNet for better control over the image creation process.
Wonder3D is able to convert a single image into a high-fidelity 3D model, complete with textured meshes and color. The entire process takes only 2 to 3 minutes.
Head360 can generate a parametric 3D full-head model you can view from any angle! It works from just one picture, letting you change expressions and hairstyles quickly.
Progressive3D can generate detailed 3D content from complex prompts by breaking the process into smaller editing steps. It lets users focus on specific areas for editing and improves results by highlighting differences in meaning.
DREAM can reconstruct images seen by a person from their brain activity using an fMRI-to-image method. It decodes important details like color and depth, and it performs better than other models in keeping the appearance and structure consistent.
HumanNorm is a novel approach for high-quality and realistic 3D human generation that leverages normal maps, which enhance the 2D perception of 3D geometry. The results are quite impressive and comparable with PS3 games.
DreamGaussian can generate high-quality textured meshes from a single-view image in just 2 minutes. It uses a 3D Gaussian Splatting model for fast mesh extraction and texture refinement.
Generative Repainting can paint 3D assets using text prompts. It uses pretrained 2D diffusion models and 3D neural radiance fields to create high-quality textures for various 3D shapes.