3D AI Tools
Free 3D AI tools for creating, optimizing, and manipulating 3D assets for games, films, and design projects, boosting your creative process.
AvatarCraft can turn a text prompt into a high-quality 3D human avatar. It allows users to control the avatar’s shape and pose, making it easy to animate and reshape without retraining.
HyperDiffusion can generate high-quality 3D shapes and 4D mesh animations within a single unified framework. Instead of diffusing voxels or point clouds, it runs the diffusion process directly on the weights of neural implicit fields, which lets one model handle both complex static objects and dynamic scenes efficiently.
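To make the weight-space idea concrete, here is a minimal sketch, assuming each training shape has already been fit as a small occupancy MLP and flattened into a vector; the tiny denoiser and random "weights" below are illustrative stand-ins, not HyperDiffusion's actual architecture or data.

```python
import torch
import torch.nn as nn

# Hypothetical setup: one row per already-optimized neural field.
# In practice these come from fitting an occupancy MLP to each shape
# and flattening its parameters.
num_shapes, weight_dim = 64, 2048
weights = torch.randn(num_shapes, weight_dim)  # stand-in for real fitted weights

# Tiny denoiser over flattened weight vectors (an illustrative stand-in,
# not the transformer used in the paper).
denoiser = nn.Sequential(
    nn.Linear(weight_dim + 1, 512), nn.SiLU(),
    nn.Linear(512, 512), nn.SiLU(),
    nn.Linear(512, weight_dim),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

for step in range(100):  # toy training loop
    x0 = weights[torch.randint(0, num_shapes, (16,))]
    t = torch.randint(0, T, (16,))
    noise = torch.randn_like(x0)
    a = alpha_bar[t].unsqueeze(1)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * noise           # forward diffusion in weight space
    pred = denoiser(torch.cat([xt, t.unsqueeze(1).float() / T], dim=1))
    loss = (pred - noise).pow(2).mean()                    # standard epsilon-prediction loss
    opt.zero_grad(); loss.backward(); opt.step()

# Sampling would run the reverse process to produce a new weight vector,
# reshape it back into an MLP, and extract a mesh (e.g. via marching cubes).
```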
PAniC-3D can reconstruct 3D character heads from single-view anime portraits. It uses a line-filling model and a volumetric radiance field, achieving better results than previous methods and setting a new standard for stylized reconstruction.
Make-It-3D can create high-quality 3D content from a single image by estimating 3D shapes and adding textures. It uses a two-step process with a trained 2D diffusion model, allowing for text-to-3D creation and detailed texture editing.
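The common mechanism behind this kind of 2D-prior supervision is score distillation: noise a rendering, ask a frozen 2D diffusion model what it would remove, and push the 3D parameters in that direction. Below is a generic, minimal sketch of that loop; the noise_predictor stub, shapes, and the use of a raw learnable image in place of a real renderer are all placeholders, not Make-It-3D's actual models.

```python
import torch

def noise_predictor(noisy_img, t, prompt_embedding):
    """Stand-in for a frozen 2D diffusion model's epsilon prediction.
    A real implementation would call a pretrained text-to-image UNet here."""
    return torch.randn_like(noisy_img)

# The quantity being optimized. In a real pipeline this would be a rendering
# of NeRF or mesh parameters; here it is just a learnable image-sized tensor.
render = torch.rand(1, 3, 64, 64, requires_grad=True)
prompt_embedding = torch.randn(1, 77, 768)  # placeholder text conditioning
opt = torch.optim.Adam([render], lr=1e-2)

alphas_bar = torch.linspace(0.999, 0.01, 1000)

for step in range(100):
    t = torch.randint(20, 980, (1,))
    a = alphas_bar[t].view(1, 1, 1, 1)
    noise = torch.randn_like(render)
    noisy = a.sqrt() * render + (1 - a).sqrt() * noise
    with torch.no_grad():
        eps_pred = noise_predictor(noisy, t, prompt_embedding)
    # Score-distillation gradient: move the render toward images the
    # diffusion prior considers likely for the prompt.
    grad = eps_pred - noise
    opt.zero_grad()
    render.backward(gradient=grad)
    opt.step()
```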
Vox-E can edit 3D objects by changing their shape and appearance based on text prompts. It uses a volumetric regularization loss to keep the edited object coupled to the original, allowing for both large global changes and small local ones.
MeshDiffusion can generate realistic 3D meshes using a score-based diffusion model with deformable tetrahedral grids. It is great for creating detailed 3D shapes from single images and can also add textures, making it useful for various applications.
3DFuse can improve 3D scene generation by adding 3D awareness to 2D diffusion models. It builds a rough 3D structure from text prompts and uses depth maps for better realism in reconstructions.
X-Avatar can capture the full expressiveness of digital humans for lifelike experiences in telepresence and AR/VR. It uses full 3D scans or RGB-D data and outperforms other methods in animation tasks, supported by a new dataset with 35,500 high-quality frames.
PriorMDM can generate long human motion sequences of up to 10 minutes using a pre-trained diffusion model. It allows for controlled transitions between prompted intervals and can create two-person motions with just 14 training examples, using techniques like DiffusionBlending for better control.
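The long-sequence idea boils down to sampling fixed-length segments from the pretrained model and stitching them with smooth transitions. The sketch below is a generic sliding-window cross-fade, not the paper's exact DiffusionBlending procedure; the generate_segment stub stands in for a real motion diffusion sampler.

```python
import numpy as np

def generate_segment(num_frames: int, num_joints: int = 22, seed: int = 0) -> np.ndarray:
    """Stand-in for one fixed-length sample from a pretrained motion
    diffusion model (frames x joints x 3)."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((num_frames, num_joints, 3))

def stitch_segments(segments: list[np.ndarray], overlap: int) -> np.ndarray:
    """Cross-fade consecutive segments over `overlap` frames so the long
    sequence transitions smoothly between prompted intervals."""
    out = segments[0]
    for seg in segments[1:]:
        w = np.linspace(0.0, 1.0, overlap)[:, None, None]   # blend weights
        blended = (1 - w) * out[-overlap:] + w * seg[:overlap]
        out = np.concatenate([out[:-overlap], blended, seg[overlap:]], axis=0)
    return out

segments = [generate_segment(120, seed=i) for i in range(5)]  # e.g. one per text prompt
long_motion = stitch_segments(segments, overlap=20)
print(long_motion.shape)  # (520, 22, 3)
```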
Single Motion Diffusion can generate realistic animations from one input motion sequence. It allows for motion expansion, style transfer, and crowd animation, while using a lightweight design to create diverse motions efficiently.
TEXTure can generate and edit seamless textures for 3D shapes using text prompts. It uses a depth-to-image diffusion model to create consistent textures from different angles and allows for refinement based on user input.
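The overall loop is easy to picture: walk the camera around the shape, render a depth map, let a depth-conditioned image model paint that view, and project the result back into the UV texture. The sketch below only shows that structure; every helper is a hypothetical stub, not a function from the TEXTure codebase.

```python
import numpy as np

# Hypothetical stubs for the real components: a renderer and a
# depth-conditioned image diffusion model.
def render_depth_and_mask(mesh, texture, view):
    h = w = 64
    # Stub: returns a flat depth map and a mask of not-yet-painted pixels.
    return np.zeros((h, w)), np.ones((h, w), dtype=bool)

def depth_to_image_diffusion(prompt, depth, keep_mask, texture):
    # Stub: a real model would generate an RGB image consistent with the depth.
    return np.random.rand(*depth.shape, 3)

def backproject_to_texture(texture, generated_rgb, view, keep_mask):
    # Crude stand-in: blends the average generated color into the texture.
    # A real implementation projects pixels through the camera into UV space.
    return 0.5 * texture + 0.5 * generated_rgb.mean() * np.ones_like(texture)

mesh = None                        # placeholder for a loaded 3D mesh
texture = np.zeros((256, 256, 3))  # UV texture being painted
views = np.linspace(0, 360, 8, endpoint=False)  # camera azimuths around the shape

for view in views:
    depth, keep_mask = render_depth_and_mask(mesh, texture, view)
    rgb = depth_to_image_diffusion("a wooden chair", depth, keep_mask, texture)
    # Only newly visible regions get painted; already-textured regions are
    # kept or lightly refined so the texture stays consistent across views.
    texture = backproject_to_texture(texture, rgb, view, keep_mask)
```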
SceneDreamer can generate endless 3D scenes from 2D image collections. It creates photorealistic images with clear depth and allows for free camera movement in the environments.
RecolorNeRF can change colors in 3D scenes while keeping the view consistent. It breaks scenes into pure-colored layers, allowing for easy color adjustments and producing realistic results that are better than other methods.
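A toy illustration of the layer idea (not RecolorNeRF's actual radiance-field formulation): if every rendered pixel is a convex combination of a few pure palette colors, recoloring the whole scene reduces to swapping a palette entry while the per-pixel weights, and hence view consistency, stay fixed.

```python
import numpy as np

# Toy palette decomposition over a 4x4 "rendering".
palette = np.array([[0.9, 0.1, 0.1],   # red layer
                    [0.1, 0.1, 0.9],   # blue layer
                    [1.0, 1.0, 1.0]])  # white layer
weights = np.random.dirichlet(alpha=[1, 1, 1], size=(4, 4))  # per-pixel layer weights

image = weights @ palette               # original colors, shape (4, 4, 3)

palette_recolored = palette.copy()
palette_recolored[0] = [0.1, 0.8, 0.2]  # turn the red layer green
image_recolored = weights @ palette_recolored  # same weights -> consistent edit
print(image.shape, image_recolored.shape)
```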
Robust Dynamic Radiance Fields can jointly estimate static and dynamic radiance fields along with the camera parameters. It improves view synthesis from challenging videos, achieving better quality and accuracy than current top methods.
Point-E can generate 3D point clouds from text prompts in 1-2 minutes on a single GPU. It uses a text-to-image diffusion model to create a view and then a second diffusion model to produce the point cloud, offering a faster option for 3D object generation.
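For reference, the released openai/point-e repository exposes this pipeline directly. The sketch below is adapted from that repo's text-to-point-cloud example and uses its text-conditioned base model rather than the full text-to-image-to-point-cloud path described above; exact module names and arguments may differ between versions, so treat it as an approximation.

```python
import torch

from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
from point_e.diffusion.sampler import PointCloudSampler
from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Text-conditioned base model (1024 points) plus an upsampler (to 4096 points).
base_name = 'base40M-textvec'
base_model = model_from_config(MODEL_CONFIGS[base_name], device)
base_model.eval()
base_model.load_state_dict(load_checkpoint(base_name, device))
base_diffusion = diffusion_from_config(DIFFUSION_CONFIGS[base_name])

upsampler_model = model_from_config(MODEL_CONFIGS['upsample'], device)
upsampler_model.eval()
upsampler_model.load_state_dict(load_checkpoint('upsample', device))
upsampler_diffusion = diffusion_from_config(DIFFUSION_CONFIGS['upsample'])

sampler = PointCloudSampler(
    device=device,
    models=[base_model, upsampler_model],
    diffusions=[base_diffusion, upsampler_diffusion],
    num_points=[1024, 4096 - 1024],
    aux_channels=['R', 'G', 'B'],
    guidance_scale=[3.0, 0.0],
    model_kwargs_key_filter=('texts', ''),  # only the base model sees the prompt
)

samples = None
for x in sampler.sample_batch_progressive(batch_size=1, model_kwargs=dict(texts=['a red motorcycle'])):
    samples = x  # keep the final denoised batch

point_cloud = sampler.output_to_point_clouds(samples)[0]
```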
3D Neural Field Generation using Triplane Diffusion can create high-quality 3D models by repurposing 2D diffusion architectures. It converts ShapeNet meshes into continuous occupancy fields factored into 2D triplane features, then trains a diffusion model on those features, achieving top results in 3D generation for various object types.
TextureDreamer can transfer detailed textures from just 3 to 5 images to any 3D shape. It uses a method called geometry-aware score distillation to improve texture quality beyond previous techniques.
Latent-NeRF can generate 3D shapes and textures by combining text and shape guidance. It uses latent score distillation to apply this guidance directly on 3D meshes, allowing for high-quality textures on specific geometries.
One-2-3-45 can generate a complete 360-degree 3D textured mesh from a single image in just 45 seconds. It uses a view-conditioned 2D diffusion model to create multiple images, resulting in better geometry and consistency than other methods.
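The pipeline reads as three stages: view-conditioned novel-view synthesis from the single photo, pose handling, and a single feed-forward reconstruction of the mesh. The structural sketch below uses hypothetical stubs; none of these function names come from the One-2-3-45 codebase.

```python
import numpy as np

def synthesize_novel_view(image: np.ndarray, azimuth: float, elevation: float) -> np.ndarray:
    """Stub for a view-conditioned 2D diffusion model that imagines the
    object from a new camera pose around the input image."""
    return np.random.rand(*image.shape)

def reconstruct_mesh(views: list[np.ndarray], poses: list[tuple[float, float]]) -> dict:
    """Stub for a generalizable multi-view reconstruction module that fuses
    the synthesized views into a textured mesh in one forward pass."""
    return {"vertices": np.zeros((0, 3)), "faces": np.zeros((0, 3), dtype=int)}

input_image = np.random.rand(256, 256, 3)           # the single input photo
poses = [(az, 30.0) for az in range(0, 360, 45)]    # 8 surrounding viewpoints
views = [synthesize_novel_view(input_image, az, el) for az, el in poses]
mesh = reconstruct_mesh(views, poses)                # no per-shape optimization needed
```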
EVA3D can generate high-quality 3D human models from 2D image collections. It uses a method called compositional NeRF for detailed shapes and textures, and it improves learning with pose-guided sampling.