3D AI Tools
Free 3D AI tools for creating, optimizing, and manipulating 3D assets for games, films, and design projects, boosting your creative process.
Sin3DM can generate high-quality variations of 3D objects from a single textured shape. It trains a diffusion model on that one example to learn how its local parts fit together, enabling retargeting, outpainting, and local editing.
Text2NeRF can generate 3D scenes from text descriptions by combining neural radiance fields (NeRF) with a text-to-image diffusion model. It creates high-quality textures and detailed shapes without needing extra training data, and it achieves better photo-realism and multi-view consistency than previous text-driven 3D scene generation methods.
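To make the NeRF side of this concrete, here is a minimal sketch of the volume-rendering step that NeRF-based methods rely on: samples along a camera ray are alpha-composited into a single pixel color. The densities, colors, and sample spacing below are random stand-ins; Text2NeRF's actual network and training loop are not shown.

```python
import torch

def composite_along_ray(densities, colors, deltas):
    """Standard NeRF volume rendering: alpha-composite per-sample
    densities and colors along one ray into a single pixel color.

    densities: (S,) non-negative sigma values at S samples along the ray
    colors:    (S, 3) RGB predicted at each sample
    deltas:    (S,) distances between adjacent samples
    """
    alphas = 1.0 - torch.exp(-densities * deltas)          # opacity per sample
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=0)     # accumulated transmittance
    trans = torch.cat([torch.ones(1), trans[:-1]])         # light reaching sample i
    weights = alphas * trans
    return (weights[:, None] * colors).sum(dim=0)          # (3,) pixel color

# Stand-in values: in a NeRF-based method these would come from a radiance-field
# MLP that is optimized so rendered views match the diffusion model's outputs.
densities = torch.rand(64)
colors = torch.rand(64, 3)
deltas = torch.full((64,), (4.0 - 0.1) / 64)
print(composite_along_ray(densities, colors, deltas))
```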
DragonDiffusion can edit images by moving, resizing, and changing the appearance of objects without needing to retrain the model. It lets users drag points on images for easy and precise editing.
Shap-E can generate complex 3D assets by producing the parameters of implicit functions. Each generated asset can be rendered as both a textured mesh and a neural radiance field, and the model converges faster than Point-E while matching or exceeding its sample quality.
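To illustrate what "parameters of implicit functions" means, the sketch below queries a small coordinate-based MLP as a stand-in for a generated decoder: the network maps 3D points to density and color, which can then be meshed or ray-marched. Nothing here is Shap-E's actual architecture; the layer sizes and threshold are arbitrary.

```python
import torch

class ImplicitField(torch.nn.Module):
    """Stand-in implicit function: maps a 3D coordinate to (density, RGB).
    In Shap-E, the weights of a decoder like this are what the generative
    model produces; here they are just randomly initialized."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(3, hidden), torch.nn.SiLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.SiLU(),
            torch.nn.Linear(hidden, 4),  # density + RGB
        )

    def forward(self, xyz):
        out = self.net(xyz)
        density = torch.nn.functional.softplus(out[..., :1])
        rgb = torch.sigmoid(out[..., 1:])
        return density, rgb

field = ImplicitField()

# Query the field on a coarse grid; thresholding density gives an occupancy
# volume a mesher (e.g. marching cubes) could turn into a textured mesh, while
# ray marching the same field gives a NeRF-style render.
coords = torch.stack(torch.meshgrid(
    *[torch.linspace(-1, 1, 32)] * 3, indexing="ij"), dim=-1).reshape(-1, 3)
density, rgb = field(coords)
print("occupied voxels:", int((density.squeeze(-1) > 0.5).sum()))
```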
Patch-based 3D Natural Scene Generation from a Single Example can create high-quality 3D natural scenes from just one image by working at the patch level. It allows users to edit scenes by removing, duplicating, or modifying objects while keeping realistic shapes and appearances.
AvatarCraft can turn a text prompt into a high-quality 3D human avatar. It allows users to control the avatar’s shape and pose, making it easy to animate and reshape without retraining.
HyperDiffusion can generate high-quality 3D shapes and 4D mesh animations by running diffusion directly in the weight space of neural implicit fields. This lets a single unified framework create both complex static objects and dynamic scenes.
PAniC-3D can reconstruct 3D character heads from single-view anime portraits. It uses a line-filling model and a volumetric radiance field, achieving better results than previous methods and setting a new standard for stylized reconstruction.
Make-It-3D can create high-quality 3D content from a single image by estimating 3D geometry and adding textures. It uses a two-stage coarse-to-fine process guided by a pretrained 2D diffusion model, allowing for text-to-3D creation and detailed texture editing.
Vox-E can edit 3D objects by changing their shape and appearance based on text prompts. It uses a special method to keep the edited object connected to the original, allowing for both big and small changes.
MeshDiffusion can generate realistic 3D meshes using a score-based diffusion model with deformable tetrahedral grids. It is great for creating detailed 3D shapes from single images and can also add textures, making it useful for various applications.
3DFuse can improve 3D scene generation by adding 3D awareness to 2D diffusion models. It builds a rough 3D structure (a coarse point cloud) from the text prompt and renders it into depth maps that guide the diffusion model, making reconstructions more realistic and consistent.
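The sketch below illustrates the depth-conditioning idea: a coarse point cloud is z-buffered into a depth map through a pinhole camera, and a depth map like this is what would be handed to the 2D diffusion model as a condition. The camera parameters and the sphere point cloud are illustrative, not 3DFuse's actual setup.

```python
import numpy as np

def render_depth(points, img_size=64, focal=64.0):
    """Z-buffer a point cloud (N, 3) in camera coordinates (z > 0 in front
    of the camera) into a depth map that can condition a 2D diffusion model."""
    depth = np.full((img_size, img_size), np.inf)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    u = np.round(focal * x / z + img_size / 2).astype(int)
    v = np.round(focal * y / z + img_size / 2).astype(int)
    inside = (u >= 0) & (u < img_size) & (v >= 0) & (v < img_size) & (z > 0)
    for ui, vi, zi in zip(u[inside], v[inside], z[inside]):
        depth[vi, ui] = min(depth[vi, ui], zi)  # keep the nearest point per pixel
    return depth

# Illustrative coarse geometry: a sphere of radius 0.5, two units from the camera.
pts = np.random.randn(2000, 3)
pts = 0.5 * pts / np.linalg.norm(pts, axis=1, keepdims=True)
pts[:, 2] += 2.0
d = render_depth(pts)
print("valid depth pixels:", np.isfinite(d).sum())
```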
X-Avatar can capture the full expressiveness of digital humans for lifelike experiences in telepresence and AR/VR. It uses full 3D scans or RGB-D data and outperforms other methods in animation tasks, supported by a new dataset with 35,500 high-quality frames.
PriorMDM can generate long human motion sequences of up to 10 minutes using a pre-trained diffusion model. It allows for controlled transitions between prompted intervals and can create two-person motions with just 14 training examples, using techniques like DiffusionBlending for better control.
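The sketch below shows the general recipe for stitching long motions out of shorter diffusion windows: adjacent segments overlap, and the overlap region is cross-faded so transitions stay smooth. The simple linear blend here is a generic illustration, not PriorMDM's exact blending procedure.

```python
import numpy as np

def stitch(segments, overlap):
    """Concatenate motion segments of shape (T, J, 3) (frames x joints x xyz),
    cross-fading linearly over `overlap` frames at each seam."""
    out = segments[0]
    ramp = np.linspace(0.0, 1.0, overlap)[:, None, None]
    for seg in segments[1:]:
        blended = (1.0 - ramp) * out[-overlap:] + ramp * seg[:overlap]
        out = np.concatenate([out[:-overlap], blended, seg[overlap:]], axis=0)
    return out

# Illustrative segments: 120 frames, 22 joints each, as if sampled from a
# motion diffusion model with matching boundary conditions.
segs = [np.random.randn(120, 22, 3) for _ in range(3)]
long_motion = stitch(segs, overlap=20)
print(long_motion.shape)  # (320, 22, 3)
```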
Single Motion Diffusion can generate realistic animations from one input motion sequence. It allows for motion expansion, style transfer, and crowd animation, while using a lightweight design to create diverse motions efficiently.
TEXTure can generate and edit seamless textures for 3D shapes using text prompts. It uses a depth-to-image diffusion model to create consistent textures from different angles and allows for refinement based on user input.
SceneDreamer can generate endless 3D scenes from 2D image collections. It creates photorealistic images with clear depth and allows for free camera movement in the environments.
RecolorNeRF can change colors in 3D scenes while keeping every view consistent. It decomposes a scene into layers of pure colors (a palette), so adjusting a single palette entry recolors the whole scene, and its results look more natural than competing recoloring approaches.
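The layer idea is easiest to see in 2D: if every pixel is a convex mix of a few palette colors, swapping one palette entry recolors the whole result consistently. The sketch below shows that decomposition on a random image; RecolorNeRF learns an analogous decomposition over an entire radiance field, which is not shown here.

```python
import numpy as np

# Palette of "pure" colors and per-pixel mixing weights (each pixel's weights
# sum to 1). RecolorNeRF optimizes a decomposition like this for a 3D scene.
palette = np.array([[0.9, 0.1, 0.1],   # red
                    [0.1, 0.1, 0.9],   # blue
                    [0.9, 0.9, 0.9]])  # white
weights = np.random.dirichlet(alpha=[1, 1, 1], size=(64, 64))  # (H, W, 3)

image = weights @ palette              # compose the original image

palette_edit = palette.copy()
palette_edit[0] = [0.1, 0.8, 0.2]      # turn the red layer green
recolored = weights @ palette_edit     # same weights, new palette

print(image.shape, recolored.shape)    # (64, 64, 3) twice
```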
Robust Dynamic Radiance Fields can estimate both static and dynamic radiance fields jointly with the camera parameters. It improves view synthesis from challenging monocular videos, achieving better quality and accuracy than current state-of-the-art methods.
Point-E can generate 3D point clouds from text prompts in 1-2 minutes on a single GPU. It first uses a text-to-image diffusion model to synthesize a single view, then a second diffusion model conditioned on that image to produce the point cloud, offering a much faster (if lower-fidelity) route to 3D object generation.
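The sketch below shows the core of the second stage as a self-contained toy: standard DDPM ancestral sampling over point coordinates, conditioned on an embedding of the synthesized view. The tiny MLP denoiser, noise schedule, and random embedding are illustrative stand-ins, not Point-E's actual models or checkpoints.

```python
import torch

class ToyPointDenoiser(torch.nn.Module):
    """Predicts the noise added to an (N, 3) point cloud, conditioned on an
    image embedding. Point-E uses a transformer here; this MLP is a stand-in."""
    def __init__(self, cond_dim=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(3 + cond_dim + 1, 128), torch.nn.SiLU(),
            torch.nn.Linear(128, 3),
        )

    def forward(self, x, t, cond):
        n = x.shape[0]
        t_feat = torch.full((n, 1), float(t))
        return self.net(torch.cat([x, cond.expand(n, -1), t_feat], dim=-1))

@torch.no_grad()
def sample_point_cloud(model, cond, n_points=1024, steps=64):
    """Standard DDPM ancestral sampling, starting from Gaussian noise."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(n_points, 3)
    for i in reversed(range(steps)):
        eps = model(x, i / steps, cond)
        mean = (x - betas[i] / torch.sqrt(1.0 - alpha_bar[i]) * eps) / torch.sqrt(alphas[i])
        x = (mean + torch.sqrt(betas[i]) * torch.randn_like(x)) if i > 0 else mean
    return x

# `cond` stands in for an embedding of the image produced by the first,
# text-to-image diffusion stage.
cond = torch.randn(64)
points = sample_point_cloud(ToyPointDenoiser(), cond)
print(points.shape)  # torch.Size([1024, 3])
```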