Text-to-3D
Free text-to-3D AI tools for quickly generating 3D assets for games, films, and virtual environments, helping you speed up your creative projects.
DragonDiffusion can edit images by moving, resizing, and changing the appearance of objects without needing to retrain the model. It lets users drag points on images for easy and precise editing.
Shap-E can generate complex 3D assets by producing the parameters of implicit functions. It creates both textured meshes and neural radiance fields, and it converges faster with comparable or better quality than the earlier Point-E model.
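To make "producing parameters for implicit functions" concrete, here is a minimal toy sketch (not the real Shap-E API): the generator's output is treated as the parameters of a signed distance function, which can then be evaluated on a grid and handed to a mesher such as marching cubes. The sphere parameters and helper names are illustrative assumptions.

```python
import itertools

def make_implicit_sphere(cx, cy, cz, r):
    # In Shap-E, a network outputs the *parameters* of an implicit
    # function; in this toy stand-in the parameters are just a sphere's
    # center and radius (hypothetical, for illustration only).
    def sdf(x, y, z):
        return ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5 - r
    return sdf

def occupied_cells(sdf, n=16, lo=-1.0, hi=1.0):
    # Evaluate the implicit function on a regular grid. A mesher
    # (e.g. marching cubes) would turn the sign changes into a mesh.
    step = (hi - lo) / (n - 1)
    pts = [lo + i * step for i in range(n)]
    return [(x, y, z) for x, y, z in itertools.product(pts, pts, pts)
            if sdf(x, y, z) <= 0.0]

sphere = make_implicit_sphere(0.0, 0.0, 0.0, 0.5)
inside = occupied_cells(sphere)
print(len(inside) > 0)  # True: some grid points lie inside the surface
```

The key point is that the 3D shape is never stored as explicit geometry; it exists only as the function's parameters, and any resolution of mesh can be extracted from it afterward.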
AvatarCraft can turn a text prompt into a high-quality 3D human avatar. It allows users to control the avatar’s shape and pose, making it easy to animate and reshape without retraining.
3DFuse can improve 3D scene generation by adding 3D awareness to 2D diffusion models. It builds a rough 3D structure from text prompts and uses depth maps for better realism in reconstructions.
Point-E can generate 3D point clouds from text prompts in 1-2 minutes on a single GPU. It uses a text-to-image diffusion model to create a view and then a second diffusion model to produce the point cloud, offering a faster option for 3D object generation.
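Point-E's two-stage pipeline (text → single rendered view → point cloud) can be sketched with stub stages. This is not the real `point-e` library API; both stages below are seeded-noise stand-ins for the actual diffusion models, shown only to illustrate the data flow.

```python
import random

def synthesize_view(prompt, size=8):
    # Stage 1 stand-in: the real system runs a text-to-image diffusion
    # model to render one view of the object; we fake it with
    # prompt-seeded noise of the same role (a 2D "image").
    rng = random.Random(prompt)
    return [[rng.random() for _ in range(size)] for _ in range(size)]

def image_to_point_cloud(image, n_points=1024):
    # Stage 2 stand-in: a second diffusion model, conditioned on the
    # rendered view, denoises toward (x, y, z) points; we fake it with
    # image-seeded Gaussian samples.
    rng = random.Random(sum(map(sum, image)))
    return [(rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1))
            for _ in range(n_points)]

def text_to_point_cloud(prompt):
    view = synthesize_view(prompt)       # text  -> single 2D view
    return image_to_point_cloud(view)    # view  -> 3D point cloud

cloud = text_to_point_cloud("a red chair")
print(len(cloud), len(cloud[0]))  # 1024 3
```

Splitting the problem this way is what makes Point-E fast: the hard text understanding happens in the cheap 2D stage, and the 3D stage only has to lift one image into points.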
Latent-NeRF can generate 3D shapes and textures by combining text and shape guidance. It uses latent score distillation to apply this guidance directly on 3D meshes, allowing for high-quality textures on specific geometries.
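The score-distillation idea behind Latent-NeRF can be shown with a toy optimization loop: render the current 3D parameters, ask a frozen "teacher" (standing in for the diffusion model) which direction makes the render more plausible, and nudge the parameters that way. The identity renderer and linear teacher below are deliberate simplifications, not the paper's method.

```python
def sds_step(theta, render, teacher_score, lr=0.1):
    # One conceptual score-distillation step. SDS uses the teacher's
    # score as the gradient directly (skipping the U-Net Jacobian).
    x = render(theta)
    grad = [-s for s in teacher_score(x)]
    return [t - lr * g for t, g in zip(theta, grad)]

# Toy setup (assumptions for illustration): "rendering" is the identity
# map and the teacher's score points toward a fixed target "image" --
# stand-ins for the real NeRF renderer and latent diffusion model.
target = [1.0, -0.5, 0.25]
render = lambda theta: theta
teacher_score = lambda x: [t - xi for t, xi in zip(target, x)]

theta = [0.0, 0.0, 0.0]
for _ in range(100):
    theta = sds_step(theta, render, teacher_score)
print([round(t, 2) for t in theta])  # converges to [1.0, -0.5, 0.25]
```

In Latent-NeRF this loop runs in the latent space of a diffusion model, with the shape guidance constraining which geometries the optimization is allowed to reach.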