Image-to-3D
Free image-to-3D AI tools for turning images into 3D assets for games, films, and design projects, streamlining your creative workflow.
MeshFormer can generate high-quality 3D textured meshes from just a few 2D images in seconds.
LGM can generate high-resolution 3D models from text prompts or single-view images. It uses a fast multi-view Gaussian representation, producing models in under 5 seconds while maintaining high quality.
En3D can generate high-quality 3D human avatars from 2D images without needing existing assets.
Doodle Your 3D can turn abstract sketches into precise 3D shapes. The method can even edit a shape by simply tweaking its sketch. Super cool. Sketch-to-3D-print isn’t that far away now.
WonderJourney lets you wander through your favourite paintings, poems and haikus. The method can generate a sequence of diverse yet coherently connected 3D scenes from a single image or text prompt.
ZeroNVS is a 3D-aware diffusion model that can generate novel 360-degree views of in-the-wild scenes from a single real image.
Zero123++ can generate high-quality, 3D-consistent multi-view images from a single input image using an image-conditioned diffusion model. It fixes common problems like blurry textures and misaligned shapes, and includes a ControlNet for better control over the image creation process.
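For the curious: Zero123++ is distributed as a custom diffusers pipeline on Hugging Face. A minimal sketch of running it (model and pipeline IDs as published on the project's Hugging Face page at the time of writing; check the repo for current usage and parameters):

```python
import torch
from PIL import Image
from diffusers import DiffusionPipeline

# Zero123++ ships as a custom diffusers pipeline.
pipeline = DiffusionPipeline.from_pretrained(
    "sudo-ai/zero123plus-v1.1",
    custom_pipeline="sudo-ai/zero123plus-pipeline",
    torch_dtype=torch.float16,
).to("cuda")

cond = Image.open("input.png")                     # single input view
result = pipeline(cond, num_inference_steps=75).images[0]
result.save("multiview.png")                       # grid of six novel views
```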
Wonder3D can convert a single image into a high-fidelity 3D model, complete with textured meshes and color. The entire process takes only 2 to 3 minutes.
DreamGaussian can generate high-quality textured meshes from a single-view image in just 2 minutes. It uses a 3D Gaussian Splatting model for fast mesh extraction and texture refinement.
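If you're wondering what "Gaussian Splatting" actually means here: the scene is represented as a cloud of anisotropic 3D Gaussians that get projected into the image plane and alpha-blended front to back. A toy numpy sketch of the projection step (purely illustrative, not DreamGaussian's actual code; all names are made up):

```python
import numpy as np

def splat_gaussian(mean3d, cov3d, color, opacity, K, pixel):
    """Project one 3D Gaussian through a pinhole camera K and evaluate
    its contribution at a pixel (toy version of the splatting rasterizer;
    camera at the origin, no rotation)."""
    x, y, z = mean3d
    mean2d = (K @ mean3d)[:2] / z                  # perspective projection

    # Jacobian of the projection at the mean (local affine approximation)
    fx, fy = K[0, 0], K[1, 1]
    J = np.array([[fx / z, 0.0, -fx * x / z**2],
                  [0.0, fy / z, -fy * y / z**2]])
    cov2d = J @ cov3d @ J.T                        # projected 2D covariance

    d = pixel - mean2d
    power = -0.5 * d @ np.linalg.inv(cov2d) @ d    # Gaussian falloff
    alpha = opacity * np.exp(power)
    return alpha, color

# Toy usage: one Gaussian, one pixel.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
alpha, rgb = splat_gaussian(np.array([0.0, 0.0, 2.0]), np.eye(3) * 0.01,
                            np.array([1.0, 0.5, 0.2]), 0.9,
                            K, np.array([320.0, 240.0]))
print(alpha, rgb)  # a renderer would alpha-blend these front to back
```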
PlankAssembly can turn 2D line drawings from three views into 3D CAD models. It effectively handles noisy or incomplete inputs and improves accuracy using shape programs.
Similar to ControlNet scribble for images, SketchMetaFace brings sketch guidance to the 3D realm, making it possible to turn a sketch into a 3D face model. Pretty excited about progress like this, as it will bring controllability to 3D generation and make creating 3D content way more accessible.
PAniC-3D can reconstruct 3D character heads from single-view anime portraits. It uses a line-filling model and a volumetric radiance field, achieving better results than previous methods and setting a new standard for stylized reconstruction.
Make-It-3D can create high-quality 3D content from a single image by estimating 3D shapes and adding textures. It uses a two-step process with a trained 2D diffusion model, allowing for text-to-3D creation and detailed texture editing.
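The trick that makes a 2D diffusion model useful as a 3D supervisor is usually some form of score distillation: render the current 3D estimate differentiably, noise the render, and nudge it toward what the frozen denoiser expects. A generic PyTorch-style sketch of that loss (not Make-It-3D's actual code; `unet` and its call signature are placeholder assumptions):

```python
import torch

def score_distillation_loss(render, unet, alphas_cumprod, text_emb):
    """Generic SDS-style loss: a frozen 2D diffusion prior pulls a
    differentiable render toward high-probability images.
    `unet(noisy, t, text_emb)` predicting noise is a placeholder API."""
    b = render.shape[0]
    t = torch.randint(50, 950, (b,), device=render.device)
    noise = torch.randn_like(render)
    a = alphas_cumprod[t].view(b, 1, 1, 1)
    noisy = a.sqrt() * render + (1 - a).sqrt() * noise  # forward diffusion

    with torch.no_grad():
        eps_pred = unet(noisy, t, text_emb)             # frozen prior

    w = 1 - a                                           # common weighting
    grad = w * (eps_pred - noise)                       # SDS gradient
    # Treat grad as a constant target; backprop flows through the render.
    return (grad.detach() * render).sum()
```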
SceneDreamer can generate endless 3D scenes from 2D image collections. It creates photorealistic images with clear depth and allows for free camera movement in the environments.
EVA3D can generate high-quality 3D human models from 2D image collections. It uses a method called compositional NeRF for detailed shapes and textures, and it improves learning with pose-guided sampling.
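Under the hood, NeRF-style models like this map a 3D point through a frequency encoding into an MLP that predicts density and colour; the encoding is what lets a small network capture fine detail. A minimal generic sketch of that encoding (standard NeRF positional encoding, not EVA3D's code):

```python
import numpy as np

def positional_encoding(x, num_freqs=10):
    """Standard NeRF positional encoding: lift each coordinate to
    [sin(2^k * x), cos(2^k * x)] for k = 0..num_freqs-1, so the MLP
    can represent high-frequency geometry and texture."""
    freqs = 2.0 ** np.arange(num_freqs)          # 1, 2, 4, ...
    angles = x[..., None] * freqs                # (..., 3, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)        # (..., 3 * 2 * num_freqs)

pt = np.array([0.1, -0.4, 0.7])                  # a sample point on a ray
print(positional_encoding(pt).shape)             # (60,) -> MLP input
```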
Adobe is entering the image-to-3D game. LRM can create high-fidelity 3D object meshes from a single image in just 5 seconds. The model is trained on massive multi-view data containing around 1 million objects. The results are pretty impressive, and the method generalizes well to real-world pictures as well as images from generative models.