Image-to-3D
Free image-to-3D AI tools for transforming images into 3D assets for games, films, and design projects, streamlining your creative workflow.
MeshPad can create and edit 3D meshes from 2D sketches. Users can easily add or delete mesh parts through simple sketch changes.
StyleSculptor can generate 3D assets from a content image and style images without needing extra training.
Lyra can generate 3D scenes from a single image or video. It uses a method that allows real-time rendering and dynamic scene generation without needing multiple views for training.
Matrix-3D can generate 3D worlds from a single image or text prompt. It allows users to explore these environments in any direction and supports both quick and detailed scene creation.
Reflect3D can detect 3D reflection symmetry from a single RGB image and improve 3D generation.
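The core idea behind reflection-symmetry detection can be illustrated with a simple score: mirror a point cloud across a candidate plane and measure how closely the mirrored points match the originals. This is a generic sketch of that idea, not Reflect3D's actual method or code.

```python
# Illustrative sketch (not Reflect3D's code): score a candidate reflection
# plane by mirroring a point cloud across it and checking how well the
# mirrored points line up with the originals.
import numpy as np

def reflect(points, normal, offset):
    """Reflect points across the plane n.x = offset (n must be unit length)."""
    d = points @ normal - offset            # signed distance to the plane
    return points - 2.0 * d[:, None] * normal

def symmetry_score(points, normal, offset):
    """Mean nearest-neighbor distance between the cloud and its mirror.

    Lower is better; 0 means the cloud is perfectly symmetric about the
    candidate plane.
    """
    mirrored = reflect(points, normal, offset)
    # Brute-force nearest neighbor (fine for small clouds).
    dists = np.linalg.norm(points[:, None, :] - mirrored[None, :, :], axis=-1)
    return dists.min(axis=1).mean()

# A cloud that is symmetric about the x = 0 plane scores 0.
cloud = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0],
                  [0.5, 1.0, 0.0], [-0.5, 1.0, 0.0]])
print(symmetry_score(cloud, np.array([1.0, 0.0, 0.0]), 0.0))  # → 0.0
```

In practice a detector would search over candidate planes and keep the one with the lowest score; the detected symmetry can then act as a constraint during 3D generation.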
PhysX can generate 3D assets with detailed physical properties, labeling each asset in five key areas: scale, material, affordance, kinematics, and function.
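The five annotation areas can be pictured as a simple record per asset. The field names and example values below are assumptions for illustration, not PhysX's actual schema or API.

```python
# Hypothetical schema (names are assumptions, not PhysX's API) covering the
# five property areas: scale, material, affordance, kinematics, function.
from dataclasses import dataclass, field

@dataclass
class PhysicalAsset:
    name: str
    scale_m: float                 # real-world size in meters
    material: str                  # e.g. "wood", "steel", "ceramic"
    affordance: list[str] = field(default_factory=list)  # how it can be used
    kinematics: str = "rigid"      # e.g. "rigid", "hinge", "slider"
    function: str = ""             # what the object is for

mug = PhysicalAsset(
    name="coffee_mug",
    scale_m=0.10,
    material="ceramic",
    affordance=["grasp", "pour", "contain"],
    kinematics="rigid",
    function="holds hot liquids",
)
print(mug.material)  # → ceramic
```

Annotations like these make generated assets directly usable in physics simulation and robotics pipelines, not just as render-only geometry.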
PartPacker can generate high-quality 3D objects with many meaningful parts from a single image.
UniTEX can generate high-quality textures for 3D assets without using UV mapping. It maps 3D points to texture values based on surface proximity and uses a transformer-based model for better texture quality.
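The UV-free idea can be sketched in a few lines: instead of unwrapping the mesh into a 2D UV atlas, look up a texture value for any 3D point via its nearest surface point. This toy nearest-neighbor lookup only illustrates the mapping; UniTEX itself uses a learned, transformer-based model.

```python
# Toy illustration of UV-free texturing (not UniTEX's model): map a 3D query
# point to the nearest surface sample and return that sample's color, so no
# UV unwrapping is needed.
import numpy as np

def query_texture(point, surface_points, surface_colors):
    """Return the color of the surface sample nearest to `point`."""
    dists = np.linalg.norm(surface_points - point, axis=1)
    return surface_colors[np.argmin(dists)]

# Toy "surface": two colored samples.
surface = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
colors = np.array([[1.0, 0.0, 0.0],   # red
                   [0.0, 0.0, 1.0]])  # blue
print(query_texture(np.array([0.1, 0.2, 0.0]), surface, colors))  # → [1. 0. 0.]
```

Because the lookup is defined for any point near the surface, it sidesteps the seams and distortion that UV atlases introduce.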
Direct3D-S2 can generate high-resolution 3D shapes.
4K4DGen can turn a single panorama image into an immersive 4D environment with 360-degree views at 4K resolution. It animates the scene and optimizes a set of 4D Gaussians with efficient splatting techniques for real-time exploration.
SVAD can generate high-quality 3D avatars from a single image. It keeps the person’s identity and details consistent across different poses and angles while allowing for real-time rendering.
HORT can create detailed 3D point clouds of hand-held objects from just one photo.
LVSM can generate high-quality 3D views of objects and scenes from a few input images.
DiffPortrait360 can create high-quality 360-degree views of human heads from single images.
MVGenMaster can generate up to 100 new views from a single image using a multi-view diffusion model.
StdGEN can generate high-quality 3D characters from a single image in just three minutes. It breaks characters down into parts like body, clothes, and hair, using a transformer-based model that achieves strong results on 3D anime character generation.
Phidias can generate high-quality 3D assets from text, images, and 3D references. It uses a method called reference-augmented diffusion to improve quality and speed, achieving results in just a few seconds.
Cycle3D can generate high-quality and consistent 3D content from a single unposed image. This approach enhances texture consistency and multi-view coherence, significantly improving the quality of the final 3D reconstruction.
DiffSplat can generate 3D Gaussian splats from text prompts and single-view images in 1-2 seconds.
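For background, the output format these splat-based tools target is simple: each 3D Gaussian splat is a small record, and a scene is a large array of them, which is why rendering is so fast. The field layout and compositing sketch below follow common Gaussian-splatting conventions, not either paper's code.

```python
# Common Gaussian-splat layout (a convention, not DiffSplat's actual code).
from dataclasses import dataclass

@dataclass
class GaussianSplat:
    position: tuple[float, float, float]         # center in world space
    scale: tuple[float, float, float]            # per-axis Gaussian extent
    rotation: tuple[float, float, float, float]  # unit quaternion (w, x, y, z)
    opacity: float                               # 0..1, for alpha compositing
    color: tuple[float, float, float]            # RGB (real systems store SH coefficients)

def composite(splats_sorted_near_to_far):
    """Front-to-back alpha compositing of splats along one pixel ray."""
    color, transmittance = [0.0, 0.0, 0.0], 1.0
    for s in splats_sorted_near_to_far:
        w = transmittance * s.opacity
        color = [c + w * sc for c, sc in zip(color, s.color)]
        transmittance *= 1.0 - s.opacity
    return color

front = GaussianSplat((0, 0, 1), (1, 1, 1), (1, 0, 0, 0), 0.5, (1.0, 0.0, 0.0))
back = GaussianSplat((0, 0, 2), (1, 1, 1), (1, 0, 0, 0), 1.0, (0.0, 0.0, 1.0))
print(composite([front, back]))  # → [0.5, 0.0, 0.5]
```

A real renderer also projects each Gaussian's footprint onto the screen and sorts per tile, but the per-pixel compositing step is essentially this loop.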
FabricDiffusion can transfer high-quality fabric textures from a 2D clothing image to 3D garments of any shape.