3D Object Generation
Free AI tools for 3D object generation that let you quickly create assets for games, films, and animations, streamlining your creative projects.
Interactive3D can generate high-quality 3D objects that users can easily modify. It allows for adding and removing parts, dragging objects, and changing shapes.
ClickDiff can generate controllable grasps for 3D objects. It employs a Dual Generation Framework to produce realistic grasps based on user-specified or algorithmically predicted contact maps.
SV4D can generate dynamic 3D content from a single video. It keeps novel views consistent across frames and achieves high-quality novel-view video synthesis.
DreamCar can reconstruct 3D car models from just a few images or single-image inputs. It uses Score Distillation Sampling and pose optimization to enhance texture alignment and overall model quality, significantly outperforming existing methods.
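Score Distillation Sampling, which DreamCar builds on, optimizes 3D parameters by nudging rendered images toward what a pretrained diffusion model considers plausible. A minimal toy sketch of the SDS gradient step, assuming the DreamFusion-style formulation; the `noise_pred` function here is a hypothetical stand-in for a real diffusion model:

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_pred(noisy_img, t):
    # Hypothetical stand-in for a pretrained diffusion model's
    # noise prediction; a real setup would call the U-Net here.
    return 0.1 * noisy_img

def sds_grad(rendered, t, alpha_bar):
    """One Score Distillation Sampling gradient w.r.t. the rendered image:
    w(t) * (eps_hat - eps), skipping the U-Net Jacobian as in DreamFusion."""
    eps = rng.standard_normal(rendered.shape)                    # sampled noise
    noisy = np.sqrt(alpha_bar) * rendered + np.sqrt(1 - alpha_bar) * eps
    eps_hat = noise_pred(noisy, t)                               # predicted noise
    w = 1 - alpha_bar                                            # common weighting choice
    return w * (eps_hat - eps)

# Toy optimization: treat the "rendered image" itself as the parameter
# (a real pipeline would backpropagate through a differentiable renderer).
img = rng.standard_normal((8, 8, 3))
for step in range(100):
    img -= 0.1 * sds_grad(img, t=0.5, alpha_bar=0.5)
```

In a real text-to-3D or image-to-3D pipeline this gradient flows through the renderer into the 3D representation's parameters rather than into pixels directly.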
3DWire can generate 3D house wireframes from text! The wireframes can be easily segmented into distinct components, such as walls, roofs, and rooms, reflecting the semantic essence of the shape.
An Object is Worth 64x64 Pixels can generate 3D models via 64x64 pixel images! It creates realistic objects with good shapes and colors, matching more complex methods.
GeneFace can generate high-quality 3D talking face videos from any speech audio. It solves the head-torso separation problem and provides better lip synchronization and image quality than earlier methods.
BRDF-Uncertainty can estimate the properties of the materials on an object’s surface in seconds given its geometry and a lighting environment.
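The material properties such an estimator recovers are the parameters of a reflectance model. As a purely illustrative sketch, here is an evaluation of a simple Lambertian-plus-Phong model for one light; the albedo, specular, and shininess values are made up, and real BRDF estimation works with far richer models:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def shade(normal, light_dir, view_dir, albedo, specular, shininess):
    """Evaluate a simple Lambertian + Phong reflection model.

    `albedo`, `specular`, and `shininess` are the kind of per-surface
    material parameters a BRDF estimator would recover (hypothetical values).
    """
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    diffuse = albedo * max(np.dot(n, l), 0.0)      # Lambertian term
    r = 2.0 * np.dot(n, l) * n - l                 # mirror reflection of l about n
    spec = specular * max(np.dot(r, v), 0.0) ** shininess  # Phong highlight
    return diffuse + spec

color = shade(
    normal=np.array([0.0, 0.0, 1.0]),
    light_dir=np.array([0.0, 0.0, 1.0]),
    view_dir=np.array([0.0, 0.0, 1.0]),
    albedo=np.array([0.8, 0.2, 0.2]),
    specular=0.5,
    shininess=32,
)
# With light and viewer both head-on, color = albedo + specular = [1.3, 0.7, 0.7]
```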
Portrait3D can generate high-quality 3D heads with accurate geometry and texture from a single in-the-wild portrait image.
MeshAnything can convert 3D assets in any 3D representation into meshes. This can be used to enhance various 3D asset production methods and significantly improve storage, rendering, and simulation efficiencies.
MagicPose4D can generate 3D objects from text or images and transfer precise motions and trajectories from objects and characters in a video or mesh sequence.
RemoCap can reconstruct 3D human bodies from motion sequences. It captures occluded body parts with greater fidelity, resulting in less mesh penetration and motion distortion.
NOVA-3D can generate 3D anime characters from non-overlapped front and back views.
DreamScene4D can generate dynamic 4D scenes from single videos. It tracks object motion and handles complex movements, allowing for accurate 2D point tracking by converting 3D paths to 2D.
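Converting a 3D path to 2D, as DreamScene4D does to compare against 2D point tracks, is a standard camera projection. A minimal pinhole-camera sketch, assuming camera-space points and made-up intrinsics:

```python
import numpy as np

def project(points_3d, fx, fy, cx, cy):
    """Project Nx3 camera-space points to Nx2 pixel coordinates
    using a pinhole camera model (focal lengths fx/fy, principal
    point cx/cy)."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=1)

# A short 3D trajectory moving to the right and away from the camera.
path = np.array([[0.0, 0.0, 1.0],
                 [0.5, 0.0, 2.0],
                 [1.0, 0.0, 4.0]])
track_2d = project(path, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
# The first point, on the optical axis, lands at the principal point (320, 240).
```

Comparing `track_2d` against an observed 2D point track is what lets the method score how well its recovered 3D motion explains the video.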
X-Oscar can generate high-quality 3D avatars from text prompts. It uses a step-by-step process for geometry, texture, and animation, while addressing issues like low quality and oversaturation through advanced techniques.
Invisible Stitch can inpaint missing depth information in a 3D scene, resulting in improved geometric coherence and smoother transitions between frames.
On the pose reconstruction front, TokenHMR can extract human poses and shapes from a single image.
PhysDreamer is a physics-based approach that lets you poke, push, pull, and throw objects in a virtual 3D environment, with the objects reacting in a physically plausible manner.
InFusion can inpaint 3D Gaussian point clouds to restore missing 3D points for better visuals. It lets users change textures and add new objects, achieving high quality and efficiency.
in2IN is a motion generation model that factors in both the overall interaction’s textual description and individual action descriptions of each person involved. This enhances motion diversity and enables better control over each person’s actions while preserving interaction coherence.