3D Object Generation
Free AI tools for 3D object generation, helping you quickly create assets for games, films, and animations.
ObjectCarver can segment, reconstruct, and separate 3D objects using only user-input clicks in a single view, eliminating the need for segmentation masks.
LegoGPT can generate stable and buildable LEGO designs from text prompts. It uses physics-aware techniques to ensure designs are safe for manual assembly and robotic construction, and it can create colored and textured models.
PrimitiveAnything can generate high-quality 3D shapes from 3D models, text, and images by breaking complex forms down into simple geometric parts. It uses a shape-conditioned primitive transformer to ensure that the shapes remain accurate and diverse.
DiffLocks can generate detailed 3D hair geometry from a single image in 3 seconds.
HoloPart can break down 3D shapes into complete and meaningful parts, even if they are hidden. It also supports numerous downstream applications such as Geometry Editing, Geometry Processing, Material Editing, and Animation.
AniSDF can reconstruct high-quality 3D shapes with improved surface geometry. It can handle complex, luminous, reflective as well as fuzzy objects.
Pixel3DMM can reconstruct 3D human faces from a single RGB image.
GARF can reassemble 3D objects from real-world fractured parts.
HORT can create detailed 3D point clouds of hand-held objects from just one photo.
LVSM can generate high-quality 3D views of objects and scenes from a few input images.
StdGEN can generate high-quality 3D characters from a single image in just three minutes. It breaks down characters into parts like body, clothes, and hair, using a transformer-based model for great results in 3D anime character generation.
So far it has been tough to imagine the benefits of AI agents: most of what we’ve seen in that domain has focused on NPC simulations or solving text-based goals. 3D-GPT is a new framework that uses LLMs for instruction-driven 3D modeling, breaking 3D modeling tasks into manageable segments to procedurally generate 3D scenes. I recently started digging into Blender, and I pray this gets open-sourced at some point.
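The decomposition idea behind 3D-GPT can be sketched as a simple planner-plus-dispatcher loop: an instruction is split into subtasks, and each subtask is routed to a procedural-generation function. Everything below is a toy illustration under my own assumptions — the function names, subtask names, and the keyword-based planner are all invented stand-ins (the real framework prompts an LLM and drives Blender's procedural tools).

```python
# Hypothetical sketch of instruction-driven procedural 3D modeling,
# loosely inspired by 3D-GPT's task-decomposition idea.
# All function, task, and parameter names here are invented for illustration.

def make_terrain(params):
    return {"type": "terrain", **params}

def make_tree(params):
    return {"type": "tree", **params}

def make_sky(params):
    return {"type": "sky", **params}

# Registry mapping subtask names to procedural generators.
GENERATORS = {
    "terrain": make_terrain,
    "tree": make_tree,
    "sky": make_sky,
}

def plan(instruction):
    """Stand-in for the LLM planner: splits an instruction into
    (subtask, params) pairs. A real system would prompt an LLM here."""
    subtasks = []
    if "forest" in instruction:
        subtasks.append(("terrain", {"roughness": 0.4}))
        subtasks.append(("tree", {"count": 50}))
    if "sunset" in instruction:
        subtasks.append(("sky", {"sun_elevation": 5}))
    return subtasks

def build_scene(instruction):
    """Dispatch each planned subtask to its registered generator."""
    return [GENERATORS[name](params) for name, params in plan(instruction)]

scene = build_scene("a forest at sunset")
print([obj["type"] for obj in scene])  # → ['terrain', 'tree', 'sky']
```

The appeal of this structure is that the hard part (planning) is isolated behind one function, so swapping the keyword heuristic for an actual LLM call leaves the generator registry untouched.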
LIFe-GoM can create animatable 3D human avatars from sparse multi-view images in under 1 second. It renders high-quality images at 95.1 frames per second.
Google DeepMind has been researching 4DiM, a cascaded diffusion model for 4D novel view synthesis. It can generate 3D scenes with temporal dynamics from a single image and a set of camera poses and timestamps.
Dora can generate 3D assets from images that are ready for diffusion-based character control in modern 3D engines, such as Unity 3D, in real time.
OmniPhysGS can generate realistic 3D dynamic scenes by modeling objects with Constitutive 3D Gaussians.
Hunyuan3D 2.0 can generate high-resolution textured 3D assets. It allows users to create and animate detailed 3D models efficiently, with improved geometry detail and texture quality compared to previous models.
GaussianDreamerPro can generate 3D Gaussian assets from text that can be seamlessly integrated into downstream manipulation pipelines, such as animation, composition, and simulation.
Coin3D can generate and edit 3D assets from a basic input shape. Similar to ControlNet, it enables precise part editing and responsive 3D object previews within a few seconds.
REACTO can reconstruct articulated 3D objects by capturing the motion and shape of objects with flexible deformation from a single video.