3D Object Generation
Free 3D object generation AI tools for quickly creating assets for games, films, and animations, streamlining your creative projects.
HumanNorm is a novel approach for high-quality and realistic 3D human generation. It leverages normal maps, which enhance the 2D perception of 3D geometry. The results are quite impressive and comparable to PS3-era games.
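For intuition on why normal maps help: a normal map encodes the local surface orientation at every pixel, which gives a 2D model a direct view of 3D shape. A minimal sketch (not HumanNorm's pipeline) that derives a normal map from a depth buffer with finite differences, assuming an orthographic camera:

```python
import numpy as np

def depth_to_normals(depth: np.ndarray) -> np.ndarray:
    """Approximate per-pixel surface normals from a depth map.

    Central finite differences under an orthographic camera; a toy
    stand-in for the normal maps HumanNorm-style methods work with.
    """
    dz_dy, dz_dx = np.gradient(depth)             # depth slope along image axes
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals                                # unit vectors in [-1, 1]^3

# Toy example: a hemisphere bulging out of a flat plane.
ys, xs = np.mgrid[-1:1:256j, -1:1:256j]
r2 = np.clip(1.0 - xs**2 - ys**2, 0.0, None)
depth = 1.0 - np.sqrt(r2)                         # smaller depth at the center
normal_map = (depth_to_normals(depth) + 1.0) / 2.0  # pack into RGB [0, 1]
print(normal_map.shape)                           # (256, 256, 3)
```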
DreamGaussian can generate high-quality textured meshes from a single-view image in just 2 minutes. It uses a 3D Gaussian Splatting model for fast mesh extraction and texture refinement.
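For a feel of the underlying representation: a 3D Gaussian splat is just a position with an anisotropic covariance (factored into a rotation and per-axis scales), an opacity, and a color. A minimal sketch of that data structure, not DreamGaussian's actual code:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Gaussian3D:
    mean: np.ndarray      # (3,) center position
    rotation: np.ndarray  # (3, 3) orthonormal orientation
    scales: np.ndarray    # (3,) per-axis standard deviations
    opacity: float
    color: np.ndarray     # (3,) RGB

    def covariance(self) -> np.ndarray:
        # Sigma = R S S^T R^T, the factorization used in Gaussian splatting
        S = np.diag(self.scales)
        return self.rotation @ S @ S.T @ self.rotation.T

    def density(self, x: np.ndarray) -> float:
        # Unnormalized Gaussian falloff at x, scaled by opacity
        d = x - self.mean
        return self.opacity * np.exp(-0.5 * d @ np.linalg.inv(self.covariance()) @ d)

g = Gaussian3D(mean=np.zeros(3), rotation=np.eye(3),
               scales=np.array([0.1, 0.1, 0.3]), opacity=0.8,
               color=np.array([1.0, 0.5, 0.2]))
print(g.density(np.array([0.0, 0.0, 0.1])))
```

A scene is tens of thousands of these, rasterized by projecting each covariance into screen space and alpha-blending front to back, which is what makes the 2-minute turnaround possible.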
PlankAssembly can turn 2D line drawings from three views into 3D CAD models. It effectively handles noisy or incomplete inputs and improves accuracy using shape programs.
Similar to ControlNet scribbles for images, SketchMetaFace brings sketch guidance to the 3D realm and makes it possible to turn a sketch into a 3D face model. Pretty exciting progress, as this brings controllability to 3D generation and makes creating 3D content far more accessible.
NIS-SLAM can reconstruct high-fidelity surfaces and geometry from RGB-D frames. It also learns 3D consistent semantic representations during this process.
Neuralangelo can reconstruct detailed 3D surfaces from RGB video captures. It uses multi-resolution 3D hash grids and neural surface rendering, achieving high fidelity without needing extra depth inputs.
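The multi-resolution hash grid is the Instant-NGP-style encoding: grid corners at each resolution level are mapped into a fixed-size feature table by a spatial hash. A minimal sketch of that lookup (the primes follow the Instant-NGP paper; table size and feature width are illustrative):

```python
import numpy as np

# Large primes from the Instant-NGP paper; table size is illustrative.
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)
TABLE_SIZE = 2**19

def hash_corner(ix: int, iy: int, iz: int) -> int:
    """Spatial hash mapping an integer grid corner to a feature-table slot."""
    h = (np.uint64(ix) * PRIMES[0]) ^ (np.uint64(iy) * PRIMES[1]) ^ (np.uint64(iz) * PRIMES[2])
    return int(h % np.uint64(TABLE_SIZE))

def encode(x: np.ndarray, level_res: int, table: np.ndarray) -> np.ndarray:
    """Look up the feature of the nearest grid corner at one resolution level.

    Real implementations trilinearly blend all 8 surrounding corners and
    concatenate features across many levels; this keeps only the gist.
    """
    corner = np.floor(x * level_res).astype(int)
    return table[hash_corner(*corner)]

table = np.random.randn(TABLE_SIZE, 2).astype(np.float32)  # 2 features per slot
print(encode(np.array([0.3, 0.7, 0.1]), level_res=64, table=table))
```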
Motion capture is cool. But what if you want your 3D characters to move in new and unique ways? GenMM can generate a variety of movements from just one or a few example sequences. Unlike other methods, it doesn't need exhaustive training and can create new motions with complex skeletons in fractions of a second. It's also a whiz at jobs motion matching alone can't handle, like motion completion, guided generation from keyframes, infinite looping, and motion reassembly.
Humans in 4D can track and reconstruct humans in 3D from a single video. It handles unusual poses and poor visibility well, using a transformer-based network called HMR 2.0 to improve action recognition.
Sin3DM can generate high-quality variations of 3D objects from a single textured shape. It uses a diffusion model to learn how parts of the object fit together, enabling retargeting, outpainting, and local editing.
Shap-E can generate complex 3D assets by producing parameters for implicit functions. It creates both textured meshes and neural radiance fields, and it works faster with better quality than the Point-E model.
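"Producing parameters for implicit functions" means the generative model outputs the weights of a small network that is then queried at 3D points. A toy sketch of that querying step, with random weights standing in for generated ones (not Shap-E's actual architecture):

```python
import numpy as np

def query_implicit(points: np.ndarray, params: dict) -> np.ndarray:
    """Evaluate a tiny MLP occupancy field whose weights were *generated*.

    Toy stand-in for Shap-E's idea: the model outputs the parameters of
    an implicit function, which is then queried at arbitrary 3D points.
    """
    h = np.tanh(points @ params["W1"] + params["b1"])
    return 1.0 / (1.0 + np.exp(-(h @ params["W2"] + params["b2"])))  # occupancy in (0, 1)

rng = np.random.default_rng(0)
params = {  # in Shap-E these would come from the generative model, not randomness
    "W1": rng.normal(size=(3, 32)), "b1": rng.normal(size=32),
    "W2": rng.normal(size=(32, 1)), "b2": rng.normal(size=1),
}
pts = rng.uniform(-1, 1, size=(4, 3))         # query positions in space
print(query_implicit(pts, params).ravel())    # occupancy per point
```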
Patch-based 3D Natural Scene Generation from a Single Example can create high-quality 3D natural scenes from a single example by working at the patch level. It allows users to edit scenes by removing, duplicating, or modifying objects while keeping realistic shapes and appearances.
AvatarCraft can turn a text prompt into a high-quality 3D human avatar. It allows users to control the avatar’s shape and pose, making it easy to animate and reshape without retraining.
HyperDiffusion can generate high-quality 3D shapes and 4D mesh animations using a unified diffusion model. This method allows for the creation of complex objects and dynamic scenes from a single framework, making it versatile and efficient.
PAniC-3D can reconstruct 3D character heads from single-view anime portraits. It uses a line-filling model and a volumetric radiance field, achieving better results than previous methods and setting a new standard for stylized reconstruction.
MeshDiffusion can generate realistic 3D meshes using a score-based diffusion model with deformable tetrahedral grids. It is great for creating detailed 3D shapes from single images and can also add textures, making it useful for various applications.
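With a deformable tetrahedral grid (as in DMTet-style extraction), each tet vertex stores a signed distance value, and the mesh surface is pulled out wherever an edge changes sign. A minimal sketch of that zero-crossing interpolation for a single edge, assuming the SDF varies linearly along it:

```python
import numpy as np

def edge_crossing(p0, p1, s0: float, s1: float):
    """Return the surface point on a tet edge where the SDF changes sign.

    Linear interpolation of the zero crossing; marching-tetrahedra-style
    extraction repeats this for every sign-changing edge in the grid.
    """
    if s0 * s1 > 0:
        return None                      # same sign: surface doesn't cross
    t = s0 / (s0 - s1)                   # fraction along the edge where SDF = 0
    return (1 - t) * np.asarray(p0) + t * np.asarray(p1)

# One edge of a tetrahedron with SDF values of opposite sign at its ends.
print(edge_crossing([0, 0, 0], [1, 0, 0], s0=-0.25, s1=0.75))  # -> [0.25 0. 0.]
```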
X-Avatar can capture the full expressiveness of digital humans for lifelike experiences in telepresence and AR/VR. It uses full 3D scans or RGB-D data and outperforms other methods in animation tasks, supported by a new dataset with 35,500 high-quality frames.
Single Motion Diffusion can generate realistic animations from one input motion sequence. It allows for motion expansion, style transfer, and crowd animation, while using a lightweight design to create diverse motions efficiently.
3D Neural Field Generation using Triplane Diffusion can create high-quality 3D models from 2D images. It uses a diffusion model to turn ShapeNet meshes into continuous occupancy fields, achieving top results in 3D generation for various object types.
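A triplane factorizes a 3D field into three axis-aligned 2D feature maps: a point (x, y, z) is projected onto the XY, XZ, and YZ planes, the three samples are aggregated, and a small decoder maps the result to occupancy. A rough sketch with nearest-neighbor lookup standing in for the usual bilinear interpolation (all shapes illustrative):

```python
import numpy as np

def sample_triplane(p: np.ndarray, planes: np.ndarray, res: int) -> np.ndarray:
    """Aggregate features for a point p in [0, 1]^3 from three 2D planes.

    planes has shape (3, res, res, C): XY, XZ, and YZ feature maps.
    Nearest-neighbor lookup here; real triplanes interpolate bilinearly.
    """
    x, y, z = np.clip((p * res).astype(int), 0, res - 1)
    return planes[0, x, y] + planes[1, x, z] + planes[2, y, z]  # summed features

rng = np.random.default_rng(0)
planes = rng.normal(size=(3, 64, 64, 8)).astype(np.float32)  # 8 channels per plane
feat = sample_triplane(np.array([0.2, 0.5, 0.9]), planes, res=64)
print(feat.shape)  # (8,) -> fed to a small MLP that outputs occupancy
```

Diffusing over these 2D planes instead of a dense 3D voxel grid is what keeps generation tractable at high resolution.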
Latent-NeRF can generate 3D shapes and textures by combining text and shape guidance. It uses latent score distillation to apply this guidance directly on 3D meshes, allowing for high-quality textures on specific geometries.
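The score-distillation idea behind such guidance, in a nutshell: render the scene to a latent image, noise it, ask a frozen text-conditioned diffusion model to predict that noise, and nudge the render so the prediction error shrinks. A minimal sketch with a dummy denoiser standing in for the real model (names and schedule are illustrative):

```python
import torch

def sds_gradient(latent_render, denoiser, t: int, alpha_bar: torch.Tensor):
    """Score Distillation Sampling gradient for one noise level t.

    latent_render: the differentiably rendered latent image, (1, C, H, W).
    denoiser: frozen epsilon-prediction model (stand-in for the real one).
    """
    eps = torch.randn_like(latent_render)
    noisy = alpha_bar[t].sqrt() * latent_render + (1 - alpha_bar[t]).sqrt() * eps
    eps_pred = denoiser(noisy, t)                  # frozen; no grad through it
    w = 1 - alpha_bar[t]                           # a common weighting choice
    return (w * (eps_pred - eps)).detach()         # gradient w.r.t. latent_render

# Toy setup: a "denoiser" that returns zeros, and a linear schedule.
denoiser = lambda x, t: torch.zeros_like(x)
alpha_bar = torch.linspace(0.999, 0.01, 1000)
render = torch.randn(1, 4, 64, 64, requires_grad=True)
grad = sds_gradient(render, denoiser, t=500, alpha_bar=alpha_bar)
render.data -= 0.1 * grad                          # one descent step on the latent
print(grad.shape)
```

In the full pipeline this gradient flows back through the differentiable renderer into the 3D representation itself, rather than being applied to the latent directly as in this toy step.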
Temporal Residual Jacobians can transfer motion from one 3D mesh to another without needing rigging or shape keyframes. It uses two neural networks to predict changes, allowing for realistic motion transfer across different body shapes.