3D Object Generation
Free AI tools for 3D object generation that help you quickly create assets for games, films, and animations.
SIGNeRF is a new approach for fast and controllable NeRF scene editing and scene-integrated object generation. It can generate new objects in an existing NeRF scene, or edit objects already in the scene, in a controllable manner via either proxy object placement or shape selection.
DreamGaussian4D can generate animated 3D meshes from a single image. The method can generate diverse motions for the same static model, and does so in about 4.5 minutes rather than the several hours other methods require.
Paint-it can generate high-fidelity physically-based rendering (PBR) texture maps for 3D meshes from a text description. The method can relight the mesh by swapping the High-Dynamic Range (HDR) environment lighting and lets you control material properties at test time.
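Relighting at test time works because PBR texture maps factor appearance into material properties (albedo, roughness, metallic) that can be re-shaded under any lighting. A toy single-light shading sketch, purely illustrative and not Paint-it's actual shading model:

```python
import numpy as np

def shade(albedo, roughness, metallic, normal, light_dir, light_color):
    # Minimal PBR-flavored shading: Lambertian diffuse + Blinn-Phong-style specular.
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    ndotl = max(np.dot(n, l), 0.0)
    diffuse = albedo * (1.0 - metallic) * ndotl
    shininess = 2.0 / (roughness**2 + 1e-4)                  # rougher surface => broader highlight
    spec_color = albedo * metallic + 0.04 * (1.0 - metallic)  # dielectrics reflect ~4%
    view = np.array([0.0, 0.0, 1.0])                          # camera looking down +z
    h = (l + view) / np.linalg.norm(l + view)                 # half vector
    specular = spec_color * max(np.dot(n, h), 0.0) ** shininess
    return np.clip((diffuse + specular) * light_color, 0.0, 1.0)

# Re-shading the same material under a different light is "relighting":
rgb = shade(albedo=np.array([0.8, 0.2, 0.2]), roughness=0.4, metallic=0.0,
            normal=np.array([0.0, 0.0, 1.0]),
            light_dir=np.array([0.0, 0.0, 1.0]),
            light_color=np.array([1.0, 1.0, 1.0]))
print(rgb)  # [0.84 0.24 0.24]
```

In a full renderer this per-pixel shading is driven by the PBR texture maps (one albedo/roughness/metallic value per texel) and an HDR environment map instead of a single light.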
DreamTalk can generate talking heads conditioned on a given text prompt. The model supports multiple languages and can also manipulate the speaking style of the generated video.
MinD-3D can reconstruct high-quality 3D objects from fMRI brain signals. It uses a three-stage framework to decode 3D visual information, showing strong connections between the brain’s processing and the created objects.
Doodle Your 3D can turn abstract sketches into precise 3D shapes. The method can even edit shapes by simply editing the sketch. Super cool. Sketch-to-3D-print isn’t that far away now.
PhysGaussian is a simulation-rendering pipeline that can simulate the physics of 3D Gaussian Splats while simultaneously rendering photorealistic results. The method supports flexible dynamics and a diverse range of materials, as well as collisions.
DreamCraft3D can create high-quality 3D objects from a single prompt. It uses a 2D reference image to guide the sculpting of the 3D object and then improves texture fidelity by running it through a fine-tuned Dreambooth model.
Progressive3D can generate detailed 3D content from complex prompts by breaking the process into smaller editing steps. It lets users focus on specific areas for editing and improves results by highlighting differences in meaning.
HumanNorm is a novel approach for high-quality and realistic 3D human generation that leverages normal maps, which enhance the 2D perception of 3D geometry. The results are quite impressive and comparable with PS3 games.
DreamGaussian can generate high-quality textured meshes from a single-view image in just 2 minutes. It uses a 3D Gaussian Splatting model for fast mesh extraction and texture refinement.
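Several of the methods above build on 3D Gaussian Splatting, where a scene is just a large collection of anisotropic Gaussians, each with a position, per-axis scale, rotation, opacity, and color. A minimal container sketch (parameter names are illustrative, not DreamGaussian's actual API):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class GaussianCloud:
    # One anisotropic 3D Gaussian per row.
    means: np.ndarray      # (N, 3) centers
    scales: np.ndarray     # (N, 3) per-axis standard deviations
    quats: np.ndarray      # (N, 4) rotations as unit quaternions (w, x, y, z)
    opacities: np.ndarray  # (N,)  in [0, 1]
    colors: np.ndarray     # (N, 3) RGB (full models use spherical harmonics)

    def covariance(self, i):
        # Sigma = R S S^T R^T for Gaussian i, as in the 3DGS formulation.
        w, x, y, z = self.quats[i] / np.linalg.norm(self.quats[i])
        R = np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])
        S = np.diag(self.scales[i])
        return R @ S @ S.T @ R.T

rng = np.random.default_rng(1)
N = 100
cloud = GaussianCloud(
    means=rng.normal(size=(N, 3)),
    scales=np.full((N, 3), 0.05),
    quats=np.tile([1.0, 0.0, 0.0, 0.0], (N, 1)),  # identity rotation
    opacities=np.full(N, 0.8),
    colors=rng.random((N, 3)),
)
print(cloud.covariance(0))  # diagonal, 0.05**2 on the diagonal
```

Rendering projects each Gaussian into screen space and alpha-blends them front to back; mesh extraction (as in DreamGaussian) then pulls a surface out of this density.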
PlankAssembly can turn 2D line drawings from three views into 3D CAD models. It effectively handles noisy or incomplete inputs and improves accuracy using shape programs.
Similar to ControlNet scribble for images, SketchMetaFace brings sketch guidance to the 3D realm and makes it possible to turn a sketch into a 3D face model. Pretty excited about progress like this, as this will bring controllability to 3D generations and make generating 3D content way more accessible.
NIS-SLAM can reconstruct high-fidelity surfaces and geometry from RGB-D frames. It also learns 3D consistent semantic representations during this process.
Neuralangelo can reconstruct detailed 3D surfaces from RGB video captures. It uses multi-resolution 3D hash grids and neural surface rendering, achieving high fidelity without needing extra depth inputs.
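The multi-resolution hash grid idea Neuralangelo builds on (popularized by Instant-NGP) can be sketched in a few lines: each resolution level hashes integer grid coordinates into a small learned feature table, and the per-level features are concatenated. A toy NumPy version with hypothetical table sizes, using nearest-cell lookup instead of the trilinear interpolation real implementations use:

```python
import numpy as np

# Primes from the Instant-NGP spatial hash; products wrap in uint64.
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_coords(coords, table_size):
    # Hash integer 3D grid coordinates into [0, table_size).
    c = coords.astype(np.uint64)
    h = c[..., 0] * PRIMES[0] ^ c[..., 1] * PRIMES[1] ^ c[..., 2] * PRIMES[2]
    return h % np.uint64(table_size)

def encode(xyz, tables, base_res=16, growth=1.5):
    # xyz: (N, 3) points in [0, 1); returns concatenated per-level features.
    feats = []
    for level, table in enumerate(tables):
        res = int(base_res * growth**level)
        grid = np.floor(xyz * res).astype(np.int64)   # which cell each point falls in
        idx = hash_coords(grid, table.shape[0])
        feats.append(table[idx])
    return np.concatenate(feats, axis=-1)

rng = np.random.default_rng(0)
# 4 levels, 2**14-entry tables, 2 features per entry (toy sizes).
tables = [rng.normal(size=(2**14, 2)).astype(np.float32) for _ in range(4)]
pts = rng.random((5, 3))
print(encode(pts, tables).shape)  # (5, 8)
```

In the full method these table entries are trained jointly with a small MLP, and the coarse-to-fine levels are what let Neuralangelo recover fine surface detail.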
Now motion capturing is cool. But what if you want your 3D characters to move in new and unique ways? GenMM is able to generate a variety of movements from just one or a few example sequences. Unlike other methods, it doesn’t need exhaustive training and can create new motions with complex skeletons in fractions of a second. It’s also a whiz at jobs you couldn’t do with motion matching alone, like motion completion, guided generation from keyframes, infinite looping, and motion reassembly.
[Humans in 4D] can track and reconstruct humans in 3D from a single video. It handles unusual poses and poor visibility well, using a transformer-based network called HMR 2.0 to improve action recognition.
Sin3DM can generate high-quality variations of 3D objects from a single textured shape. It uses a diffusion model to learn how parts of the object fit together, enabling retargeting, outpainting, and local editing.
Shap-E can generate complex 3D assets by producing parameters for implicit functions. It creates both textured meshes and neural radiance fields, and it works faster with better quality than the Point-E model.
Patch-based 3D Natural Scene Generation from a Single Example can create high-quality 3D natural scenes from just one image by working at the patch level. It allows users to edit scenes by removing, duplicating, or modifying objects while keeping realistic shapes and appearances.