3D Scene Generation
Free 3D scene generation AI tools for creating immersive environments for games, films, and virtual experiences with ease.
DreamCraft3D can create high-quality 3D objects from a single text prompt. It uses a 2D reference image to guide the sculpting of the 3D geometry, then improves texture fidelity with a fine-tuned DreamBooth model.
3D Gaussian Splatting can render high-quality 3D scenes in real time, at 1080p and over 30 frames per second. It represents the scene as a set of 3D Gaussians paired with a fast rasterization method, achieving competitive training times while maintaining great visual quality.
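To give a feel for the rendering side: at each pixel, the depth-sorted Gaussians that overlap it are alpha-composited front to back. The sketch below is illustrative only (plain NumPy, my own function names), not the paper's CUDA rasterizer:

```python
import numpy as np

def gaussian_alpha_2d(px, mean, cov, opacity):
    """Opacity contribution of one projected 2D Gaussian at pixel px."""
    d = px - mean
    return opacity * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)

def splat_pixel(colors, alphas):
    """Front-to-back alpha compositing of depth-sorted Gaussian
    contributions at one pixel.
    colors: (N, 3) RGB of each Gaussian evaluated at this pixel
    alphas: (N,) opacity of each Gaussian at this pixel
    """
    out = np.zeros(3)
    transmittance = 1.0
    for c, a in zip(colors, alphas):
        out += transmittance * a * c
        transmittance *= 1.0 - a
        if transmittance < 1e-4:  # early exit once the pixel is nearly opaque
            break
    return out
```

The early-exit check is one reason splatting is fast: once accumulated opacity saturates, the remaining Gaussians behind it can be skipped entirely.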
NIS-SLAM can reconstruct high-fidelity surfaces and geometry from RGB-D frames. It also learns 3D consistent semantic representations during this process.
It’s said that our eyes hold the universe. For the method in the paper Seeing the World through Your Eyes, they at least hold a 3D scene: it reconstructs scenes beyond the camera’s line of sight from portrait images containing eye reflections.
Neuralangelo can reconstruct detailed 3D surfaces from RGB video captures. It uses multi-resolution 3D hash grids and neural surface rendering, achieving high fidelity without needing extra depth inputs.
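The multi-resolution hash grid idea can be sketched briefly: a 3D point is quantized at several resolutions, and each cell is hashed into a small trainable feature table. This is a simplified nearest-cell sketch under my own naming (real implementations, following Instant-NGP, trilinearly interpolate the eight corner features):

```python
import numpy as np

PRIMES = (1, 2654435761, 805459861)  # common spatial-hash primes

def hash_coords(ix, iy, iz, table_size):
    """XOR-based spatial hash mapping an integer 3D cell to a table slot."""
    return ((ix * PRIMES[0]) ^ (iy * PRIMES[1]) ^ (iz * PRIMES[2])) % table_size

def encode(point, tables, base_res=16, growth=1.5):
    """Concatenate per-level features for a point in [0, 1)^3.
    `tables` is a list of (table_size, F) float arrays, one per
    resolution level; coarse levels capture layout, fine levels detail.
    """
    feats = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)
        ix, iy, iz = (int(c * res) for c in point)
        slot = hash_coords(ix, iy, iz, len(table))
        feats.append(table[slot])
    return np.concatenate(feats)
```

The concatenated feature vector is then decoded by a small MLP into a surface (SDF) value; because the tables are tiny compared with a dense voxel grid, very fine resolutions stay affordable.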
Humans in 4D can track and reconstruct humans in 3D from a single video. It handles unusual poses and poor visibility well, using a transformer-based network called HMR 2.0 whose improved pose estimates also boost downstream action recognition.
Patch-based 3D Natural Scene Generation from a Single Example can create high-quality 3D natural scenes from just one image by working at the patch level. It allows users to edit scenes by removing, duplicating, or modifying objects while keeping realistic shapes and appearances.
HyperDiffusion can generate high-quality 3D shapes and 4D mesh animations using a unified diffusion model. This method allows for the creation of complex objects and dynamic scenes from a single framework, making it versatile and efficient.
MeshDiffusion can generate realistic 3D meshes using a score-based diffusion model with deformable tetrahedral grids. It is great for creating detailed 3D shapes from single images and can also add textures, making it useful for various applications.
Robust Dynamic Radiance Fields can estimate both static and dynamic radiance fields along with the camera parameters. It improves view synthesis from challenging videos, achieving better quality and accuracy than state-of-the-art methods.
3D Neural Field Generation using Triplane Diffusion can create high-quality 3D models from 2D diffusion backbones. It converts ShapeNet meshes into continuous occupancy fields factored into three axis-aligned feature planes, then trains a diffusion model over these triplanes, achieving top results in 3D generation for various object types.
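The triplane decoding step is simple to illustrate: a 3D point is projected onto the XY, XZ, and YZ feature planes, the three sampled features are summed, and an MLP maps the sum to occupancy. A minimal NumPy sketch of the lookup (function names are my own; the MLP decoder is omitted):

```python
import numpy as np

def bilinear(plane, u, v):
    """Bilinearly sample a (H, W, F) feature plane at (u, v) in [0, 1)."""
    H, W, _ = plane.shape
    x, y = u * (W - 1), v * (H - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * plane[y0, x0] + fx * plane[y0, x1]
    bot = (1 - fx) * plane[y1, x0] + fx * plane[y1, x1]
    return (1 - fy) * top + fy * bot

def triplane_features(point, plane_xy, plane_xz, plane_yz):
    """Project a 3D point onto three axis-aligned planes and sum the
    sampled features; an MLP would decode this sum into occupancy."""
    x, y, z = point
    return (bilinear(plane_xy, x, y)
            + bilinear(plane_xz, x, z)
            + bilinear(plane_yz, y, z))
```

Factoring the 3D field into three 2D planes is what makes diffusion tractable here: the generative model only ever sees image-like 2D tensors rather than a dense 3D volume.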