3D AI Tools
Free 3D AI tools for creating, optimizing, and manipulating 3D assets for games, films, and design projects, boosting your creative process.
DiffPortrait360 can create high-quality 360-degree views of human heads from single images.
MVGenMaster can generate up to 100 new views from a single image using a multi-view diffusion model.
TexGaussian can generate high-quality PBR materials for 3D meshes in one step. It produces albedo, roughness, and metallic maps quickly and with great visual quality, ensuring better consistency with the input geometry.
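The tool's own pipeline isn't shown here, but once you have albedo, roughness, and metallic maps, wiring them into Blender's Principled BSDF is a one-off script. A minimal bpy sketch, with placeholder file names (not TexGaussian's actual output paths):

```python
# Minimal Blender (bpy) sketch: hook generated PBR maps into a Principled BSDF.
# The map file names are placeholders, not TexGaussian's actual outputs.
import bpy

mat = bpy.data.materials.new("GeneratedPBR")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
bsdf = nodes["Principled BSDF"]

def add_tex(path, non_color=False):
    tex = nodes.new("ShaderNodeTexImage")
    tex.image = bpy.data.images.load(path)
    if non_color:
        tex.image.colorspace_settings.name = "Non-Color"
    return tex

albedo = add_tex("//albedo.png")
rough = add_tex("//roughness.png", non_color=True)
metal = add_tex("//metallic.png", non_color=True)

links.new(albedo.outputs["Color"], bsdf.inputs["Base Color"])
links.new(rough.outputs["Color"], bsdf.inputs["Roughness"])
links.new(metal.outputs["Color"], bsdf.inputs["Metallic"])

# Assign to the active object, assuming it's the mesh the maps were generated for.
bpy.context.object.data.materials.append(mat)
```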
InterMimic can learn complex human-object interactions from imperfect motion capture data. It enables realistic simulations of full-body interactions with dynamic objects and works well with kinematic generators for better modeling.
DIDiffGes can generate high-quality gestures from speech in just 10 sampling steps.
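For context, "10 sampling steps" refers to how many denoising iterations the diffusion model runs at inference time. This is not DIDiffGes's code, just a generic deterministic DDIM-style loop (the model and noise schedule are hypothetical) to show what a few-step sampler looks like:

```python
# Generic few-step DDIM-style sampling loop (illustrative only, not DIDiffGes's code).
# `model(x, t, audio)` is a hypothetical noise-prediction network conditioned on speech.
import torch

def sample(model, audio, shape, alpha_bar, num_steps=10):
    timesteps = torch.linspace(len(alpha_bar) - 1, 0, num_steps).long()
    x = torch.randn(shape)                      # start from pure noise
    for i, t in enumerate(timesteps):
        eps = model(x, t, audio)                # predict the noise at step t
        a_t = alpha_bar[t]
        x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()        # estimate the clean gesture
        a_prev = alpha_bar[timesteps[i + 1]] if i + 1 < num_steps else torch.tensor(1.0)
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps    # deterministic DDIM update
    return x                                    # denoised gesture sequence
```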
DeepMesh can generate high-quality 3D meshes from point clouds and images.
ARTalk can generate realistic 3D head motions, including lip synchronization, blinking, and facial expressions, from audio in real-time.
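To make "lip synchronization, blinking, and facial expressions" concrete: audio-driven head models typically output per-frame coefficients that weight a set of blendshapes on top of a neutral face. A minimal NumPy sketch of that idea, not ARTalk's actual code:

```python
# Minimal blendshape sketch (NumPy): per-frame coefficients predicted from audio weight
# a set of delta shapes on top of a neutral face. Illustrative only, not ARTalk's code.
import numpy as np

def animate(neutral, deltas, coeffs):
    """neutral: (V, 3), deltas: (B, V, 3), coeffs: (T, B) -> (T, V, 3) animated frames."""
    return neutral[None] + np.einsum('tb,bvi->tvi', coeffs, deltas)

# Toy example: 4 vertices, 2 blendshapes (e.g. jaw-open, blink), 3 frames.
neutral = np.zeros((4, 3))
deltas = np.random.randn(2, 4, 3) * 0.01
coeffs = np.array([[0.0, 0.0], [0.8, 0.1], [0.2, 1.0]])    # per-frame activations
print(animate(neutral, deltas, coeffs).shape)               # (3, 4, 3)
```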
MotionStreamer can generate human motions from text prompts and supports motion composition and longer motion generation. It also has a Blender plugin.
InterMask can generate high-quality 3D human interactions from text descriptions. It captures complex movements between two people while also allowing for reaction generation without changing the model.
TreeMeshGPT can generate detailed 3D meshes from point clouds using Autoregressive Tree Sequencing. This technique allows for better mesh detail and achieves a 22% reduction in data size during processing.
DART can generate high-quality human motions in real-time, achieving over 300 frames per second on a single RTX 4090 GPU. It combines text inputs with spatial constraints, allowing for tasks like reaching waypoints and interacting with scenes.
Make-It-Animatable can auto-rig any 3D humanoid model for animation in under one second. It generates high-quality blend weights and bones, and works with various 3D formats, ensuring accuracy even for non-standard skeletons.
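"Blend weights and bones" means the rigged output can be deformed with standard linear blend skinning. A tiny NumPy sketch of how those two ingredients move vertices, purely illustrative and not the tool's code:

```python
# Minimal linear-blend-skinning sketch (NumPy): how per-vertex blend weights and
# per-bone transforms deform a mesh. Illustrative only, not Make-It-Animatable's code.
import numpy as np

def skin(vertices, weights, bone_transforms):
    """vertices: (V, 3), weights: (V, B), bone_transforms: (B, 4, 4) in rest-pose space."""
    v_h = np.concatenate([vertices, np.ones((len(vertices), 1))], axis=1)   # (V, 4)
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, v_h)               # (V, B, 4)
    deformed = np.einsum('vb,vbi->vi', weights, per_bone)                   # (V, 4)
    return deformed[:, :3]

# Toy example: two vertices, two bones, second bone translated 0.5 along +Z.
verts = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
w = np.array([[1.0, 0.0], [0.0, 1.0]])            # each vertex fully bound to one bone
T = np.stack([np.eye(4), np.eye(4)])
T[1, 2, 3] = 0.5                                  # move bone 1 up
print(skin(verts, w, T))                          # second vertex follows bone 1
```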
StdGEN can generate high-quality 3D characters from a single image in just three minutes. It breaks down characters into parts like body, clothes, and hair, using a transformer-based model for great results in 3D anime character generation.
So far it has been tough to imagine the benefits of AI agents. Most of what we’ve seen from that domain has focused on NPC simulations or solving text-based goals. 3D-GPT is a new framework that uses LLMs for instruction-driven 3D modeling, breaking 3D modeling tasks into manageable segments to procedurally generate 3D scenes. I recently started digging into Blender, and I pray this gets open-sourced at some point. A toy sketch of the idea follows below.
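To make the "instruction → plan → procedural scene" loop concrete, here's a toy sketch in Blender's Python API. The plan is hardcoded to stand in for hypothetical LLM output, e.g. for "a small park with three trees and a bench"; it is not 3D-GPT's code:

```python
# Illustrative only: the rough idea behind instruction-driven procedural scene building.
# A hypothetical LLM has already turned the instruction into a list of primitive
# placements; here that plan is simply hardcoded.
import bpy

plan = [
    {"prim": "plane", "location": (0, 0, 0),     "scale": (6, 6, 1)},      # ground
    {"prim": "cone",  "location": (-2, 1, 1),    "scale": (1, 1, 2)},      # stylized tree
    {"prim": "cone",  "location": (0, 2, 1),     "scale": (1, 1, 2)},
    {"prim": "cone",  "location": (2, 1, 1),     "scale": (1, 1, 2)},
    {"prim": "cube",  "location": (0, -2, 0.3),  "scale": (1.5, 0.4, 0.3)},  # bench
]

ops = {
    "plane": bpy.ops.mesh.primitive_plane_add,
    "cone":  bpy.ops.mesh.primitive_cone_add,
    "cube":  bpy.ops.mesh.primitive_cube_add,
}

# Execute the plan step by step, scaling each newly added object.
for step in plan:
    ops[step["prim"]](location=step["location"])
    bpy.context.object.scale = step["scale"]
```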
Phidias can generate high-quality 3D assets from text, images, and 3D references. It uses a method called reference-augmented diffusion to improve quality and speed, achieving results in just a few seconds.
EventEgo3D++ can capture 3D human motion using a monocular event camera with a fisheye lens. It works well in low-light and high-speed conditions, providing real-time 3D pose updates at 140Hz with high accuracy compared to RGB-based methods.
Cyberpunk braindances are becoming a thing! D-NPC can turn videos into dynamic neural point clouds, aka 4D scenes, making it possible to watch a scene from another perspective.
MagicArticulate can rig static 3D models and make them ready for animation. Works on both humanoid and non-humanoid objects.
MEGASAM can estimate camera parameters and depth maps from casual monocular videos.
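A typical way to use such outputs is to back-project the depth map through the estimated intrinsics into a 3D point cloud. A generic pinhole-camera sketch, not MEGASAM's code; shapes and values are assumptions:

```python
# Generic pinhole back-projection: turn a depth map plus camera intrinsics into a
# 3D point cloud. Illustrative only, not MEGASAM's code.
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """depth: (H, W) metric depth map; returns (H*W, 3) points in camera space."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Toy example with a flat 2 m depth map and made-up intrinsics.
pts = backproject(np.full((480, 640), 2.0), fx=500, fy=500, cx=320, cy=240)
print(pts.shape)   # (307200, 3)
```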
Cycle3D can generate high-quality and consistent 3D content from a single unposed image. This approach enhances texture consistency and multi-view coherence, significantly improving the quality of the final 3D reconstruction.