3D Mesh Generation
Free AI tools for 3D mesh generation: quickly create 3D assets for games, films, and virtual environments to boost your creative projects.
PRM can create high-quality 3D meshes from a single image using photometric stereo techniques. It captures fine detail and is robust to varying lighting and materials, enabling features like relighting and material editing.
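For context, PRM's single-image setting builds on classic multi-light photometric stereo. Below is a minimal sketch of the Lambertian baseline, assuming known light directions and grayscale inputs; it is illustrative only, not PRM's actual pipeline.

```python
import numpy as np

def photometric_stereo(images: np.ndarray, lights: np.ndarray):
    """Recover per-pixel normals and albedo from images lit from known directions.

    images: (n_lights, H, W) grayscale intensities
    lights: (n_lights, 3) unit light directions
    """
    n, h, w = images.shape
    I = images.reshape(n, -1)                        # (n_lights, H*W)
    # Lambertian model: I = lights @ G, where G = albedo * normal per pixel.
    # Solve all pixels at once with batched least squares.
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)   # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)               # (H*W,)
    normals = G / np.maximum(albedo, 1e-8)           # unit surface normals
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```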
GarVerseLOD can generate high-quality 3D garment meshes from a single image. It handles complex cloth movements and poses well, using a large dataset of 6,000 garment models to improve accuracy.
SPARK can create high-quality 3D face avatars from regular videos and track expressions and poses in real time. It improves the accuracy of 3D face reconstructions for tasks like aging, face swapping, and digital makeup.
Scaling Mesh Generation via Compressive Tokenization can generate high-quality meshes with over 8,000 faces.
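To see why compressive tokenization matters, here is a hedged sketch of the naive tokenization such schemes improve on: quantizing each triangle's vertex coordinates to a grid yields nine tokens per face, a sequence that grows unmanageably long at 8,000+ faces. The function below is illustrative, not the paper's actual scheme.

```python
import numpy as np

def tokenize_mesh(vertices: np.ndarray, faces: np.ndarray, n_bins: int = 128):
    """vertices: (V, 3) floats; faces: (F, 3) vertex indices -> (F*9,) int tokens."""
    # Normalize coordinates into [0, 1] and snap them to a discrete grid.
    lo, hi = vertices.min(0), vertices.max(0)
    quantized = np.round(
        (vertices - lo) / (hi - lo + 1e-8) * (n_bins - 1)
    ).astype(np.int64)
    # 9 tokens per triangle (3 vertices x 3 coordinates). Compressive
    # tokenization exists precisely to shorten this sequence so an
    # autoregressive model can scale past a few thousand faces.
    return quantized[faces].reshape(-1)
```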
MeshAnything V2 can generate 3D meshes from point clouds, meshes, images, text, and more.
XHand can generate high-fidelity hand shapes and textures in real-time, enabling expressive hand avatars for virtual environments.
MeshAnything can convert 3D assets in any 3D representation into meshes. This can be used to enhance various 3D asset production methods and significantly improve storage, rendering, and simulation efficiency.
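For comparison, the conventional way to turn a volumetric or implicit representation into a mesh is marching cubes; the sketch below meshes a sphere's signed distance field with scikit-image. MeshAnything's learned approach instead targets artist-style topology with far fewer faces, but this baseline shows the kind of conversion being replaced.

```python
import numpy as np
from skimage import measure

# Signed distance field of a unit sphere, sampled on a 64^3 grid.
axis = np.linspace(-1.5, 1.5, 64)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 1.0

# Extract the zero level set as a triangle mesh.
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
print(f"{len(verts)} vertices, {len(faces)} faces")
```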
DG-Mesh can reconstruct high-quality, time-consistent 3D meshes from a single video. It also tracks mesh vertices over time, which enables texture editing on dynamic objects.
MonoHair can create high-quality 3D hair from a single video. It uses a two-stage pipeline for detailed hair reconstruction and achieves strong results across diverse hairstyles.
AiOS can estimate human poses and shapes in one step, combining body, hand, and facial expression recovery.
DreamGaussian can generate high-quality textured meshes from a single-view image in just 2 minutes. It uses a 3D Gaussian Splatting model for fast mesh extraction and texture refinement.
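Below is a hedged sketch of the density query that mesh extraction from 3D Gaussian Splatting relies on, simplified to isotropic Gaussians; DreamGaussian's actual extraction adds local culling and UV-space texture refinement. The resulting density volume can then be meshed with marching cubes as in the MeshAnything sketch above.

```python
import numpy as np

def gaussian_density(points, centers, opacities, scales):
    """Sum Gaussian contributions at query points to build a density field.

    points: (P, 3) query positions; centers: (G, 3) Gaussian means;
    opacities, scales: (G,) per-Gaussian opacity and isotropic std-dev.
    Returns (P,) densities.
    """
    # Squared distance from every query point to every Gaussian center.
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (P, G)
    return (opacities[None, :] * np.exp(-0.5 * d2 / scales[None, :] ** 2)).sum(-1)
```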
Shap-E can generate complex 3D assets by producing parameters for implicit functions. It creates both textured meshes and neural radiance fields, and it is faster and produces higher-quality results than the earlier Point-E model.
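To illustrate what "producing parameters for implicit functions" means, the sketch below carves a flat latent vector into the weights of a tiny MLP that maps a 3D point to density and color. All names and sizes here are assumptions for illustration; Shap-E's real decoder is far larger and is produced by a trained diffusion model.

```python
import numpy as np

HIDDEN = 32  # hidden width of the toy implicit network (assumed)

def implicit_fn(points: np.ndarray, params: np.ndarray):
    """points: (P, 3); params: flat vector carved into MLP weights -> (P, 4)."""
    # Slice the flat parameter vector into layer weights and biases.
    w1 = params[: 3 * HIDDEN].reshape(3, HIDDEN)
    b1 = params[3 * HIDDEN : 4 * HIDDEN]
    w2 = params[4 * HIDDEN : 8 * HIDDEN].reshape(HIDDEN, 4)
    h = np.tanh(points @ w1 + b1)
    return h @ w2  # per-point (density, r, g, b)

# A generative model like Shap-E would emit `params`; here it is random.
params = np.random.default_rng(0).normal(size=8 * HIDDEN)
queries = np.random.default_rng(1).uniform(-1, 1, size=(5, 3))
print(implicit_fn(queries, params).shape)  # (5, 4)
```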