Motion Generation
Free AI tools for 3D motion generation: animate characters and objects for games, films, and virtual reality projects.
MaskedMimic can generate diverse motions for interactive characters using a physics-based controller. It supports various inputs like keyframes and text, allowing for smooth transitions and adaptation to complex environments.
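The core idea is that any subset of constraints can be supplied and the rest masked out, so one controller handles them all. Below is a minimal PyTorch sketch of that style of masked conditioning; the class, dimensions, and learned "null" tokens are illustrative assumptions, not MaskedMimic's actual interface.

```python
import torch
import torch.nn as nn

# Sketch of masked constraint conditioning (hypothetical, not the official
# MaskedMimic API): each constraint type (target joints, text embedding) can
# be masked out, so a single policy learns to handle any subset of inputs.
class MaskedConditioner(nn.Module):
    def __init__(self, joint_dim=24 * 3, text_dim=512, hidden=256):
        super().__init__()
        self.joint_proj = nn.Linear(joint_dim, hidden)
        self.text_proj = nn.Linear(text_dim, hidden)
        # Learned "no constraint provided" embeddings (an assumption here).
        self.null_joint = nn.Parameter(torch.zeros(hidden))
        self.null_text = nn.Parameter(torch.zeros(hidden))

    def forward(self, joints, text, joint_mask, text_mask):
        # joint_mask / text_mask: (batch,) booleans, True = constraint given.
        j = torch.where(joint_mask[:, None], self.joint_proj(joints), self.null_joint)
        t = torch.where(text_mask[:, None], self.text_proj(text), self.null_text)
        return j + t  # conditioning vector fed to the physics controller


cond = MaskedConditioner()
joints, text = torch.randn(4, 72), torch.randn(4, 512)
# During training the masks are randomized, so at inference the controller
# copes with whichever constraints (keyframes, text, ...) the user supplies.
z = cond(joints, text,
         torch.tensor([1, 0, 1, 0]).bool(),
         torch.tensor([1, 1, 0, 0]).bool())
print(z.shape)  # torch.Size([4, 256])
```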
CondMDI can generate precise and diverse motions that conform to flexible user-specified spatial constraints and text descriptions. This enables high-quality animation from text prompts alone, as well as inpainting between keyframes.
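A common way to honor keyframes with a diffusion model is imputation: at every denoising step, the constrained frames are overwritten with appropriately noised copies of the keyframes, so the model only fills in the gaps. The sketch below illustrates that loop; the function names and the placeholder lambdas are stand-ins, not CondMDI's actual sampler.

```python
import torch

# Illustrative keyframe-inpainting loop in the diffusion-imputation style
# (a simplified sketch, not CondMDI's exact method).
def inpaint_motion(denoise_step, q_sample, keyframes, mask, T=1000,
                   shape=(1, 120, 72)):
    # mask: (frames,) bool, True where a keyframe constrains the motion.
    x = torch.randn(shape)
    for t in reversed(range(T)):
        x = denoise_step(x, t)            # one reverse-diffusion step
        x_known = q_sample(keyframes, t)  # keyframes noised to level t
        x = torch.where(mask[None, :, None], x_known, x)
    return x

# Dummy stand-ins so the sketch runs; a trained model would replace these.
denoise_step = lambda x, t: x - 0.001 * x
q_sample = lambda x0, t: x0 + 0.01 * t * torch.randn_like(x0)

keyframes = torch.zeros(1, 120, 72)
mask = torch.zeros(120, dtype=torch.bool)
mask[[0, 59, 119]] = True  # constrain first, middle, and last frame
print(inpaint_motion(denoise_step, q_sample, keyframes, mask).shape)
```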
StableMoFusion is a diffusion-based method for human motion generation that eliminates foot-skating and produces stable, efficient animations. It is suited to real-time scenarios such as virtual characters and humanoid robots.
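For intuition, here is a common foot-skate cleanup heuristic: detect frames where a foot is near the ground and nearly static, then pin it in place. This is an illustrative sketch only; StableMoFusion tackles foot-skating inside its diffusion pipeline, and the thresholds and coordinate convention here are assumptions.

```python
import torch

# Toy foot-skate removal: lock a foot's position during detected contact.
def remove_foot_skate(foot_pos, height_thresh=0.05, vel_thresh=0.01):
    # foot_pos: (frames, 3) world-space positions of one foot joint,
    # with y assumed to be the up axis.
    vel = foot_pos[1:] - foot_pos[:-1]
    contact = (foot_pos[1:, 1] < height_thresh) & (vel.norm(dim=-1) < vel_thresh)
    fixed = foot_pos.clone()
    for f in range(1, len(foot_pos)):
        if contact[f - 1]:
            fixed[f] = fixed[f - 1]  # keep the foot where it touched down
    return fixed

foot = torch.cumsum(0.005 * torch.randn(60, 3), dim=0)  # synthetic trajectory
print(remove_foot_skate(foot).shape)  # torch.Size([60, 3])
```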
PhysDreamer is a physics-based approach that lets you poke, push, pull, and throw objects in a virtual 3D environment and have them react in a physically plausible manner.
ProbTalk is a method for generating lifelike, holistic co-speech motion for 3D avatars. It produces a wide range of motions while keeping facial expressions, hand gestures, and body poses harmoniously aligned.
RoHM can reconstruct complete, plausible 3D human motion from monocular videos, even recovering occluded joints! So, basically motion tracking on steroids, but without the need for an expensive capture setup.
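A toy version of occlusion-robust reconstruction replaces unseen joints with a learned mask token and lets an attention model infer them from the visible joints and temporal context. This sketch is in that spirit only; RoHM's actual approach is diffusion-based, and the architecture and sizes below are assumptions.

```python
import torch
import torch.nn as nn

# Toy occluded-joint inpainting with a transformer (not RoHM's architecture).
class JointInpainter(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Linear(3, dim)
        self.mask_token = nn.Parameter(torch.zeros(dim))  # "occluded" marker
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(dim, 3)

    def forward(self, joints, visible):
        # joints: (batch, frames*joints, 3); visible: (batch, frames*joints) bool.
        x = self.embed(joints)
        x = torch.where(visible[..., None], x, self.mask_token)
        return self.out(self.encoder(x))

model = JointInpainter()
joints = torch.randn(2, 22 * 8, 3)     # 8 frames of 22 joints, flattened
visible = torch.rand(2, 22 * 8) > 0.3  # roughly 30% of joints "occluded"
print(model(joints, visible).shape)    # torch.Size([2, 176, 3])
```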
MotionGPT can generate, caption, and predict human motion by treating motion like a language. It achieves state-of-the-art performance across these tasks, making it useful for a wide range of motion-related applications.
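The motion-as-language idea rests on tokenization: continuous poses are snapped to discrete codebook entries, yielding "motion words" a language model can generate, caption, or continue like text. Below is a simplified sketch; MotionGPT's real tokenizer is a trained VQ-VAE, so the plain nearest-neighbor lookup and the dimensions here are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sketch of a motion tokenizer: map each pose to its nearest codebook entry.
class MotionTokenizer(nn.Module):
    def __init__(self, pose_dim=72, codebook_size=512, dim=64):
        super().__init__()
        self.enc = nn.Linear(pose_dim, dim)
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, poses):
        z = self.enc(poses)                           # (frames, dim)
        dists = torch.cdist(z, self.codebook.weight)  # (frames, codebook)
        return dists.argmin(dim=-1)                   # discrete token ids

tok = MotionTokenizer()
tokens = tok(torch.randn(120, 72))  # a 120-frame clip becomes 120 token ids
print(tokens[:10])                  # these ids can be mixed with text tokens
```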
PriorMDM can generate long human motion sequences of up to 10 minutes using a pre-trained diffusion model. It allows for controlled transitions between prompted intervals and can create two-person motions with just 14 training examples, using techniques like DiffusionBlending for better control.
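Long sequences are typically assembled from overlapping windows whose seams must be smoothed. The sketch below shows a plain linear crossfade over the overlaps; it is a simplified stand-in for, not an implementation of, PriorMDM's DiffusionBlending.

```python
import torch

# Stitch overlapping motion windows with a linear crossfade (illustrative).
def blend_windows(windows, overlap):
    # windows: list of (frames, dim) tensors, each overlapping the next.
    out = windows[0]
    ramp = torch.linspace(0, 1, overlap)[:, None]  # 0 -> 1 blend weights
    for w in windows[1:]:
        blended = (1 - ramp) * out[-overlap:] + ramp * w[:overlap]
        out = torch.cat([out[:-overlap], blended, w[overlap:]], dim=0)
    return out

wins = [torch.randn(60, 72) for _ in range(4)]
long_motion = blend_windows(wins, overlap=10)
print(long_motion.shape)  # torch.Size([210, 72]) = 4*60 - 3*10 frames
```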
Temporal Residual Jacobians can transfer motion from one 3D mesh to another without needing rigging or shape keyframes. It uses two coupled neural networks to predict local spatial and temporal changes, enabling realistic motion transfer across different body shapes.
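As a rough illustration of residual-driven motion transfer, the sketch below rolls a mesh forward by accumulating network-predicted per-vertex residuals. The real method predicts per-face Jacobians and recovers vertex positions from them; the single network and direct displacement parameterization here are simplifying assumptions.

```python
import torch
import torch.nn as nn

# Highly simplified residual deformer: predict a per-vertex displacement
# conditioned on time, and accumulate it frame by frame (illustrative only).
class ResidualDeformer(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 1, dim), nn.ReLU(), nn.Linear(dim, 3))

    def forward(self, verts, t):
        # verts: (n_verts, 3) current mesh; t: scalar time conditioning.
        t_col = torch.full((verts.shape[0], 1), float(t))
        return self.net(torch.cat([verts, t_col], dim=-1))

deformer = ResidualDeformer()
verts = torch.randn(1000, 3)  # rest-pose vertices of the target mesh
for t in range(24):           # advance the motion one frame at a time
    verts = verts + deformer(verts, t)
print(verts.shape)            # torch.Size([1000, 3])
```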