Controllable Video Generation
Free AI tools for controllable video generation, used to create personalized animations and dynamic scenes for films, games, and marketing projects.
Go-with-the-Flow can control motion patterns in video diffusion models by warping noise in real time along optical flow fields. It lets users manipulate object movements and camera motions while preserving image quality and requiring no changes to existing model architectures.
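To illustrate the core idea of flow-warped noise, here is a minimal NumPy sketch (not Go-with-the-Flow's actual implementation): a Gaussian noise field is backward-warped by an optical-flow field so the noise "follows" the motion between frames. The nearest-neighbor sampling and the synthetic uniform flow are simplifications for illustration.

```python
import numpy as np

def warp_noise(noise, flow):
    """Backward-warp an (H, W) noise field by an (H, W, 2) flow of (dy, dx)."""
    h, w = noise.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Each output pixel samples the noise from where the flow says it came from.
    src_y = np.clip(np.round(ys - flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - flow[..., 1]).astype(int), 0, w - 1)
    return noise[src_y, src_x]

rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))
flow = np.zeros((64, 64, 2))
flow[..., 1] = 3.0  # uniform 3-pixel rightward motion

warped = warp_noise(noise, flow)
# The warped field is the original shifted right by 3 pixels (edges clamped),
# so noise patterns track the motion instead of flickering frame to frame.
assert np.allclose(warped[:, 3:], noise[:, :-3])
```

In a real pipeline the flow would come from an optical-flow estimator, and the warped noise would seed the diffusion sampler for the next frame so generated textures move coherently with the scene.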
ObjCtrl-2.5D enables object control in image-to-video generation using 3D trajectories from 2D inputs with depth information.
3DTrajMaster can control the 3D motions of multiple objects in videos using user-defined 6DoF pose sequences.
SG-I2V can control object and camera motion in image-to-video generation using bounding boxes and trajectories.
HumanVid can generate videos from a character photo while allowing users to control both human and camera motions. It introduces a large-scale dataset that combines high-quality real-world and synthetic data, achieving state-of-the-art performance in camera-controllable human image animation.
Motion Prompting can control video generation using motion paths. It allows for camera control, motion transfer, and drag-based image editing, producing realistic movements and physics.
DragAnything can control the motion of any object in videos by letting users draw trajectory lines. It allows for separate motion control of multiple objects, including backgrounds.
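As a hedged sketch of how drawn trajectories are commonly fed to video models (this is illustrative, not DragAnything's actual code), a user's 2D path can be rasterized into per-frame Gaussian heatmaps that serve as a spatial conditioning signal:

```python
import numpy as np

def trajectory_heatmaps(points, h, w, sigma=4.0):
    """points: list of (y, x) per frame -> (T, H, W) stack of Gaussian heatmaps."""
    ys, xs = np.mgrid[0:h, 0:w]
    maps = []
    for py, px in points:
        d2 = (ys - py) ** 2 + (xs - px) ** 2
        maps.append(np.exp(-d2 / (2 * sigma ** 2)))
    return np.stack(maps)

# A straight left-to-right drag across 8 frames of a 32x32 canvas.
traj = [(16, x) for x in np.linspace(4, 28, 8)]
maps = trajectory_heatmaps(traj, 32, 32)
# Each frame's heatmap peaks at that frame's trajectory point.
assert maps.shape == (8, 32, 32)
```

Stacking one heatmap per object gives the model separate control channels, which is one way multi-object and background trajectories can be kept independent.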
Magic-Me can generate identity-specific videos from a few reference images while preserving the person's identity.
Direct-a-Video can individually or jointly control camera movement and object motion in text-to-video generation. This means you can generate a video and tell the model to move the camera from left to right, zoom in or out, and move objects around in the scene.
MagicDriveDiT can generate high-resolution street scene videos for self-driving cars.