Video AI Tools
Free video AI tools for editing, generating animations, and analyzing footage, perfect for filmmakers and content creators seeking efficiency.
VACE adds ControlNet-style support to video models like Wan and LTX. It handles a range of video tasks, including generating videos from references, video inpainting, pose control, sketch-to-video, and more.
Perception-as-Control can achieve fine-grained motion control for image animation by creating a 3D motion representation from a reference image.
AccVideo can speed up video diffusion models by reducing the number of denoising steps needed for video creation. It achieves 8.5x faster generation than HunyuanVideo, producing high-quality videos at 720x1280 resolution and 24fps, making text-to-video generation significantly more efficient.
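To make the step-reduction idea concrete, here is a minimal toy sketch of a reverse-diffusion loop whose runtime scales with the number of denoising steps. The `denoise` function, the simplified update rule, and the tensor shapes are illustrative assumptions, not AccVideo's actual distillation code.

```python
import torch

def denoise(model, latents, num_steps):
    # Each step costs one forward pass through the (large) video model,
    # so cutting the step count directly cuts generation time.
    for t in torch.linspace(1.0, 1.0 / num_steps, num_steps):
        noise_pred = model(latents, t)               # network evaluation
        latents = latents - noise_pred / num_steps   # toy Euler-style update
    return latents

# Toy stand-in for a video diffusion model: (batch, channels, frames, h, w).
toy_model = lambda x, t: 0.1 * x
latents = torch.randn(1, 4, 16, 90, 160)

slow = denoise(toy_model, latents, num_steps=50)  # baseline-style schedule
fast = denoise(toy_model, latents, num_steps=5)   # few-step run, 10x fewer passes
```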
FloVD can generate camera-controllable videos using optical flow maps to show motion.
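For readers unfamiliar with optical flow maps, the snippet below computes a dense flow field between two frames with OpenCV's Farneback method. This is a generic illustration of the representation, not FloVD's own pipeline, and the random frames are placeholders for real video frames.

```python
import cv2
import numpy as np

# Placeholder frames; in practice, read consecutive grayscale frames
# from a video with cv2.VideoCapture.
prev_frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
next_frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)

# Dense optical flow: an (H, W, 2) field of per-pixel (dx, dy) motion vectors,
# the kind of map a camera-controllable generator can condition on.
flow = cv2.calcOpticalFlowFarneback(
    prev_frame, next_frame, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)
print(flow.shape)  # (240, 320, 2)
```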
MotionMatcher can customize text-to-video diffusion models using a reference video to transfer motion and camera framing to different scenes.
LayerAnimate can animate single anime frames from text prompts or interpolate between two frames with or without sketch guidance. It allows users to adjust foreground and background elements separately.
StyleMaster can stylize videos by transferring artistic styles from images while keeping the original content clear.
PP-VCtrl can turn text-to-video models into customizable video generators. It uses control signals like Canny edges and segmentation masks to improve video quality and control without retraining the models, making it great for character animation and video editing.
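As an example of extracting one such control signal, the sketch below builds a per-frame Canny edge map with OpenCV. The thresholds and the random placeholder frame are assumptions, and feeding the map into a video generator is left to whichever control module you use.

```python
import cv2
import numpy as np

# Placeholder frame; in practice, a frame from the video or animation
# you want the generator to follow.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, threshold1=100, threshold2=200)  # binary edge map

# A per-frame edge map (or a segmentation mask) is the control signal a
# ControlNet-style module conditions the video model on.
print(edges.shape, edges.dtype)  # (480, 640) uint8
```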
MagicMotion can animate objects in videos by controlling their paths with masks, bounding boxes, and sparse boxes.
KDTalker can generate high-quality talking portraits from a single image and audio input. It captures fine facial details and achieves excellent lip synchronization using a 3D keypoint-based approach and a spatiotemporal diffusion model.
Mobius can generate seamlessly looping videos from text descriptions.
MovieAgent can generate long-form videos with multiple scenes and shots from a script and character bank. It ensures character consistency and synchronized subtitles while reducing the need for human input in movie production.
Chrono can track points in videos with built-in temporal awareness.
VIRES can repaint, replace, generate, and remove objects in videos using sketches and text.
Diffusion VAS can generate masks for hidden parts of objects in videos.
VideoMaker can generate personalized videos from a single subject reference image.
InsTaG can generate realistic 3D talking heads from just a few seconds of video.
MatAnyone can generate stable and high-quality human video matting masks.
MEGASAM can estimate camera parameters and depth maps from casual monocular videos.
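As a rough illustration of per-frame depth estimation from a monocular video, the sketch below runs MiDaS via torch.hub on a single frame. This is a generic depth example standing in for the idea, not MegaSaM itself, and "frame.png" is a placeholder path.

```python
import cv2
import torch

# Generic monocular depth model loaded from PyTorch Hub (not MegaSaM).
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

frame = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)  # placeholder frame
with torch.no_grad():
    prediction = midas(transform(frame))           # (1, H', W') relative inverse depth
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=frame.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze()
print(depth.shape)  # depth map at the original frame resolution
```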
Step-Video-T2V can generate high-quality videos up to 204 frames long using a 30B parameter text-to-video model.