Video AI Tools
Free video AI tools for editing, generating animations, and analyzing footage, perfect for filmmakers and content creators seeking efficiency.
Follow-Your-Click can animate specific regions of an image with a simple user click and a short motion prompt, and lets users control the speed of the animation.
Animate-X++ can animate characters from a single image and a pose sequence while creating dynamic backgrounds.
HuMo can generate high-quality human-centric videos from text, images, and audio. It preserves subject identity and keeps the audio synchronized with the visuals, using tailored training strategies for finer control.
Diffuman4D can generate high-quality, 4D-consistent videos of human performances from just a few input videos. It uses a spatio-temporal diffusion model to improve the quality of the videos, making them more realistic and consistent than other methods.
3DV-TON can generate high-quality videos for trying on clothes using 3D models. It handles complex clothing patterns and different body poses well, and uses a robust masking method to reduce artifacts.
CanonSwap can transfer identities from images to videos while keeping natural movements like head poses and facial expressions.
Hunyuan-GameCraft can generate interactive game videos by unifying keyboard and mouse inputs into a shared camera representation.
Vivid-VR can restore and enhance videos using a text-to-video diffusion transformer. It achieves realistic textures and smooth motion while preserving content and giving users control over the video generation process.
Lumen can replace video backgrounds while adjusting the lighting of the foreground for a consistent look.
MyTimeMachine can change faces to look older or younger using a global aging model. It needs just 50 selfies to keep the person’s identity, making it great for visual effects and realistic age transformations.
FantasyPortrait can generate high-quality animations from static images for both single and multi-character scenes.
ShoulderShot can generate over-the-shoulder dialogue videos that keep characters looking the same and maintain a smooth flow between shots. It allows for longer conversations and offers more flexibility in how shots are arranged.
VideoColorGrading can generate a look-up table (LUT) for matching colors between reference scenes and input videos.
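To illustrate the kind of operation a color-matching LUT performs, the sketch below applies a 3D LUT to a video frame with NumPy. It is a minimal illustration, not VideoColorGrading's actual pipeline: the identity LUT stands in for one estimated from a reference scene, and the lookup is nearest-neighbor rather than trilinear for brevity.

    import numpy as np

    def apply_lut_3d(frame, lut):
        # frame: uint8 RGB array of shape (H, W, 3).
        # lut: float array of shape (n, n, n, 3) with values in [0, 1].
        n = lut.shape[0]
        # Map 0..255 pixel values to 0..n-1 LUT indices (nearest neighbor).
        idx = np.round(frame.astype(np.float32) / 255.0 * (n - 1)).astype(np.int64)
        graded = lut[idx[..., 0], idx[..., 1], idx[..., 2]]
        return np.round(graded * 255.0).clip(0, 255).astype(np.uint8)

    # Identity LUT as a stand-in for one estimated from a reference scene.
    n = 33
    r, g, b = np.meshgrid(*[np.linspace(0.0, 1.0, n)] * 3, indexing="ij")
    identity_lut = np.stack([r, g, b], axis=-1)

    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    graded = apply_lut_3d(frame, identity_lut)  # roughly equal to frame, up to 33-level quantization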
SyncTalk++ can generate high-quality talking head videos with synchronized lip movements and facial expressions. It uses Gaussian Splatting for consistent subject identity and can render up to 101 frames per second.
Pusa V1.0 can generate high-quality videos from images and text prompts. It achieves a VBench-I2V score of 87.32% with only $500 in training costs and supports features like video transitions and extensions.
ACTalker can generate talking head videos by combining audio and facial motion to control specific facial areas.
Tora can generate high-quality videos with precise control over motion trajectories by integrating textual, visual, and trajectory conditions. It achieves high motion fidelity and allows for diverse video durations, aspect ratios, and resolutions, making it a versatile tool for video generation.
Tora2 can generate videos with customized motion and appearance for multiple entities.
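Trajectory control of the kind Tora and Tora2 describe is typically supplied to the model as one 2D control point per output frame. The helper below is a hypothetical sketch of how such a path could be built; the normalized coordinates and the 49-frame count are illustrative assumptions, not Tora's actual interface.

    import numpy as np

    def circular_trajectory(num_frames, center=(0.5, 0.5), radius=0.25):
        # One normalized (x, y) control point per frame, tracing a circle.
        # Coordinates in [0, 1] can be scaled to any output resolution.
        t = np.linspace(0.0, 2.0 * np.pi, num_frames)
        xs = center[0] + radius * np.cos(t)
        ys = center[1] + radius * np.sin(t)
        return list(zip(xs.tolist(), ys.tolist()))

    # A 49-frame circular path for the entity whose motion is being controlled.
    trajectory = circular_trajectory(49)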
LongAnimation can create long-term animations with consistent colors.
Depth Anything at Any Condition can estimate depth from a single image in different lighting and weather conditions.
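For context, monocular depth estimators in this family are commonly exposed through the Hugging Face depth-estimation pipeline. The sketch below shows that usage pattern; the checkpoint name is a placeholder taken from the related Depth Anything V2 release, not necessarily how Depth Anything at Any Condition is distributed.

    from transformers import pipeline
    from PIL import Image

    # Checkpoint is a placeholder; swap in the model actually published for
    # Depth Anything at Any Condition if it is available on the Hub.
    estimator = pipeline("depth-estimation",
                         model="depth-anything/Depth-Anything-V2-Small-hf")

    image = Image.open("rainy_night_street.jpg")  # e.g. a low-light, adverse-weather photo
    result = estimator(image)

    result["depth"].save("depth.png")   # per-pixel relative depth as a grayscale image
    raw = result["predicted_depth"]     # raw tensor, for any further post-processing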