Video AI Tools
Free video AI tools for editing, generating animations, and analyzing footage, perfect for filmmakers and content creators seeking efficiency.
MatAnyone can generate stable and high-quality human video matting masks.
MEGASAM can estimate camera parameters and depth maps from casual monocular videos.
Step-Video-T2V can generate high-quality videos up to 204 frames long using a 30B parameter text-to-video model.
Magic 1-For-1 can generate one-minute video clips in just one minute.
VD3D enables camera control for video diffusion models and can transfer the camera trajectory from a reference video.
Diffusion as Shader can generate high-quality videos from 3D tracking inputs.
Lumina-Video can generate high-quality videos with synchronized sound from text prompts.
Light-A-Video can relight videos without flickering.
FlashVideo can generate videos from text prompts and upscale them to 1080p.
Video Alchemist can generate personalized videos from text prompts and reference images. It supports multiple subjects and backgrounds without lengthy per-subject setup, achieving better subject fidelity and text alignment than prior methods.
Imagine360 can generate high-quality 360° videos from monocular single-view videos.
DELTA can track dense 3D motion from single-camera videos with high accuracy, running over 8 times faster than previous methods while maintaining pixel-level precision.
Video Depth Anything can estimate depth in long videos while keeping a fast speed of 30 frames per second.
RepVideo can improve video generation, producing sharper visual detail and smoother transitions between frames.
VISION-XL can deblur and upscale videos using SDXL. It supports different aspect ratios and can produce HD videos in under 2.5 minutes on a single NVIDIA RTX 4090, using only 13 GB of VRAM for 25-frame videos.
Splatter a Video can turn a video into a 3D Gaussian representation, allowing for enhanced video tracking, depth prediction, motion and appearance editing, and stereoscopic video generation.
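Representing a video as a set of 3D Gaussians means each scene element is stored as a small record of geometry and appearance; editing or tracking then amounts to modifying or following those records. The field layout below is an illustrative sketch of a typical Gaussian splat record, not Splatter a Video's exact parameterization:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian3D:
    """One splat in a Gaussian scene representation (illustrative fields)."""
    position: np.ndarray  # (3,) centre in scene space
    scale: np.ndarray     # (3,) per-axis extent of the Gaussian
    rotation: np.ndarray  # (4,) orientation as a quaternion
    color: np.ndarray     # (3,) RGB appearance
    opacity: float        # blending weight in [0, 1]

# A single opaque white splat at the origin.
splat = Gaussian3D(
    position=np.zeros(3),
    scale=np.full(3, 0.1),
    rotation=np.array([1.0, 0.0, 0.0, 0.0]),  # identity quaternion
    color=np.ones(3),
    opacity=1.0,
)
```

Tracking a point across frames reduces to following its Gaussian's `position` over time, which is why this representation also supports depth prediction and appearance edits.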
Go-with-the-Flow can control motion patterns in video diffusion models using real-time warped noise from optical flow fields. It lets users manipulate object movement and camera motion while preserving image quality, without modifying existing models.
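The core idea of flow-warped noise is that the per-frame noise fed to the diffusion model should follow the scene's motion rather than being resampled independently. A minimal numpy sketch of warping a noise field along a flow field (nearest-neighbour sampling; the paper's actual algorithm additionally preserves the noise distribution):

```python
import numpy as np

def warp_noise(noise: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Warp a per-frame noise field along an optical-flow field.

    noise: (H, W) Gaussian noise for the previous frame.
    flow:  (H, W, 2) flow vectors (dx, dy) from previous to current frame.
    Returns noise for the current frame that moves with the flow.
    """
    h, w = noise.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Sample each pixel's noise from where it came from (backward warp).
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    return noise[src_y, src_x]

rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))
flow = np.zeros((64, 64, 2))
flow[..., 0] = 3.0  # uniform 3-pixel rightward motion
warped = warp_noise(noise, flow)
```

With a uniform rightward flow, the warped field is simply the previous noise shifted right by three pixels, so structures in the generated video follow the same motion.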
Kinetic Typography Diffusion Model can generate kinetic typography videos with legible and artistic letter motions based on text prompts.
TransPixar can generate RGBA videos, enabling the creation of transparent elements like smoke and reflections that blend seamlessly into scenes.
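An RGBA output matters because the alpha channel lets standard compositing blend generated elements like smoke over any background with the "over" operator. A minimal sketch of that blend (names are illustrative, not TransPixar's API):

```python
import numpy as np

def composite_over(fg_rgb: np.ndarray, fg_alpha: np.ndarray,
                   bg_rgb: np.ndarray) -> np.ndarray:
    """Alpha-composite an RGBA foreground frame over a background frame.

    fg_rgb:   (H, W, 3) foreground colours in [0, 1].
    fg_alpha: (H, W, 1) foreground alpha in [0, 1].
    bg_rgb:   (H, W, 3) background colours in [0, 1].
    """
    return fg_alpha * fg_rgb + (1.0 - fg_alpha) * bg_rgb

# Half-transparent white "smoke" over a black background -> mid grey.
h, w = 4, 4
smoke = np.ones((h, w, 3))
alpha = np.full((h, w, 1), 0.5)
background = np.zeros((h, w, 3))
out = composite_over(smoke, alpha, background)
```

Applied per frame, the same operator drops a generated transparent element into existing footage without matting.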
SVFR can restore high-quality video faces from low-quality inputs. It combines video face restoration, inpainting, and colorization to improve the overall quality and coherence of the restored videos.