AI Toolbox
A curated collection of 790 free cutting-edge AI papers with code and tools for text, image, video, 3D, and audio generation and manipulation.

PrimitiveAnything can generate high-quality 3D shapes by breaking down complex forms into simple geometric parts. It uses a shape-conditioned primitive transformer to ensure that the shapes remain accurate and diverse.
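A minimal sketch of the idea behind shape-conditioned primitive decomposition: decode a set of primitive slots (type plus pose parameters) from encoded shape features. This is a simplified, non-autoregressive stand-in written for illustration; the class, head layout, and parameterization are assumptions, not the official PrimitiveAnything code.
```python
# Illustrative only: decodes primitive slots from shape features (assumption,
# not the paper's actual autoregressive primitive transformer).
import torch
import torch.nn as nn

class PrimitiveTransformer(nn.Module):
    """Decodes a fixed set of primitive slots (type, scale, translation,
    rotation) from encoded shape features."""
    def __init__(self, d_model=256, n_types=5, max_prims=16):
        super().__init__()
        self.max_prims = max_prims
        self.start = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.type_head = nn.Linear(d_model, n_types)    # box, sphere, cylinder, ...
        self.pose_head = nn.Linear(d_model, 3 + 3 + 4)  # scale, translation, quaternion

    def forward(self, shape_tokens):
        # shape_tokens: (B, N, d_model) features from a point-cloud/voxel encoder
        B = shape_tokens.size(0)
        tgt = self.start.expand(B, self.max_prims, -1)  # learned query slots
        h = self.decoder(tgt, memory=shape_tokens)
        return self.type_head(h), self.pose_head(h)

shape_tokens = torch.randn(2, 128, 256)                 # stand-in for encoded shapes
types, poses = PrimitiveTransformer()(shape_tokens)
print(types.shape, poses.shape)                         # (2, 16, 5) (2, 16, 10)
```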
HunyuanCustom can generate customized videos with specific subjects while keeping their identity consistent across frames. It supports various inputs like images, audio, video, and text, and it excels in realism and matching text to video.
FlexiAct can transfer actions from a video to a target image while preserving the person’s identity and adapting to different layouts and viewpoints.
AnyStory can generate consistent single- and multi-subject images from text.
KeySync can achieve strong lip synchronization for videos. It addresses issues like timing, facial expressions, and occluded faces, using a masking strategy and a new leakage metric called LipLeak to improve visual quality.
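To illustrate the masking idea, here is a toy lower-face mask that forces the lip region to be re-synthesized from audio instead of copied from the source frames. The mask geometry and function name are assumptions for illustration, not KeySync's actual strategy.
```python
# Illustrative lower-face masking; the exact mask layout is an assumption.
import numpy as np

def lower_face_mask(h, w, face_box):
    """Binary mask over the lower half of a detected face box, so the mouth
    region must be regenerated from audio rather than leaked from the input."""
    x0, y0, x1, y1 = face_box
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[(y0 + y1) // 2 : y1, x0:x1] = 1
    return mask

frame = np.zeros((256, 256, 3), dtype=np.uint8)          # stand-in video frame
mask = lower_face_mask(*frame.shape[:2], face_box=(64, 64, 192, 224))
masked_frame = frame * (1 - mask[..., None])              # zero out the mouth region
print(mask.sum())                                         # number of masked pixels
```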
SwiftSketch can generate high-quality vector sketches from images in under a second. It uses a diffusion model to create editable sketches that work well for different object types and are not limited by resolution.
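The key point is that the diffusion runs over vector stroke control points rather than pixels, which is what makes the output editable and resolution-free. The toy loop below shows that representation; the denoiser is a placeholder assumption and SwiftSketch's actual distilled one-step sampler is not shown.
```python
# Toy denoising loop over Bezier control points (vector representation).
# predict_clean() is a placeholder, not the paper's image-conditioned model.
import torch

n_strokes, n_ctrl = 32, 4                      # 32 cubic Bezier strokes
x = torch.randn(n_strokes, n_ctrl, 2)          # start from Gaussian noise in xy

def predict_clean(x_t, t, image_feat):
    """Stand-in for the image-conditioned denoiser (assumption)."""
    return torch.zeros_like(x_t)               # pretend everything denoises to the origin

image_feat = torch.randn(256)                  # stand-in image embedding
for t in reversed(range(10)):                  # a few coarse denoising steps
    x0_hat = predict_clean(x, t, image_feat)
    alpha = t / 10.0
    x = alpha * x + (1 - alpha) * x0_hat       # blend toward the predicted sketch

print(x.shape)                                 # (32, 4, 2) editable control points
```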
FantasyTalking can generate talking portraits from a single image, making them look realistic with accurate lip movements and facial expressions. It uses a two-step process to align audio and video, allowing users to control how expressions and body motions appear.
Textoon can generate diverse 2D cartoon characters in the Live2D format from text descriptions. It allows for real-time editing and controllable appearance generation, making it easy for users to create interactive characters.
GPS-Gaussian+ can render high-resolution 3D scenes from two or more input images in real time.
Step1X-Edit can perform advanced image editing tasks by processing reference images and user instructions.
Describe Anything can generate detailed descriptions for specific areas in images and videos using points, boxes, scribbles, or masks.
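A hypothetical region-prompt payload shows how such localized prompts (points, boxes, scribbles, masks) might be packaged; the `RegionPrompt` class and `describe()` function here are assumptions, not the released Describe Anything interface.
```python
# Hypothetical interface for region-conditioned captioning (assumption).
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class RegionPrompt:
    points: Optional[Sequence[tuple]] = None   # [(x, y), ...] clicks
    box: Optional[tuple] = None                # (x0, y0, x1, y1)
    mask: Optional["np.ndarray"] = None        # binary mask, same size as the image

def describe(image_path: str, region: RegionPrompt) -> str:
    """Placeholder: a real system would encode the region prompt together with
    the image and decode a detailed caption with a multimodal LLM."""
    target = region.box or region.points or "mask"
    return f"Detailed description of region {target} in {image_path}"

print(describe("street.jpg", RegionPrompt(box=(120, 40, 380, 300))))
```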
SwiftBrush v2 can improve the quality of images generated by one-step text-to-image diffusion models. Results look great, and apparently it ranks better than all GAN-based and multi-step Stable Diffusion models in benchmarks. No code though 🤷‍♂️
InstantCharacter can generate high-quality images of personalized characters from a single reference image with FLUX. It supports different styles and poses, ensuring identity consistency and allowing for text-based edits.
ID-Patch can generate personalized group photos by matching faces with specific positions. It reduces problems like identity leakage and visual errors, achieving high accuracy and speed—seven times faster than other methods.
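One way to picture position-bound identities: stamp each person's identity embedding into a spatial conditioning map at their assigned face location, so identities cannot bleed into each other's regions. The shapes and names below are assumptions for illustration, not ID-Patch's actual conditioning.
```python
# Sketch of position-bound identity conditioning (assumption).
import torch

H = W = 64                                     # latent resolution
id_embeds = torch.randn(3, 512)                # 3 people, 512-d face embeddings
positions = [(16, 16), (32, 40), (48, 20)]     # (row, col) face centers

cond = torch.zeros(512, H, W)
for emb, (r, c) in zip(id_embeds, positions):
    r0, r1 = max(r - 4, 0), min(r + 4, H)      # small patch around each face
    c0, c1 = max(c - 4, 0), min(c + 4, W)
    cond[:, r0:r1, c0:c1] = emb[:, None, None]

print(cond.shape)                              # (512, 64, 64) conditioning map
```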
Phantom can generate videos that keep the subject’s identity from images while matching them with text prompts.
SkyReels-V2 can generate infinite-length videos by combining a Diffusion Forcing framework with Multi-modal Large Language Models and Reinforcement Learning.
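The Diffusion Forcing part is what enables unbounded length: each frame carries its own noise level, so already-generated frames stay clean while new noisy frames are appended and denoised. The loop below is a toy sketch of that rolling-window idea; the model call and shapes are assumptions, not SkyReels-V2 code.
```python
# Toy sketch of per-frame noise levels and a rolling window (assumption).
import torch

T, C, H, W = 8, 4, 32, 32
frames = torch.randn(T, C, H, W)                       # latent video window
noise_levels = torch.linspace(0.0, 1.0, T)             # past clean -> future noisy

def denoise(frames, noise_levels):
    """Stand-in for a video diffusion model that accepts per-frame noise."""
    return frames * (1 - noise_levels.view(T, 1, 1, 1))

for step in range(4):                                  # roll the window forward
    frames = denoise(frames, noise_levels)
    frames = torch.cat([frames[1:], torch.randn(1, C, H, W)])  # append a new noisy frame

print(frames.shape)                                    # (8, 4, 32, 32)
```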
SCW-VTON can fit in-shop clothing to a person’s image while keeping their pose consistent. It improves the shape of the clothing and reduces distortions in visible limb areas, making virtual try-on results look more realistic.
Ev-DeblurVSR can turn low-resolution, blurry videos into sharp, high-resolution ones.