AI Toolbox
A curated collection of 931 free, cutting-edge AI papers with code and tools for text, image, video, 3D, and audio generation and manipulation.

IntrinsiX can generate high-quality PBR maps from text descriptions. It helps with re-lighting, material editing, and texture generation, producing detailed and coherent images.
MeshMosaic can generate high-resolution 3D meshes with over 100,000 triangles. It breaks shapes into smaller patches for better detail and accuracy, outperforming other methods that usually handle only 8,000 faces.
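To illustrate the patch-based idea, here is a minimal, dependency-free sketch of splitting a triangle mesh into local patches by bucketing triangles into a spatial grid. The function name, the grid-by-centroid strategy, and the parameters are illustrative assumptions for this sketch, not MeshMosaic's actual algorithm.

```python
from collections import defaultdict

def split_mesh_into_patches(vertices, faces, cells=4):
    """Bucket triangles into a cells^3 spatial grid by centroid.

    NOTE: a conceptual stand-in for patch decomposition, not
    MeshMosaic's method. vertices: list of (x, y, z) tuples;
    faces: list of (i, j, k) vertex-index triples.
    Returns {cell_id: [face, ...]}.
    """
    xs, ys, zs = zip(*vertices)
    lo = (min(xs), min(ys), min(zs))
    hi = (max(xs), max(ys), max(zs))
    span = [max(h - l, 1e-9) for l, h in zip(lo, hi)]

    patches = defaultdict(list)
    for face in faces:
        # The triangle's centroid decides which patch it belongs to.
        cx, cy, cz = [sum(vertices[i][d] for i in face) / 3.0 for d in range(3)]
        cell = tuple(
            min(int((c - l) / s * cells), cells - 1)
            for c, l, s in zip((cx, cy, cz), lo, span)
        )
        patches[cell].append(face)
    return patches
```

Each patch can then be processed independently, which is how a patch-wise generator sidesteps the token-budget limits that cap monolithic approaches at a few thousand faces.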
Manipulation by Analogy can transform audio textures by learning from paired speech examples. It lets users add, remove, or replace sounds, and it generalizes to real-world audio beyond the speech it was trained on.
Bokeh Diffusion can control defocus blur in text-to-image diffusion models by using a physical defocus blur parameter. It allows for flexible blur adjustments while preserving scene structure and supports real image editing through inversion.
Lyra can generate 3D scenes from a single image or video. It supports real-time rendering and dynamic scene generation without needing multi-view data for training.
RealisMotion can generate human videos with realistic motions by separating four key elements: the subject, background, movement path, and actions. It uses a 3D world coordinate system for better motion editing and employs text-to-video diffusion models for high-quality results.
CapStARE can achieve high accuracy in gaze estimation. It runs in real time at about 8 ms per frame and handles extreme head poses well, making it well suited for interactive systems.
Follow-Your-Click can animate specific regions of an image from a simple user click and a short motion prompt, and lets users control the speed of the animation.
Animate-X++ can animate characters from a single image and a pose sequence while creating dynamic backgrounds.
HuMo can generate high-quality human-centric videos from text, images, and audio. It ensures that the subjects are preserved and the audio matches the visuals, using advanced training methods for better control.
Diffuman4D can generate high-quality, 4D-consistent videos of human performances from just a few input videos. It uses a spatio-temporal diffusion model to produce results that are more realistic and consistent than prior methods.
InstantRestore can restore badly degraded face images in near real time. It uses a single-step image diffusion model and a small set of reference images to preserve the person's identity.
ByteDance published a new few-step method called PeRFlow that accelerates diffusion models like Stable Diffusion to generate images faster. PeRFlow is compatible with various fine-tuned, stylized SD models as well as SD-based generation and editing pipelines such as ControlNet, Wonder3D, and more.
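As a rough sketch of how a few-step accelerator like this slots into a diffusers workflow: load a standard SD pipeline, attach the accelerated weights, and cut the step count. The LoRA path below is a placeholder assumption, and PeRFlow also ships its own piecewise-flow scheduler in the project repo, so see the official release for the real weights and recommended settings.

```python
import torch
from diffusers import StableDiffusionPipeline

# Standard SD 1.5 pipeline; PeRFlow's accelerated weights drop into
# SD-based pipelines like this one.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Placeholder path -- the PeRFlow release documents the actual LoRA weights.
pipe.load_lora_weights("path/to/perflow-sd15-lora")
pipe.fuse_lora()

# Few-step sampling is the point: 4 steps instead of the usual 25-50.
image = pipe(
    "a photo of a red fox in the snow",
    num_inference_steps=4,
    guidance_scale=2.0,
).images[0]
image.save("fox.png")
```

Because the acceleration lives in the weights rather than in a custom pipeline class, the same pattern carries over to stylized SD checkpoints and editing pipelines such as ControlNet.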
3DHM can animate a person from a single image to follow a target video's motion sequence, with 3D camera control.
SemLayoutDiff can generate diverse 3D indoor scenes by creating detailed semantic maps and placing furniture while considering doors and windows.
3DV-TON can generate high-quality virtual try-on videos using 3D models. It handles complex clothing patterns and different body poses well, and its robust masking strategy reduces artifacts.
CanonSwap can transfer identities from images to videos while keeping natural movements like head poses and facial expressions.
Hunyuan-GameCraft can generate interactive game videos by unifying keyboard and mouse inputs into a shared camera representation.
Vivid-VR can restore and enhance videos using a text-to-video diffusion transformer. It achieves realistic textures and smooth motion while preserving content and giving users control over the video generation process.
Lumen can replace video backgrounds while adjusting the lighting of the foreground for a consistent look.