AI Toolbox
A curated collection of 833 free, cutting-edge AI papers with code and tools for text, image, video, 3D, and audio generation and manipulation.
LayoutVLM can generate 3D layouts from text instructions. It improves how well layouts match the intended design and works effectively in crowded spaces.
AnchorCrafter can generate high-quality 2D videos of people interacting with a reference product.
MIMO can create controllable character videos from a single image. It animates characters with complex motions in real-world scenes by encoding 2D video into 3D spatial codes, which makes the result easy to control.
GEN3C can generate photorealistic videos from single or sparse-view images with precise camera control and 3D consistency.
LinGen can generate high-resolution minute-length videos on a single GPU.
MultiTalk can generate videos of multiple people talking by using audio from different sources, a reference image, and a prompt.
D3-Human can reconstruct detailed 3D human figures from single videos. It separates clothing and body shapes, handles occlusions well, and is useful for clothing transfer and animation.
Any-to-Bokeh can apply depth-aware bokeh effects to videos, keeping a chosen focal plane sharp while blurring the rest of the scene.
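A simple way to picture depth-aware bokeh is to blur each pixel in proportion to its distance from the focal plane. The sketch below is only that naive illustration, not Any-to-Bokeh's diffusion-based pipeline; the array names and blending scheme are assumptions.

```python
# Naive depth-of-field sketch: blend a sharp frame with a blurred copy,
# weighted per pixel by distance from the focal plane.
import numpy as np
from scipy.ndimage import gaussian_filter

def naive_bokeh(frame, depth, focal_depth, max_sigma=8.0):
    """frame: HxWx3 float image in [0,1]; depth: HxW depth map in the same units as focal_depth."""
    dist = np.abs(depth - focal_depth)
    weight = np.clip(dist / (dist.max() + 1e-8), 0.0, 1.0)[..., None]   # 0 = in focus, 1 = fully blurred
    blurred = np.stack([gaussian_filter(frame[..., c], sigma=max_sigma) for c in range(3)], axis=-1)
    return (1.0 - weight) * frame + weight * blurred
```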
MeshArt can generate 3D meshes with clean shapes.
Disco4D can generate and animate 4D human models from a single image by separating clothing from the body. It uses diffusion models for detailed 3D representations and can model parts that are not visible in the input image.
GCC can inpaint a color checker into an image and read it back to estimate the scene illumination, improving color constancy and white balance accuracy.
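The point of a rendered checker is that it gives you known reference colors to measure. As a hedged illustration of what such a reference enables (not GCC's own code), here is the standard correction from a single neutral patch; the patch coordinates and function name are made up.

```python
import numpy as np

def white_balance_from_gray_patch(image, patch_box):
    """Estimate per-channel gains from a patch that should be neutral gray.

    image: HxWx3 float array in [0,1]; patch_box: (y0, y1, x0, x1) pixel coords.
    Illustrative only; GCC itself estimates the illuminant by diffusing a
    color checker into the scene rather than measuring a real one.
    """
    y0, y1, x0, x1 = patch_box
    patch_rgb = image[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    gains = patch_rgb.mean() / (patch_rgb + 1e-8)   # scale each channel so the patch becomes neutral
    return np.clip(image * gains, 0.0, 1.0)
```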
RepText can render multilingual visual text in user-chosen fonts without needing to understand the text. It allows for customization of text content, font, and position.
Synergizing Motion and Appearance can generate high-quality talking head videos by combining facial identity from a source image with motion from a driving video.
After NeRFs and Gaussian Splatting comes Triangle Splatting, a new method that renders radiance fields in real time at over 2,400 FPS at 1280x720 resolution. It combines triangle representations with differentiable rendering for better visual quality and faster results than Gaussian splatting methods.
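The rendering step shared by Gaussian and triangle splatting is front-to-back alpha compositing of depth-sorted primitives. The sketch below shows that accumulation for a single pixel; it is a generic illustration, not the paper's CUDA rasterizer.

```python
import numpy as np

def composite_pixel(colors, alphas):
    """Front-to-back alpha compositing: C = sum_i c_i * a_i * prod_{j<i} (1 - a_j).

    colors: (N, 3) colors of the primitives covering this pixel, sorted near to far.
    alphas: (N,) opacities of those primitives after the splat/triangle falloff.
    """
    pixel = np.zeros(3)
    transmittance = 1.0                      # fraction of light still passing through
    for c, a in zip(colors, alphas):
        pixel += transmittance * a * c
        transmittance *= (1.0 - a)
        if transmittance < 1e-4:             # early termination, as real rasterizers do
            break
    return pixel
```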
UniTEX can generate high-quality textures for 3D assets without using UV mapping. It maps 3D points to texture values based on surface proximity and uses a transformer-based model for better texture quality.
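The texture-function idea can be pictured as a field that answers "what color belongs at this 3D point?" by consulting the nearest surface point. Below is a hedged sketch using trimesh; the nearest-vertex color lookup and function name are assumptions for illustration, not UniTEX's API, which predicts this mapping with a transformer.

```python
import numpy as np
import trimesh

def texture_function(mesh, query_points):
    """Map arbitrary 3D points to texture values via the closest surface point.

    Assumes the mesh carries per-vertex colors; real texture functions would
    interpolate a learned field instead of copying vertex colors.
    """
    query_points = np.asarray(query_points, dtype=np.float64)
    closest, distances, tri_ids = trimesh.proximity.closest_point(mesh, query_points)
    tri_vertex_ids = mesh.faces[tri_ids]                       # (N, 3) vertex indices of the hit triangles
    tri_vertex_pos = mesh.vertices[tri_vertex_ids]             # (N, 3, 3) their positions
    d = np.linalg.norm(tri_vertex_pos - query_points[:, None, :], axis=-1)
    nearest = tri_vertex_ids[np.arange(len(tri_ids)), d.argmin(axis=1)]
    colors = mesh.visual.vertex_colors[nearest, :3] / 255.0    # color of the nearest vertex
    return colors, distances
```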
Generative Omnimatte can break down videos into meaningful layers, isolating objects, shadows, and reflections without needing static backgrounds. It uses a video diffusion model for high-quality results and can fill in hidden areas, enhancing video editing options.
Direct3D-S2 can generate high-resolution 3D shapes.
MiniMax-Remover can remove objects from videos efficiently with just 6 sampling steps.
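Cutting sampling down to a handful of steps just means taking the same denoising update over a much coarser time grid. The generic Euler-style 6-step sampler below illustrates that; the `denoiser` callable and the flow-matching update are assumptions, not MiniMax-Remover's actual sampler.

```python
import torch

@torch.no_grad()
def sample_few_steps(denoiser, shape, num_steps=6, device="cpu"):
    """Generic few-step Euler sampler for a flow-matching style model.

    denoiser(x, t) is assumed to predict the velocity moving x toward the clean
    sample; with only 6 steps, each update covers a large chunk of the path.
    """
    x = torch.randn(shape, device=device)                      # start from pure noise
    ts = torch.linspace(1.0, 0.0, num_steps + 1, device=device)
    for i in range(num_steps):
        t, t_next = ts[i], ts[i + 1]
        v = denoiser(x, t)                                      # predicted velocity at time t
        x = x + (t_next - t) * v                                # Euler step toward t = 0
    return x
```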
EPiC can add camera control to image-to-video and video-to-video generation without requiring detailed camera trajectory information.
SceneFactor generates 3D scenes from text using an intermediate 3D semantic map. This map can be edited to add, remove, resize, and replace objects, allowing for easy regeneration of the final 3D scene.
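The editable intermediate can be pictured as a coarse voxel grid of semantic labels: edit the labels, then hand the grid back to the geometry stage for regeneration. A toy sketch of such an edit is below; the label ids, grid resolution, and `synthesize_geometry` hook are assumptions, not SceneFactor's interface.

```python
import numpy as np

# Toy 3D semantic map: a 32^3 grid of integer labels (0 = empty).
EMPTY, FLOOR, TABLE, CHAIR = 0, 1, 2, 3
semantic_map = np.zeros((32, 32, 32), dtype=np.int8)
semantic_map[:, :, 0] = FLOOR                 # floor slab at the bottom
semantic_map[10:16, 10:16, 1:4] = TABLE       # a table-sized block

# Edit the map: remove the table and place a chair elsewhere.
semantic_map[semantic_map == TABLE] = EMPTY
semantic_map[20:23, 20:23, 1:3] = CHAIR

# In SceneFactor the edited map conditions a second stage that regenerates
# detailed geometry; here that stage is only a placeholder hook.
# scene_mesh = synthesize_geometry(semantic_map)
```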