AI Toolbox
A curated collection of 630 free, cutting-edge AI papers with code and tools for text, image, video, 3D and audio generation and manipulation.
RepVideo can improve video generation, producing more detailed visuals and smoother transitions between frames.
VISION-XL can deblur and upscale videos using SDXL. It supports different aspect ratios and can produce HD videos in under 2.5 minutes on a single NVIDIA RTX 4090 GPU, using only 13 GB of VRAM for 25-frame videos.
Splatter a Video can turn a video into a 3D Gaussian representation, allowing for enhanced video tracking, depth prediction, motion and appearance editing, and stereoscopic video generation.
Go-with-the-Flow can control motion patterns in video diffusion models using real-time warped noise derived from optical flow fields. It lets users manipulate object movements and camera motion while preserving image quality, and it requires no changes to existing models.
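The basic idea can be sketched as advecting a per-pixel noise field along a dense optical-flow field so the same noise pattern follows the motion from frame to frame. The snippet below is only a rough illustration under assumed names and shapes (the `warp_noise` helper and the placeholder flow field are not from the paper, whose noise warping is more careful about preserving the noise distribution):

```python
# Toy sketch: carry a noise field along an optical-flow field via backward warping.
# Assumes a dense (H, W, 2) flow array, e.g. from cv2.calcOpticalFlowFarneback.
import numpy as np
import cv2

def warp_noise(noise: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Advect a per-pixel noise field by a dense optical-flow field (backward warp)."""
    h, w = noise.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Each pixel in the next frame samples the noise at the location it moved from.
    map_x = (grid_x - flow[..., 0]).astype(np.float32)
    map_y = (grid_y - flow[..., 1]).astype(np.float32)
    return cv2.remap(noise, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_REFLECT)

# Usage: the same noise pattern follows the motion between two frames.
noise = np.random.randn(256, 256).astype(np.float32)
flow = np.zeros((256, 256, 2), dtype=np.float32)  # placeholder flow field
warped = warp_noise(noise, flow)
```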
Chat2SVG can generate and edit SVG vector graphics from text prompts. It combines Large Language Models with image diffusion models to create detailed SVG templates, and lets users refine them with simple natural-language instructions.
GaussianDreamerPro can generate 3D Gaussian assets from text that can be seamlessly integrated into downstream manipulation pipelines, such as animation, composition, and simulation.
Kinetic Typography Diffusion Model can generate kinetic typography videos with legible and artistic letter motions based on text prompts.
LLM4GEN enhances the semantic understanding of text-to-image diffusion models by leveraging the semantic representations of LLMs, enabling them to handle more complex and dense prompts involving multiple objects, attribute binding, and long descriptions.
TransPixar can generate RGBA videos, enabling the creation of transparent elements like smoke and reflections that blend seamlessly into scenes.
Coin3D can generate and edit 3D assets from a basic input shape. Similar to ControlNet, it enables precise part editing and responsive 3D object previews within a few seconds.
Reflecting Reality can generate realistic mirror reflections using a method called MirrorFusion. It allows users to control mirror placement and achieves better reflection quality and geometry than other methods.
SVFR can restore high-quality video faces from low-quality inputs. It combines video face restoration, inpainting, and colorization to improve the overall quality and coherence of the restored videos.
FabricDiffusion can transfer high-quality fabric textures from a 2D clothing image to 3D garments of any shape.
TangoFlux can generate 30 seconds of 44.1 kHz audio in just 3.7 seconds on a single A40 GPU.
DAS3R can decompose scenes and rebuild static backgrounds from videos.
REACTO can reconstruct articulated 3D objects by capturing the motion and shape of objects with flexible deformation from a single video.
DiTCtrl can generate multi-prompt videos with smooth transitions and consistent object motion.
MMAudio can generate high-quality audio that matches video and text inputs. It excels in audio quality and synchronization, with a fast processing time of just 1.23 seconds for an 8-second clip.
SD-Codec can separate and reconstruct audio signals from speech, music, and sound effects, assigning a distinct codebook to each source type. This improves the interpretability of audio codecs and gives finer control over audio generation while maintaining high quality.
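As a toy illustration of the separate-codebook idea (a sketch under assumed shapes and names, not SD-Codec's actual code), each source type gets its own codebook and latents are quantized against the codebook for their type:

```python
# Toy sketch: route speech, music, and sound-effect latents to their own codebooks.
import numpy as np

rng = np.random.default_rng(0)
codebooks = {
    "speech": rng.standard_normal((512, 64)),  # 512 codes, 64-dim latents (assumed sizes)
    "music":  rng.standard_normal((512, 64)),
    "sfx":    rng.standard_normal((512, 64)),
}

def quantize(latents: np.ndarray, source: str) -> np.ndarray:
    """Snap each latent vector to its nearest code in the source-specific codebook."""
    book = codebooks[source]
    # Pairwise squared distances between latents (N, 64) and codes (512, 64).
    dists = ((latents[:, None, :] - book[None, :, :]) ** 2).sum(-1)
    return book[dists.argmin(axis=1)]

speech_latents = rng.standard_normal((100, 64))
quantized = quantize(speech_latents, "speech")
```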
AniDoc can automate the colorization of line art in videos and create smooth animations from simple sketches.