AI Toolbox
A curated collection of 611 free, cutting-edge AI papers with code and tools for text, image, video, 3D, and audio generation and manipulation.
StyleBooth is a unified style editing method supporting text-based, exemplar-based and compositional style editing. So basically, you can take an image and change its style by giving it either a text prompt or an example image.
InFusion can inpaint 3D Gaussian point clouds to restore missing 3D points for better visuals. It lets users change textures and add new objects, achieving high quality and efficiency.
IntrinsicAnything can recover object materials from any image and enables single-view image relighting.
VQ-Diffusion can generate high-quality images from text prompts using a vector quantized variational autoencoder and a conditional denoising diffusion model. It is up to fifteen times faster than traditional methods and handles complex scenes effectively.
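To make the mechanics concrete, here is a minimal sketch of the discrete-diffusion idea behind VQ-Diffusion, assuming a 32×32 grid of codebook tokens and a toy transformer. This is my illustration of the scheme, not the authors' code:

```python
# Images become grids of VQ-VAE codebook indices; the forward process
# randomly masks tokens, and a conditional transformer learns to undo it.
import torch
import torch.nn as nn

VOCAB, MASK_ID, SEQ_LEN = 1024, 1024, 32 * 32  # 32x32 latent token grid

class TokenDenoiser(nn.Module):
    """Toy stand-in for the paper's conditional transformer."""
    def __init__(self, dim=256):
        super().__init__()
        self.embed = nn.Embedding(VOCAB + 1, dim)   # +1 for the [MASK] token
        self.block = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.head = nn.Linear(dim, VOCAB)           # logits over codebook ids

    def forward(self, tokens, text_emb):
        h = self.embed(tokens) + text_emb            # crude text conditioning
        return self.head(self.block(h))

def corrupt(tokens, mask_prob):
    """Forward process: independently mask each token with prob mask_prob."""
    noise = torch.rand_like(tokens, dtype=torch.float)
    return torch.where(noise < mask_prob, torch.full_like(tokens, MASK_ID), tokens)

model = TokenDenoiser()
x0 = torch.randint(0, VOCAB, (1, SEQ_LEN))           # "clean" latent tokens
xt = corrupt(x0, mask_prob=0.5)                       # a noised diffusion step
logits = model(xt, text_emb=torch.zeros(1, SEQ_LEN, 256))
loss = nn.functional.cross_entropy(logits.transpose(1, 2), x0)
loss.backward()  # train to recover x0 from xt; sampling iterates this in reverse
```

Because each reverse step repairs many tokens at once, far fewer steps are needed than in pixel-by-pixel autoregressive decoding, which is where the speedup comes from.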
MOWA is a multiple-in-one image warping model that can handle various tasks such as rectangling stitched panoramas, correcting rolling-shutter distortion, rotating tilted images, rectifying fisheye images, and image retargeting.
Speaking of video, more research is being conducted on motion control. Peekaboo lets you control the position, size and trajectory of an object very precisely through bounding boxes.
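The paper's core mechanism is masked attention over the video latents; here is a rough sketch of how per-frame boxes might become attention biases. Grid size, box format, and both function names are my assumptions, not the released code:

```python
import torch

def boxes_to_masks(boxes, h=64, w=64):
    """One (x0, y0, x1, y1) box per frame, coordinates normalized to [0, 1]."""
    masks = torch.zeros(len(boxes), h, w)
    for f, (x0, y0, x1, y1) in enumerate(boxes):
        masks[f, int(y0 * h):int(y1 * h), int(x0 * w):int(x1 * w)] = 1.0
    return masks

def spatial_attn_bias(frame_mask, neg=-1e4):
    """Additive attention bias: latents inside the box may not attend to
    positions outside it, keeping the object localized to the user's region."""
    inside = frame_mask.flatten().bool()
    return (inside[:, None] & ~inside[None, :]).float() * neg

# Example trajectory: a box sliding left-to-right over 16 frames.
traj = [(0.05 + 0.5 * f / 15, 0.3, 0.35 + 0.5 * f / 15, 0.7) for f in range(16)]
masks = boxes_to_masks(traj)
bias = spatial_attn_bias(masks[0])   # added to frame 0's attention logits
print(masks.shape, bias.shape)       # (16, 64, 64) and (4096, 4096)
```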
in2IN is a motion generation model that factors in both the overall interaction’s textual description and individual action descriptions of each person involved. This enhances motion diversity and enables better control over each person’s actions while preserving interaction coherence.
Ctrl-Adapter is a new framework that can be used to add diverse controls to any image or video diffusion model, enabling things like video control with sparse frames, multi-condition control, and video editing.
Video2Game can turn real-world videos into interactive game environments. It uses a neural radiance field (NeRF) module for capturing scenes, a mesh module for faster rendering, and a physics module for realistic object interactions.
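The division of labor is easiest to see as a pipeline. Below is an illustrative skeleton of the three stages; all names and stand-in bodies are mine, not the project's API:

```python
from dataclasses import dataclass, field

@dataclass
class GameScene:
    mesh: dict = field(default_factory=dict)       # baked geometry + textures
    colliders: list = field(default_factory=list)  # rigid bodies for physics

def build_game_scene(video_frames):
    # Stage 1: fit a NeRF to the video (toy stand-in: record frame count).
    nerf = {"frames_fit": len(video_frames)}
    # Stage 2: bake the NeRF into a textured mesh for real-time rasterization.
    scene = GameScene(mesh={"baked_from": nerf})
    # Stage 3: attach colliders so objects respond to player interaction.
    scene.colliders.append({"type": "box", "target": "scene_bounds"})
    return scene

print(build_game_scene(video_frames=[None] * 120))
```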
LoopGaussian can convert multi-view images of a stationary scene into authentic 3D cinemagraphs. The 3D cinemagraphs can be rendered from novel viewpoints to obtain natural, seamlessly looping videos.
ControlNet++ can improve image generation by ensuring that generated images match the given controls, like segmentation masks and depth maps. It shows better performance than its predecessor, ControlNet, with improvements of 7.9% in mIoU, 13.4% in SSIM, and 7.6% in RMSE.
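The core trick is a cycle-consistency signal: a pre-trained discriminative model re-extracts the condition from the generated image, and any mismatch with the input condition becomes a training loss. A minimal sketch with a toy segmenter (the real method uses proper pre-trained extractors):

```python
import torch
import torch.nn.functional as F

def consistency_loss(generated_img, input_mask, seg_model):
    """Penalize generated images whose re-extracted segmentation drifts from
    the conditioning mask (per-pixel cross-entropy, as one concrete choice)."""
    logits = seg_model(generated_img)           # (B, classes, H, W)
    return F.cross_entropy(logits, input_mask)  # input_mask: (B, H, W) class ids

# Toy stand-ins to show the shapes; a real setup would plug in an actual
# segmentation network and the diffusion model's outputs.
seg_model = torch.nn.Conv2d(3, 21, kernel_size=1)
img = torch.rand(2, 3, 64, 64, requires_grad=True)
mask = torch.randint(0, 21, (2, 64, 64))
loss = consistency_loss(img, mask, seg_model)
loss.backward()   # gradient flows back into the generator during fine-tuning
```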
PanFusion can generate 360-degree panorama images from a text prompt. The model is able to integrate additional constraints like room layout for customized panorama outputs.
MindBridge can reconstruct images from fMRI brain signals using a single model that works for different people. It achieves high accuracy even with limited data, making it effective for new subjects.
GoodDrag can improve the stability and image quality of drag editing with diffusion models. It reduces distortions by alternating between drag and denoising operations and introduces a new dataset, Drag100, for better quality assessment.
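The alternation is the key scheduling idea (the paper calls it Alternating Drag and Denoising, AlDD). A schematic of the loop, with toy stand-ins for the real drag and denoising operations:

```python
def drag_update(latent, handle, target, lr=0.1):
    """Toy motion supervision: nudge the handle point toward the target."""
    handle = tuple(h + lr * (t - h) for h, t in zip(handle, target))
    return latent, handle

def denoise_step(latent):
    """Toy stand-in for one diffusion denoising step (the real version
    runs the UNet to pull the latent back onto the image manifold)."""
    return latent

def aldd_edit(latent, handle, target, drag_steps=70, drags_per_denoise=10):
    """Interleave drag optimization with denoising instead of doing all
    drags first -- this is what keeps distortions from accumulating."""
    for step in range(drag_steps):
        latent, handle = drag_update(latent, handle, target)
        if (step + 1) % drags_per_denoise == 0:
            latent = denoise_step(latent)
    return latent, handle

_, handle = aldd_edit(latent=None, handle=(10.0, 10.0), target=(40.0, 25.0))
print(handle)  # the handle has converged close to the target point
```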
InstantMesh can generate high-quality 3D meshes from a single image in under 10 seconds. It uses advanced methods like multiview diffusion and sparse-view reconstruction, and it significantly outperforms other tools in both quality and speed.
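The two stages are easy to see in outline (function names below are placeholders, not the repo's API): a multiview diffusion model first hallucinates a handful of consistent novel views, then a feed-forward sparse-view reconstructor turns them into a textured mesh.

```python
def multiview_diffusion(image, num_views=6):
    """Toy stand-in: the real model synthesizes consistent novel views."""
    return [f"{image}#view{i}" for i in range(num_views)]

def sparse_view_reconstructor(views):
    """Toy stand-in for the feed-forward (LRM-style) reconstruction model."""
    return {"textured_mesh_from": views}

def image_to_mesh(image):
    views = multiview_diffusion(image)        # stage 1: image -> novel views
    return sparse_view_reconstructor(views)   # stage 2: views -> mesh

print(image_to_mesh("chair.png"))
```

Because the second stage is feed-forward rather than an optimization loop, the whole pipeline finishes in seconds instead of minutes.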
Speaking of reconstruction: Key2Mesh is yet another model that takes on 3D human mesh reconstruction, this time using 2D human pose keypoints as input instead of visual data, since image datasets with 3D labels are scarce.
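The appeal is that the mapping itself is simple. A minimal sketch of the keypoints-to-parameters idea (my simplification, assuming COCO-style joints and an SMPL-style body model):

```python
import torch
import torch.nn as nn

NUM_KPTS, POSE_DIM, SHAPE_DIM = 17, 72, 10  # COCO joints; SMPL axis-angle + betas

# Lift flattened 2D keypoints to body-model parameters with a small MLP.
lift = nn.Sequential(
    nn.Linear(NUM_KPTS * 2, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, POSE_DIM + SHAPE_DIM),
)

kpts_2d = torch.rand(1, NUM_KPTS, 2)        # normalized (x, y) detections
params = lift(kpts_2d.flatten(1))
pose, shape = params[:, :POSE_DIM], params[:, POSE_DIM:]
print(pose.shape, shape.shape)              # feed into SMPL to get the mesh
```

Since the inputs are just keypoints, such a model can train on motion-capture data and adapt to images via off-the-shelf 2D pose detectors.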
Sparse Global Matching for Video Frame Interpolation with Large Motion can handle large motion in video frame interpolation. It first estimates intermediate flows with local information, then identifies flawed regions and corrects them using a sparse set of globally matched features between the input frames.
MCC-Hand-Object (MCC-HO) can reconstruct 3D shapes of hand-held objects from a single RGB image and a 3D hand model. It uses Retrieval-Augmented Reconstruction (RAR) with GPT-4(V) to match 3D models to the object’s shape, achieving top performance on various datasets.
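The RAR step, as the paper describes it, amounts to: name the object with a vision-language model, retrieve a matching 3D template, and rigidly align it to the network's predicted geometry. A toy sketch where the library, labels, and alignment are all placeholders:

```python
ASSET_LIBRARY = {"mug": "mug.obj", "bottle": "bottle.obj"}

def recognize_object(image):
    """Stand-in for a GPT-4(V)-style query: 'What object is the hand holding?'"""
    return "mug"

def retrieval_augmented_reconstruction(image, predicted_points):
    label = recognize_object(image)
    mesh_path = ASSET_LIBRARY.get(label)   # retrieve a template mesh by label
    # A real pipeline would rigidly align the template to the predicted
    # geometry (e.g. via ICP); here we just return an identity alignment.
    alignment = {"translation": (0, 0, 0), "scale": 1.0}
    return mesh_path, alignment

print(retrieval_augmented_reconstruction("frame.png", predicted_points=[]))
```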
ZeST can change the material of an object in an image to match a material example image. It can also perform multiple material edits in a single image and perform implicit lighting-aware edits on the rendering of a textured mesh.
Imagine Colorization leverages pre-trained diffusion models to colorize images while remaining controllable and user-interactive.