Image Inpainting
Free image inpainting AI tools for restoring and manipulating pictures, allowing artists to create and perfect visuals effortlessly.
Prompt Sliders can control and edit concepts in diffusion models. They let users adjust the strength of a concept with just 3KB of storage per embedding, making them much faster than traditional LoRA methods.
tps-inbetween can generate high-quality intermediate frames for animation line art. It effectively connects lines and fills in missing details, even during fast movements, using a method that models keypoint relationships between frames.
Generative Photomontage can combine parts of multiple AI-generated images using a brush tool. It enables users to create new appearance combinations, correct shapes and artifacts, and improve prompt alignment, outperforming existing image blending methods.
IMAGDressing-v1 can generate human try-on images from input garments. It is able to control different scenes through text and can be combined with IP-Adapter and ControlNet pose to enhance the diversity and controllability of generated images.
Minutes to Seconds can efficiently fill in missing parts of images using a Denoising Diffusion Probabilistic Model (DDPM) that is about 60 times faster than other methods. It uses a Light-Weight Diffusion Model and smart sampling techniques to keep the image quality high.
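Diffusion-based inpainting like this generally works by running the reverse denoising loop while forcing the known pixels back in at every step, so only the masked hole is synthesized. Below is a minimal NumPy sketch of that general idea (not the paper's actual Light-Weight Diffusion Model or its sampling schedule); `fake_denoiser` is a hypothetical stand-in for a trained noise-prediction network.

```python
import numpy as np

# Mask-guided DDPM inpainting sketch: known pixels are overwritten with a
# correctly-noised copy of the original image at each reverse step, so the
# model only ever "fills in" the hole.

T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def noise_to_level(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) for the known region."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * rng.standard_normal(x0.shape)

def fake_denoiser(x_t, t):
    """Placeholder for a trained epsilon-prediction network."""
    return np.zeros_like(x_t)

def inpaint(x0, mask, rng):
    """mask == 1 marks known pixels; mask == 0 marks the hole."""
    x = rng.standard_normal(x0.shape)  # start from pure noise
    for t in reversed(range(T)):
        eps = fake_denoiser(x, t)
        # standard DDPM posterior mean for the unknown region
        mean = (x - betas[t] / np.sqrt(1 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        x = mean + (np.sqrt(betas[t]) * rng.standard_normal(x.shape) if t > 0 else 0)
        # overwrite the known region with the noised ground truth
        known = noise_to_level(x0, t - 1, rng) if t > 0 else x0
        x = mask * known + (1 - mask) * x
    return x

rng = np.random.default_rng(0)
image = np.ones((8, 8))
mask = np.ones((8, 8))
mask[2:6, 2:6] = 0  # square hole to fill
result = inpaint(image, mask, rng)
```

Speedups like the one claimed here typically come from shrinking the network and skipping steps in this loop rather than changing its structure.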
Analogist can enhance images by colorizing, deblurring, denoising, improving low-light quality, and transferring styles using a text-to-image diffusion model. It uses both visual and text prompts without needing extra training, making it a flexible few-shot tool.
An Empty Room is All We Want can remove furniture from indoor panorama images so thoroughly even Jordan Peterson would be proud. Perfect for seeing how your apartment, or one you're looking at, would look without all the clutter.
ZeST can change the material of an object in an image to match a material example image. It can also perform multiple material edits in a single image and perform implicit lighting-aware edits on the rendering of a textured mesh.
SEELE can move objects around within an image. It does so by removing the object, inpainting the occluded region, and harmonizing the repositioned object's appearance with the surrounding areas.
Uni-paint can perform image inpainting using different methods like text, strokes, and examples. It uses a pretrained Stable Diffusion model, allowing it to adapt to new images without extra training.
PGDiff can restore and colorize faces from low-quality images by using details from high-quality images. It effectively fixes issues like scratches and blurriness.
Inst-Inpaint can remove objects from images using natural language instructions, which saves time by not needing binary masks. It uses a new dataset called GQA-Inpaint, improving the quality and accuracy of image inpainting significantly.
Reference-based Image Composition with Sketch via Structure-aware Diffusion Model can edit images by filling in missing parts using a reference image and a sketch. This method improves editability and allows for detailed changes in various scenes.
LDMs (Latent Diffusion Models) are high-resolution image generators that can inpaint, generate images from text or bounding boxes, and do super-resolution.