Image Restoration
Free image restoration AI tools for repairing and enhancing photos, perfect for photographers and digital artists seeking to recover lost details.
Exploiting Diffusion Prior for Real-World Image Super-Resolution can restore high-quality images from low-resolution inputs using pre-trained text-to-image diffusion models. It allows users to balance image quality and fidelity through a controllable feature wrapping module and adapts to different image resolutions with a progressive aggregation sampling strategy.
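The paper's progressive aggregation sampling isn't reproduced here, but the core idea behind this kind of resolution-agnostic sampling, processing overlapping patches and blending them with Gaussian weights so seams vanish, can be sketched in a few lines. The function names and window parameters below are illustrative, not the paper's actual code:

```python
import numpy as np

def gaussian_window(size, sigma_frac=0.3):
    # 2D Gaussian weight that trusts patch centers more than borders
    ax = np.linspace(-1.0, 1.0, size)
    g = np.exp(-(ax ** 2) / (2.0 * sigma_frac ** 2))
    return np.outer(g, g)

def aggregate_patches(image_shape, patch_size, stride, process_patch):
    """Run `process_patch` on overlapping tiles and blend the results.

    `process_patch(y, x, size)` stands in for one denoising step on a tile;
    overlapping outputs are averaged with Gaussian weights so tile borders
    don't leave visible seams in the final image.
    """
    h, w = image_shape
    out = np.zeros((h, w))
    weight = np.zeros((h, w))
    win = gaussian_window(patch_size)
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            tile = process_patch(y, x, patch_size)
            out[y:y + patch_size, x:x + patch_size] += tile * win
            weight[y:y + patch_size, x:x + patch_size] += win
    return out / np.maximum(weight, 1e-8)
```

With a stride smaller than the patch size, every pixel is covered by several tiles, which is what lets a fixed-resolution diffusion model handle arbitrary image sizes.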
Adobe’s Magic Fixup lets you edit images with a cut-and-paste approach that cleans up the edits automatically. I can see this being super useful for generating animation frames for tools like AnimateDiff. But it’s not clear yet if or when this hits Photoshop.
HumanRefiner can improve human hand and limb quality in images! The method detects abnormal human poses and limbs and corrects both.
An Empty Room is All We Want can remove furniture from indoor panorama images so thoroughly that even Jordan Peterson would be proud. Perfect for seeing how your apartment, or one you’re looking at, would look without all the clutter.
ConsistentID can generate diverse personalized ID images from text prompts using just one reference image. It improves identity preservation with a facial prompt generator and an ID-preservation network, ensuring high quality and variety in the generated images.
MaGGIe can efficiently predict high-quality human instance mattes from coarse binary masks for both image and video input. The method is able to output all instance mattes simultaneously without exploding memory and latency, making it suitable for real-time applications.
GoodDrag can improve the stability and image quality of drag editing with diffusion models. It reduces distortions by alternating between drag and denoising operations and introduces a new dataset, Drag100, for better quality assessment.
FouriScale can generate high-resolution images of arbitrary size and aspect ratio from pre-trained diffusion models while preserving texture detail and avoiding artifacts like object repetition.
ResAdapter can generate images with any resolution and aspect ratio for diffusion models. It works with various personalized models and processes images efficiently, using only 0.5M parameters while keeping the original style.
While LCM and Turbo have unlocked near real-time image diffusion, the quality is still a bit lacking. TCD, on the other hand, manages to generate images with both clarity and intricate detail without compromising on speed.
pix2gestalt is able to estimate the shape and appearance of whole objects that are only partially visible behind occlusions.
DiffusionLight can estimate the lighting in a single input image and convert it into an HDR environment map. The technique is able to generate multiple chrome balls with varying exposures for HDR merging and can be used to seamlessly insert 3D objects into an existing photograph. Pretty cool.
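DiffusionLight's own pipeline isn't shown here, but the HDR merging step it relies on, combining several differently exposed renderings of the chrome ball into one radiance map, follows a standard recipe: divide each pixel by its exposure time and average the estimates with a weight that trusts mid-range pixels and discounts under- or over-exposed ones. This is a minimal sketch of that general idea (the function name and triangle weight are my choices, not the paper's):

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge LDR exposure brackets (pixel values in [0, 1]) into an HDR map.

    Each bracket contributes a radiance estimate image / exposure_time;
    a triangle weight suppresses clipped shadows and blown highlights.
    """
    num = np.zeros_like(np.asarray(images[0], dtype=np.float64))
    den = np.zeros_like(num)
    for im, t in zip(images, exposure_times):
        im = np.asarray(im, dtype=np.float64)
        w = 1.0 - np.abs(2.0 * im - 1.0)  # peaks at 0.5, zero at 0 and 1
        num += w * im / t
        den += w
    return num / np.maximum(den, 1e-8)
```

A pixel that saturates in the long exposure gets weight zero there, so its radiance comes entirely from the shorter exposures, which is exactly why DiffusionLight renders the chrome ball at multiple exposures before merging.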
HyperHuman is a text-to-image model that focuses on generating hyper-realistic human images from text prompts and a pose image. The results are pretty impressive and the model is able to generate images in different styles and up to a resolution of 1024x1024.
ScaleCrafter can generate ultra-high-resolution images up to 4096x4096 and videos at 2048x1152 using pre-trained diffusion models. It reduces problems like object repetition and allows for custom aspect ratios, achieving excellent texture detail.
DA-CLIP is a degradation-aware method for restoring images. Apart from inpainting, it can restore images by dehazing, deblurring, denoising, deraining, and desnowing them, as well as removing unwanted shadows and raindrops or enhancing lighting in low-light images.
PGDiff can restore and colorize faces from low-quality images by using details from high-quality images. It effectively fixes issues like scratches and blurriness.
RIP expensive low-light cameras? It’s amazing how AI can solve problems that were so far only solvable with better hardware. In this example, the novel LED model is able to denoise low-light images after being trained on only 6 pairs of images. The results are impressive, but the team is not done yet. They’re currently researching a method that works across a wide variety of scenarios trained on only 2 pairs.
LDMs are high-resolution image generators that can inpaint, generate images from text or bounding boxes, and do super-resolution.
GFPGAN can restore realistic facial details from low-quality images using a pretrained face GAN. It works well on both synthetic and real-world images, allowing for quick restoration with just one pass, unlike older methods.