Image AI Tools
Free image AI tools for generating and editing visuals and creating 3D assets for games, films, and more, to streamline your creative projects.
AniDoc can automate the colorization of line art in videos and create smooth animations from simple sketches.
FitDiT can generate realistic virtual try-on images that show how clothes fit on different body types. It keeps garment textures clear and works quickly, taking only 4.57 seconds for a single image.
ColorFlow can colorize black and white line-art and manga panels while keeping characters and objects consistent.
InvSR can upscale images in one to five steps. It delivers strong results even with a single step, making it efficient for enhancing real-world images.
Personalized Restoration is a method that can restore degraded face images while preserving the person's identity by drawing on reference images. The restored image can also be edited with text prompts, enabling modifications like changing eye color or making the person smile.
Leffa can generate person images based on reference images, allowing for precise control over appearance and pose.
TryOffAnyone can generate high-quality, laid-flat images of clothing from photos of people wearing it.
FireFlow is a FLUX-dev editing method that can perform fast image inversion and semantic editing in just 8 diffusion steps.
Factor Graph Diffusion can generate high-quality images with better prompt adherence. The method allows for controllable image creation using conditions like segmentation maps and depth maps.
MV-Adapter can generate images from multiple views while keeping them consistent across views. It enhances text-to-image models like Stable Diffusion XL, supporting both text and image inputs, and achieves high-resolution outputs at 768x768.
Anagram-MTL can generate visual anagrams that change appearance with transformations like flipping or rotating.
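Visual-anagram methods generally work by averaging the diffusion model's noise predictions across the transformed views at every denoising step, so the result stays plausible under each view. The sketch below illustrates that general idea only; `predict_noise` and the view pairs are placeholders, not Anagram-MTL's actual API.

```python
import torch

def anagram_denoise_step(x_t, t, prompts, predict_noise, views):
    """One conceptual denoising step for a two-view visual anagram.

    x_t           : current noisy latent, shape (1, C, H, W)
    prompts       : one text prompt per view
    predict_noise : hypothetical wrapper around a diffusion model's noise
                    prediction, signature (latent, t, prompt) -> tensor
    views         : list of (transform, inverse_transform) pairs
    """
    eps_estimates = []
    for (view, inv_view), prompt in zip(views, prompts):
        eps = predict_noise(view(x_t), t, prompt)   # denoise the transformed view
        eps_estimates.append(inv_view(eps))         # map the estimate back
    # Averaging the per-view estimates keeps the image valid under every view.
    return torch.stack(eps_estimates).mean(dim=0)

# Example view pair: identity and a 180-degree rotation.
identity = (lambda x: x, lambda x: x)
rot180 = (lambda x: torch.rot90(x, 2, dims=(-2, -1)),
          lambda x: torch.rot90(x, -2, dims=(-2, -1)))
```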
Negative Token Merging can improve image diversity by pushing apart similar features during the reverse diffusion process. It reduces visual similarity with copyrighted content by 34.57% and works well with Stable Diffusion as well as Flux.
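A rough way to picture "pushing apart similar features": match each token in one image's attention features to its most similar token in another image, then nudge it away from that match. The sketch below is purely illustrative and not the paper's implementation; the matching scheme and the `alpha` strength parameter are assumptions.

```python
import torch
import torch.nn.functional as F

def push_apart_tokens(feats_a, feats_b, alpha=0.1):
    """Conceptual sketch of negative token merging between two images.

    feats_a, feats_b : (num_tokens, dim) features from two images in a batch
    alpha            : repulsion strength (hypothetical parameter)
    """
    a = F.normalize(feats_a, dim=-1)
    b = F.normalize(feats_b, dim=-1)
    sim = a @ b.t()                      # pairwise cosine similarity
    idx = sim.argmax(dim=-1)             # closest token in feats_b per token in feats_a
    matched = feats_b[idx]
    # Step each token away from its closest counterpart to reduce
    # shared appearance between the two generations.
    return feats_a - alpha * matched
```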
FlowEdit can edit images using only text prompts with Flux and Stable Diffusion 3.
Ever tried to inpaint smaller objects and details into an image? It can be kind of hit or miss. SOEDiff has been trained specifically for these cases and does a pretty good job at them.
MegaFusion can extend existing diffusion models to high-resolution image generation. It produces images up to 2048x2048 at only 40% of the original computational cost by improving the denoising process across resolutions.
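One common way to get higher resolution at reduced cost is a coarse-to-fine schedule: spend most denoising steps at low resolution, then upsample and refine briefly at the target size. The sketch below shows that general pattern with a hypothetical `denoise` function; it is not MegaFusion's actual pipeline.

```python
import torch
import torch.nn.functional as F

def coarse_to_fine_generate(denoise, latent_lo, steps_lo=30, steps_hi=10, scale=2):
    """Conceptual coarse-to-fine generation sketch.

    denoise   : hypothetical function (latent, num_steps) -> latent
    latent_lo : low-resolution starting noise, e.g. shape (1, 4, 64, 64)
    """
    latent_lo = denoise(latent_lo, steps_lo)                      # cheap base pass
    latent_hi = F.interpolate(latent_lo, scale_factor=scale,
                              mode="bilinear", align_corners=False)
    return denoise(latent_hi, steps_hi)                           # short high-res refinement
```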
DreamMix is an inpainting method based on the Fooocus model that can add objects from reference images and change their features using text.
Omegance can control detail levels in diffusion-based synthesis using a single parameter, ω. It allows for precise granularity control in generated outputs and enables specific adjustments through spatial masks and denoising schedules.
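Conceptually, a single scalar applied to the predicted noise at each denoising step is enough to shift how much fine texture survives, and a spatial mask turns that into per-region control. The sketch below illustrates that idea only; `step_fn`, the parameter names, and the masking scheme are assumptions, not Omegance's code.

```python
import torch

def omega_scaled_step(x_t, eps_pred, step_fn, omega=1.0, mask=None):
    """Conceptual sketch of single-parameter detail control.

    x_t      : current noisy latent
    eps_pred : the model's predicted noise for this step
    step_fn  : hypothetical scheduler update, (latent, noise) -> latent
    omega    : global granularity scalar; adjusting it shifts the level of detail
    mask     : optional (1, 1, H, W) tensor of per-pixel omega multipliers
    """
    scale = omega if mask is None else omega * mask
    return step_fn(x_t, eps_pred * scale)   # scale the noise before the update
```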
FlipSketch can generate sketch animations from static drawings by allowing users to describe the desired motion. It uses motion priors from text-to-video diffusion models to create smooth animations while keeping the original sketch’s look.
StyleCodes can encode the style of an image into a 20-symbol base64 code for easy sharing and use in image generation. It allows users to create style-reference codes (srefs) from their own images, helping to control styles in diffusion models with high quality.
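For intuition: 20 base64 symbols hold exactly 15 bytes, so a continuous style embedding has to be heavily compressed and quantized to fit in such a code. The toy example below packs a quantized slice of an embedding into a 20-character string; it is purely illustrative and not StyleCodes' actual encoder.

```python
import base64
import numpy as np

def embedding_to_stylecode(style_embedding, num_bytes=15):
    """Toy sketch: quantize part of a style embedding and base64-encode it.

    15 quantized bytes -> 20 base64 symbols, matching the code length above.
    """
    v = np.asarray(style_embedding, dtype=np.float32)[:num_bytes]  # crude dimensionality cut
    lo, hi = v.min(), v.max()
    q = np.round((v - lo) / (hi - lo + 1e-8) * 255).astype(np.uint8)  # 8-bit quantization
    return base64.b64encode(q.tobytes()).decode("ascii")

print(embedding_to_stylecode(np.random.randn(768)))  # prints a 20-character code
```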
SGEdit can add, remove, replace, and adjust objects in images while keeping the quality of the image consistent.