Image AI Tools
Free image AI tools for generating and editing visuals and creating 3D assets for games, films, and more, helping you streamline your creative projects.
MagicColor can automatically colorize multi-instance sketches, using reference images to keep colors consistent across objects.
TRG can estimate 6DoF head translations and rotations by leveraging the synergy between facial geometry and head pose.
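As a small illustration of what a 6DoF head pose means, the sketch below packs three rotation angles and a three-component translation into one 4x4 transform. The angle and offset values are arbitrary examples, unrelated to TRG's actual outputs or code.

```python
# Illustrative only: a 6DoF pose = 3 rotation angles + 3 translation components,
# combined into a single 4x4 transform. Values are made-up examples.
import numpy as np
from scipy.spatial.transform import Rotation

yaw, pitch, roll = 15.0, -5.0, 2.0              # degrees
translation = np.array([0.03, -0.01, 0.55])     # e.g. head centre in camera space, metres

pose = np.eye(4)
pose[:3, :3] = Rotation.from_euler("yxz", [yaw, pitch, roll], degrees=True).as_matrix()
pose[:3, 3] = translation
print(pose)
```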
Generative Photography can generate consistent images from text with an understanding of camera physics. The method can control camera settings such as bokeh and color temperature to render the same scene with different photographic effects.
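A minimal sketch of the underlying idea of treating camera settings as generation controls: here the settings are simply folded into the text prompt of an off-the-shelf diffusers pipeline. The base model, prompt wording, and settings list are assumptions for illustration, not Generative Photography's actual interface.

```python
# Illustrative only: encode camera settings as prompt text for a generic
# text-to-image pipeline; this is NOT Generative Photography's real API.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",      # placeholder base model
    torch_dtype=torch.float16,
).to("cuda")

scene = "a portrait of a violinist on a rainy street at dusk"
settings = [
    {"aperture": "f/1.8 shallow depth of field, strong bokeh", "wb": "3200K warm tungsten"},
    {"aperture": "f/8 deep depth of field", "wb": "6500K neutral daylight"},
]

for s in settings:
    prompt = f"{scene}, shot at {s['aperture']}, white balance {s['wb']}"
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(f"{s['wb'].split()[0]}_{s['aperture'].split()[0].replace('/', '')}.png")
```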
Dream Engine can generate images by combining different concepts from reference images.
ImageRAG can find relevant images based on a text prompt to improve image generation. It helps create rare and detailed concepts without needing special training, making it useful for different image models.
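The retrieval step can be sketched with plain CLIP embeddings: embed the prompt and a local image gallery, then keep the closest images as references for the generator. The gallery path, model choice, and top-k value are assumptions; this mirrors the retrieval idea rather than ImageRAG's actual code.

```python
# Hedged sketch of retrieval-augmented image generation with CLIP similarity.
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompt = "a shoebill stork standing in a misty swamp"       # rare concept
gallery = sorted(Path("reference_gallery").glob("*.jpg"))   # hypothetical local folder
images = [Image.open(p).convert("RGB") for p in gallery]

with torch.no_grad():
    text_emb = model.get_text_features(**processor(text=[prompt], return_tensors="pt", padding=True))
    img_emb = model.get_image_features(**processor(images=images, return_tensors="pt"))

# Cosine similarity between the prompt and every gallery image.
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
scores = (img_emb @ text_emb.T).squeeze(-1)

top_k = scores.topk(k=min(3, len(images))).indices.tolist()
references = [gallery[i] for i in top_k]    # pass these to the image generator as references
print("retrieved references:", references)
```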
Distill Any Depth can generate depth maps from images.
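A quick way to try depth estimation from a single image is the generic Hugging Face depth-estimation pipeline, sketched below. It loads a default depth model rather than Distill Any Depth's own weights, so treat the model choice and output scaling as assumptions.

```python
# Minimal sketch: single-image depth estimation via the generic HF pipeline.
from transformers import pipeline

depth_estimator = pipeline("depth-estimation")   # downloads a default depth model
result = depth_estimator("photo.jpg")            # path to any RGB image

result["depth"].save("photo_depth.png")          # PIL image of the relative depth map
print(result["predicted_depth"].shape)           # raw tensor for further processing
```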
GHOST 2.0 is a deepfake method that can transfer heads from one image to another while keeping the skin color and structure intact.
KV-Edit can edit images while keeping the background consistent. It allows users to add, remove, or change objects without needing extra training, ensuring high image quality.
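The sketch below captures only the background-preservation goal: after an editing model produces its output, pixels outside the user's mask are copied back from the original so the background stays untouched. This mimics KV-Edit's outcome, not its attention key-value mechanism, and the file names are assumptions.

```python
# Sketch of background preservation by compositing, not KV-Edit's actual method.
from PIL import Image

original = Image.open("room.png").convert("RGB")
edited = Image.open("room_edited.png").convert("RGB")   # output of some editing model
mask = Image.open("edit_mask.png").convert("L")         # white = region allowed to change

# Keep edited pixels inside the mask, original pixels everywhere else.
result = Image.composite(edited, original, mask)
result.save("room_background_preserved.png")
```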
Any2AnyTryon can generate high-quality virtual try-on results by transferring clothes onto images as well as reconstructing garments from real-world images.
UniCon can handle different image generation tasks using a single framework. It adapts a pretrained image diffusion model with only about 15% extra parameters and supports most base ControlNet transformations.
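A toy sketch of the "small adapter on a frozen base" idea: freeze a pretrained backbone, inject a lightweight trainable branch into its intermediate features, and measure how many parameters the adapter adds. Layer sizes are made up, so the printed ratio is arbitrary; UniCon's real architecture differs.

```python
# Toy adapter-on-frozen-backbone sketch, not UniCon's architecture.
import torch
import torch.nn as nn

base = nn.Sequential(                 # stand-in for a pretrained backbone
    nn.Conv2d(3, 256, 3, padding=1),
    nn.Conv2d(256, 256, 3, padding=1),
    nn.Conv2d(256, 3, 3, padding=1),
)
for p in base.parameters():
    p.requires_grad_(False)           # pretrained weights stay frozen

adapter = nn.Sequential(nn.Conv2d(256, 64, 1), nn.ReLU(), nn.Conv2d(64, 256, 1))

x = torch.randn(1, 3, 64, 64)
h = base[1](base[0](x))
h = h + adapter(h)                    # adapter output injected as a residual
y = base[2](h)

base_params = sum(p.numel() for p in base.parameters())
extra = sum(p.numel() for p in adapter.parameters())
print(f"adapter adds {100 * extra / base_params:.1f}% extra parameters")  # depends on the made-up sizes
```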
MIGE can generate images from text prompts and reference images and edit existing images based on instructions.
InstantSwap can swap concepts in images from a reference image while keeping the foreground and background consistent. It uses automated bounding box extraction and cross-attention to make the process more efficient by reducing unnecessary calculations.
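The efficiency idea can be sketched as: run the expensive swap only inside the object's bounding box and paste the result back, leaving the rest of the image untouched. The `swap_concept` function below is a hypothetical stand-in for the model call, and the bounding box and file names are assumptions.

```python
# Sketch of bbox-restricted processing; swap_concept is a dummy placeholder.
from PIL import Image, ImageOps


def swap_concept(crop: Image.Image) -> Image.Image:
    """Hypothetical placeholder for the model that replaces the concept."""
    return ImageOps.posterize(crop, 3)        # dummy visible change


source = Image.open("scene.png").convert("RGB")
bbox = (120, 80, 360, 300)                    # (left, top, right, bottom), e.g. from a detector

crop = source.crop(bbox)                      # only this region is processed
swapped = swap_concept(crop)

result = source.copy()
result.paste(swapped, bbox[:2])               # paste back at the bbox's top-left corner
result.save("scene_swapped.png")
```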
MaterialFusion can transfer materials onto objects in images while letting users control how much material is applied.
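A minimal sketch of user-controllable material strength: linearly blend the original object with a fully material-transferred render. The file names are assumptions, and the simple blend stands in for MaterialFusion's learned control.

```python
# Sketch of a strength slider via linear blending, not MaterialFusion's method.
from PIL import Image

original = Image.open("chair.png").convert("RGB")
materialized = Image.open("chair_wood.png").convert("RGB")   # full-strength material transfer

for strength in (0.25, 0.5, 1.0):
    Image.blend(original, materialized, strength).save(f"chair_wood_{int(strength * 100)}.png")
```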
ControlFace can edit face images with precise control over pose, expression, and lighting. It uses a dual-branch U-Net architecture and is trained on facial videos to ensure high-quality results while keeping the person’s identity intact.
Stable Flow can edit images by adding, removing, or changing objects.
FramePainter can edit images using simple sketches and video diffusion methods. It allows for realistic changes, like altering reflections or transforming objects, while needing less training data and performing well in different situations.
One-Prompt-One-Story can generate identity-consistent images across multiple scenes by concatenating all scene descriptions into a single prompt for text-to-image models.
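The single-prompt idea can be illustrated with a few lines: one identity description plus every scene description are joined into one long prompt, so all frames are generated from the same input. The wording and downstream pipeline call are assumptions, not the method's exact prompt format.

```python
# Tiny sketch of consolidating scene prompts into a single input.
identity = "a small red fox with a blue scarf"
scenes = [
    "reading a book under a lamp",
    "walking through a snowy forest",
    "sleeping beside a campfire",
]

single_prompt = identity + "; " + "; ".join(scenes)
print(single_prompt)
# Feed single_prompt to any text-to-image model, then pick the frame that
# corresponds to each scene segment.
```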
X-Dyna can animate a single human image by transferring facial expressions and body movements from a video.
ReF-LDM can restore low-quality face images by using multiple high-quality reference images.
Chat2SVG can generate and edit SVG vector graphics from text prompts. It combines Large Language Models and image diffusion models to create detailed SVG templates and allows users to refine them with simple language instructions.
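The SVG-template step can be sketched as assembling a list of primitive shapes into a plain .svg file; in Chat2SVG an LLM would propose and refine those shapes, whereas the hard-coded list below is a stand-in for that call, and all shape values are illustrative assumptions.

```python
# Sketch of writing an SVG template; the LLM step is stubbed out with fixed shapes.
shapes = [
    '<circle cx="100" cy="100" r="60" fill="#f4c542"/>',               # sun
    '<rect x="40" y="160" width="320" height="80" fill="#3f7f3f"/>',   # ground
    '<polygon points="200,60 260,160 140,160" fill="#b04632"/>',       # roof
]

svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="400" height="240">\n'
    + "\n".join(f"  {s}" for s in shapes)
    + "\n</svg>\n"
)

with open("scene.svg", "w", encoding="utf-8") as f:
    f.write(svg)
print("wrote scene.svg; an editing instruction would regenerate or patch the shape list")
```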