Image AI Tools
Free image AI tools for generating and editing visuals and creating 3D assets for games, films, and other creative projects.
MagicFace can generate high-quality images of people in any style without needing training. It uses special attention methods for precise attribute alignment and feature injection, working for both single and multi-concept customization.
Generative Photomontage can combine parts of multiple AI-generated images using a brush tool. It lets users create new appearance combinations, correct shapes and artifacts, and improve prompt alignment, outperforming existing image blending methods.
Filtered Guided Diffusion shows that image-to-image translation and editing doesn’t necessarily require additional training. FGD simply applies a filter to the input of each diffusion step, adapted based on the output of the previous step, which makes the approach easy to implement.
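A rough sketch of the core idea, not the paper's actual filters: use a low-pass filter (here a toy Gaussian blur) to pull the current latent's coarse structure toward a reference image while leaving the denoiser's fine detail alone. The function names and the `sigma`/`strength` knobs are hypothetical.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian blur -- a stand-in for FGD's guidance filter.
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    k /= k.sum()
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return img

def filtered_guidance_step(x_t, reference, sigma=1.0, strength=0.5):
    # Nudge the low frequencies of the current latent x_t toward the
    # reference's low frequencies; high frequencies stay untouched.
    return x_t + strength * (gaussian_blur(reference, sigma) - gaussian_blur(x_t, sigma))
```

In an actual sampler this correction would be applied to the diffusion input at each step, with the filter strength adapted over the trajectory.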
Matryoshka Diffusion Models can generate high-quality images and videos using a NestedUNet architecture that denoises inputs at different resolutions. This method allows for strong performance at resolutions up to 1024x1024 pixels and supports effective training without needing specific examples.
Sprite-Decompose can break down animated graphics into sprites using videos and box outlines.
IPAdapter-Instruct can efficiently combine natural-image conditioning with “Instruct” prompts! It enables users to switch between various interpretations of the same image, such as style transfer and object extraction.
Lumina-mGPT can create photorealistic images from text and handle different visual and language tasks! It uses a special transformer model, making it possible to control image generation, do segmentation, estimate depth, and answer visual questions in multiple steps.
VAR-CLIP creates detailed fantasy images that match text descriptions closely by combining Visual Auto-Regressive techniques with CLIP! It uses text embeddings to guide image creation, ensuring strong results by training on a large image-text dataset.
SEG improves image generation for SDXL by smoothing the self-attention energy landscape! This boosts quality without needing a guidance scale, using a query-blurring method that adjusts attention weights, leading to better results with fewer drawbacks.
DreamMover can generate high-quality intermediate images and short videos from image pairs with large motion. It uses a flow estimator based on diffusion models to keep details and ensure consistency between frames and input images.
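The warp-and-blend skeleton behind frame interpolation can be sketched in a few lines. DreamMover's actual pipeline estimates the flow with a diffusion-based estimator and fuses information in latent space; this toy version just assumes the flow is given and uses nearest-neighbour warping:

```python
import numpy as np

def warp(img, flow, t):
    # Backward-warp img by t * flow using nearest-neighbour sampling.
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys - t * flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - t * flow[..., 0]).astype(int), 0, w - 1)
    return img[src_y, src_x]

def midpoint_frame(f0, f1, flow01, t=0.5):
    # Warp each endpoint frame toward time t along the flow from f0 to f1,
    # then blend -- the basic skeleton of flow-based interpolation.
    a = warp(f0, flow01, t)        # f0 moved forward to time t
    b = warp(f1, flow01, t - 1.0)  # f1 moved backward to time t
    return (1 - t) * a + t * b
```

With a uniform two-pixel rightward flow, a bright pixel at column 2 in the first frame and column 4 in the second lands at column 3 in the midpoint frame.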
Magic Clothing can generate customized characters wearing specific garments from diverse text prompts, preserving the details of the target garments while maintaining faithfulness to the text prompts.
ViPer can personalize image generation by capturing individual user preferences through a one-time commenting process on a selection of images. It utilizes these preferences to guide a text-to-image model, resulting in generated images that align closely with users’ visual tastes.
Adobe’s Magic Fixup lets you edit images with a cut-and-paste approach that fixes edits automatically. Can see this being super useful for generating animation frames for tools like AnimateDiff. But it’s not clear yet if or when this hits Photoshop.
Artist stylizes images based on text prompts, preserving the original content while producing high aesthetic quality results. No finetuning, no ControlNets, it just works with your pretrained StableDiffusion model.
Cinemo can generate consistent and controllable image animations from static images. It achieves enhanced temporal consistency and smoothness through strategies like learning motion residuals and employing noise refinement techniques, allowing for precise user control over motion intensity.
MasterWeaver can generate photo-realistic images from a single reference image while keeping the person’s identity and allowing for easy edits. It uses an encoder to capture identity features and a unique editing direction loss to improve text control, enabling changes to clothing, accessories, and facial features.
IMAGDressing-v1 can generate human try-on images from input garments. It is able to control different scenes through text and can be combined with IP-Adapter and ControlNet pose to enhance the diversity and controllability of generated images.
AccDiffusion can generate high-resolution images with less object repetition! Something Stable Diffusion has been plagued by since its infancy.
ColorPeel can generate objects in images with specific colors and shapes.
HumanRefiner can improve human hand and limb quality in images! The method is able to detect and correct abnormal human poses.