Image AI Tools
Free image AI tools for generating and editing visuals and for creating 3D assets for games, films, and other creative projects.
SEG improves image generation for SDXL by smoothing the self-attention energy landscape! This boosts quality without needing a guidance scale, using a query-blurring method that adjusts attention weights, leading to better results with fewer drawbacks.
DreamMover can generate high-quality intermediate images and short videos from image pairs with large motion. It uses a flow estimator based on diffusion models to keep details and ensure consistency between frames and input images.
Magic Clothing can generate customized characters wearing specific garments from diverse text prompts while preserving the details of the target garments and maintaining faithfulness to the text prompts.
ViPer can personalize image generation by capturing individual user preferences through a one-time commenting process on a selection of images. It utilizes these preferences to guide a text-to-image model, resulting in generated images that align closely with users’ visual tastes.
Adobe’s Magic Fixup lets you edit images with a cut-and-paste approach that fixes edits automatically. Can see this being super useful for generating animation frames for tools like AnimateDiff. But it’s not clear yet if or when this hits Photoshop.
Artist stylizes images based on text prompts, preserving the original content while producing high aesthetic quality results. No finetuning, no ControlNets, it just works with your pretrained StableDiffusion model.
Cinemo can generate consistent and controllable image animations from static images. It achieves enhanced temporal consistency and smoothness through strategies like learning motion residuals and employing noise refinement techniques, allowing for precise user control over motion intensity.
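The motion-residual idea can be sketched in a couple of lines: instead of generating every frame from scratch, predict per-frame residuals that get added to the static input image, which keeps appearance anchored and lets a single scale factor control motion intensity. A toy NumPy sketch under that assumption (not Cinemo's actual code; names are hypothetical):

```python
import numpy as np

def apply_intensity(residuals, strength):
    # Scaling the predicted residuals gives coarse user control
    # over how strong the motion is.
    return [strength * r for r in residuals]

def compose_frames(first_frame, residuals):
    # Every output frame is the static input plus a learned motion
    # residual, so appearance stays consistent across the animation.
    return np.stack([first_frame + r for r in residuals])
```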
MasterWeaver can generate photo-realistic images from a single reference image while keeping the person’s identity and allowing for easy edits. It uses an encoder to capture identity features and a unique editing direction loss to improve text control, enabling changes to clothing, accessories, and facial features.
IMAGDressing-v1 can generate human try-on images from input garments. It is able to control different scenes through text and can be combined with IP-Adapter and ControlNet pose to enhance the diversity and controllability of generated images.
AccDiffusion can generate high-resolution images with less object repetition! Something Stable Diffusion has been plagued by since its infancy.
ColorPeel can generate objects in images with specific colors and shapes.
HumanRefiner can improve human hand and limb quality in images! The method detects and corrects issues related to abnormal human poses and limbs.
Minutes to Seconds can efficiently fill in missing parts of images using a Denoising Diffusion Probabilistic Model (DDPM) that is about 60 times faster than other methods. It uses a Light-Weight Diffusion Model and smart sampling techniques to keep the image quality high.
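For context, diffusion-based inpainting usually works by denoising the whole canvas each step, then overwriting the known region with a re-noised copy of the original image so only the hole is actually generated. A minimal sketch of one such masked reverse step (the generic recipe, not necessarily this paper's exact sampler; `denoise_fn` is a stand-in for the model):

```python
import numpy as np

def inpaint_step(x_t, known_image, mask, denoise_fn, noise_level):
    # mask == 1 marks the missing region to be generated.
    # The model denoises everything, but the known pixels are replaced
    # with a noised copy of the original so they are never invented.
    x_denoised = denoise_fn(x_t)
    known_noisy = known_image + noise_level * np.random.randn(*known_image.shape)
    return mask * x_denoised + (1 - mask) * known_noisy
```

Speedups like the one claimed here typically come from a lighter denoiser and fewer, smarter sampling steps around this same loop.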
PartCraft can generate customized and photorealistic virtual creatures by mixing visual parts from existing images. This tool allows users to create unique hybrids and make detailed changes, which is useful for digital asset creation and studying biodiversity.
PartGLEE can locate and identify objects and their parts in images. The method uses a unified framework that enables detection, segmentation, and grounding at any granularity.
MIGC++ is a plug-and-play controller that gives Stable Diffusion precise position control while ensuring the correctness of attributes like color, shape, material, texture, and style. It can also control the number of instances and improve interaction between instances.
Motion Prompting can control video generation using motion paths. It allows for camera control, motion transfer, and drag-based image editing, producing realistic movements and physics.
StyleShot can mimic and transfer various styles from a reference image, such as 3D, flat, abstract, or even fine-grained styles, without tuning.
AnyControl is a new text-to-image guidance method that can generate images from diverse control signals, such as color, shape, texture, and layout.
MIRReS can reconstruct and optimize the explicit geometry, material, and lighting of objects from multi-view images. The resulting 3D models can be edited and relit in modern graphics engines or CAD software.