Image AI Tools
Free image AI tools for generating and editing visuals, creating 3D assets for games and films, and streamlining your creative projects.
Pair Customization can customize text-to-image models by learning style differences from a single image pair. It separates style and content into different weight spaces, allowing for effective style application without overfitting to specific images.
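The split into separate weight spaces can be pictured as two independent low-rank (LoRA-style) deltas on the same base weights, with only the style delta kept at inference time. This is a minimal toy sketch of that idea; the class, ranks, and usage below are illustrative assumptions, not Pair Customization's actual code.

```python
# Toy sketch: two independent low-rank deltas on one base layer, one for
# "content" (the specific image pair) and one for "style" (the difference
# between the pair). At inference only the style delta is applied.
import torch
import torch.nn as nn

class DualLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        out_f, in_f = base.weight.shape
        # Content delta: fits the specific source image, dropped at inference.
        self.content_down = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.content_up = nn.Parameter(torch.zeros(out_f, rank))
        # Style delta: captures the style difference learned from the pair.
        self.style_down = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.style_up = nn.Parameter(torch.zeros(out_f, rank))

    def forward(self, x, use_content=True, style_scale=1.0):
        out = self.base(x)
        if use_content:
            out = out + x @ self.content_down.T @ self.content_up.T
        out = out + style_scale * (x @ self.style_down.T @ self.style_up.T)
        return out

layer = DualLoRALinear(nn.Linear(64, 64))
x = torch.randn(1, 64)
# Inference: drop the content delta so only the learned style is applied.
y = layer(x, use_content=False, style_scale=1.0)
print(y.shape)
```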
Anywhere can place any object from an input image into a wide range of suitable locations in an output image. Perfect for product placement.
ConsistentID can generate diverse personalized ID images from text prompts using just one reference image. It improves identity preservation with a facial prompt generator and an ID-preservation network, ensuring high quality and variety in the generated images.
MaGGIe can efficiently predict high-quality human instance mattes from coarse binary masks for both image and video input. The method outputs all instance mattes simultaneously without blowing up memory or latency, making it suitable for real-time applications.
Similar to ConsistentID, PuLID is a tuning-free ID customization method for text-to-image generation. This one can also be used to edit images generated by diffusion models by adding or changing the text prompt.
CharacterFactory can generate endless characters that look the same across different images and videos. It uses GANs and word embeddings from celebrity names to ensure characters stay consistent, making it easy to integrate with other models.
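The consistency trick boils down to sampling one identity embedding that matches the distribution of celebrity-name word embeddings and then reusing that single embedding across every prompt. The tensors and tiny generator below are random stand-ins for CharacterFactory's trained embedding generator, just to show the flow.

```python
# Toy illustration: one pseudo-identity embedding, drawn near the statistics
# of celebrity-name embeddings, is reused for every prompt so the character
# stays the same. Sizes and modules here are illustrative stand-ins.
import torch
import torch.nn as nn

celeb_embeddings = torch.randn(500, 768)  # stand-in for celeb name embeddings
mean, std = celeb_embeddings.mean(0), celeb_embeddings.std(0)

generator = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 768))

# One latent -> one identity embedding, matched to the celeb statistics.
z = torch.randn(1, 64)
identity = generator(z) * std + mean

prompts = ["<id> riding a bike", "<id> reading in a cafe"]
for p in prompts:
    # The same identity embedding would stand in for the <id> placeholder
    # token in the text encoder for every prompt.
    print(p, identity.shape)
```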
Parts2Whole can generate customized human portraits from multiple reference images, including pose images and various aspects of human appearance. The method is able to generate human images conditioned on selected parts from different humans as control conditions, allowing you to create images with specific combinations of facial features, hair, clothes, etc.
TF-GPH can stylistically blend images with disparate visual elements into a single coherent picture!
CustomDiffusion360 brings camera viewpoint control to text-to-image models. Only caveat: it requires a 360-degree multi-view dataset of around 50 images per object to work.
StyleBooth is a unified style editing method supporting text-based, exemplar-based and compositional style editing. So basically, you can take an image and change its style by either giving it a text prompt or an example image.
IntrinsicAnything can recover object materials from arbitrary images and enables single-view image relighting.
VQ-Diffusion can generate high-quality images from text prompts using a vector quantized variational autoencoder and a conditional denoising diffusion model. It is up to fifteen times faster than traditional methods and handles complex scenes effectively.
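If you want to try it, older versions of Hugging Face diffusers ship a VQDiffusionPipeline wrapping the Microsoft checkpoint; that your installed diffusers version still exposes this pipeline is an assumption here.

```python
from diffusers import VQDiffusionPipeline

# Loads the VQ-VAE (image tokenizer) plus the discrete diffusion transformer.
pipe = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq")
pipe = pipe.to("cuda")

# The prompt conditions the denoising of discrete image tokens; the VQ-VAE
# decoder then turns the final tokens back into pixels.
image = pipe("a teddy bear playing in the pool", num_inference_steps=100).images[0]
image.save("teddy_bear.png")
```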
MOWA is a multiple-in-one image warping model that handles tasks such as rectangling panoramic images, correcting rolling-shutter images, rotating tilted images, rectifying fisheye images, and image retargeting.
ControlNet++ can improve image generation by ensuring that generated images match the given controls, like segmentation masks and depth maps. It shows better performance than its predecessor, ControlNet, with improvements of 7.9% in mIoU, 13.4% in SSIM, and 7.6% in RMSE.
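ControlNet++ keeps the familiar ControlNet conditioning workflow, so the standard diffusers interface below shows the general pattern; treating a ControlNet++ checkpoint as a drop-in for the stock segmentation ControlNet (and the placeholder file path) are assumptions.

```python
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Standard segmentation-conditioned ControlNet as a stand-in checkpoint.
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-seg")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet
).to("cuda")

# The segmentation map is the control signal; ControlNet++'s consistency
# training aims to make the generated image follow it more faithfully.
seg_map = load_image("path/to/segmentation_map.png")
image = pipe("a cozy living room", image=seg_map, num_inference_steps=30).images[0]
image.save("controlled_output.png")
```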
PanFusion can generate 360-degree panorama images from a text prompt. The model is able to integrate additional constraints like room layout for customized panorama outputs.
MindBridge can reconstruct images from fMRI brain signals using a single model that works for different people. It achieves high accuracy even with limited data, making it effective for new subjects.
GoodDrag can improve the stability and image quality of drag editing with diffusion models. It reduces distortions by alternating between drag and denoising operations and introduces a new dataset, Drag100, for better quality assessment.
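The core schedule is easy to picture: interleave a few drag (motion-supervision) updates with a denoising pass instead of saving all the denoising for the end. The latent, the drag loss, and the "denoise" step below are toy stand-ins for illustration, not GoodDrag's actual diffusion code.

```python
# Schematic of the alternating drag/denoise schedule: accumulated distortion
# from drag updates gets cleaned up early rather than all at once at the end.
import torch

latent = torch.randn(1, 4, 64, 64, requires_grad=True)
target = torch.randn(1, 4, 64, 64)
opt = torch.optim.Adam([latent], lr=1e-2)

def drag_step():
    # Toy motion-supervision objective pulling a handle region toward a target.
    loss = ((latent[..., :32, :32] - target[..., :32, :32]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

def denoise_step():
    # Stand-in for one diffusion denoising step that removes artifacts.
    with torch.no_grad():
        latent.mul_(0.98)

# Alternate: a few drag updates, then a denoising pass, and repeat.
for _ in range(10):
    for _ in range(3):
        drag_step()
    denoise_step()
```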
ZeST can change the material of an object in an image to match a material example image. It can also apply multiple material edits in a single image and perform implicit lighting-aware edits on the rendering of a textured mesh.
Imagine Colorization leverages pre-trained diffusion models to colorize images while remaining controllable and user-interactive.
MuDI can generate high-quality images of multiple subjects without mixing their identities. It has a 2x higher success rate for personalizing images and is preferred by over 70% of users in evaluations.