AI Art Weekly #22

Hello there my fellow dreamers and welcome to Issue #22 of AI Art Weekly! 👋

Before we dive into this week’s issue, I’m honoured to announce that I’m part of the “ART IS” collection on Montage, for which I’ve created my first “serious” art piece called Cerebral Ecstasy (check out the Mint tab). I used several tools: Midjourney for the initial generation, DALL·E 2 for in- and outpainting, Affinity for post-processing, and ffmpeg to stitch everything together. It was a fun challenge.

But enough self-promotion. This week’s highlights are:

  • Community Cover Art Challenge
  • MultiDiffusion and RRR, which give more control over image composition with multiple prompts
  • Interview with artist @jeffjag
  • Latent Blending
  • Automatic1111 OpenPose-Editor plugin

Cover Challenge 🎨

The challenge for this week’s cover was “img2img”, which received 46 submissions from 31 artists. It’s so cool to see the before and after with img2img, so I’ve added the input images to the challenge gallery. I’m a bit sad this challenge didn’t get more submissions; I would’ve loved to see more ControlNet stuff, but oh well 🤷‍♂️. The community voted on the final ranking:

  1. @spiritform 🏆
  2. @Lynncorrigible 🥈
  3. @anainsomnia 🥉
  4. @NathanBoey ❤️

Congratulations to @spiritform for winning the cover art challenge a second time 🥳🎉🎊 and a big thank you to everyone who found the time to contribute!

For the next challenge we’re going to switch things up a bit: instead of artists submitting pieces individually, we’ll (hopefully) be working on one single piece together.

I’ve created a simple algorithm which lets me stitch together a practically endless canvas. All you gotta do to participate is submit a prompt here (there’s also an example). At the end of the challenge, I’ll collect all submitted prompts and let the algorithm stitch them together into one final cover created by the community.

I’m looking forward to all of your prompts 🙏


Reflection: News & Gems

MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation

ControlNet has been a huge leap for Stable Diffusion, and the art that has been created with this newfound freedom of control is amazing. But one thing that isn’t possible with ControlNet (yet) is controlling composition with multiple text prompts. MultiDiffusion by @WeizmannScience seems to solve that. The framework can generate wide panoramas and use segmentation map masks to guide diffusion with existing pretrained SD models. There is one catch for now though: either the width or the height of a generated image should be limited to 512 pixels, otherwise generation time triples for each additional 512 pixels. There is a panorama demo on HuggingFace in case you want to try it out.
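If you’d rather run it locally than wait for the demo, MultiDiffusion’s panorama mode also appears to have made its way into the diffusers library as StableDiffusionPanoramaPipeline. Here’s a minimal sketch, assuming a recent diffusers install and a CUDA GPU (model choice and prompt are just examples):

```python
import torch
from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler

# MultiDiffusion-style panorama generation on top of a pretrained SD model
model_id = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

# Keep one dimension at 512 and stretch the other, as mentioned above.
image = pipe(
    "a dreamy matte painting of a mountain range at sunset",
    height=512,
    width=2048,
).images[0]
image.save("panorama.png")
```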

MultiDiffusion examples

Reduce, Reuse, Recycle

But as usual in AI paper land, one solution to a problem doesn’t come alone. RRR is a set of new samplers that enable compositional generation. There are two Jupyter notebooks available in their GitHub repo in case you want to try them out.

Image Tapestry RRR example

SeMani

More control over composition is definitely something I’m looking forward to, but the ability to edit images is just as important. Inpainting is the state-of-the-art technique for this at the moment. But what if you didn’t have to draw or use masks to edit an image? Well, SeMani, similar to Pix2Pix-Zero, introduces the ability to edit an existing image with a simple text prompt.

SeMani example

Tuning Encoder

There is a new fine-tuning method in town called Tuning Encoder, which can learn new concepts and styles from a single image with just 5-15 tuning steps, in only a few seconds. The drawback? It increases VRAM requirements, and it only works with target classes that are well represented in the training dataset.

Tuning-Encoder comparison with Dreambooth

NerfDiff

And last but not least, let’s take a look at NerfDiff, one of three papers this week (alongside PC2 and RealFusion) that aim to produce a 3D shape from a single image. This could make generating unlimited 3D assets a breeze. Just imagine using a diffusion model fine-tuned on your preferred art style to generate 2D representations of your assets and then being able to bring them into 3D space. I feel like I say this every week, but I can’t wait to find the time to dive into 3D. So much potential there.

NerfDiff example


Imagination: Interview & Inspiration

This week we talk to traditional and AI artist @jeffjag. Jeff recently caught my attention with his new Builders of Titania project as well as his RentHedz collection, which was published using NiftyKit, a no-code solution that lets you manage your own collections outside of marketplaces and which I’m considering for an upcoming project. Let’s jump in.

[AI Art Weekly] Jeff, what’s your background and how did you get into AI art?

As a life-long traditional and digital artist, I have extensive experience in realism and have been trained in various mediums such as pencil, pen, chalk, watercolor, pastel, and acrylic. In 2001, I completed my BA in Media Arts and Animation and pursued a career in graphic design, motion design, and 3D animation.

Recently, in 2021, I ventured into AI art through VQGAN+CLIP on Colab and later explored tools like Wombo, MJ, and SD. My current artistic process involves integrating various mediums and tools to create unique pieces.

“gm” by @jeffjag

[AI Art Weekly] Do you have a specific project you’re currently working on? What is it?

I currently have four ongoing projects, each at different stages of development, with new releases soon to come. These projects are FiatHedz Paper & Gold, Paint Drop, RentHedz Renovations, Nightmare Fuel X, and my latest project, Builders of Titania. BoT is a narrative digital collection that will be released through a mint button via NiftyKit’s DropKit. I’m super excited to say that I’m working on a couple projects that will have custom contracts from my partners at @NiftyKitApp and @manifoldxyz.

“JPGs” by @jeffjag

[AI Art Weekly] What does your workflow look like?

While I’m highly experimental with this technology, there isn’t a hard rule for my creative process. Typically, I begin by imagining in MJ or Niji, then heavily edit in Photoshop or Procreate, and finally use Img2Img in Stable Diffusion. For my early projects, I relied heavily on generated text prompts, which I composited and edited in Photoshop and Procreate, or combined multiple generations and illustrated over them. As this tech changes and progresses at a rapid pace, my creative process continues to evolve along with it.

Currently, I use Midjourney, Nijijourney, and Stable Diffusion (using multiple models based on 1.5) across various implementations and web-apps, including Automatic1111 and InvokeAI. I also employ tools such as Photoshop, Procreate, Gigapixel, Adobe Bridge, Google Sheets, NiftyKit, Manifold, OBJKT, and Rarible.

[AI Art Weekly] What is your favourite prompt when creating art?

I’m pretty sure that for the past 9 months, every prompt I’ve received has had the phrase ultrarealistic detail included in it at some point, lol. However, I’m a big fan of weighted prompts for text2img generations. My approach involves starting with a broad stroke, which I weight at 4 or 3, then I add context with two phrases that are weighted at 2. After that, I fill out the rest of the prompt with 6-8 words or phrases that are weighted at 1. I also include tags for aspect, model, quality, and style in MJ, and settings in SD. For my negative prompts, I use standard phrases like painting, drawing, nets, dots, webs, bokeh, signature, watermarked, specks, blur, warping, fuzzy, etc.

Editorial note: Prompts in Midjourney and Stable Diffusion can be weighted to instruct the models to pay more attention to certain parts during image generation. The syntax differs from tool to tool: in Midjourney, prompts can be weighted like this: main subject::4 additional context::2 more details. Automatic1111 has a slightly different syntax: (main subject:1.4), (additional context:1.2), more details.

“She’s aware” by @jeffjag

[AI Art Weekly] How do you imagine AI (art) will be impacting society in the near future?

AI Art is us dipping our toes in the water and is just the beginning of a technological revolution. Much like Blockchain, AI has the potential to revolutionize every aspect of our lives, from the way we interact to how we conduct business. It’s a sea change of a scale so big we can’t even begin to comprehend it. However, when these two technologies converge, their impact will surpass even that of the internet. In the near future, more and more open-minded artists will experiment with AI, unlocking its true potential. Although public opinion may be negative at present, this will inevitably change as the tools continue to evolve, enabling us to accomplish things that were previously impossible. Eventually, people won’t be able to ignore it or push it aside.

“She Wasn’t Actually There” by @jeffjag

[AI Art Weekly] Who is your favourite artist?

My kids :)

[AI Art Weekly] Anything else you would like to share?

I’m involved with NFTs because I strongly believe in the concept of owning and selling original digital collectibles. This tech has been a long-awaited dream for me for 20+ years and I am here to stay. AI Art tools are an incredible advancement in image creation, allowing artists to explore new realms of creativity beyond what traditional tools can offer. I find them ethical to use, because I believe they’re transformative in a fair-use context. Furthermore, their presence in the industry is only going to grow stronger. Let’s embrace the weird. The future is unknown but fear doesn’t have a place in it for me.


Creation: Tools & Tutorials

These are some of the most interesting resources I’ve come across this week.

blend of the 4 finalist pieces of the img2img challenge

And that, my fellow dreamers, concludes yet another AI Art Weekly issue. Please consider supporting this newsletter.

Reply to this email if you have any feedback or ideas for this newsletter.

Thanks for reading and talk to you next week!

– dreamingtulpa
