Hello there my fellow dreamers, welcome to issue #11 of AI Art Weekly 👋
Here are some of this week’s highlights:
- How ChatGPT can generate MidJourney prompts, plus new v4 features
- Corgi – a new Diffusion model enters the game
- Sketch guided image diffusion ✍️
- Interview with AI artist Stephan Vasement
- Stable Diffusion v2.0 Google Colab notebook
- And a free open-source image upscaler
Let’s jump in.
Cover Challenge 🎨
This week’s “moebius” challenge got 64 submissions from 34 artists, and the community decided on the final winner. Congratulations to @_HorizonLights_ for making the final round 🥳 And as always, a big thank you to everyone who contributed!
The theme for the next challenge is “dreamscape”. Think nature & landscapes, but with a dreamlike or alien touch. Any art style is okay. The prize is another $50. The rulebook can be found here and images can be submitted here.
I’m looking forward to all of your submissions 🙏
Reflection: News & Gems
“Chat-To-Image” – ChatGPT and MidJourney
OpenAI published ChatGPT this week, a model specifically trained for conversations. @GuyP tested it by asking it a one-line question about ideas on how to decorate a living room. He then continued to type the answers ChatGPT gave him straight into MidJourney which resulted in some amazing images.
This seems like a great way to spice up your prompts if you ever feel stuck or just want to try something completely new. I’ve tried it myself, and ChatGPT is a bit picky about how you phrase the question. But the following pattern, inspired by @visualashish, seems to work really well: “Hey, provide some interesting text to image prompts for an image about <subject> in the style of <writer/director/artist>”.
Speaking of MidJourney: they added some new updates to their v4 model, bringing back support for --stylize and adding a new option flag called --style, which enables generating images based on the original v4.0 style (they call it 4a). They claim they have also improved their upscaler, which should result in less blur and better details. My first tests seem to confirm this. So, happy prompting 🤘
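Putting both flags together, a v4 prompt in Discord might look something like this (the subject and the stylize value here are just illustrative examples, not taken from MidJourney’s documentation):

```
/imagine prompt: a misty alpine valley at dawn --v 4 --style 4a --stylize 250
```

Dropping --style falls back to the current default v4 look, so the flag is mainly useful if you preferred the earlier 4a aesthetic.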
Corgi – yet another Image Diffusion model
DALL-E, MidJourney and Stable Diffusion might be the big players in town, but research on new and improved methods is not standing still. Just a few weeks ago we talked about eDiffi. This week, there is Corgi (Clip-based shifted diffusiOn model bRidGIng the gap), a novel method for text-to-image generation based on a shifted diffusion model that encodes prior knowledge from the pre-trained CLIP model. Extensive experiments show that Corgi has stronger generation ability than existing methods, achieving new state-of-the-art results on downstream language-free text-to-image generation tasks.
Sketch-Guided Text-to-Image Diffusion Models
I love this one. The idea of drawing something on an iPad, for instance, and then letting AI do its magic is such a cool concept. A new paper called “Sketch-Guided Text-to-Image Diffusion Models” makes it possible by introducing a novel approach to guide a pretrained text-to-image diffusion model with a special type of network called a Latent Guidance Predictor, allowing the creation of images that follow the guidance of a sketch.
If you’re into sketch-based art, CLIPascene is for you. The paper proposes a method for automatically converting a given scene into a sketch that can be adjusted to different levels of abstraction in terms of both fidelity and visual simplicity.
NFD: 3D Neural Field Generation using Triplane Diffusion
3D diffusion models are a heavily researched topic. One of the latest ones, NFD, is an efficient diffusion-based model that has been developed for 3D-aware image generation. This novel approach pre-processes 3D training data, such as ShapeNet meshes, by converting them into 2D feature planes, which can then be used to train existing 2D diffusion models.
Imagination: Interview & Inspiration
This week, we dive into the fascinating world of AI Art with Stephan Vasement – a trained screenwriter, street photographer and recently turned late night AI artist. I recently stumbled upon his lucid work, and as an avid dreamer, I was immediately captivated by the dream-like quality of his imagery. Let’s jump in!
[AI Art Weekly] Hey Stephan, what’s your background and how did you get into AI art?
AI was introduced to me by friends who are collectors of generative art. It was early summer when MidJourney was still invite-only. We were just enjoying the process itself, coming up with prompts, paintings and collections. It didn’t seem to me back then, that AI art would grow to the scale it is now.
I studied to be a screenwriter in my youth and then worked in the animation business. We created what are called explainer videos. We wrote the script, made the storyboard, drew and animated it ourselves. As a result, we created a small cartoon advertisement. Often we had to invent some metaphors to explain not the most obvious things which was very stimulating for our imagination.
At the same time I did a lot of street photography. Two or three times a week I would go out and walk the streets from dusk till dawn, looking for interesting subjects – sometimes walking up to 20 kilometers a day.
[AI Art Weekly] Do you have a specific project you’re currently working on? What is it?
Right now all my attention is on the “In Dreams” collection. In it I’ve finally been able to combine my love for photography and a certain artistic aesthetic. With it I try to combine photorealism and dream logic. I like AI because I can use it to give simple subjects an artificial feel and to build up interesting stories. Aesthetically, the collection is built on my favourite directors and photographers: Andrei Tarkovsky, Peter Greenaway, Georgi Pinkhassov, Nan Goldin, Joel Meyerowitz, Louis Barragan and others.
[AI Art Weekly] What does your workflow look like?
I use Midjourney. Since it’s Discord-based, I have no problem using it almost anywhere I go.
Usually I try to relax and not think about creating anything. I observe the world, watch movies, read books, listen to music. At some point an idea comes to mind and I start experimenting with it. The initial idea sometimes changes a lot and my intention takes a new path after seeing the first results. For instance, I may see some random object in an image and then consciously add new words to the evolving prompt.
Sometimes I can do five pieces in an evening that I’m happy with, sometimes I won’t prompt a whole week at all. I also use DALL-E if I need to correct some details or add something else to the piece.
[AI Art Weekly] What is your favourite prompt when creating art?
Right now I’m mostly exploring the world of film photography. I create photorealistic interiors in which I add new objects to create an interesting sense and atmosphere of mystery.
“35mm” for film type, “soft light” for color settings, and sometimes “film” itself all work great. “kodak pro gold” is another good example.
[AI Art Weekly] How do you imagine AI (art) will be impacting society in the near future?
I think we haven’t realized the place of AI in our lives yet. AI art is going the same way as photography, only much faster. But for me personally, the future seems very unpredictable as of right now.
[AI Art Weekly] Who is your favourite artist?
In AI art I especially like those artists who stick to their style no matter what. @StLaurentJr and @marc are two examples. Both have a striking style, their work cannot be confused with anyone else’s, and they inspire me very much. I know there can be a very strong temptation to start doing something popular for collectors, especially when sales are down. And with AI, it’s very easy to do that.
[AI Art Weekly] Anything else you would like to share?
I just released my new work called “The Introvert”. It is the first of a trilogy dedicated to certain colors.
This piece was created when I was exploring how AI could draw snow without having an original plan in mind until at some point I started to think back to my childhood.
Back then I would leave for school early in the morning and had to walk through a fairly snowy area in the dark. People wandered around sleepily and storefronts were already lit. I remembered the feeling of loneliness that overtook me at the time. I wanted to express it and I came up with a basic idea, which seems very simple, but at the same time feels close to many people: “The tree and the bar merged into one figure, which loomed menacingly over the helpless introvert.” What I did not expect was that out of a thousand variations I would suddenly find this one. In this image came the depth that I look for in art.
Creation: Tools & Tutorials
These are some of the most interesting resources I’ve come across this week.
And that, my fellow dreamers, concludes yet another AI Art Weekly issue. Please consider supporting this newsletter by:
- Sharing it 🙏❤️
- Following me on Twitter: @dreamingtulpa
- Leaving a Review on Product Hunt
- Using one of our affiliate links at https://aiartweekly.com/support
Reply to this email if you have any feedback or ideas for this newsletter.
Thanks for reading and talk to you next week!