AI Art Weekly #25

Hello there my fellow dreamers and welcome to issue #25 of AI Art Weekly! 👋

This week is PACKED with new stuff, so I’m gonna keep the introduction short this time. Our little Discord community has grown to 100+ people since last week and I had a good time chatting with some of you. Come and join us if you’re looking for like-minded creators to talk all things AI and more!

  • Midjourney V5 & GPT-4
  • MeshDiffusion – a novel method to generate 3D meshes
  • Interview with artist WeirdMomma
  • Automatic1111 3D OpenPose Plugin
  • Stable Diffusion Infinite Zoom Out / Zoom In

Cover Challenge 🎨

Theme: horror
105 submissions by 63 artists
🏆 1st: @EternalSunrise7
🥈 2nd: @alejandroaules
🥉 3rd: @sarcastic_songs
🧡 4th: @chaotic_crayons

Reflection: News & Gems

Midjourney V5 and GPT-4

Let’s start with the obvious ones. My guess is most of you already know, but in case you don’t, GPT-4 and Midjourney V5 got released this week.

Midjourney’s V5 model is a nice step up from V4. The new model was trained on a dataset of 1024x1024 images instead of the previous standard of 512x512, which allows images to contain more detail. Details also tend to be more accurate than in previous versions, most notably for one of the most discussed topics in AI art: hands. The new model also lifts the restrictions on aspect ratios, making it possible to create super wide or super tall images. Oh, and image prompt weighting (--iw) is back as well.
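To illustrate, a V5 prompt could look something like this (the image URL is a placeholder and the parameter values are just an example, not a recommendation):

https://example.com/reference.jpg surreal photograph of a lighthouse in a storm --ar 21:9 --iw 2 --v 5

Here --ar 21:9 uses one of the newly unrestricted aspect ratios, --iw 2 weights the image prompt heavily relative to the text, and --v 5 selects the new model.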

Although OpenAI’s GPT-4 model is not directly linked to AI art, it’s still an impressive piece of tech which I want to quickly talk about. Apart from the model’s impressive improvements in generating text output, the feature I’m most excited about is its vision capability, which is still in private beta. Once released, you’ll be able to feed a picture into GPT-4 and give it instructions on what to do with it. In the developer livestream on Thursday, they turned a hand-drawn mockup into HTML/CSS code with working JavaScript buttons. Impressive, and I can’t wait to find a way to utilize this for creating new things.
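Since the vision features aren’t public yet, there’s no official usage to copy. But if they end up surfacing through the existing chat completions API, calling them might look roughly like this (a purely speculative sketch; the multimodal message shape and model name are my assumptions, not documented behavior):

```python
# Speculative sketch -- GPT-4 vision is still in private beta, so the
# multimodal message format below is an assumption modeled on the current
# chat completions API, not documented behavior.
import openai

openai.api_key = "sk-..."  # your OpenAI API key

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumed model name for vision access
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Turn this mockup into HTML/CSS with working buttons."},
                {"type": "image_url", "image_url": {"url": "https://example.com/mockup.png"}},
            ],
        }
    ],
)

print(response["choices"][0]["message"]["content"])
```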

Apart from these two big ones, a ton of other cool stuff got showcased this week as well.

“Untitled” created with MJv5 and SD by me

MeshDiffusion: Score-based Generative 3D Mesh Modeling

MeshDiffusion is a novel approach to generating 3D shapes. Compared to other methods, this one creates meshes instead of point clouds or voxels. Meshes are more desirable in practice because they enable easy and arbitrary manipulation of shapes for relighting and simulation, and they can fully leverage the power of modern graphics pipelines, which are mostly optimized for meshes. The generated meshes can also be textured by using something like TEXTure (issue #20).

With standard DDPM training and sampling, MeshDiffusion can generate realistic and diverse sets of 3D meshes, many of which are novel shapes not in the training set.
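“Standard DDPM sampling” here means the usual iterative denoising loop: start from pure Gaussian noise and repeatedly subtract the noise a trained network predicts. A minimal generic sketch of that loop (not MeshDiffusion’s actual code; `model` and the beta schedule are placeholders):

```python
# Minimal DDPM ancestral sampling sketch (generic, not MeshDiffusion's actual
# code; `model` and the linear beta schedule are placeholder assumptions).
import torch

def ddpm_sample(model, shape, timesteps=1000, device="cpu"):
    betas = torch.linspace(1e-4, 0.02, timesteps, device=device)
    alphas = 1.0 - betas
    alphas_bar = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape, device=device)  # start from pure Gaussian noise
    for t in reversed(range(timesteps)):
        # Predict the noise component present at step t.
        eps = model(x, torch.full((shape[0],), t, device=device))
        # Posterior mean: remove the predicted noise, rescaled per the schedule.
        coef = betas[t] / torch.sqrt(1.0 - alphas_bar[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        # Add fresh noise at every step except the last one.
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # the denoised sample, e.g. a mesh representation for MeshDiffusion
```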

3DFuse: Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation

Some types of models rarely get released alone. 3DFuse is another model that can generate 3D objects from text or an input image. The difference from MeshDiffusion is that this one generates a NeRF instead of a mesh. The examples look quite good as well.

3DFuse examples

3D Cinemagraphy from a Single Image

Then we had 3D Cinemagraphy, a new technique that marries 2D image animation with 3D photography and is able to turn a single still image into a video that contains both visual content animation and camera motion.

3D Cinemagraphy examples

StyleGANEX: StyleGAN-based manipulation beyond cropped aligned faces

StyleGANEX is a method for face generation from sketches or masks, upscaling faces, and manipulating facial attributes in images and videos. Want to turn yourself into a Pixar character, or test a new hair color while making yourself 20 years younger? What a time to be alive!

StyleGANEX hair color example

FateZero: Fusing Attentions for Zero-shot Text-based Video Editing

No, today we’re not talking about the anime, but about yet another video editing method. Compared to previous methods, FateZero doesn’t require training a model on every video and works with pretrained Stable Diffusion models. And the results look quite promising.

FateZero examples

E4T-diffusion

There is a new text-to-image fine-tuning method in town called E4T-diffusion, created by @mk1stats. The new method is able to fine-tune a new concept in a few seconds with as few as 5-15 steps. Impressive, but there is a catch! You’re going to need a GPU with at least 40GB of VRAM. But I’m sure someone will find a way to reduce that sooner rather than later.

E4T comparisons with other fine-tune methods


Imagination: Interview & Inspiration

This week on AI Art Weekly, I had the pleasure of chatting with WeirdMomma, an artist I met very early in my AI art journey and who has continued to grace my path with her beautiful creations ever since. WeirdMomma describes herself as a Texan mother who creates art, plays video games, and tries to befriend the crows in her neighborhood. Let’s dive in and learn more about her work! 🙌

[AI Art Weekly] Momma, what’s your background and how did you get into AI art?

I am a 5th generation Texan, and I have been an artist my whole life. Eventually, I finished college with degrees in photography and design. Prior to working with AI, my main focus was on historic photographic contact printing processes, such as salt prints and cyanotypes. I became interested in AI because I am a curious person and wanted to see what all the fuss was about. I was immediately hooked by the speed at which you can explore concepts, as well as the semi-unpredictable nature of AI. I enjoy trying to wrangle the AI into doing what I want it to do. It took me many months to circle back to my natural style because I spent most of the early days exploring everything, and as a result, very little of my work was cohesive. However, I feel that my art is finally back to where I started, but on a different level.

“AI helps me step into my dreams” by @weird_momma_x which won 4th place in Claire Silver’s 3rd AI art contest

[AI Art Weekly] Do you have a specific project you’re currently working on? What is it?

Yes, I started exploring the idea of representing regret and aging - those moments of reminiscing about paths you didn’t take - through a more surreal lens. I’m also very much attracted to using older faces, since most AI seems to want to create lovely young models. I’ve been using blends in Midjourney of my AI portraits and traditional work in order to achieve the feel I am looking for.

[AI Art Weekly] What drives you to create?

I’m not really sure. I’ve always lived with that drive. When I don’t have a creative outlet, my mood and general outlook on life deteriorate. Parenthood took away a lot of my personal time, so being able to create images on my phone in the small moments I have has been a godsend. My attitude towards life tends to be very “what if…” so I enjoy trying new things and experimenting. I definitely collect hobbies.

“Dreams and Lies” by @weird_momma_x

[AI Art Weekly] What does your workflow look like?

Usually, I’m just experimenting, and suddenly, something will strike me. Other times, I’m doing housework, and I get an idea. At that point, I start trying out different things, either prompts or initial images with blends to get to a good starting point. For AI images, I exclusively use Midjourney. However, as a photographer, I won’t allow raw images to be released for sale, even though I share most of the images on social media in their raw form. I use Photoshop to fix the eyes mostly, making them round, adding highlights, etc. If I can’t outright fix the hands or get a reroll to give me something I can composite, I’ll start blurring the image to change the focal point.

[AI Art Weekly] What is your favourite prompt when creating art?

Surreal photograph of [insert subject here]. Art by Dora Maar and Angus McBean and Joseph Cornell and Maurizio Anzeri.

That’s my starting point though I add or remove other things constantly.

“Untitled” by @weird_momma_x

[AI Art Weekly] How do you imagine AI (art) will be impacting society in the near future?

I’m hoping that this is just the beginning of less technically skilled folks being able to create more complex things, like video games or full-length movies. There are so many people out there with big imaginations, but they don’t have the resources to bring their ideas to life. Yes, we’ll end up with a lot of bad content, but I also think this will bring us more original content. I think we all get tired of the recycled movie plots and familiar video game structures.

[AI Art Weekly] Who is your favourite artist?

Joseph Cornell. He does assemblage boxes. No matter where I see his work, when it’s in person, I get a little jolt like that love-at-first-sight feeling.

“A Parrot for Juan Gris” by Joseph Cornell

[AI Art Weekly] Anything else you would like to share?

Just wanted to say that I’ve been really enjoying all the other artists on Twitter and in the various Discord servers. I’ve been in several creative spaces, and while every group has its issues, this one has been a great place for support and interaction. The enthusiasm of other AI artists is contagious.


Creation: Tools & Tutorials

These are some of the most interesting resources I’ve come across this week.

A blend of different images and ishtar abstract oil painting Dora Maar Angus McBean Joseph Cornell Maurizio Anzeri --ar 3:2 --s 400 --c 100 created with Midjourney V5 by me

And that, my fellow dreamers, concludes yet another AI Art Weekly issue. Please consider supporting this newsletter.

Reply to this email if you have any feedback or ideas for this newsletter.

Thanks for reading and talk to you next week!

– dreamingtulpa
