AI Art Weekly #14

Merry Christmas my fellow dreamers and welcome to issue #14 of AI Art Weekly 🎅✨

I was busy this week hacking together an eCard service that lets you generate unique animated and voiced greeting cards from Santa Claus himself. Here is a greeting card from Saint Nick to the AI Art Weekly community. If you want to support the newsletter, this is a good opportunity and lets you spread some joy at the same time. Also the Looking Glass December deal is still on. Check it out below.

This week’s highlights are:

  • Point·E - OpenAI’s answer to Text-to-3D.
  • Tune-A-Video, which lets you modify existing videos using text prompts.
  • Custom Diffusion, a faster and more lightweight fine-tuning method that can combine multiple concepts.
  • Rodin, Microsoft’s take on high-quality generated 3D avatars.
  • An interview with Guy Pearson aka GuyP.
  • Karlo, a new unCLIP diffusion model.

Let’s jump in.


Cover Challenge 🎨

This week’s “christmas” challenge got 55 submissions from 35 artists, and the community decided on the final winner. Congratulations to @DIGIMONSTER1006 for making the final round 🥳 And as always, a big thank you to everyone who contributed!

The new year is just around the corner, so the theme for the next cover challenge is “change”. AI has already enabled a lot of new possibilities this year, and as we move into the next one, there will likely be even more opportunities for growth and transformation. No matter who we are, change is inevitable. The prize is another $50. The rulebook can be found here and images can be submitted here.

I’m looking forward to all of your submissions 🙏


Reflection: News & Gems

Point·E - OpenAI’s answer to Text-to-3D

OpenAI open-sourced(!) a new Text-to-3D model called Point·E this week. The pre-trained model can generate 3D objects in a matter of minutes, far faster than other 3D methods, by first generating a single-view image using text-to-image and then turning that image into a 3D point cloud. The quality isn’t on par with other 3D methods yet, but it’s still an exciting release. There is also a HuggingFace demo by @hahahahohohe in case you want to try it out.

Point·E examples

Tune-A-Video

It has been a while since we saw any new Text-to-Video tech. Tune-A-Video is a new method for text-to-video generation that can change the subject or background, modify attributes, or transfer styles using text and an existing video as input. So far, all we’ve gotten when animating input videos with notebooks like Deforum has been a lot of flicker. That might soon change!

Tune-A-Video example

MM-Diffusion

MM-Diffusion is the first joint audio-video generation framework that tries to generate realistic videos with matching sounds. I remember trying to create a bonfire animation in the same style as my rain animation using MJv3 a few months ago (feels like ages); soon that will be possible with just the click of a button. Crazy times.

“Two subnets for audio and video learn to gradually generate aligned audio-video pairs from Gaussian noises.”

Custom Diffusion

After Dreambooth and last week’s LoRA, there is a new fine-tuning approach in town, this time by Adobe. Custom Diffusion also lets you train Stable Diffusion models with new concepts. What’s new? It’s fast (~6 minutes on 2 A100 GPUs), it uses less storage (75 MB per concept), and it can combine multiple concepts such as new object + new artistic style, multiple new objects, and new object + new category. Check out the multi-concept results for some examples.

Custom Diffusion explainer

Rodin – 3D Avatar Diffusion

Microsoft announced Rodin, a Text-to-3D model that can generate highly detailed 3D digital avatars. It can also generate avatars from an image input and supports text-guided editing of existing avatars. Pretty impressive, but just a research paper for now. Hopefully we can play around with it ourselves soon.

Rodin 3D avatar examples

MidJourney Anime Model

The Niji anime model is now officially available directly from within the MidJourney Discord bot. The new model has been specifically fine-tuned to generate anime and illustrative styles and has an enhanced understanding of anime aesthetics. It also produces more dynamic and action-packed shots, as well as more character-focused compositions overall. To use the new model, simply add --niji to your prompt. “Niji” means “rainbow” or “2D” in Japanese.

“excited waifu anime santa claus” by me


Imagination: Interview & Inspiration

This week we talk to Guy Pearson aka @GuyP, a digital marketer and creative content producer turned AI artist. Recently, Guy garnered a lot of attention from fellow artists for his ChatGPT discoveries, which showed how to train the chatbot to generate elaborate text-to-image prompts. He is the creator of the DALL-Ery GALL-Ery website and the prompt/response newsletter, in which he shares his expertise on creating AI art.

[AI Art Weekly] Hey Guy, what’s your background and how did you get into AI art?

I’ve been working ‘online’ since I was 19 in digital communities, social media, audience growth and marketing. In some ways I’m the perfect kind of person to embrace this new wave of AI art tools: I work with tech and creative content all the time, but I’m not quite a coder or a proper artist!

I’ve been following the industry for some time, but really it was the launch of DALL·E 2 in early 2022 that captured my attention and made me feel like I _needed_ to figure this opportunity out!

[AI Art Weekly] Do you have a specific project you’re currently working on? What is it?

I’ve just been experimenting these last few months – there are so many possibilities! – but in 2023 I’ll be relaunching my DALL-Ery GALL-Ery website with tons of new content under a new platform-agnostic banner, in partnership with my newsletter prompt/response. For me, there’s a lot less emphasis now on ‘detailed prompt engineering’, so it’s more about finding ways to document the huge space of possibilities with the tools available.

Made up still from the made up movie “OFFSHORE” by Guy Pearson

[AI Art Weekly] What does your workflow look like?

I don’t have a very specific or detailed workflow – just a lot of typing prompts into different tools! I am interested in chaining AI and other tools together to create interesting effects, to make kind of Rube Goldberg-like contraptions, whether that’s something like ChatGPT to manifest concepts or D-ID’s MyHeritage to add 3D effects.

[AI Art Weekly] What is your favourite prompt when creating art?

Anything starting with Film still of... - it really feels like you’re manifesting a whole tangible ‘universe’, and I love some of the #AIcinema experiments that are out there!

The images immediately create a sense of storytelling possibilities and narrative propulsion that I don’t always get from illustration-type prompts. On top of which, I’ve noticed there’s something about this kind of output which seems less triggering for conventional artists, I guess because they’re like images of real-seeming things and places that don’t exist, rather than the type of digital art that these creators specialise in.

Generated in MJv4, outpainted in Dalle, post in Photoshop by @juliewdesign_

[AI Art Weekly] How do you imagine AI (art) will be impacting society in the near future?

It’s hard to say, which is weird to admit, because I do believe the impact is coming very soon! The pace of change this year has been intense – in some ways a lot of my expectations, based on DALL·E 2, have been met, it’s just taken six months rather than three years.

One new thing we might see is the ability of small visionary teams, or even independent storytellers, to create rich, even ‘sprawling’ creative worlds – taking a core creative concept and running with it in a lot of different directions, accelerated by AI. So we could end up with ‘Mr Beast’-style creators developing their own MCU-sized indieverses.

“Collage” experiment by Guy Pearson

[AI Art Weekly] Who is your favourite artist?

I’ve just noticed I’ve always been drawn to artists that use text like Jenny Holzer and David Shrigley - maybe that’s why I’m more comfortable writing pithy little things in boxes than drawing pictures!

Two folks I like over on Instagram are neptunianglitterball and manufacturedmemory - lots of creative world-building and fauxtography! Also I admire people that can take a concept and stick with it and go deep, whereas my ADHD-addled brain is constantly leaping from one idea to another!

“A quarter pounder with a side of disappointment” by neptunianglitterball

[AI Art Weekly] Anything else you would like to share?

Just want to encourage everyone using these tools to keep on keeping on, experimenting and making tons of amazing work! Life is too short to divert energy into a combative mode-of-being – better by far, I feel, to focus one’s efforts on creating more of what you want to see more of.


Creation: Tools & Tutorials

These are some of the most interesting resources I’ve come across this week.

“film still of my ADHD-addled brain is constantly leaping from one idea to another, kurzgesagt style” by me

And that, my fellow dreamers, concludes yet another AI Art Weekly issue. Please consider supporting this newsletter.

Reply to this email if you have any feedback or ideas for this newsletter.

Thanks for reading and talk to you next week!

– dreamingtulpa
