AI Art Weekly #15

Hello there my fellow dreamers, welcome to issue #15 of AI Art Weekly 👋

The year is almost over and 2023 is right at our doorstep. Although I’ve been interested in generative art since the day I wrote my first line of code (about 12 years ago), I never really took the time to create something myself. That all changed six months ago, when MidJourney sent me my beta invite code. And, oh boy, have I been hooked since then (and I’m sure most of you have as well). I started this newsletter exactly 15 weeks ago, and it has been one hell of a rollercoaster. AI is moving so fast that it feels like I’ve been doing this for an eternity. And I’m sure next year will be even crazier. With MJv5, SDv3, GPT-4, 3D, video, audio, music, and movie models, there is so much potential to improve and revolutionize, and I’m thankful to each and every one of you, all 731 subscribers, for being by my side on this journey. From the bottom of my heart: thank you and happy new year ❤️

I’ll write to you from the other side.

P.S.: If you’re still looking for a great way to showcase your AI art at home, the Looking Glass deal from the last two issues is still good until tomorrow.


Cover Challenge 🎨

This week’s “change” challenge got 45 submissions from 28 artists, and the community decided on the final winner. Congratulations to @arvizu_la for winning the final round of the 10th AI Art Cover Challenge 🥳. And, as always, a big thank you to everyone who contributed!

The theme for the next challenge is “utopia”. What would your personal utopia look like? The prize is another $50. The rulebook can be found here and images can be submitted here.

I’m looking forward to all of your submissions 🙏


Reflection: News & Gems

As was to be expected, this has been the slowest week since the start of this newsletter, and there aren’t many updates from the research department.

Instead, I’m using this issue to highlight one of my favourite papers, tools, or discoveries from each issue since I started covering AI art news:

  • Issue #1: Deforum Stable Diffusion, still my go-to tool for creating video animations. I’m currently compiling my first SDv2.1 animations.
  • Issue #2: Make-A-Video, the first major text-to-video model announcement. Although we haven’t been blessed with an open-source text2video model yet, I’m still impressed and hope 2023 is the year we finally get our hands on one.
  • Issue #3: Stable Diffusion Artist Style Studies. This reference style guide helped me a lot in learning about different art styles and steering my creations in the directions I wanted. There is an updated style guide for v2, but I couldn’t get much use out of it, because the Stable Diffusion v2 release removed a lot of artist references.
  • Issue #4: AudioGen, another mind-blowing paper announcement by Meta, this time for a text-to-audio model that can generate soundbites of birds chirping or cars driving by.
  • Issue #5: CLIP-Interrogator, a handy tool that I still use today to find new words to sprinkle into my prompts (see the quick sketch below this list).
  • Issue #6: ARF (Artistic Radiance Fields), which lets you transfer the style of an image onto a NeRF scene. Since then, NeRF-Art has arrived as a newer approach that applies styles to NeRFs using only text.
  • Issue #7: IRL “Prompt Battle”, a rap-battle-inspired competition, but with keyboards and AI image generation instead of words. Weird and quirky.
  • Issue #8: MotionBERT, one of the first models that lets you transform videos of human motion into 3D animation data.
  • Issue #9: MinD-Vis is my pick of the year. A model that is able to turn brain activity into images. I eagerly await the day when I’m able to visualize my sleeping dreams – I have a feeling we don’t have to wait that much longer.
  • Issue #10: VectorFusion, the first text-to-SVG model, able to generate scalable vector graphics directly from a prompt. Still no code though :(
  • Issue #11: ChatGPT would be the obvious pick here, but, like MJv4, Dreambooth, or SDv2, the most obvious picks already feel “boring”. That’s why “Sketch-Guided Text-to-Image Diffusion Models” takes the cake: a model that lets you turn sketches into images – while you sketch.
  • Issue #12: ANGIE comes right after MinD-Vis on my list of favourite papers this year. The model is able to generate high-fidelity co-speech gesture video sequences from audio input and a single image. While the “world” is worried about image diffusion models creating deepfakes, this one would probably make them faint.
  • Issue #13: MAGVIT makes it possible to outpaint and inpaint videos, turn a single image into an animation, predict future frames, and compress videos by a factor of 600x compared to the original footage.
  • Issue #14: And last but not least, Point·E, a novel OpenAI model for text-to-3D generation (although last week was packed with amazing stuff).
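
For anyone curious to try CLIP-Interrogator from the list above, here is a minimal Python sketch of how it can be used, based on the project’s documented usage. Treat the CLIP model name and the image path as assumptions on my part; they may differ depending on the version you install.

    # Minimal sketch: generate a prompt-style description for an existing image
    # with CLIP-Interrogator, then mine it for new prompt vocabulary.
    # Assumes `pip install clip-interrogator` and a local image "my_render.png".
    from PIL import Image
    from clip_interrogator import Config, Interrogator

    # Load the image you want a description for.
    image = Image.open("my_render.png").convert("RGB")

    # ViT-L-14/openai is the CLIP model that matches Stable Diffusion v1.x;
    # other checkpoints may suit SDv2-style prompting better.
    ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))

    # Prints a prompt-like description of the image.
    print(ci.interrogate(image))

The output works best as a source of modifiers and artist names to sprinkle into your own prompts, rather than as a prompt to copy verbatim.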

And these are some of my personal highlights from the past few weeks of AI Art Weekly. What were your highlights this year? Let me know on Twitter or by replying to this email.


Imagination: Interview & Inspiration

This week we speak with Dadaist and data journalist @Merzmensch, the author of the online publication Merzazine. I first encountered Merzmensch shortly after the interview with @GanWeaving for the first AI Art Weekly issue, and he has been lingering on my “inspiring AI artists interview” list ever since. I myself am a sucker for surrealism and philosophical questions, so I’m glad this interview came together. Enjoy.

[AI Art Weekly] Hey Merzmensch, what’s your background and how did you get into AI art?

I have an academic background in Cultural and Art Studies. My main research focus was and still is on the Historical Avant-Garde (Dadaism, Surrealism, MERZ-art etc). Beginning in 2015, my attention was drawn to creative AI models (Google Deep Dreams, GAN, StyleTransfer, GPT-2/3, DALL-E, Diffusion Models). Joining online discussions between AI researchers, artists, and art critics, I’ve seen many parallels between the disruptive art movements of the 1920s and 2020s.

Observing creative developments around new technologies, I see it clearly: we are experiencing and shaping the new Art Epoch of Human-Machine creative collaboration.

[AI Art Weekly] Do you have a specific project you’re currently working on? What is it?

I work on several projects concurrently. My most important one is #reMERZ, where I am training various AI models on my poetry, essays, and photos, and creating new visions of my work using AI. During my work on #reMERZ, I have discovered stunning connections and inspiring ideas that I may not have come up with on my own. I wrote an essay about this project on Medium.

In this video I am using an AI-written poem together with AI-generated visions – all models were trained on my works.

[AI Art Weekly] What does your workflow look like?

Within #reMERZ, I have several workflow approaches. In one process, I allow AI to generate numerous new versions of my texts, curate them, and create AI-driven images. In another, I generate AI images and write poems inspired by them. It’s always a collaboration between my creativity and the creativity of a machine.

Sometimes, I even act as a creative agency for AI, for example, with the following short movie which I wrote about in my article Creative Collaboration with AI.

Nyeshkerh (2021) – an AI-generated short movie by Merzmensch

[AI Art Weekly] What is your favourite prompt when creating art?

My favorite kind of prompt relates to the artist’s studio, for example, Artist in his studio <image>.

With such prompts, I want to unleash the creativity of the machine without influencing it too much myself. Here, the main task is to create an art studio, but which artist will be depicted, and which artworks, styles, and motifs appear, is up to the AI.

I am always fascinated by giving the machine a lot of freedom and seeing the creative ways it uses that freedom. I wrote about this in my essay Art in Art in Art (DALL-E, MidJourney, Stable Diffusion).

[AI Art Weekly] How do you imagine AI (art) will be impacting society in the near future?

AI will impact culture and society by enabling every creative person to fulfill their dreams, even if they are not skilled in particular areas. Not everyone is skilled in drawing, but everyone has dreams, visions, and ideas, and AI can enhance and augment human creativity in entirely new ways.

[AI Art Weekly] Who is your favourite artist?

I am always fascinated by Dadaists and Surrealists, and my favorites are Kurt Schwitters and Rene Magritte. These artists were exploring transmedial art before it was even invented as such; they were also fascinated by collisions of meanings, semantical shifts, and the questioning of reality. AI models are an exciting continuation of their take on scrutinizing traditional art and culture.

“Collective Invention” by Rene Magritte

[AI Art Weekly] Anything else you would like to share?

Everyone can create, so don’t be shy, explore new terrains, and don’t be afraid of new technologies. Be a part of the new Art Epoch.


Creation: Tools & Tutorials

These are some of the most interesting resources I’ve come across this week.

“masterpiece, double exposure of a female profile silhouette and a dreamy foggy city forest background coming out of her head, fireworks, true detective intro” by me

And that, my fellow dreamers, concludes yet another AI Art Weekly issue. Please consider supporting this newsletter by:

Reply to this email if you have any feedback or ideas for this newsletter.

Thanks for reading and talk to you next week!

– dreamingtulpa
