AI Art Weekly #58

Hello there, my fellow dreamers, and welcome to issue #58 of AI Art Weekly! 👋

We didn’t get to see OpenAI’s androids on DevDay this week, but the new API capabilities, custom GPTs, and xAI’s Grok are still pretty exciting. These changes might not look like much at first glance, but they mark the beginning of the end for search engines and standalone software. It’s gonna take a few more years, but in the future we’ll be talking to an AI assistant to get things done.

But before that happens, let’s dive into this week’s highlights:

  • OpenAI releases custom GPTs
  • LRM: Adobe’s new image-to-3D model
  • I2VGen-XL: A new image-to-video model
  • Consistent4D: 360° dynamic object generation from a single video
  • MeshNCA: Dynamic textures on 3D meshes
  • Interview with artist ORGNLPLN
  • and more tutorials, tools and gems!

Cover Challenge 🎨

Theme: fogbound
58 submissions by 33 artists
AI Art Weekly Cover Art Challenge fogbound submission by mamaralic
🏆 1st: @mamaralic
AI Art Weekly Cover Art Challenge fogbound submission by Origamiplan
🥈 2nd: @Origamiplan
AI Art Weekly Cover Art Challenge fogbound submission by MynimalM
🥉 3rd: @MynimalM
AI Art Weekly Cover Art Challenge fogbound submission by EternalSunrise7
🧡 4th: @EternalSunrise7

News & Papers

Custom GPTs

OpenAI released the ability to craft your own GPTs.

Custom GPTs are basically ChatGPT with additional instructions. What sets them apart from simple prompt engineering is the ability to add custom data and connect them to external services through APIs. To summarize:

  • Low entry barrier: Anyone can create a customized GPT through natural language, no coding required.
  • Extendable knowledge: GPTs can be extended with external data through files and databases.
  • Custom Actions: GPTs can fetch data from and send data to external tools through APIs.
  • Multi-modality support: All GPTs can write and run code with the code interpreter, create art with DALL·E 3, and search the internet with web browsing.
  • Revenue share: OpenAI announced that they will launch a GPT store later this month and pay out revenue shares to GPT creators.
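
To give you an idea of how Custom Actions work: you describe your external API to the GPT with an OpenAPI schema, and the GPT figures out when to call it during a conversation. Here’s a minimal sketch for a movie recommendation endpoint — the `api.example.com` server and the `/recommendations` path are made-up placeholders, not a real service:

```yaml
openapi: 3.0.0
info:
  title: Movie Recommendations API   # hypothetical example service
  version: 1.0.0
servers:
  - url: https://api.example.com     # placeholder, replace with your own API
paths:
  /recommendations:
    get:
      operationId: getRecommendations
      summary: Fetch movie recommendations for a given genre
      parameters:
        - name: genre
          in: query
          required: false
          schema:
            type: string
      responses:
        "200":
          description: A list of recommended movies
```

The `operationId` and `summary` fields matter most in practice, since the GPT uses them to decide when and how to call your endpoint.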

I predict that at some point in the near future the default ChatGPT will be able to access all the custom GPTs and their data. This will be our first taste of a single AI assistant that can do almost everything we’ve done with standalone software so far. Instead of having to learn and handle multiple apps, we’ll get things done through a simple chat interface.

I’ll be deep diving into custom GPTs over the next few weeks, and I’m especially interested in building GPTs that can connect to external services. The first one I built is called CineTulpa, a movie recommendation GPT based on my personal taste. If you create a GPT yourself, please share it with me!

Sam Altman revealing custom GPTs


I2VGen-XL: A new image-to-video model

AI video generation has made incredible progress this year, but semantic accuracy and temporal continuity remain a challenge. I2VGen-XL is a new model that generates videos from images while tackling these issues.

I2VGen-XL example: A lonely tiger walks by the sea, at sunset, 2D culture.

LRM: Large Reconstruction Model for Single Image to 3D

Adobe is entering the image-to-3D game. LRM can create high-fidelity 3D object meshes from a single image in just 5 seconds. The model is trained on massive multi-view data containing around 1 million objects. The results are pretty impressive and the method is able to generalize well to real-world pictures and images from generative models.

LRM examples

Consistent4D: Consistent 360° Dynamic Object Generation from Monocular Video

Consistent4D is an approach for generating 4D dynamic objects from uncalibrated monocular videos. At the speed we’re progressing, it looks like dynamic 3D scenes from single-camera videos will be here sooner than I expected just a few weeks ago.

Consistent4D examples

Mesh Neural Cellular Automata

Mesh Neural Cellular Automata (MeshNCA) is a method for directly synthesizing dynamic textures on 3D meshes without requiring any UV maps. The model can be trained using different targets such as images, text prompts, and motion vector fields. Additionally, MeshNCA allows several user interactions including texture density/orientation control, a grafting brush, and motion speed/direction control.

MeshNCA example

More papers & gems


Interview with artist ORGNLPLN

Over the course of the next few issues, Anna Dart and I are bringing back some #AISurrealism interviews, starting with ORGNLPLN.

Tools & Tutorials

These are some of the most interesting resources I’ve come across this week.

“Greetings from the edge of reality” by me

And that, my fellow dreamers, concludes yet another AI Art Weekly issue. Please consider supporting this newsletter by:

  • Sharing it 🙏❤️
  • Following me on Twitter: @dreamingtulpa
  • Buying me a coffee (I could seriously use it, putting these issues together takes me 8-12 hours every Friday 😅)
  • Buying a physical art print to hang on your wall

Reply to this email if you have any feedback or ideas for this newsletter.

Thanks for reading and talk to you next week!

– dreamingtulpa

by @dreamingtulpa