AI Art Weekly #8
Welcome to issue #8 of AI Art Weekly, a newsletter by me (dreamingtulpa) covering some of the latest happenings in the AI art world.
Cover Challenge 🎨
This week’s “cyberpunk” challenge got a whopping 76 submissions from 48 artists! Wow, thank you everyone for participating. The community once again decided on the final winner. Congratulations @_K_o_S_ and a big thank you to everyone who contributed!
The theme for the 4th challenge is “comic”, as in comic book. Think Tintin, Stan Lee, or anything that resembles a comic book look. The prize is again $50. The rulebook can be found on the website. Images can be submitted here.
I’m looking forward to all of your submissions 🙏
If you want to support the newsletter, this week’s cover is available for collection on objkt as a limited edition of 10 for 2.50ꜩ a piece. Thank you for your support 🙏😘
Reflection: News & Gems
MidJourney V4 Alpha
Last weekend started with a bang. MidJourney released an alpha version of their V4 model and, oh boy, does it generate stunning images. Image prompts, multi-prompts, and weighted prompts are back! Currently only square images are supported, but other aspect ratios are in development. Can’t wait.
@sharifshameem, the founder of the Lexica Stable Diffusion search engine, announced that they’re working on their own model called Lexica Model. He posted a sneak peek of the model’s capabilities on Twitter, and the images produced with it look hyperrealistic. I don’t know if the model will focus on realism or if that’s just one use case. We’ll see.
eDiffi got a first unofficial Stable Diffusion implementation of its Paint-with-Words method, which lets you generate images from a text-labeled segmentation map. I can see this being very useful when integrated directly into image editing tools like Photoshop and Affinity. Just draw a quick map, add some words to the different sections, and generate some beautiful artwork on top of it in a few seconds.
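The core trick behind Paint-with-Words is simple: bias the cross-attention between image pixels and prompt tokens so that each painted region pulls toward its assigned words. The real implementations do this inside the UNet’s attention layers; the toy NumPy sketch below only shows the bias-map construction, and all names in it are hypothetical, not from the actual codebase.

```python
import numpy as np

def paint_with_words_bias(seg_map, region_tokens, num_tokens, boost=0.5):
    """Build a per-token spatial attention bias from a labeled segmentation map.

    seg_map       : (H, W) int array, each value is a region id drawn by the user
    region_tokens : dict mapping region id -> list of prompt-token indices
    num_tokens    : length of the tokenized prompt
    boost         : additive bias applied where a region "paints" a token

    Returns an (H*W, num_tokens) array that would be added to the
    cross-attention scores before softmax, steering each token toward
    the pixels of its region.
    """
    h, w = seg_map.shape
    bias = np.zeros((h * w, num_tokens), dtype=np.float32)
    flat = seg_map.reshape(-1)
    for region_id, token_ids in region_tokens.items():
        mask = flat == region_id          # pixels belonging to this region
        for t in token_ids:
            bias[mask, t] += boost        # boost those tokens inside the region
    return bias

# Toy canvas: top half is region 1 ("sky", token 2), bottom half region 2 ("lake", token 5)
seg = np.array([[1, 1, 1, 1],
                [1, 1, 1, 1],
                [2, 2, 2, 2],
                [2, 2, 2, 2]])
bias = paint_with_words_bias(seg, {1: [2], 2: [5]}, num_tokens=8)
print(bias.shape)  # (16, 8)
```

In a real pipeline this bias would be resized to each attention layer’s latent resolution and added to the query-key scores, so the “sky” token only attends strongly to the top half of the image.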
Nature flythroughs & Video Editing
In the video editing department I stumbled across three interesting papers this week: InfiniteNatureZero, Text2LIVE and FactorMatte.
InfiniteNatureZero is a new video generation method from Google that produces high-resolution 3D flythrough videos of natural scenes from just a single 2D image.
Text2LIVE enables generating complex semi-transparent layers to augment an input scene without changing irrelevant content in the image or video. The results look super cool, check out their project page for video examples, especially the “snowy countryside scene” is 🤯.
FactorMatte is more focused on separating foreground from background and could be a time-saving workflow extension for rotoscoping work, for example.
I’m no expert in this area, but 3D animation of human motion seems to have gotten significantly easier this week. @walterzhu8 and the team behind MotionBERT published the code and models to let you transform videos of human motion into 3D animation data.
Apart from our beautiful cover art, this is the most cyberpunk content I’ve seen this week. The madlad @SCPSolver built a home-brewed server with 4x NVIDIA K80 GPUs with a total capacity of 96 GB VRAM. And if that’s not enough, he plans on building a second one and clustering them together to run inference on large language models with a total of 192 GB VRAM. He acquired all the parts on eBay. You can find his shopping list on Reddit.
@BjoernKarmann created a proof of concept showing how voice command, selection gesture, SD and AR can come together to alter the reality around us.
Maybe not “art” in the traditional sense but still super cool to look at. By @Weiyu_Liu_: “StructDiffusion constructs structures out of a single RGB-D image based on high-level language goals. The model trained in simulation generalizes to real-world objects the robot has never seen before.”
Imagination: Interview & Inspiration
In this week’s interview of AI Art Weekly we talk to makeitrad, an AI artist who continually grabs my attention with beautiful GM posts and stunning GAN interpolation animations. Ever wondered how these get created? Read on below.
[AI Art Weekly] Hey Rad, what’s your background and how did you get into AI art?
I discovered AI art by seeing some of it on Twitter a couple years back. I can’t remember the first artist I saw do it, but I do know I was too shy to reach out to them directly. After a couple months of seeing a few individuals’ work, I mustered up the guts to reach out to @TormanJeremy and @unltd_dream_co, both of whom helped steer me in the right direction – both ethically and technically. I came from the commercial design/agency world, so I wasn’t really sure if AI art was art… if I was ripping people off… what it was, but I was super intrigued. After tinkering for a month or two I dropped a VQGAN animation on Hicetnunc and the rest was history.
I could immediately tell there was a tough learning curve and that each artist’s work could be very unique. I spent days at a time in front of the computer trying to learn as much as I could. I discovered AI was a huge rabbit hole, and while I was most interested in the visual aspects, things were starting to explode in other areas as well. GPT-2 and StyleGAN2 both caught my eye. I only dabbled with GPT, but StyleGAN became an obsession and I still use it in my workflow to this day, even though it’s old and not the cool new toy. When people see a great interpolation done in StyleGAN it still blows their mind, and people reach out asking “How’d you do that?”.
[AI Art Weekly] Do you have a specific project you’re currently working on? What is it?
Since the NFT market has slowed down in the last few months, I’ve taken the time to learn and test more tools. I’ve worked on numerous personal projects that I’m doing for myself without much intent to sell. I’ve had the pleasure of beta testing many amazing tools recently, and I hope to keep this going and display my work at more real-life events throughout the year. At some point I’ll list more work, but right now I’m exploring and trying to find new techniques.
I try to post daily on Twitter and Instagram and remain active in the AI art community. Most of the work I’ve done up to this point has been about architecture or the natural world. I’m looking to make something a bit more personal and more controlled. I love to embrace the happy accidents and unexpected results you find in AI, but for my next project I want to explore deeper conceptual meaning in the work. I’ve also really pushed the technique previously, mixing and matching different AI animation tools. With my next collection I am going to push myself to use a single tool and attempt to only use stills. Not sure what the response will be, but I think that’s the fun part. Every time I’ve reached out of my comfort zone I’ve generally been rewarded. Fingers crossed this happens again.
[AI Art Weekly] Can you tell us something about selling AI art?
This is a big question… it probably needs a far longer response than I can give here. I guess the biggest piece of advice I can give is not to do this for the money. Make things daily and do it for yourself. I’ve seen many come into the scene expecting to become Beeple overnight. The thing is, Beeple didn’t become Beeple overnight. He worked and grinded for many years and was a well-known mograph artist. He gave his community of peers knowledge for free by hosting tutorial sites and being active in the community. He was known long before NFTs. He was at the right place at the right time, had an amazing following, and rad work. It’s the combo of all these things that will make you successful. The work alone will not get you there. I’ve found that those who help others and are willing to give some truthful insight usually go further than the ones who trap themselves in their room and drop work in a vacuum.

Promoting yourself is 9/10ths of the game as well. You will spend hours a day on Twitter, Instagram, and NFT platforms to be successful. I think my biggest mistake in the early days of NFT making was trying to do dailies and release awesomeness every day. At a certain point I was being really hard on myself and was ready to burn out. Pace yourself, take your time, and don’t be in such a rush.

Many times I’ve been asked, “when are you going to quit the day job and do this full time?” My typical answer is never. I do this for fun, and the minute I start relying on it to pay the bills it will lose its charm. By not relying on art making this way I can do whatever I want. I’m not pressured by collectors, only myself. If I make a few bucks, awesome; if not… I learned something rad.
[AI Art Weekly] What does your workflow look like?
My process varies from project to project. Generally speaking, I like to make work that strings many different processes together. For instance, sometimes I’ll generate 5000 images in a text-to-image generator like Disco Diffusion, JAX, or Stable Diffusion. From there I will fine-tune and train a model on that work. Then I generate another 5000 images from that model, to then bring the outputs back into StyleGAN and train it again there for the interpolations. I’ve used this process a couple times and had some very original results. Doing this helps me make something I consider truly unique. Trying to figure out how, and reverse engineering it, becomes harder and harder as well. I’ve shared the secrets with many, but I rarely see anyone take the deep dive and try to emulate the look of the work.
As far as inspiration goes… I taught for many years at CalArts and pull my inspiration from my students and coworkers in the mograph/design industry. There is so much talent from both avenues that my Instagram feed is full of amazing work to take inspiration from daily. Making the transition from designer to artist has been a tough one. It’s uncomfortable and somewhat frightening not having a “client” help guide the work. When working for yourself you are tied to a different set of rules and expectations. It’s been both liberating and terrifying finding my voice in this endless abyss. On top of that, you battle with the fickle crypto market as an indicator for validation and success of your work. This sometimes makes you wonder if you are any good, and so the endless battle of imposter syndrome takes over. It can definitely be a mental health challenge. It wasn’t until I started listening to many different Twitter Spaces that I found this to be a common thread many of us share and that I am not alone.
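The iterative generate-then-retrain loop makeitrad describes above can be sketched as a simple pipeline. The helper names below are hypothetical placeholders (the real steps involve hours of GPU training, not function calls); the sketch only captures the structure of the workflow.

```python
def run_pipeline(n_images=5000, rounds=2):
    """Sketch of the workflow described in the interview: generate a large
    batch with a text-to-image model, fine-tune a model on the outputs,
    regenerate, then train StyleGAN on the final set for interpolations.
    Every step here is a stand-in that just records what would happen."""
    steps = []

    def generate(model, n):
        # In reality: thousands of Disco Diffusion / SD renders per round.
        steps.append(f"generate {n} images with {model}")
        return [f"{model}_img_{i}" for i in range(n)]

    images = generate("stable-diffusion", n_images)       # round 1: text-to-image
    for r in range(rounds - 1):
        steps.append("fine-tune model on previous outputs")
        images = generate(f"finetuned_round_{r + 1}", n_images)
    steps.append("train StyleGAN on final set for interpolations")
    return steps

for step in run_pipeline(n_images=3, rounds=2):
    print(step)
```

The point of the loop is that each round’s model is trained on its own outputs, which is what drifts the look away from anything a stock model produces.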
If you want to try out GAN interpolation yourself, @makeitrad1 and @alterbeastlab put together a How-To Guide Twitter thread with some useful links and information that they used for creating their “Majestic Mycology” collection.
[AI Art Weekly] What is your favourite prompt when creating art?
I’m happy to share my process and thinking, but I don’t hand out the prompts. There are plenty of tools out there like “Prompt Interrogator” that will get you in the ballpark, so use them. The way you structure your prompt and the words you use should be individual to you. To me they are private. It’s like my diary of messed up thoughts that no one has any business reading. I don’t prompt like the book tells you to. I have horrible spelling, there are huge grammar mistakes. They are more like ramblings.
These are all from my GM images. They all explore one prompt and I’ve been posting hundreds of them throughout the year. I love this prompt…
[AI Art Weekly] How do you feel AI (art) will be impacting society?
I like to believe people are generally good, much like @EMostaque from Stable Diffusion. I’m tired of the idea that companies need to “protect” us from AI. In the few short months since Stable Diffusion was released to the public, we have seen amazing new innovation and tools built on the models. This was all possible because of the lack of gatekeeping and the idea that people are good and everyone deserves to be able to explore and use these groundbreaking new tools. I think only more incredible work is on the way. The way we think AI will be used is just the tip of the iceberg. New and amazing uses no one has thought about will continue to evolve and blow minds. I truly believe humanity is in the beginning stages of something huge. Everything from medical innovation to creating pretty pictures will benefit from these new tools. Life as we know it is going to change dramatically in the next 10 years because of AI. I hope it is an inclusive world and helps push humanity to bigger and better things. I think those that are currently fearful are mostly just scared of change. Once they actually try the tools, they will be amazed and sparked by the potential of what’s to come.
[AI Art Weekly] Who is your favourite artist?
I really enjoy artists who are pushing the idea of blockchain art further than just a pretty picture. While I love a good image or animation just as much as the next person, we have something special available to us with the blockchain: the smart contract. I think more and more artists will team up with devs, similar to @muratpak. The smart contract can and should be used as part of the idea, another piece of the medium that lends to the idea. I’ve only had the opportunity to do this once, but the project was fun and something different. I collaborated with a developer I met on a weekly $ASH community Twitter Space. @ktrbychain and I discussed for a few weeks how we might collaborate, and the “Extraordinary Crystal” was born. Basically, one collector could buy the “Extraordinary Crystal”, and while it would be locked to one wallet forever, anytime it was bought, sold, or transferred it would spawn one of 5 other crystals that the recipient would receive. This is now a living, breathing collection that has a life of its own and is touching people in @Donye_NFT’s theframegames.com in weekly games and community functions. I never thought something like this would happen to the collection, and it’s been awesome to see what others have done with it.
Each week we share a style that produces some cool results when used in your prompts. This week’s style is popup-book. There is also a dedicated Stable Diffusion model to produce more fine-tuned results.
Creation: Tools & Tutorials
These are some of the most interesting resources I’ve come across this week.
If you want to improve your prompt game, this book might be for you. Learn more about format, modifiers, magic words, parameters, img2img, and other important tips.
@RiversHaveWings has trained a latent diffusion upscaler for the Stable Diffusion autoencoder. Colab written by @nshepperd1.
A Hugging Face Space demo by @Flux9665 that lets you create AI voices for a specific text input. The voices still sound noticeably artificial, but it could be a fun play session to implement them in some experiment.
Automatic1111 WebUI finally got a Dreambooth extension. If you haven’t tried Dreambooth yet, this might be the time to do it. I’ll definitely take a look at it and try to run it on Paperspace as soon as I find some time for it.
And that, my fellow dreamers, concludes yet another AI Art Weekly issue. Please consider supporting this newsletter by:
- Sharing it 🙏❤️
- Following me on Twitter: @dreamingtulpa
- Using one of our affiliate links at https://aiartweekly.com/support
Reply to this email if you have any feedback or ideas for this newsletter.
Thanks for reading and talk to you next week!