AI Art Weekly #10
Hello there my fellow dreamers 👋
Welcome to issue #10 of AI Art Weekly. Yes, #10. I've been doing this for 10 weeks now; time is flying by. Thank you for sticking with me ❤️
To improve the newsletter, I'd really like to know which parts are your favourites and which ones you wouldn't miss. I'm also looking for help curating the news and resource sections. Could you do me a favour and shoot me an email or hit me up on Twitter if you have any feedback or are interested in helping out? Thank you 🙏
That being said, here are some of this week's highlights:
- Stable Diffusion v2.0. New MJv4 aspect ratios. Text-To-SVG. And new Video diffusion models.
- Interview with AI artist RedruM.
- First SD v2.0 fine-tune, a Magic: The Gathering TCG model, and a first tutorial on how to create a Gradio interface for SD v2.0.
Let’s jump in.
Cover Challenge 🎨
This week's “forestpunk” challenge got 56 submissions from 33 artists, and the community decided on the final winner. It was the most-voted-on final round yet, and @CosmicCamera came out on top as the winner. Congratulations! And as always, a big thank you to everyone who contributed!
The theme for the next challenge is “moebius”, a tribute to the famous French artist Jean Giraud. Any subject is okay, as long as it's clearly inspired by his art style. The prize is another $50. The rulebook can be found here and images can be submitted here.
I’m looking forward to all of your submissions 🙏
If you want to support the newsletter, this week's cover is available for collection on objkt as a limited edition of 10 for 2.50ꜩ a piece. Thank you for your support 🙏😘
Reflection: News & Gems
Stable Diffusion 2.0, 768x768, depth2img
Out of nowhere, Stability AI announced and released Stable Diffusion 2.0 this week. Here’s what’s included:
- New text-to-image diffusion models capable of generating images at a default resolution of 512x512 and now also 768x768 pixels.
- A new upscaler model that enhances the resolution of images by a factor of 4.
- A new depth2img model which infers the depth of an input image and then generates new images based both on the text and depth information.
- A new text-guided image inpainting model, fine-tuned on the new v2 text-to-image models, which apparently makes it super easy to swap out parts of an image.
Best of all: they kept their promise and made it open source. The models are not a drop-in replacement for previous checkpoints, so interfaces will need to be updated. Can't wait to see what the community comes up with.
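To give a feel for the depth2img workflow, here is a minimal sketch using the Hugging Face diffusers library. The checkpoint id `stabilityai/stable-diffusion-2-depth` matches the public release, but the exact pipeline arguments are assumptions and may change as library support for v2.0 lands:

```python
def restyle_with_depth(image_path: str, prompt: str, out_path: str) -> None:
    """Generate a new image that keeps the depth structure of an input image.

    A sketch, not a definitive implementation; requires a CUDA GPU and
    the diffusers, torch and Pillow packages.
    """
    # Heavy imports live inside the function so the file loads without them.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionDepth2ImgPipeline

    pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-depth",
        torch_dtype=torch.float16,
    ).to("cuda")

    init = Image.open(image_path).convert("RGB")
    # The pipeline infers a depth map from `init` and conditions the new
    # image on both the text prompt and that depth information.
    result = pipe(prompt=prompt, image=init, strength=0.7).images[0]
    result.save(out_path)
```

Lower `strength` values preserve more of the original image; higher values give the prompt more influence.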
New aspect ratios for MidJourney v4 & NijiJourney
This week we finally got support for 2:3 and 3:2 aspect ratios in MidJourney and NijiJourney. If you haven't heard of Niji before, it is a fine-tuned anime version of MidJourney made in collaboration with Spellbrush. Their website doesn't offer a lot of information about who they are, but from the linked Twitter profile it seems to be a company called Sizigi Studios, which creates indie games and AI applications like Waifu Labs. Niji is currently in beta and invite-only. I unfortunately don't have an invite, but if you're looking for one, I'm sure folks on Twitter are willing to share.
VectorFusion: Text-To-SVG
If you're a designer or into pixel art, good news. UC Berkeley researchers @ajayj_, @amberxie_ and @pabbeel have built the first Text-To-SVG transformer, which is able to generate scalable vector graphics (which by design are compact and can be scaled to any size while staying sharp). Not only is this useful for creating digital icons, graphics and stickers, but by restricting SVG paths to be squares on a grid, following Pixray's approach, VectorFusion can generate retro video game pixel art. Unfortunately there is no open source code (yet). But one can hope 🤞
Latent Video Diffusion Models
AI generated videos are certainly on their way. This week we saw the arrival of two new papers in the video diffusion department.
LVDM and MagicVideo are both models that can produce photorealistic videos from a single text prompt.
Unstable Diffusion
With all these new updates from the “big” ones, not everyone is happy in diffusion land. Apparently the new Stable Diffusion v2.0 models rely on a dataset that removed a lot of NSFW images and references to famous artists, celebrities and more. MidJourney has long been known for not supporting NSFW shenanigans, and it was to be expected that Stability AI would go down the same path – which has now happened with v2.0. That's the reason a group of artists started their own movement called Unstable Diffusion. Here's a statement from a recent announcement they made:
While the open source release of SDv2 is commendable, we at Unstable Diffusion commit to creating AI systems that respect freedom of expression and unrestricted creation. The limiting rules of companies like Stability AI, OpenAI, and Midjourney prevent these AI systems from becoming useful tools. An artist’s brush is not blocked from drawing anything, nor should the new tools that are becoming integral to the workflow of the next generation of artists. To ensure this, we are launching our Kickstarter on December 9th to help fund the research and development of AI models fine-tuned and trained on extremely large datasets specifically curated to help you more easily create beautiful art that is body and sex positive.
I'd applaud and support a truly unrestricted open source diffusion model that is funded by the people, for the people, but according to this Reddit thread, that's not the case here. So caution is advised.
@ninklefitz is working on @AlpacaML, a next-generation design platform with new tools and workflows for the upcoming age of generative AI models.
@bioinfolucas built a proof of concept RPG game demo in 5 hours using Godot, Krita, Stable Diffusion and MidJourney.
Imagination: Interview & Inspiration
In this week's interview of AI Art Weekly we talk to RedruM, an Italian artist I appreciate not only for his cool name, but also for his inspiring and unique art style – influenced by traditional and modern Italian artists – and his unmistakable use of the color red (guess why?). In fact, the interview with ArtificialBob in issue #5 is what put RedruM on my radar, and I've enjoyed his creations ever since.
[AI Art Weekly] Hey Redrum, what’s your background and how did you get into AI art?
I don't have a formal artistic background, in the sense that I haven't studied in this sector, but since I was a child I have been passionate about art. Because of the influence of my father, who is a fair collector, I have toured Italy between galleries, exhibitions and events such as the Venice Biennale and Arte Fiera Bologna. Thanks to this I developed a predisposition for art which allowed me, once I discovered AI, to express something we can define as unique, or rather recognizable.
My adventure with AI art began almost as a game. Friends told me about it, I created my first piece at the end of June 2022, got hooked, and then started to refine my skills up to this day (and hopefully beyond).
[AI Art Weekly] Do you have a specific project you’re currently working on? What is it?
For the moment I’m focused on producing in a fairly limited way between the marketplaces and the blockchains in which I currently publish. But there will certainly be some new features in the future, for example, I will soon be dropping on NiftyG.
[AI Art Weekly] What does your workflow look like?
At the moment I use mainly MidJourney to produce images, which I’ll then refine with Procreate and at the end post process with Photoshop and Gigapixel.
From the beginning I tried to maintain a style that made me recognizable, and I have spent several months experimenting on various platforms (Wombo, MJ, SD and DALL-E) – in fact, if you look at my work there is a clear “evolution”.
After running various tests and playing around with different words, I found the prompt most suitable for me by studying artists who use the color red better than others.
[AI Art Weekly] Who is your favourite artist?
I have a lot of favourite artists in the digital world and to avoid saying too many names (and maybe forgetting someone), I’ll only tell you those of traditional art (here too, naming them all is complicated).
To start there are certainly Caravaggio and De Chirico.
Then in the domain of contemporary art there are many Italian artists such as Maurizio Cattelan, Mario Schifano, Lucio Fontana, La Transavantgarde, Valerio Adami and Enrico Baj.
Then artists that go definitely beyond the border of Pop Art are Daniel Arsham, Kaws, Murakami and Minjun.
I try to capture and repurpose something from all of the above and more, not necessarily in a literal way, sometimes only at a subconscious level.
[AI Art Weekly] What would be your advice to newcomers to find and create a unique art style?
To find a unique and recognizable style, my advice would be to learn more about engineering prompts and to search for words and artists that can fully express the concept you want to convey.
Even if this means spending months refining your style – long-term recognizability always pays off and will be the criterion that distinguishes you from the massive influx of artists that AI art has produced and continues to produce.
If you’re into NFTs: Don’t mint all the “cool” things you find with every prompt, but study and develop your own vision first, and then follow it.
Each week we share a style that produces some cool results when used in your prompts. This week's style is
surprised anime <subject> reading the news and it works especially well with NijiJourney.
Creation: Tools & Tutorials
These are some of the most interesting resources I’ve come across this week.
As far as I know, SD v2.0 models don't work with existing user interfaces (yet). So if you want to build your own, @1littlecoder put together a short video tutorial on how to do that using @Gradio.
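The core of such an interface can be sketched in a few lines. This is not the tutorial's exact code, just a minimal example of wiring a diffusers pipeline into a Gradio app; the checkpoint id matches the public release, while the generation parameters are assumptions:

```python
def generate(prompt: str, steps: int = 25):
    """Run one text-to-image generation. Requires a CUDA GPU; the model
    is downloaded on first use."""
    # Heavy imports live inside the function so the script parses anywhere.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2",
        torch_dtype=torch.float16,
    ).to("cuda")
    return pipe(prompt, num_inference_steps=int(steps)).images[0]

def build_demo():
    """Wrap the generate function in a simple Gradio UI."""
    import gradio as gr
    return gr.Interface(
        fn=generate,
        inputs=[
            gr.Textbox(label="Prompt"),
            gr.Slider(1, 50, value=25, label="Steps"),
        ],
        outputs=gr.Image(label="Result"),
        title="Stable Diffusion v2.0",
    )

if __name__ == "__main__":
    build_demo().launch()
```

Running the script starts a local web server with a prompt box, a steps slider and an image output.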
Listen up, MTG nerds! HuggingFace user volrath50 (the madlad) created a comprehensive fine-tuned Stable Diffusion model trained on all currently available Magic: The Gathering card art (~35k unique pieces). Might be a cool experiment to create your own unique proxies.
(Probably) the world's first Stable Diffusion v2.0 fine-tune, trained with Dreambooth by @Nitrosocke.
@HaihaoShen and the team behind Intel® Neural Compressor have made it possible to fine-tune Stable Diffusion on a CPU with a single image in around 3 hours. There is a demo on HuggingFace in case you want to try it out.
And that, my fellow dreamers, concludes yet another AI Art Weekly issue. Please consider supporting this newsletter by:
- Sharing it 🙏❤️
- Following me on Twitter: @dreamingtulpa
- Leaving a Review on Product Hunt
- Using one of our affiliate links at https://aiartweekly.com/support
Reply to this email if you have any feedback or ideas for this newsletter.
Thanks for reading and talk to you next week!