THIS WEEK’S ISSUE
Welcome back to Stable Digest!
We’re joined by artist, designer and pixel manipulator makeitrad for a very special spotlight interview as we dive into his very rad world!
Discover how you can make history as part of an unprecedented AI animation competition in collaboration with legendary artist Peter Gabriel!
Plus, we’ve got two new releases fresh from the Stability labs to share with you all: the revolutionary text-to-image model SDXL and the groundbreaking language model StableLM!
Buckle up, it’s a big one!
BY THE COMMUNITY
MODS MADE THIS
Our magnificent mod team has been hard at work on more outstanding artwork.
Take a look at this stunning snapshot of just a few of their latest creations.
Interested in joining our Stable squad and becoming part of the volunteer Discord mod team? We’re always looking for wonderful members of the community to join us!
Please fill in this application form if you’re up for joining the crew!
WINNERS OF THE WEEK
WOW! We’re back for another look at the best of the best, the champions of your choice, our winners of the week!
Alongside the ever-spectacular Picture of the Week event, we’ve just kicked off a whole new series in Challenge of the Weekend!
Let’s start with a look at our prime POW picks, kicking things off with this purrfect piece by Wipeout from our pawesome Catstravaganza, followed by a prime example of the fundamental beauty of nature in Sometimes’ Elemental Flowers.
Moo’ving on to our COW creations, Guizmus took pole position on the fantastic Futuristic Fashion runway with their neon-soaked masterpiece, and TonyS glimpsed a nightmarish being beyond the veil of reality and uncovered an unfathomable world of Cosmic Horrors!
Head on over to the Community Events Centre on the Discord and join in the fun!
MODELS AND EMBEDDINGS
Time for another marvellous model straight from our community Models and Embeddings forum!
This issue, we’re taking a look at a phenomenally photorealistic model created by ResidentChiefNZ - I Can’t Believe It’s Not Photography!
Hyper-realistic photographic or CGI characters and landscapes are all possible with this extraordinarily adaptable model, complete with amazing lighting effects.
Check out this brilliant creation and more on the Models and Embeddings forum!
WITH THE COMMUNITY
STABLE SOCIETY DEEP DIVE - makeitrad
“It’s easy, make it rad”
Welcome! It’s wonderful to have you here with us today; we can’t wait to dive into the incredible world of your artwork!
To start things off, could you tell us a little bit about yourself and your background as an artist and creator?
In college, I took an art class of all things. It was a sculpture class that was more about playing with new materials and reevaluating what sculpture even is. This amazing teacher encouraged me to pursue a career in the arts. He kept pushing for me to go to CalArts. Eventually I put in my application, not expecting to get in, but I did.
After completing their program, I entered the world of motion design, working my way up from a designer to an art director and eventually becoming a creative director, running two offices, one on the West Coast and one on the East. I did that for a decade until that company eventually folded.
After this I took a few other full-time jobs but mostly bounced around, doing work as a contractor and consultant. Somehow, amid all this, I started directing live-action commercials, dealing with the filming and talent. Today, my two worlds reside in being a creative director and shooting live-action. I also went back to my alma mater to start teaching motion design. This was a great way to escape and be creative without day-to-day work influences.
When NFTs started blowing up, I saw an opportunity there. I thought, "I can do all of this. I've got these skills, I can probably rock this too." ETH was a bit crazy at the time and just minting a piece of work cost $200-$300 in gas. The first few NFTs lived on Foundation but didn’t all sell. I wanted to learn 3D software again, so I picked up Cinema4D with Octane and began creating monochrome loops which I minted on Hic Et Nunc, a Tezos-based platform. Once I made the switch, I was able to build this great community around my work and sell out multiple collections, and it sort of just took off from there.
You’d already created amazing 3D collections like “Monochrome”, “Potions” and “Childhood” before you began incorporating AI into your artistry.
How did you first discover the world of AI, what inspired you to explore it and what were your early steps into implementing it into your work?
I actually discovered AI in early 2021, after having already been in the NFT space for a bit. I don't remember the first piece of AI art I saw, but I do remember stumbling upon Jeremy Torman's piece, “Shapeshifter,” which used VQGAN and EbSynth and really blew me away. I reached out to him, he responded immediately and we just hit it off. I was a little nervous to use AI tools at the time; I didn't know anything about them, or the ethics behind using them, but he just kept encouraging me, so off I went. Another person who was very helpful in the beginning was @unltd_dream_co. I remember they showed me an inpainting notebook that took stills and turned them into three-dimensional movies, and it totally blew my mind.
When I opened my first Colab notebook, it was all pretty foreign to me. The first one I used was made by Katherine Crowson (@RiversHaveWings) and it was actually in Spanish, so I had to keep pressing buttons until I got to a point where I could type in a prompt. I ended up with these stills of splashing fish and this fantastic squid, and I was hooked. My biggest issue was that I come from a motion world. I definitely wanted to bring some kind of motion into it; everything I had done up until this point had moved.
I came across this animation notebook by Chigozie Nri (@chigozienri). It was a really simple setup that let me do basic keyframing. I did that and minted one of my first pieces on Hic Et Nunc, and it exploded; no one was really doing animation at that time. For the first month, I felt like I was doing one of these videos every day, and they would sell out immediately. It was right where I wanted to be.
Since those early days of AI, the technology has progressed at lightning speed and the tools we have available to us now are too many to count!
Can you tell us a little about your workflow and the myriad methods and tools you’ve discovered along your journey?
When I started playing with AI it was very different; there was just a small community of people. We spent a lot of time on the PYTTI Discord. It was a legendary group where all the big names of today were, and we would share our work and help each other figure things out. Nobody seemed to care about selling NFTs; it was just about experimenting and pushing the limits of what we could do with these tools.
From the beginning, I wanted to have my own models; that was always my priority. I remember Jeremy (@TormanJeremy) pushing me to use StyleGAN and me not wanting to; you had to have these huge datasets, and it took so long to train, like two weeks. It was intimidating. Then I had this bright idea to make my own dataset. I used JAX Diffusion the first time I did it; I love that notebook. I produced around 5,000 architectural homes and used them to train my GAN model. The results were really good, and I ended up releasing the Artificial Architecture collection of them on Tezos.
After that, everything became StyleGAN-based. I would train my own model for DiscoDiffusion first, using KaliYuga’s guide, then render out a dataset from within Disco and train the dataset with StyleGAN. A good example of this was my Majestic Mycology series, you can see a detailed process here. I usually tried to string a few techniques together to make things look different and avoid being emulated by others.
My process has mostly stayed the same with the new tools. I still use and love to work with StyleGAN; now, I just use StableDiffusion to create datasets instead of DiscoDiffusion. I also still come at the tools as I did initially with that experimental mindset, trying to push the limits and not just use the easiest ones. For example, recently, I spent some time working with StableDiffusion to get V5-quality photorealistic results, which everyone said wasn't possible. It takes a bit more work and patience, but once you figure out the tricks, it works fine, and you leave with a greater sense of accomplishment.
The worlds of AI, NFTs and the wider Web3 landscape go hand in hand. How did you first take the leap into minting your artwork as NFTs, and what freedom and creative opportunity does the space afford you as an artist?
The NFTs were just a way for me to document my work and prove that it was done, and maybe someone would be interested in buying it. It was never about making a bunch of money. I just wanted to be creative and make things.
I had fallen into this kind of management position in my career where I talked to clients and talent, but I wasn't actually making anything. So that was the end goal, to be able to create again. Now I am in a position where it's nonstop every day. I create, create, create. I'm running out of hard drive space.
Your artwork often features natural and organic imagery, such as in your “Majestic Mycology” and “CRYST-AI-LS” collections.
Can you tell us a little about how you find inspiration in nature and the world around you?
In college, I made a poster called "Your Future History," which was my first NFT; it's still available actually, it never sold. It's about escaping technology and embracing nature. That was 25 years ago, but it still speaks to me now.
I've always been into technology, and nowadays, I'm inundated with it. I find myself creating objects and scenes that reflect nature to escape it all. It wasn’t a conscious decision, some part of me realized I needed a break from everything being computer-driven.
Plus, I enjoy being outdoors, away from crowds; the desert is my favorite place. Its beauty is captivating, and I've had amazing experiences exploring caves and rock formations with my kids. Those formations inspire much of my work with crystals.
We see your love of the natural world harmonised with architectural and interior design in the “LaiMPs”, “CHaiRs” and “Indoor Outdoor” collections. What led you to your fascination with architecture, and what draws you to combining these elements in this way?
In college, I saw my first Eames chair when a teacher sent us to look at these furniture showrooms in Hollywood. When you first hear about a chair costing $1200 and you’ve been eating Tuesday taco night at Taco Bell, you think it's kind of pretentious. But I remember thinking, when I first saw it, that if I had that kind of money, I would buy one. That's how my fascination with chairs began. I had to make a chair in a class once before, so I knew how hard it really was, and I was amazed by how the Eames chairs were made. The whole concept from that period in the '60s of making beautiful, functional items for the masses really stuck with me. It has influenced a lot of my choices and style.
You’ve exhibited a lot of your artwork in both the physical and digital worlds, with work from your “Exploring Noise” series even featuring up on the screens in Times Square!
Can you tell us about this project and experience, and some of your latest exhibitions?
Well, when I got into NFTs, I didn't really know what I was doing. I noticed I had some strong collectors, like Chris Trueman (@ctrueman). Chris asked me to collaborate on a piece with him, but, having a day job and really doing this to escape working with others, I was a bit scared to say yes. He told me what he was looking for help with, and it just so happened to be something I was very familiar with, having examined Op Art in popular culture for my thesis. I helped make a small piece of motion for him, and our friendship quickly grew.
Next, he asked if I was interested in making a piece for Times Square. He loved my black-and-white work, especially a piece called "Exploring Noise," and asked if I could create something similar but in color and on a much larger scale. It seemed amazing, and I immediately jumped on the opportunity.
He has since introduced me to tons of people, gotten me into exhibitions all over the country, and helped me build my reputation. We even collaborated on a few pieces together which were exhibited at Winston Wachter Gallery in Seattle, WA.
The scope and scale of your work is extraordinary! From your fascinating architectural explorations to your wonderful animated abstractions, you’re exploring brave new frontiers in the AI art scene. Could you perhaps tell us a little about the projects you’re working on now, and any exciting plans for the future?
I've got two or three more architecture pieces brewing, but they'll be smaller, like 20-piece projects, not the huge ones I've been doing. I didn't realize how much work 144 pieces would be when I started the GMGN Collection.
I’m really interested in trying to incorporate more complex smart contracts into my work. I did one called “Extraordinary Crystal” where I worked with programmer Ethan Jones (@ktrbychain) to create a unique mechanic. We auctioned off one crystal that is wallet-locked but multiplies if you try to sell or send it. It's grown to over 100 crystals now. I loved combining 3D, AI, compositing skills, and a cool smart contract, so that is something I want to focus on in the future.
As time and technology progresses, we’re seeing AI being adopted and implemented in all manner of industries and applications. We’re seeing a creative and technological revolution before our eyes!
How do you see AI being used in the art and design scene, and the wider world as a whole, in the near future?
Well, the future is already here, you know? I work for a creative agency, and they're already using AI to make storyboards and such. I think artists need to embrace it, or they might miss the boat. The biggest challenge is getting everyone on board, because AI can change everything as we know it. We can now create a thousand designs in a day, which was impossible before. But more than that, it's the unexpected results in AI work, those happy accidents that make you think differently. People often want a specific outcome, but it's exciting when you get something totally unexpected. We're just scratching the surface with AI in image creation, music, and language models. And while it's fun to create art with AI, bigger changes are coming that will impact businesses and people’s lives in unimaginable ways.
There was this 60 Minutes piece on AI at Google recently where they talked about how DeepMind could predict protein structures. Determining just one structure could take a researcher their entire Ph.D., but AI predicted all 200 million proteins known to science in a couple of years. The database is public and considered a "gift to humanity." Those are the types of things that I think will really make a huge difference in our lives.
You’ve collaborated with so many amazing artists in your career!
Is there anyone you’d like to shout out today, and maybe even someone in particular you’d love to have the chance to work with in the future?
Jeremy Torman (@TormanJeremy) is always there to bounce ideas off. Somnai (@Somnai_dreams) and @singlezer0, who taught me a bunch in the early days. Huemin (@huemin_art) and Pharma (@pharmapsychotic), who contribute so much to the community and work on Deforum. Zippy (@AlexanderRedde3), who helped me set up my 4090 and has taught me many AI tricks along the way. Of course I have to mention Chris Trueman (@ctrueman) and SUTU (@sutu_eats_flies), who have always been big supporters of mine. There are the giants of AI like @nshepperd1 and @RiversHaveWings, who created tools that changed my artistic outlook. Pak (@muratpak), for bringing attention to some of my early AI work and allowing me to see that the medium can contribute to the message. There are just so many people… All the creators in the PYTTI Discord, and @sportsracer48, who wrote PYTTI and supported the ragtag group of AI artists. It's been an amazing journey with so much more to come.
“It’s easy, make it rad”
FOR THE COMMUNITY
DIFFUSE TOGETHER - WITH PETER GABRIEL
Stability proudly partners with none other than the legendary Peter Gabriel to bring you: Diffuse Together - I/O Edition!
A once-in-a-blue-moon opportunity, this animation challenge invites you to create a short AI-powered animation set to the sound of Peter’s incredible music.
Explore a world of duality, hidden meanings and opposites and harness the transmutational power of Stable Diffusion to craft a unique audiovisual experience.
Three lucky winners will receive cash prizes, API credits, tickets to see Peter perform live on his upcoming I/O tour and even more!
The contest has just kicked off, and you have until Monday 1st May to submit your creation.
On Friday 5th May, make sure to tune in to the Stability Twitch channel for an unforgettable winner’s announcement by Peter Gabriel himself!
Head over to the Stable Foundation Discord server for the full details and join us on this incredible journey!
StableLM
It's happened! StableLM has been released, and we are spinning with excitement! Yes, yes, we know, another language model? But here at Stability we pride ourselves on making foundational AI tech accessible to all. With StableLM, researchers can "look under the hood," developers can build apps without relying on proprietary AI services, and everyday people can have the tools they need to unlock new opportunities!
Even better, we are kicking off our crowd-sourced RLHF program very soon!
Learn more about how this small model packs a big punch. You can find the models in our GitHub repository, where you can freely inspect, use and adapt them for commercial or research purposes.
Want a more in-depth read on the nuts and bolts? Check out our blog!
SDXL
Say hello to the future of image generation!
We were absolutely thrilled to introduce you to SDXL Beta last week! So far we have seen some mind-blowing photorealism and fantastic visuals created with this game-changing technology, and we are looking forward to even more!
SDXL beta boasts next-level photorealism capabilities, enhanced composition and face generation, shorter prompts, and most notably, the ability to create legible text! Plus, it goes beyond text-to-image prompting with image-to-image prompting, inpainting, and outpainting!
You can find SDXL in our premium consumer app, DreamStudio, and popular third-party apps like NightCafe Studio.
Ready to give it a whirl? Try SDXL in DreamStudio, test it for free on Clipdrop, or access SDXL's API on our platform.
For more information, visit our Stable Diffusion page on the Stability AI website.
Let's create something amazing together!