Ryan Murdock Interview

“8 years” by Ryan Murdock

Who are you?

My name is Ryan Murdock; I’m an artist and a machine learning engineer at Adobe. My specialty for the past year or so has been working with CLIP to create text-guided imagery that combines poetry and visual art.

When did you start programming & making art?

I started doing generative art/programming many years ago with a visualizer called MilkDrop and then eventually moved to Processing, probably about 5 years ago, and I started doing neural art around 3 years ago. I began with GANs and then feature visualization as my main specialties, though I’ve worked on many different projects using ML.

How did you first get into machine learning?

A specific video started me on ML: Daniel Shiffman’s Nature of Code series is really awesome, and it prompted me to start making art with NNs.

“Become nothing” by Ryan Murdock

Tell us more about the BigSleep notebook.

It’s kind of ancient history now, but it was a Colab notebook combining BigGAN and CLIP for generating images from text. It introduced a fair number of people to text-to-image as a method and way to make art. It wasn’t my first CLIP-based generative notebook though: that was actually one called DeepDaze, which came out very shortly after CLIP’s release (being, to my knowledge, the first CLIP text-to-image system published). But BigSleep produces much better results haha!

According to your Twitter bio you originated the approach of combining CLIP+VQGAN. I’d love to learn more about the origin story of this.

Yeah, in early March last year I put together a notebook (that eventually was released on my Patreon as LatentVisions) and made some posts about combining VQGAN and CLIP on Twitter. Now there are many open source implementations of VQGAN+CLIP and websites using it that people make art with, and I’m proud to see the approach become so popular.
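For readers curious about the mechanics: at its core, the approach these notebooks share is an optimization loop. A generator’s latent vector is repeatedly nudged so that CLIP scores the decoded image as more similar to the text prompt. The sketch below is a heavily simplified, dependency-free toy of that loop structure, not the actual BigSleep or LatentVisions code: a plain vector stands in for both the VQGAN latent and CLIP’s image embedding, a fixed vector stands in for CLIP’s text embedding, and the gradient of cosine similarity is computed analytically instead of by autograd.

```python
import math
import random

def cosine(a, b):
    """Cosine similarity between two vectors (stand-in for CLIP's score)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def grad_cosine_wrt_a(a, b):
    """Analytic gradient of cosine(a, b) with respect to a."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    dot = sum(x * y for x, y in zip(a, b))
    # d/da_i [ a.b / (|a||b|) ] = b_i/(|a||b|) - (a.b) a_i / (|a|^3 |b|)
    return [y / (na * nb) - dot * x / (na ** 3 * nb) for x, y in zip(a, b)]

def optimize_latent(text_embedding, steps=200, lr=0.5, seed=0):
    """Gradient-ascend a random latent toward the 'prompt' embedding.

    In the real systems, the latent is decoded by VQGAN into an image,
    the image is embedded by CLIP, and autograd backpropagates the
    similarity score through both models. Here everything is collapsed
    into one vector so the loop runs with the standard library alone.
    """
    rng = random.Random(seed)
    z = [rng.gauss(0, 1) for _ in text_embedding]  # random starting latent
    for _ in range(steps):
        g = grad_cosine_wrt_a(z, text_embedding)
        z = [zi + lr * gi for zi, gi in zip(z, g)]  # ascent step
    return z
```

After enough steps, the latent aligns with the target embedding, which is the toy analogue of the image gradually coming to “match” the prompt; swapping in real CLIP and VQGAN models (and an optimizer like Adam) recovers the familiar notebook setup.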

What were your Aha moments that led to DeepDaze, BigSleep, and the eventual combination of VQGAN+CLIP?

I realized that CLIP could do text-to-image a couple of days after its release, and I definitely remember the first time that I typed `a white clock`, and it actually turned into something. It was a really exciting time!

“Crowded Wisdom” by Ryan Murdock

With Text2Image where do you draw the line of the art being made by the human or the computer?

I think it’s something of a collaboration. CLIP can create strikingly beautiful things with minimal input, but for the work to be really exciting to me, the text/prompt usually has to be carefully thought-out or groundbreaking in some way. So while CLIP facilitates a lot of art, I think that the human artist is still an integral part.

What do you have to say to the people who don’t consider Text2Image art?

I don’t think they have anything new or interesting to say that wasn’t already said after the camera came out, honestly. People who want to gatekeep art in this way usually just don’t know that much about art.

Under what conditions can you proudly say “I made this” with a machine learning piece?

I think that people should say “I made this” every time — but they would ideally add: “with xyz system/notebook/tool, etc.”

What are some of your favorite prompts or keywords to use with Text2Image?

I keep many of them secret, but I really like `gradients` and `intricate` as keywords lately.

“Intricate, Weeping Tree” by Ryan Murdock

Do you have any creations or findings you are most fascinated by?

I’m always interested in happenstance — those prompts that result in images you’d never expect. For instance, `Odysseus speaks to the shades in Hades` is one of my favorites, as it almost always puts sunglasses on Odysseus… you can’t anticipate that kind of fun lapse in communication and intention between you and the machine.

Do you ever sell your works?

Yes, I used to sell some work on H=N, and now I occasionally list things on Objkt.

What usecases do you see machine learning being used for in the art world or the real world?

I’m excited to see natural language allow us to direct a variety of artistic and practical ventures. It should just expand to most things as time goes on, I think.

“The Art of Ending Futures” by Ryan Murdock

What do you think are the most recent breakthroughs of the machine learning scene?

CLIP-Guided Diffusion is looking really, really good! LookingGlass (Finetuned RU-DALLE) is also impressive.

Who are some machine learning artists who we should know about?

Tons of really cool people in this space; would be impossible to name them all, but Images_AI retweets most of the people you should be following in the scene!

What do you look for in a piece of Text2Image artwork?

It depends: I think art can and should be a lot of different things. Sometimes I like art that’s beautiful or thoughtful or deliberate. Sometimes I like art that is intentionally very, very ugly.

“The Hyacinth Girl” by Ryan Murdock

What advice would you give to someone who wants to explore the wonderful world of machine learning art?

Have fun! Write up some cool projects. And read Arxiv preprints too.

What are your 2022 and future plans?

In terms of art, I have some projects on the backburner, but I’m also just always trying to push things forward wrt the images I write.

“We walk the same hallways all our lives” by Ryan Murdock

Is there anyone you want to shoutout?

Shoutout to my family, friends, and partner for dealing with my obsessive multimodal addiction 😀

Anything else?

Thanks for having me! I always enjoy writing about this work.
