patten on AI-assisted art, communicating through music, and ‘crate digging in latent space’

patten often looks to technology for his next creative move. The inventive south London producer and audiovisual artist — real name Damien Roach — released his first LP as patten in 2011, and signed to Warp two years later, setting out his experimental electronic stall with the ‘EOLIAN INSTATE’ EP. Since 2013, he has put out several releases via Warp and his own 555-5555 label and moniker. He is also known for crafting unique AV shows that have taken him to the ICA and Tate Modern.

Roach has been working intensively with AI since 2018, applying it to his visual art practice and more recently to his design/creative direction work, as shown in the video for Kelbin’s remix of Daphni’s ‘Cloudy’, or the ‘Cherry’ video last year.

Now he’s taken his relationship with AI a step further on what is arguably his most tech-minded musical project yet. ‘Mirage FM’ — Roach’s first album in three years — is described as “the first longplayer of its kind”: an album made entirely from text-to-audio AI-generated sound sources. The 21-track LP takes a lo-fi tumble into the uncanny valley with bursts of vintage-sounding R&B, hip-hop and a smattering of other semi-perceptible genres. Yet the thought-provoking experiment is like nothing you’ve ever heard before, challenging us to step outside our comfort zone and confront the aural realities of AI.

What did the making of ‘Mirage FM’ teach him about AI-powered music? “It’s maybe too early to draw conclusions on AI in music as a whole, but I’m really interested to see where things go with AI-assisted creation systems, and the impact they might have on the ways we communicate with each other,” he offers.

We caught up with patten to get his take on AI-powered creation and the connection between image and sound.

What inspired you to make ‘Mirage FM’? Can you describe the process of working with Riffusion?

“I heard about this new AI system for making audio from text instructions late in December, over the winter holidays, and basically couldn’t believe it was already possible. Things are moving so fast in the AI world at the moment. It’s unreal. I dove in right away to try it out, and got totally sucked into this process of exploring what it could do. Really quickly, the idea of using the generated sound as a sample source seemed like an interesting path to test out, and then I basically spent the next 36 hours straight just making hours and hours of recordings, with the plan to return to them later to find tiny clips that I could assemble into a collection of tracks.”

How did you go about choosing the text prompts and what was the editing process like? 

“The interface is pretty simple, like, literally a text box for your description and then the thing spits out these bits of new sound based on what you asked for. It’s still incredible to me that it’s even possible to do this. Some prompts were really detailed and other ones were pretty vague, but out of, say, 20 min[ute]s of audio I’d recorded from a single prompt I liked, I only used a few seconds of sound. So then it was all about the edit: scouring the recordings for tiny moments that had something about them.”
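For anyone curious to try that ‘crate digging in latent space’ step themselves, here is a minimal sketch of the editing pass. It is purely illustrative, since Roach hasn’t detailed his tools: it assumes a long WAV file exported from a text-to-audio system such as Riffusion, and uses the pydub library to rank short windows by loudness as a rough first pass. The file name, clip lengths and loudness heuristic are all hypothetical, not patten’s actual workflow.

```python
# Illustrative sketch only: scan a long AI-generated recording for
# short candidate clips. Assumes a WAV exported from a text-to-audio
# tool such as Riffusion; names and numbers here are hypothetical.
from pydub import AudioSegment

recording = AudioSegment.from_wav("riffusion_take_01.wav")

CLIP_MS = 4000  # candidate clip length: four seconds
STEP_MS = 2000  # hop between candidate windows: two seconds

# Rank each window by loudness (dBFS) as a crude "something is
# happening here" signal before auditioning by ear.
candidates = []
for start in range(0, len(recording) - CLIP_MS, STEP_MS):
    candidates.append((recording[start:start + CLIP_MS].dBFS, start))

# Export the five most prominent windows for manual listening.
for rank, (_, start) in enumerate(sorted(candidates, reverse=True)[:5]):
    recording[start:start + CLIP_MS].export(
        f"candidate_{rank}_{start}ms.wav", format="wav"
    )
```

Loudness is only a crude proxy, of course; the real work, as Roach describes it, is listening for the tiny moments that have something about them.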

What are some of the main issues and challenges of working with text-to-audio AI? 

“I think the same as any creative tool — ideas. Having ideas is always the main thing. Something to communicate. Without that, no tool will help. If any tool, like these ones, somehow allows more people to communicate things to each other, then to me, that’s a good thing for sure.”

What was your primary objective with the album?

“Exploration. It was all led by genuine intrigue and asking the question: ‘what if?’. Once I saw what might be possible with tools like these, I wanted to explore it just from a personal perspective, and also to kinda shout about it a bit. Like, ‘Hey, this is wild, look at what you can do!’. I mean, I heard about this new tool around December 22nd, and I was announcing the album three weeks later in mid-January. I worked like crazy going deep on the recordings in those weeks, when I guess people were generally taking a holiday, but still, it’s nearly six months now since this has been possible, and I’m genuinely surprised we’ve not seen more records like this yet.”

What has the response been like?

“I’ve been really touched by the response so far. I think tech talk and magazine stuff aside, it’s mainly the messages I’ve had about how it has resonated emotionally with other people that have been the core litmus test. Having something you’ve made connect with other people like that is never not an incredible thing. It’s the endless quest as an artist.”

How do you think AI can benefit creators? To what extent do you think the fears associated with AI’s impact on the creative industries are warranted?

“Access. For a long time, the idea of ‘creators’ being a ‘special’ kind of person has shaped how we think about and value things like visual art and music. Characteristics we call ‘talent’ or ‘skill’ are things that take a lot of time and support to develop. Not everyone has access to that time and support, but it doesn’t mean they have less to say. It doesn’t mean that the things they might communicate have no value. No way. So I think that more people being able to express things in ways that were once only accessible to a lucky few might have a big effect on how we even think about what these forms are, and how they fit into how we live. Music and visual art have been such a massive part of all of our cultures around the world from the beginning of human time. It’s a relatively new concept: the idea that there’s a separation between everyday life and making these sorts of things. We talk in pictures again with emojis, memes, Instagram posts, just like hieroglyphs or cave paintings. It’s all just communication, right? People reaching inside and externalising it. The more people get to do that, maybe the better we’ll learn to understand each other, and this weird ‘existence’ thing we’re all experiencing right now.”

Where would you like to see AI-generated and assisted music/production go in the next five years?

“Into the hands, ears, minds, and hearts of as many people as possible.”

What are your thoughts on blockchain technologies? Do you think they have the potential to revolutionise the music industry?

“The most interesting thing about this is maybe the idea of the permanent digital archive. It’s a shame that the conversation around it has been so focused on commerce and turning archival units into financial assets. [It] seems like a missed opportunity so far in so many industries not to have looked at it outside the lens of just capital exchange, but it’s early days.

“I’ve made a few on-chain things, the most recent being this project ‘SEED’ last year, which heavily incorporated AI as well. It’s based on a machine learning model I trained in 2020 on hundreds of images I collected of 17th-century Dutch Realist flower paintings. I made this system that could create an infinite number of new, unique, mutated digital still lifes stemming from that model. There were lots of levels to the project, with a scent, bespoke skateboards, an AV installation at KOKO in Camden, one-off tees, ambient music, and classical music compositions I wrote — all woven in there. I was sort of translating data from one form to another. It was quite systemic and super organic at the same time. Organic Systems.”

Where would you like to take your music next? 

“Right now I’m working on the ‘Mirage FM’ live AV system. It’s quite different from anything I’ve done for a long time on the live front. For a while it’s been about a synergy between sound, video projections, LEDs, and lasers that I programmed and controlled live all at once. I’m really into sort of carving out a total environment with the live shows. It’s all really sculptural in every dimension with a system like that. I’ll return to it at some point, but for this record I wanted to take a different kind of path. It’s very connected to the whole ‘crate digging in latent space’ idea of the album. I’m putting together a system so I can be ultra-flexible at every show and find different things on the fly inside of the music, and the videos I’m making for the whole album. The connection between image and sound is embedded in it, and from show to show, or even in one show, I can go super abstract and heady, stay close to the album, or dive into a 3am dancefloor zoner. No two sets will be the same, and it’ll all be really responsive to where and when it happens. I’m using AI audio analysis live as well, so I can alter some very radical things about the sound in real time. It’s like I’ve made all of this material and can now just treat it like plasticine right there, in front of you. Really psyched to start sharing it out there. I have a few things booked for the summer already, and the first London ‘Mirage FM’ live AV show is at IKLECTIKA Festival on July 8th. Come check it out.”
