
AI, Art & Us

A discussion with the musician, computer scientist, WaveAI CEO, and AI pioneer Maya Ackerman, author of the new book "Creative Machines: AI, Art & Us"

Has there ever been a more polarizing new technology than artificial intelligence? When Ned Ludd smashed the stocking frames in 1779, bequeathing his immortal name to the popular lexicon, mechanical knitting machines had been in use for almost 200 years. Thus the original Luddite was responding to something mechanical that he neither liked nor understood, but not to something novel. Nuclear fission, meanwhile, gave us the mega-deadly atomic bomb, but also an incredible new means of generating energy.

Like the stocking frame, AI drives ordinary humans to fits of rage. And like nuclear fission, AI has the power to kill us all, as AI researchers Eliezer Yudkowsky and Nate Soares explore in their recent book If Anyone Builds It, Everyone Dies; as The Thinking Machine author Stephen Witt notes in this week’s New York Times piece with the not-at-all-alarmist title “The A.I. Prompt That Could End the World”; and as literally every writer of science fiction who ever tackled the subject has concluded, may Wintermute correct me if I’m wrong. But its potential for helping humanity is great enough to justify the dangers. Indeed, it’s already doing so.

Like almost everyone else in the creative writing tribe, I’ve long been wary of generative AI. It brings out my inner Ned Ludd. Not because I fear creative machines rendering me obsolete—it would take a supercomputer of Brobdingnagian capability to replace Yours Truly, and good luck with that, Sam Altman—but because I worry that, in the age of ChatGPT, young people will lose the ability to compose elegant prose, creative writing will atrophy, and literature will be reduced to a series of artless prompts, dank memes, and eggplant emojis.

It doesn’t help that the AI industry is led by creepy weirdos like Altman and Elon Musk, whom I wouldn’t trust to drive my kid to the movies, let alone self-regulate a cutting-edge, and potentially world-destroying, new technology. When I see a pop-up for Apple Intelligence, Gemini, Grok, or some other AI “assistant,” I can’t hit the X button fast enough.

So when I was given the opportunity to speak to the CEO, computer scientist, musician, and generative AI pioneer Maya Ackerman about her new book, Creative Machines: AI, Art & Us, my first instinct was to scoff. No way, San Jose! But then I realized that I was being hasty and close-minded. Burying my head in the silicon sand was not going to make Sam Altman magically go away; not even a Scarlett Johansson lawsuit could do that. I might as well debrief with an expert, to better understand the enemy.

I’m so glad that I did. First of all, Ackerman’s book is terrific: part memoir, part history of artificial intelligence, part philosophical discussion about the nature of creativity, and part warning. It made me see AI—both its capabilities to help humans and its potential for great societal harm—in a whole new light.

And, contrary to what I expected, Ackerman is hardly an AI evangelist. She’s more concerned than I am about the hijacking of the industry by nefarious forces. I left the conversation feeling better informed about AI, and grateful to know that there’s at least one person in Silicon Valley who values creativity more than profit.

In the spirit of the topic, I’m going to let AI summarize our discussion—and, in so doing, begrudgingly admit that the computer did a pretty good job with the synopsis:

In this conversation, Maya Ackerman discusses her journey from musician to AI expert, exploring the intersection of creativity and artificial intelligence. She addresses the misconceptions surrounding AI, the nature of creativity, and the ethical implications of AI technology. The discussion delves into the potential of AI to enhance human creativity while also acknowledging the risks and biases inherent in AI systems. Ackerman emphasizes the importance of using AI as a tool for elevation rather than replacement, advocating for a future where AI and humans collaborate to create a more equitable and creative world.

Here are some highlights from the convo:


On AI Skepticism

GREG OLEAR (GO)

When I found out about your book, I was looking at it like, “Art and AI? Come on now.” I was skeptical because I’m a novelist—or I was, once upon a time. And my feelings about generative AI are always like caveman stuff, I guess, like, “No, they’re coming to take me away! I’m gonna be replaced by the computers!” and all of that.

But then I thought, “Maybe I’m just being reflexively like the old man shouting for the kids to get off my lawn,” because I can’t sit here and talk to you for half an hour denouncing AI, and then, right after this is over, I’m going to get a really good transcript of our conversation that’s completely generated by AI. So it’s not like AI is inherently evil, or good, or anything. It’s like any technology. It is completely reliant on who’s using it and for what.

But I think part of it is just that the two letters “AI” together conjure up something in me. And I think I’m speaking for a lot of people. Let’s start with something really, really basic. What is AI, in a very, very rudimentary way? What is a creative machine? Why are you not afraid of it? And why should I not be?

MAYA ACKERMAN (MA)

I’m not exactly an AI optimist, if I’m completely honest. I love AI the way that parents love their children. I love it a lot. I got into it before it was popular. I got into it when it was me and my friends and colleagues building these creative machines and imagining a world where we get to choose a path for how these creative machines engage with the world.

And I know—I know not in a kind of abstract imaginative way; I know for a fact—the incredible stuff that these machines can do for humans. And some of it is being played out, and some of it is not yet. But I also see what happens when this incredibly powerful, beautiful technology gets into the wrong hands. And I watch that and experience that as well.

So I’m kind of of two minds about it. I’m not here to convince the audience that everything is peachy. In fact, I want people to have slightly more open eyes about it, about what’s broken, and to see that a little bit more accurately—but also to more accurately see the opportunities.

So AI—sorry, I just want to quickly actually answer your question—is having machines that have some form of real intelligence, which is not necessarily like our own.


Art vs. Commerce

GO

You talk about something that you call the Trojan Horse of Creativity, which is basically—you had some experience with this, because you founded WaveAI a while ago now. So you’re really a pioneer in this, and I’m sure you just watched like, “My God, this is really exploding,” in ways that you probably didn’t quite imagine.

But now the money people are gathering around, and they want to make more money. And I saw you on some other podcast talking about this: what about the idea that AI can make movies and write books and write music without humans at all—the idea that it can replace humans. And you were saying, “Well, that’s not what we want. That’s what the people with the money want. So that’s what gets developed.”

I look at the world now and I just see the income inequality going haywire. And the people that have the most money tend to be involved in technology, and they are almost comically ill-suited to be arbiters of humanity’s future. I don’t know if he’s a friend of yours, but Sam Altman looks to me like he stepped out of Central Casting [for evil genius billionaire], and we’re all doomed. Like, I don’t trust that guy at all. And Musk, we’ve seen what he’s all about, unfortunately. So are we wrong to be afraid of this? And what can we do to stop it?

MA

We’re extremely correct to be afraid of this. And it was really from being part of WaveAI that I appreciated how correct the general public sentiment is. We released our products—which were always designed to help people—but we would still get this backlash of people’s fears. And at first I just felt like it was so misplaced. But then the more I listened to people’s fears, and the more I looked at what’s going on in the industry, especially since late 2022, the more I understood. It’s like, now the people feel what’s happening.

So I think the most difficult formative experience in my adult life was when, in late 2022, Gen AI became hot, and investors went from telling me, effectively, “Generative AI is never going to happen, stop talking nonsense,” to my being able to get meetings with everybody. And I have so much respect for the venture ecosystem. I mean, they’ve accomplished so much. There’s so much that works about it.

But on sort of the moral, ethical front, what I was confronted with was this request that I take our technology and build it in a direction to replace musicians. And I was never going to do that, right? But of course they easily found other people who were willing to do that.

And so that’s the world we look at, where investors—not all of them, just kind of the more powerful ones—tend to push in this direction. They see this opportunity, they see these brilliant machines, and they want to apply them in a very specific way.

So it’s an uphill battle. It’s an uphill battle.


Compensation & Regulation

GO

In the book, you write this: “The truth is that if every creator were properly compensated, the economics of large scale generative AI would collapse. The reason these models work is because they’re trained on massive data sets, tens of millions of examples, often more. If each piece of content costs even a hundred dollars, hardly a great value for a piece of art or music, the bill would reach hundreds of millions, possibly billions, before even accounting for the actual model, training, staffing, and operations. Only the richest corporations or countries could afford to play.”

And I’m like, “Yeah, and?”

You know, I don’t know. I really honestly don’t know how I feel about it. Because it is— it’s like, ethically, they’re taking the writing, but it’s not like they’re using just mine to do something. Don’t I want—I was joking before a little bit [about being glad to be included in the AI author data set], but I did feel that way.

But don’t they want to have as broad a range of inputs as possible to be able, somehow, if it’s even possible to do so, to replicate all of humanity? You can’t just have Shakespeare in there, say, because nobody talks like that anymore. You know what I mean? It has to be…I don’t know. Ethically, I don’t know how I feel about it. I’m kind of puzzled by it. You know what I’m getting at?

MA

Let’s see if we can figure this out together on the call. The reason that I wrote this passage is in response to this massive focus on “Don’t use my data.” That’s the reason I wrote it, right? “Don’t use my data,” or, “If you use it, compensate me well,” right?

GO

Mm-hmm. Right.

MA

And what’s happening in reality is that the only people who are financially benefiting from this line of thought are the rights holders: rights aggregators for music, for art, for books. And most of the time the creators don’t make a penny. Okay. So my point there is that even though perhaps there is validity in that argument, we need to be careful of how much energy we put in that direction.

GO

Yes.

MA

And it comes, perhaps, at the expense of focusing on other things. There are limits to how successful we can be in this direction. Even if every creator makes three bucks, which would be a massive accomplishment, I really don’t think that’s going to make it. You can’t even go out for lunch with that. I mean, it doesn’t make any difference in some sense. OK, so what does matter?

GO

Yeah, yeah.

MA

What you said—and I’m so glad to hear that because it doesn’t come across often enough in these conversations—is “Okay, use my data, it’s gonna be part of this massive data set. I don’t mind contributing to humanity’s knowledge base in this sense. I don’t mind contributing to creating this awesome collective consciousness brain.” Which is great, which is really, really good to hear.

The only case that I feel needs to be carved out is you probably don’t want ChatGPT or any other system to explicitly allow people to imitate you specifically. So that needs to be carved out. And luckily, luckily, we are starting to see motion in this direction.

I actually feel exactly the same way. Use all my stuff to become smarter in an abstract general sense; don’t imitate me without having a very, very explicit contract with me, and also don’t force me into that situation if I don’t want it, right? So in that sense, we need to educate the public on what it is that we should even be arguing for, instead of just saying, “Don’t use my data,” as sort of our only lever.

GO

That’s a really good point, I think. You see: we did it! We worked it out.

No, but it’s a good point. There’s more to it than that. Like, I was talking about Sam Altman before, and I’ve been out on him since the whole Scarlett Johansson thing, when he was like, “I want to use your voice,” and she was like, “No.” And then he kind of did it anyway, in this weird workaround way. That’s like creepy with a capital C.

MA

Hahaha!

GO

But now, just this week, he’s launched a new app called Sora, which, I don’t know. It seems to me, based on what I’ve read about it, to be basically a deepfake app. You know, there’s literally a movie, Mountainhead, about how this is a terrible idea and what’s going to happen, which I guess Sam didn’t see.

But going back to the idea: you were saying we don’t want it to imitate us specifically without compensation. That’s probably the right line to draw.

How can that be policed, and how will it be policed? Because right now, we have a geriatric Congress, half of whom don’t even know how to use a smartphone, I think, let alone understand what this is and what it means. You have, you know, a president who is clearly just going to do whatever Musk and Thiel tell him to do. And then the moderators of this world are, you know, again, these tech bro guys who seem like they would rather have all of us die so they can go to Mars, which doesn’t seem like maybe the best policing system. So, like, if you were in charge of it, how would you suggest that we police this?

MA

It can feel very hopeless, for very good reason, right? The general population, despite the fact that we technically have a democracy in some sense, is not that powerful, which is scary. But I think that there is a lot of power in consumers as a whole. We, as a whole, have some kind of collective power. And right now our collective power is being directed by the rights holders.

The rights holders kind of took our collective power and said, “The way that you use it to make AI good is to make sure your data is treated properly.” That sort of manipulated us into this one path. And what we need is to take [back] our collective power and, with it, demand things that actually are more valuable for us than making sure that these rights holders stay wealthy. . . .

And then there is some precedent in Europe now, demanding that our likeness, our voice, our unique creative voice be protected—which maybe won’t happen with this administration. There are certain limitations that we have at the moment, but ultimately I think it’s a very reasonable thing. And we’re starting to see a precedent for it.

And the other thing that I think we should be demanding very, very loudly is saying what we want these companies to build and what we don’t want them to build…Don’t build stuff that replaces artists! We have the right to say words like this. Build stuff that elevates us! There is nothing that prevents us from having this narrative, and there is nothing that prevents those companies, other than greed—misdirected greed, by the way—from following that kind of thing.

Empowering people to see things more clearly and make demands that are more aligned with human wellness is a sort of power that we could be leveraging more.



Follow Maya on LinkedIn.

Visit her website.

Buy the book:


Every piece at PREVAIL is free to read and always will be. No paywalls, ever. Your generous support keeps it that way. Thank you!
