The Fourth Industrial Revolution Will Not Be Televised, But It Will Be AI Generated
A discussion with Tom Kemp, the Silicon Valley-based entrepreneur, seed investor, policy advisor, and author of the book "Containing Big Tech."
Ready or not, the Fourth Industrial Revolution is upon us. Coined by Klaus Schwab of the World Economic Forum, the term refers to the confluence of advanced technology like machine-to-machine communication, the Internet of Things, gene editing, cutting-edge robotics, and, above all, artificial intelligence, or AI.
We are at the advent of an era that Karel Čapek, the Czech sci-fi writer who coined the term robot in his play R.U.R., warned us about a hundred years ago. Fortunately, we still have a long way to go before the replicants become indistinguishable from biological human beings, start to feel resentful, and band together to wipe us all out.
I kid. Like any new technology, AI is neither inherently good nor inherently bad, as we shall see. It all depends on its applications, and on who is doing the applying. Already, AI has done wonderful things. It makes driving safer, medical scans more accurate, and hacking harder. It helped Peter Jackson restore the audio in the Beatles docuseries Get Back and Chunk animate the intro to The Five 8. It produced, at the click of a button, a description of this week’s podcast episode, and a very good transcript.
But like all big tech, AI needs to be contained. There must be more constraints, regulations, reins, guardrails. In the wrong hands—scammers, hackers, data brokers, sex traffickers, producers of CSAM, Republicans—AI can be a virtual nightmare. Needless to say, the Elon Musks and Sam Altmans of the world, who seem to have missed the point of Blade Runner and resisted the urge to binge Battlestar Galactica, would rather be left to their own devices, in every sense of the word.
Tom Kemp, my guest on today’s PREVAIL podcast, is that rare Silicon Valley figure—a brilliant and successful tech guy who isn’t a ruthless libertarian, who understands the dangers of the Fourth Industrial Revolution’s dog-eat-dog Wild West, and who has invested considerable resources in crafting regulations on the new technology for the collective good. His book, Containing Big Tech: How to Protect our Civil Rights, Economy, and Democracy, is essential reading to understand the powerful forces at work here. I am grateful to him for coming on the show and hipping me to the latest on AI and other tech-related issues in the headlines.
Here are three takeaways from our discussion:
1/ Data brokers are the scum of the earth.
These are the faceless, shadowy outfits who gather data of every available kind, package it just so, and sell it to literally anyone with the means to pay: hacker, scammer, MAGA pollster, you name it. I don’t mind being offered products in my various social media feeds that I might actually be interested in; that’s a far cry from ads for toilet bowls appearing on every site I visit for three months because I went online once to shop for toilet bowls. But as with all tech, this must be reined in.
“Data brokers are companies that you don’t have a direct relationship with,” Kemp explains. “You don’t even know who they are. And they buy and sell your data to anyone with a credit card.”
He continues:
Now, you may actually know—if you Google your name or put your address in, you’ll see all of these ‘people search’ sites that will list you, and then for $5 you can buy a report on any individual, et cetera. Those are data brokers—but there are other data brokers that do marketing data. They sell healthcare information. And there’s been a lot of controversy in the past that they’ve sold lists of pregnant women, people with medical conditions. They even sell lists of people that they have inferred have cancer—and they were found in Connecticut selling it to mortuaries…with the expectation that these people are probably going to die…
So they don’t have too many moral compasses in terms of who they sell the data to. Now, unfortunately, what’s happened is that there’s been breach after breach of these data brokers. And the last one was this entity called National Public Data, NPD. And it was just this fly-by-night organization that had 2.9 billion records, but it basically had, for the most part, everyone in the US’s Social Security number, and hackers got it.
Data brokers can determine who is likely to be a neo-Nazi, who is interested in conspiracy theories, who is homophobic or xenophobic, and so on—useful information to have had in 2016, if you happened to be running Trump’s social media operation.
Scammers do horrible things with your data. One popular scam, Kemp tells me, is that scammers “send an email to you, because they have your email address, and they will attach a photograph of your house, because they correlate your house with Google Maps. And they’ll basically say, ‘I know where you live, you’ve done something bad, send me this stuff.’ And of course, you have to send it via Bitcoin, right?
“So it kind of goes back to the whole non-regulation aspect of things as well. So data brokers are a major attack vector that hackers use to get after you. And so it’s all about this vast amount of data being collected and sold about us.”
Kemp was instrumental in the passage of the California Delete Act, which will make it easy for us to opt out of this mass data collection. This is a major accomplishment, and I hope other states and the federal government follow suit.
2/ AI can be used for good…
In the spirit of this piece, I decided to use AI to render the main image. After clicking “agree” on more legal forms than I should have, I made my way to an AI imaging site. I typed in, “Man with glasses at typewriter.” Of the four images that popped up, one looked a bit like both of my grandfathers (see above), if not me. But it also looked a little like Mike Flynn. Is something like this actually valuable, in a way that would sustain a multi-billion-dollar company? Because OpenAI is worth a boatload of money.
“It is,” Kemp says. “Well, things like that, you know, may not have as much value, but the value of artificial intelligence to analyze all login attempts that are happening and detecting fraudulent login attempts—that has value. Taking every CAT scan and MRI and finding disease or a weird growth that the average technician would miss in reviewing—that has value. Loading up every image of every stop sign, traffic sign, et cetera, and then having that in your car so that you don’t accidentally run over a bike—that has a lot of value.
“So there is a lot of good in terms of automating decision-making that’s happening with artificial intelligence.”
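Kemp’s login example is worth making concrete. Here is a minimal, purely illustrative sketch of the kind of automated decision-making he is describing: score each login attempt against a user’s recent history and flag the outliers for human review. Everything in it (the features, the weights, the example data) is hypothetical, not anything from the episode or from Kemp’s own work.

```python
# A toy illustration of the automated decision-making Kemp describes:
# score a login attempt against the user's recent history and flag outliers.
# Features, weights, and the example data are all made up.
from dataclasses import dataclass
from collections import Counter


@dataclass
class LoginAttempt:
    user: str
    country: str
    hour: int          # hour of day, 0-23
    new_device: bool


def risk_score(attempt: LoginAttempt, history: list[LoginAttempt]) -> float:
    """Return a 0-1 score; higher means more unusual for this user."""
    score = 0.0
    seen_countries = Counter(a.country for a in history)
    usual_hours = [a.hour for a in history]

    # A country this user has never logged in from is the strongest signal here.
    if seen_countries[attempt.country] == 0:
        score += 0.5
    # A login far outside the user's usual hours (circular distance on a 24h clock).
    if usual_hours:
        gap = min(min(abs(attempt.hour - h), 24 - abs(attempt.hour - h)) for h in usual_hours)
        if gap > 6:
            score += 0.3
    # A device the user has never used before.
    if attempt.new_device:
        score += 0.2
    return min(score, 1.0)


history = [LoginAttempt("alice", "US", 9, False), LoginAttempt("alice", "US", 18, False)]
print(risk_score(LoginAttempt("alice", "RO", 2, True), history))  # 1.0 -> flag for review
```

A real system would use a trained model over far richer signals, but the point stands: “is this login normal for this user?” is exactly the kind of judgment AI can make at a scale no human security team could match.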
3/ …or ill.
Even before we get to the stage where the replicants and cylons band together and wipe out humankind—with climate change, they may not need to bother, but that’s another story—there are ominous aspects of AI, as Kemp explains:
The negative is twofold. The first negative, or maybe the challenge I should say, is that AI is fundamentally about automating work. Over the last four decades—and this has given rise to Trumpism—there’s been a significant amount of automation, with all the value of the automation and the increase in productivity going to the top. And what AI is going to do is accelerate that significantly.
Much like there were always politicians talking about factory jobs being shipped overseas, you’re going to have a situation [that affects] more of the middle class, the white-collar people, even lawyers—like maybe 30 or 40 percent of what they do can actually be automated. Like, analyzing a contract: before, you paid a lawyer $750 an hour, and they reviewed a contract for three hours; [now] maybe they only have to spend 20 minutes, and then the rest of the analysis will be done via AI.
So AI is going to cause significant displacement in the economy.
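Kemp’s lawyer figures pencil out starkly. A quick back-of-the-envelope check, using his numbers and nothing more:

```python
# Kemp's rough figures: a $750/hour lawyer who used to spend 3 hours on a
# contract review now spends about 20 minutes, with AI doing the first pass.
rate = 750                        # dollars per billable hour (Kemp's figure)
before = rate * 3                 # 3 hours of review: $2,250
after = rate * 20 / 60            # 20 minutes of review: $250
print(before, after, 1 - after / before)  # 2250 250.0 ~0.89, roughly 90% less billed time
```

Roughly ninety percent of the billable time in that one task evaporates, and the question becomes who captures the difference.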
The increase in income inequality is not good for democracy. Historically, it’s not particularly good for the one percenters, either. But there are more specific, exigent negatives, as Kemp explains:
And then of course there are the more blocking-and-tackling problems with AI: deep fakes that are fooling people and facilitating misinformation. There’s revenge porn and shit that’s happening in our schools, with kids generating naked images and graphic images of their classmates, 12-year-olds doing this to each other. So there’s just a lot of stuff.
And obviously there’s the whole protection of ourselves that people are kind of absconding via AI with our faces, our voices, our images, and then monetizing that. And so now you’re seeing bills in California to protect actors, even deceased people, because people are just gonna steal that as well. And then of course, the use of gen AI to facilitate cybercrime. So there’s just a whole host of issues…
To his credit and for our benefit, Kemp has continued to work with policy makers to put legal guardrails on this stuff. Congress needs to do more. Regulation will be key to maximizing the virtues of AI while mitigating the risks.
The ability of AI to simulate reality is of a piece with the MAGA technique of denying reality. We might be in the Fourth Industrial Revolution, but our brains haven’t evolved that quickly. Mentally, we’re no different from the ancient Greeks. We aren’t built to operate in a system where we have to doubt and question every single thing we encounter. And the bad guys know this. They want to break down reality. It’s how they stay in power.
“We’re now in a situation that we don’t know if things are real versus synthetic as it relates to content,” Kemp says. “At the same time, we’re dealing with this massive amount of automation that could facilitate further inequality. And then you’ve got fascists that take advantage of that.”
It’s a reality even Karel Čapek could not have foreseen.
LISTEN TO THE PODCAST
Tom Kemp is a Silicon Valley-based entrepreneur, seed investor, policy advisor and author of the award-winning bestseller Containing Big Tech: How to Protect our Civil Rights, Economy, and Democracy. Tom was the founder and CEO of Centrify, a leading cybersecurity cloud provider. As an angel investor, he’s made seed investments in over fifteen tech start-ups. He has also served as a volunteer technology policy advisor for political campaigns, legislators, and civil society groups. His advocacy work includes leading the campaign marketing efforts in 2020 to pass the California Privacy Rights Act, and advising and contributing to the passage of state privacy laws in 2023, such as the California Delete Act and Texas’ data broker registry law. In 2024, Tom collaborated with advocacy groups and legislative leaders on the California AI Transparency Act.
In this conversation with Greg Olear, Tom Kemp discusses the implications of Elon Musk’s actions in the political landscape, the significance of the Delete Act for data privacy, the complexities of AI technology, and the current state of antitrust actions against big tech companies. The dialogue explores the intersection of technology, politics, and civil rights, emphasizing the need for regulatory frameworks to protect democracy and individual freedoms.
Follow Tom:
https://x.com/TomKemp00
Visit his website:
https://www.tomkemp.ai/
Buy his book:
https://www.tomkemp.ai/containing-big-tech
Subscribe to The Five 8:
https://www.youtube.com/channel/UC0BRnRwe7yDZXIaF-QZfvhA
Check out ROUGH BEAST, Greg’s new book:
https://www.amazon.com/dp/B0D47CMX17
ROUGH BEAST is now available as an audiobook:
https://www.audible.com/pd/Rough-Beast-Audiobook/B0D8K41S3T