You’ve heard a lot about Artificial Intelligence (AI) on EconTalk of late: the good, the bad, and the frightening. In this episode, host Russ Roberts welcomes one of AI’s most strident acolytes, venture capitalist and entrepreneur Marc Andreessen, to talk about his vision of how AI will save the world. Andreessen maintains that AI will make everything better, if we let it. “Fluid intelligence,” which he describes as the ability to think and reason, has long been the exclusive domain of humanity. Andreessen sees AI as an augmentation of human intelligence, not a replacement.
Roberts is skeptical. Sure, he says, AI can respond to commands, but can it learn what I love?
Well, you know what all of us here at EconTalk HQ love, right? We love to hear from you. So take a moment and share your thoughts in response to any of the prompts herein. Let’s keep the conversation going.
1- How do Andreessen and Roberts each define the nature of intelligence? What’s the difference between fluid and general intelligence, according to Andreessen, and how does this apply to what AI can do for humans? To what extent do you think AI can or will develop the potential to “think like humans”?
2- What are you using AI like ChatGPT for, and why? How do Roberts and Andreessen see AI becoming practically useful, and what might you add? How likely are you to allow AI to become your “ultimate thought-partner,” as Andreessen describes it?
3- Roberts asks Andreessen why he believes AI is not going to run amok, despite the problems of anthropomorphizing and millenarianism. How does Andreessen answer? Why does he characterize the extreme AI skeptics as a kind of apocalyptic cult, and to what extent do you think this characterization is fair? What is the real danger of this “cult,” according to Andreessen? Again, is he fair?
4- Roberts pointedly asks Andreessen whether AI is good for us (at least in the short run). Specifically, he asks, “Do you believe that any technology that is not explicitly destructive–and by that I mean, say, a nuclear bomb or a virus–that any toy of which our lives are full of now as 2023 residents, that they’re all good?”
How does Andreessen answer this question with respect to AI, and to what extent does he convince you?
Equally interesting, Andreessen argues that nuclear power and nuclear weapons have both been net positives. How does he explain this, and again, to what extent do you agree?
To what extent should the precautionary principle govern how we confront new technology?
5- What do you think the biggest policy issues with respect to AI will be? Andreessen rightly insists that the race is on, and not every country will agree that AI poses an existential risk. How will the approaches of different countries (the two discuss Israel and China, for example) differ? How worried should we be that “a new Cold War dynamic” may emerge with respect to China?

BONUS QUESTION: Roberts says, “…someone, I hope, will put all of the EconTalk transcripts into ChatGPT and let me interview Adam Smith.” Challenge is open; reward available. (Yep, I used OpenAI to generate that image.)
READER COMMENTS
Mike C
Sep 3 2023 at 4:58pm
I think the biggest policy issue will occur in liberal democracies: whether those governments should take control of the development of AI by nationalizing the companies that design and build the electronics required to train AI algorithms, i.e., the leading graphics processing unit (GPU) chip companies and the leading semiconductor fab companies. Nationalization violates the principles of property rights and rule of law that are fundamental to liberal democracies, and if that Rubicon is crossed, then other fundamental principles of liberal democracies will be at risk as well.
Amy Willis
Sep 5 2023 at 10:40am
Hey Mike, I agree with your take re liberal democracies. (The Marxist AI Andreessen describes as likely coming out of China…yikes.) But don’t you think nationalization is a BIG jump from regulation? Wouldn’t states likely start there?
Mike C
Sep 6 2023 at 6:33am
Hi Amy, yes, liberal democracies will definitely start with regulating AI, but they will find that the pace of rulemaking is much too slow compared to the rate of advancement in AI. The only way to slow AI advancement will be to control the hardware that drives it, much like the only way to control nuclear proliferation is to control the supply and enrichment of fissionable materials. GPUs are the equivalent of uranium enrichment plants.