If you’ve been paying any attention to EconTalk over the last few months, you know that Artificial Intelligence (AI) is very much on host Russ Roberts’ mind. This episode may end up being the most frightening of them all, as Russ welcomes Eliezer Yudkowsky, a Research Fellow at the Machine Intelligence Research Institute, and an AI, er…skeptic. Skeptic of course does not begin to encapsulate Yudkowsky’s position; Yudkowsky argues that AI will kill everyone on earth. Is his position too extreme, or are you just as concerned?
Let’s hear what you have to say. And feel free to reference any of our Artificial Intelligence episodes as well! As always, we love to hear from you.
1- Both Roberts and Yudkowsky agree that the current level of AI is not dangerous. Roberts recalls an analogy from this 2014 episode with Nick Bostrom, “In general, you wouldn’t want to invite something smarter than you into the campfire.”
Why does he bring this up, and how does it help explain the distance between Roberts and Yudkowsky’s viewpoints? Perhaps put another way, to what extent can algorithms have goals, as Yudkowsky suggests?
2- Can AI really become smarter than us? Roberts challenges his guest, “What does it mean to be smarter than I am? That’s actually somewhat complicated, at least it seems to me; i.e., does ‘it’ really know things are out there, or is this just an illusion it presents?”
What does Yudkowsky mean when he talks about an invisible mind that AI might come to possess?
3- To what extent is AI analogous to the market process, particularly with regard to unintended consequences?
Yudkowsky challenges Roberts, “Put yourself in the shoes of the AI, like an economist putting themselves into the shoes of something that’s about to have a tax imposed on it. What do you do if you’re around humans who can potentially unplug you?” Roberts counters with another question, “How does it [AI] get outside the box? How did it end up wanting to do that, and how did it succeed?” How would you respond to these questions? Whose answer did you find more convincing, Roberts’ or Yudkowsky’s?

4- Several times Yudkowsky mentions man’s moon landing, asserting that one would not be able to explain that achievement from an evolutionary perspective. To what extent do you agree? Roberts again challenges his guest, asking whether this viewpoint requires a belief that the human mind is no different from a computer. Again, whose answer were you more convinced by?
5- Roberts recalls his conversation with neuroscientist Erik Hoel, in which the threat of mutually assured destruction had been able to “regulate” nuclear proliferation. Is there any way we can restrain AI that’s similarly meaningful? Can it be done without government intervention or the threat of lethal force? To what extent do you agree with Yudkowsky that if the government comes in and wrecks the whole thing, that’s better than the thing that was otherwise going to happen?
READER COMMENTS
Jason Stone
Jun 5 2023 at 9:03pm
I think the problem with Yudkowsky’s argument is that in machine learning there has to be a large dataset of whatever it is supposed to learn from. So now an AI can use language in any way that has ever been recorded and reproduce similar things, or even the best of all things ever said by some measure. But it can’t learn to do anything that does not have a dataset. Or at least, it can’t learn to do anything that doesn’t have a goal it can see true progress toward.
The analogies to human history don’t quite work. Humans are billions of years of experiments with the only goal of making more humans or just generally more life. In short, humans developed tribes and then monuments to advertise the tribe for more members. Going to the moon is both a monument and a hedge against using up the earth with growth. I.e., making a more efficient tribe to produce more people. AI can process data very fast, but it can’t do physical experiments any faster than we can. AI can’t do billions of experiments on developing its own goal to achieve some other goal, like killing people to make paperclips, without data to say whether it’s on the right track or not.
However, I think if AI is to kill us all, it will do it with interactions like Facebook, Twitter, YouTube, and news sites. They (We?) have come up with stories to inspire the invasions of Ukraine and Iraq, and to storm capitals after elections and downtowns after other events (which leverage our tribalism). AI could feasibly run billions of experiments on interactions to develop that to its extremes. Nobody is going to agree not to go down that path, because someone else is going to do it first. Though I wonder whether that would actually lead to our extinction anyway.
Amy Willis
Jun 7 2023 at 5:10pm
Hey Jason,
I love the way you characterize the moon landing example. I’ll go ahead and say I find yours much more plausible than Yudkowsky’s. He does seem to assume that evolution is strictly genetic/biological. Hayek and others would have plenty to say about that…
I also think your last point is plausible, i.e., that AI won’t directly kill us, but might induce us to kill each other. (Please correct me if I’m misinterpreting you!) But that said, I’m not sure Yudkowsky would disagree… What do you think? Now that I look back, he didn’t really describe HOW AI would kill us, did he? Just that it’s sort of inevitable.