If you’ve been paying any attention to EconTalk over the last few months, you know that Artificial Intelligence (AI) is very much on host Russ Roberts’ mind. This episode may end up being the most frightening of them all, as Russ welcomes Eliezer Yudkowsky, a Research Fellow at the Machine Intelligence Research Institute and an AI, er…skeptic. Skeptic, of course, does not begin to encapsulate Yudkowsky’s position: he argues that AI will kill everyone on earth. Is his position too extreme, or are you just as concerned?

Let’s hear what you have to say. And feel free to reference any of our Artificial Intelligence episodes as well! As always, we love to hear from you.


1- Both Roberts and Yudkowsky agree that the current level of AI is not dangerous. Roberts recalls an analogy from this 2014 episode with Nick Bostrom, “In general, you wouldn’t want to invite something smarter than you into the campfire.”

Why does Roberts bring this up, and how does it help explain the distance between his and Yudkowsky’s viewpoints? Put another way, to what extent can algorithms have goals, as Yudkowsky suggests?


2- Can AI really become smarter than us? Roberts challenges his guest, “What does it mean to be smarter than I am? That’s actually somewhat complicated, at least it seems to me; i.e., does ‘it’ really know things are out there, or is this just an illusion it presents?”

What does Yudkowsky mean when he talks about an invisible mind that AI might come to possess?


3- To what extent is AI analogous to the market process, particularly with regard to unintended consequences?

Yudkowsky challenges Roberts, “Put yourself in the shoes of the AI, like an economist putting themselves into the shoes of something that’s about to have a tax imposed on it. What do you do if you’re around humans who can potentially unplug you?” Roberts counters with another question, “How does it [AI] get outside the box? How did it end up wanting to do that, and how did it succeed?” How would you respond to these questions? Whose answer convinced you more, Roberts’ or Yudkowsky’s?


4- Several times Yudkowsky mentions man’s moon landing, asserting that one would not be able to explain that achievement from an evolutionary perspective. To what extent do you agree? Roberts again challenges his guest, asking whether this viewpoint requires a belief that the human mind is no different from a computer. Again, whose answer were you more convinced by?


5- Roberts recalls his conversation with neuroscientist Erik Hoel, in which they discussed how the threat of mutually assured destruction has been able to “regulate” nuclear proliferation. Is there a similarly meaningful way to restrain AI? Can it be done without government involvement or the threat of lethal force? To what extent do you agree with Yudkowsky that if the government comes in and wrecks the whole thing, that’s better than what was otherwise going to happen?