EconTalk host Russ Roberts goes back to the future again in this episode with engineer David Mindell of MIT. Their conversation covers exploring the Titanic, Mars, and the moon; air travel; the future of driverless cars; and more.

Their discussion had me tossing the notion of autonomy around in my brain endlessly. What do we really mean by autonomy, and how is it (ever) achieved?


1. What does Mindell mean when he says that the notion of full autonomy is a myth? To what extent is autonomy a useful notion when discussing robots and machines? Can the “human machine” achieve full autonomy?

2. If you were the CEO of Uber, would you be striving for driverless cars? Why or why not?

3. Mindell asserts, “You are much more likely to get killed by a poorly-designed robot than by an evil-thinking robot.” What does he mean, and to what extent do you agree?

4. What do you think of Mindell’s (career) advice to study computer science and the social sciences at the same time?

5. Is full autonomy a straw man? As some of you have argued in the comments, isn’t it enough just to be better than humans on their own? How would Mindell respond to this point?