You're Not the Boss of Me (Are You?)

EconTalk Extra
by Amy Willis

EconTalk host Russ Roberts goes back to the future again in this week's episode with engineer David Mindell of MIT. Their conversation covers exploring the Titanic, Mars, and the moon, air travel and the future of driverless cars, and more.

Their discussion had me tossing the notion of autonomy around in my brain endlessly. What do we really mean by autonomy, and how is it (ever) achieved? We wonder what this conversation made you think about. So please, share your thoughts with us. We love to hear from you.


1. What does Mindell mean when he says that the notion of full autonomy is a myth? To what extent is autonomy a useful notion when discussing robots and machines? Can the "human machine" achieve full autonomy?

2. If you were the CEO of Uber, would you be striving for driverless cars? Why or why not?

3. Mindell asserts, "You are much more likely to get killed by a poorly-designed robot than by an evil-thinking robot." What does he mean, and to what extent do you agree?

4. What do you think of Mindell's (career) advice to study computer science and the social sciences at the same time?

5. Is full autonomy a straw man? As some of you have argued in the comments, isn't it enough just to be better than humans on their own? How would Mindell respond to this point?

Comments and Sharing

TWITTER: Follow Russ Roberts @EconTalker

COMMENTS (5 to date)
Fredrik writes:

Listening to the podcast, I think Mr. Mindell's position on automation is a bit premature. I work with machine learning.

There are many automation tasks with varying levels of difficulty and different levels of risk. It is difficulty, risk, and the state of the art that determine whether we can remove the human from the loop or not. Even 20 years ago, if I had needed to beat somebody at chess and had the choice, I would never have wanted to interfere with a chess computer deciding the moves for me. Today, I have a robotic lawn mower that cuts my lawn weekly, 300 kilometers away from where I live. It does a better job cutting grass than I do, I tend to it only once a month, and it failed only twice the whole year, each time notifying me with a text message. That is not a human in the loop; that is social interaction.

A robot does not have to be 100% correct. It only needs to avoid catastrophic outcomes and be better on average than humans. I worked on a robot that picked parts out of a box to put them on an assembly line. The pick-up algorithm failed 20% of the time, but the robot still emptied the box in about the same time as a human, who failed on less than 1% of picks. And the robot didn't call in sick, take coffee breaks, or ask for a higher wage.
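The arithmetic behind "fails 20% of the time but matches human throughput" can be made explicit: each failed pick costs a retry, so the expected number of attempts per part is 1/(1 − failure rate). A back-of-the-envelope sketch, where the cycle times are assumptions for illustration and only the failure rates come from the example above:

```python
# Back-of-the-envelope pick-and-place throughput comparison.
# Cycle times (4 s robot, 5 s human) are assumed for illustration;
# only the failure rates (20% robot, 1% human) come from the example.

def expected_time_per_part(cycle_time_s: float, failure_rate: float) -> float:
    """Expected seconds per successfully placed part.

    A failed pick costs a full cycle and must be retried, so the
    expected number of attempts per part is 1 / (1 - failure_rate).
    """
    attempts = 1.0 / (1.0 - failure_rate)
    return cycle_time_s * attempts

robot = expected_time_per_part(cycle_time_s=4.0, failure_rate=0.20)  # 5.00 s/part
human = expected_time_per_part(cycle_time_s=5.0, failure_rate=0.01)  # ~5.05 s/part

print(f"robot: {robot:.2f} s/part, human: {human:.2f} s/part")
```

With these assumed cycle times, a robot that fails one pick in five still keeps pace with a nearly error-free human, simply by attempting picks faster.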

There are two unsolved problems in robotics today: compliance and robustness. First, when a human collides with a person or object, it complies, i.e., it cancels or retracts motion in the direction of the collision; robots generally do not. In many situations this can be a matter of life and death, both for the robot and for the victim. Second, robustness is about understanding your own (the robot's) limitations and uncertainty about the world, and not acting outside that scope of limited certainty. Even if progress in these domains has been slow, there has been progress. Breakthroughs in these two domains would greatly enhance the applicability of autonomous systems.
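The compliance idea — cancel motion in the direction of a collision while keeping motion along it — can be sketched at the velocity level. This is a minimal illustration, not a real controller (which would work in joint space with full dynamics and force sensing); the velocity and contact-normal inputs are assumptions:

```python
# Minimal sketch of velocity-level compliance. Assumes we are given the
# commanded velocity and a unit contact normal (e.g., from a force/torque
# sensor); real controllers are far more involved. Illustrates only the
# idea of "cancel or retract motion in the direction of the collision".

def comply(velocity, contact_normal):
    """Remove the velocity component that pushes into the contact.

    velocity       -- commanded (vx, vy, vz)
    contact_normal -- unit vector pointing from the obstacle toward the robot
    """
    # Velocity component along the normal (negative = moving into contact).
    v_n = sum(v * n for v, n in zip(velocity, contact_normal))
    if v_n >= 0:
        return tuple(velocity)  # already moving away from the contact
    # Subtract the into-contact component; tangential motion is preserved.
    return tuple(v - v_n * n for v, n in zip(velocity, contact_normal))

# Sliding diagonally into a wall whose normal points back along -x:
# the push into the wall is cancelled, the sideways slide survives.
print(comply((1.0, 0.5, 0.0), (-1.0, 0.0, 0.0)))  # (0.0, 0.5, 0.0)
```

The projection keeps the robot from fighting the obstacle while still letting it slide along the surface, which is roughly how a human arm yields on contact.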

I also think it is a mistake to extrapolate linearly from hindsight into the future. It is likely that we will see many black swans in technology in the coming years. All of a sudden, a combination of existing technologies or a breakthrough in some algorithm may change everything. One possible such breakthrough is extensive 3D mapping. When a robot has an accurate 3D map to refer to, its cognitive load becomes much smaller, because the map has already handled the static part of the scene (e.g., you are on a road, there is a house 7.2 m to the left, an oak 2.1 m to the right, and a sign saying "exit" ahead). All that is left for perception are the things that have changed or moved.

Can robots become superhuman? Of course they can; they already are at some limited tasks. Robots can also reflect on their own mistakes and learn from them; they are just not as good at it as humans yet. Whether they will become an image of human intelligence is an entirely different question.

Malicious intent is a huge security problem for large-scale automation that was not raised in the podcast, and it lends credence to Mindell's skepticism.

To sum up: humans will be removed from more and more control loops. It is premature to say at what level this will converge, how fast, and what tasks will be left for humans to supervise. What happens will also depend on historical, ethical, legal, and cultural factors, not the potential of the technology alone. If Mindell's point is that we are not there with full automation today, he is correct. As in economics, I think you need to be agnostic about the future, even within a ten-year horizon.

Matthew writes:

As someone who works in the healthcare information technology field, I found the analogies and insights in this episode interesting and thought-provoking. I am not a computer science major, nor do I write the code for the solutions myself, but I have held product management roles in the past, and I currently work closely with hospital organizations to implement and optimize technology designed to predict events such as imminent sepsis: alerts are triggered and sent to the clinicians caring for the patient, coupled with clinical decision support systems that offer guidance and recommendations on the appropriate actions to take. On the development side, it is easy to let hubris set in and believe that the system is a wonderful piece of art that should continually push the boundaries of completely removing human intervention. Mindell serves up a nice reminder that the systems themselves are built by humans and often have those human biases coded in.

Mindell reiterated on a number of occasions that the best systems are those that work within the human framework, automating what makes sense but still accounting for the elements that can only be managed by a human in complex settings. At one point, Mindell noted that "the humans in these settings are not idiots..."; too often the product designer's natural thought is that everything can be solved for, and human error eliminated, if there is just one more tweak to the system to remove the human factors. This episode of EconTalk served as a great reminder that the best systems are designed to let those closest to the action use the systems for all they can do, while still allowing the inevitable human intervention that complex events will require. Healthcare is full of the "9-foot snowbanks in Boston" that Mindell referred to, so it was helpful to hear this perspective.

Lastly, as someone who has loved the idea of summoning a car with a mobile application and then reading the newspaper during the commute to work, I found the implications of what Mindell discusses here somewhat disheartening. That said, his framing of automation as a way to participate more fully in the world around you was a nice way to reorient my thinking. Instead of reading a newspaper and observing neither what the automobile is doing nor the environment around me, why not use the technology to embrace the trip? The car could report on local events that align with my interests, suggest places to go (with tickets and availability), recommend restaurants that fit my tastes and music programs in the area, offer historical facts about the places I am driving through, and alert me in real time to dangers in the vicinity, or divert the vehicle rather than head into an accident or traffic jam. In other words, perhaps the more exciting aspect of the automated car of the future will be that it lets us immerse ourselves more fully in the world around us and interact with it, rather than putting ourselves into a self-absorbed cocoon.

David Mindell writes:

Mr. Fredrik's lawnmower example supports my argument beautifully. It is autonomous for a period of time, permeated by regular human intervention on a 30-day cycle. Truthfully, I'm a little hesitant to get into this conversation, because I've written a book on the topic that addresses all of these issues, and debating on the basis of a podcast doesn't reach that depth. But nothing in the book is a "linear extrapolation of the past," nor does it make predictions. Rather, the empirical observation is this: 30-40 deeply researched examples of existing robotic applications in the real world show how human interventions are added into systems (especially life-critical ones) to mediate the autonomy when they reach real applications. People can argue "the future will be different, now we have [insert favorite new technology here]," and that may well be the case. But that is an argument from faith and from a certain notion of progress, one we know is shaped by imaginations from the 20th century. The book offers a great weight of empirical evidence about how people work with robotic and autonomous systems in real environments, and when we talk about technology, empirical evidence ought to carry at least as much weight as faith. The discussion should be about whether that evidence portrays a fundamental phenomenon or an ephemeral one; I find it strong enough to suggest it's fundamental, but other interpretations are possible.

Amy Willis writes:

@Matthew (all of you, really), have you seen this? Anyone know more about it?

Maybe this fits your notion of "automated" better? Or perhaps this is the *real* future of driverless cars? Would love to hear your thoughts...

Comments for this podcast episode have been closed