David Mindell on Our Robots, Ourselves
Nov 30 2015

Are we on the verge of driverless cars and other forms of autonomous robots and artificial intelligence? David Mindell of MIT and the author of Our Robots, Ourselves talks with EconTalk host Russ Roberts about the robotic revolution. Mindell argues that much of the optimism for autonomous robots ignores decades of experience with semi-autonomous robots in deep-sea operation, space, air, and the military. In all of these areas, the role of human supervision remains at a high level with little full autonomy. Mindell traces some of the history of the human interaction with robots and artificial intelligence and speculates on what the future might hold.



Eric Chevlen
Nov 30 2015 at 10:52am

Another great interview, Russ. Thanks. Please note, however, that the plural of “octopus” is not “octopi.” It is “octopodes,” and it is pronounced in four syllables.

David McGrogan
Nov 30 2015 at 12:01pm

I was struck by the overlap between this episode and Munger’s unicorn. There’s a lot of magical thinking about technology. Maybe people should be invited to think, rather than of technology per se, of technology they know, designed by people and companies they know of – just as one should think not of regulation per se but of regulation created by politicians one knows of.

Jim Ellison
Nov 30 2015 at 10:17pm

I didn’t hear this mentioned.
The Google cars have indeed logged over 10^6 miles, but they have been in a number of accidents, none of which was their “fault.” The number of collisions did seem high to me. Recently, a Google car was pulled over for going 24 mph in a 35 mph zone. It seems that they are too cautious. They follow the laws to the letter and thus act in ways unexpected by human drivers, who then run into the Google car.

Nov 30 2015 at 10:57pm

Excellent interview as always. I find it fascinating and a bit frightening that the best minds today are so afraid of AI. It seems that they feel that super intelligent AI could pose a bigger threat to humanity than global warming or the threat of nuclear war.

Butler Reynolds
Dec 1 2015 at 11:27am

When they automate warehouses, they do not build machines to mimic how the humans do the job. That would be too complicated.

If we do have fully automated cars, I do not think that they will drive around like humans in the current conditions and environment that humans do.

There will be changes both inside and outside of the automobile.

Dec 1 2015 at 11:41am

The trouble with engineers is they are too close to the details. Glass half empty. No engineer would have said “We choose to go to the Moon in this decade …” in 1962.

Often thinking about problems in traditional ways gets you a long way, but not the final mile. Sometimes foundations need to be rebuilt.

As someone with low vision, I really would love to see a driverless car in my lifetime. I hope these will not just be a collection of mechanisms to make the life of a sighted driver easier, but can be legally operated by people who cannot currently get driver’s licenses.

Pete Miller
Dec 1 2015 at 12:33pm

Haven’t even finished listening to the episode and felt called to make this comment. Just listened to this bit:

As long as that driverless car works perfectly under all conditions, everywhere, all the time. Right?

Actually, wrong! Perfection is a wondrous goal, but the engineering standard should be to be materially safer and more efficient than human piloted automobiles. That is a lower bar than perfection and will almost certainly be achieved to great social benefit in the not too distant future. Even if humans remain better drivers under certain extreme situations requiring rapid judgement, so much of the congestion and collision costs that happen now happen under perfectly ordinary conditions which make up the bulk of driving miles and hours.

Ajax4Hire RedPluto
Dec 1 2015 at 1:00pm

Humans used to travel on autonomous vehicles:
horses and horse-drawn carriages.

Anyone who has ridden a horse can attest to its “mind-of-its-own” autonomy. Point the horse, horse follows the trail, horse sees the log, steps over it.

Not new so much as different new.

Ajax4Hire RedPluto
Dec 1 2015 at 1:04pm

David Mindell is mistaken in thinking that a fully autonomous vehicle must be perfect.

It just needs to be better than humans, most humans. Then, people will insist on legislation to require autonomous cars on all public roads.

Just as the Federal government pushes regulation down to the local level (seatbelts, speed limits, emissions controls), the requirement will be couched in terms of saving lives, “for the children,” etc.

Expensive automobiles will have driver assist, then self-driving ability. Then most cars will have a self-driving feature.

Companies will compete on who has the best autonomy hardware/software.

Insurance premiums will rise for human control, forcing move to fully autonomous.

Like the parable of the two men walking in the woods:
They come across a bear, and one immediately switches into his running shoes. The other says, “You can’t outrun the bear!” The running-shoe guy says, “I just need to outrun you.”

Autonomous vehicle just needs to be better than most humans.

Todd Kreider
Dec 1 2015 at 2:03pm

David beat me to it. My undergrad background is in physics, and I think for the most part we think differently than our engineering brothers and sisters. We have imagination going for us; they have pragmatism going for them.

Roberts made some funny snide comments about differentiating squirrels from toddlers crossing the street. But just after I rolled my eyes at that narrow view — computers won’t get better — he asked the critical question: So what about 2025?

It reminds me of a brief conversation I had last winter with Douglas Hofstadter, author of Gödel, Escher, Bach, who gave a good talk on translation. After almost everyone had left, I told him that ever since 1996, when I saw what the internet could do, I had expected statistical machine translation to wipe out translators by about now.

Hofstadter laughed at this despite huge advances in MT over the past decade. I asked, “What do you think machine translation will look like in 2025?” He looked surprised and said, “That is an excellent question…”

2025 isn’t that far away.

(Oh, the guest gave a very simplistic summary of why the Air France plane went down. This podcast didn’t start off on a good note because of that. His claim that about 10% of flights go exactly as planned is a complete strawman.)

Dec 1 2015 at 3:02pm

Excellent podcast – as usual! Congratulations on 501, look forward to thousands more.

My recollection is that in “The Right Stuff” Tom Wolfe wrote that the Mercury astronauts insisted that some of the flight controls be made less autonomous. They wanted to show that they were not just chimps (who made the first flights).

Dec 1 2015 at 3:02pm


I would recommend the book UNDERSTANDING AIR FRANCE 447 by Bill Palmer for a technical discussion of the aircraft’s (Airbus A330) automation systems, flight envelope protection laws, and human-machine interface.

Dec 1 2015 at 4:47pm

The main idea, “keep humans in control,” makes sense. But in car driving, the human who designs the device is smarter than many operators, which is not the case with the examples discussed. This means, as @Pete Miller points out, that self-driving cars might be an overall improvement.

With regard to AI, indeed, successful people (not necessarily the smartest) overreact. We were able to augment our mechanical powers many times beyond those of our natural bodies, but we did that by applying intelligence to increase physical power. What other than human intelligence would we apply to develop artificial intelligence? Let alone the “machines gaining consciousness” nonsense. Read more: http://nonlin.org/knowledge/

Luis Fretes
Dec 1 2015 at 9:02pm

This particular episode became unbearable because he kept repeating a false assumption. No, they don’t need to be perfect; automation doesn’t even need to be better than humans in every situation.

Nor is it true that we need cars to drive in every possible circumstance in which we currently drive them. We control the environment; we built roads for cars, we didn’t just drive them over unaltered land.

Daniel Barkalow
Dec 1 2015 at 9:31pm

Like many listeners, I rode in a nearly fully autonomous vehicle several times today, including a few times while listening to the podcast. This was the sort of vehicle that normally only takes input as to the destination, although it has an emergency stop button and a button that prevents it from leaving when more passengers are approaching. It does have manual controls, but they can only be used by emergency and maintenance personnel, as it was deemed too dangerous for ordinary citizens to operate the vehicle manually. I know such things exist around MIT, and are used by some people, although when I was there, I usually took the stairs instead.

I think when considering what’s going to happen with cars, it’s useful to consider whether cars are going to be like elevators or airplanes. I think the main reason to think that they will be like elevators is that they can stop pretty safely pretty easily, unlike an airplane (or sub or spacecraft). Another factor is that the reaction time required in a car is much faster than in an airplane, such that the automated system really has to do something in the period before a human can take control.

We’ll probably always have the stop button for situations where you want the car to stop for a reason unrelated to driving (e.g., motion sickness), but that will have the car pull over under its own control, rather than trying to transfer control to a human while in motion. We’ll probably always have manual ambulances and other vehicles where it’s a matter of life and death that the vehicle arrives at the destination promptly. Having widely-deployed self-driving cars will probably take until they can do all the necessary mapping themselves, though, which will probably be a while yet.

Daniel Barkalow
Dec 1 2015 at 11:19pm

Now that I found the reference, I see that most of what I said in my earlier comment is from the 99% Invisible episodes “Johnnycab” and “Children of the Magenta” and the Planet Money episode produced along with them “The Big Red Button”, which cover a lot of the same points as this episode of EconTalk, and would make good additional links.

Dec 2 2015 at 2:24am

Interesting perspective. I was thinking about riding the train after listening to this episode. It rides on fixed tracks, essentially on a predetermined schedule with specific stopping locations. And yet still there is a trained operator who controls the throttle and brake.

I could see a future where cars end up being operated by trained remote pilots who will drive to your house, pick you up and take you where you want to go, without being physically with you in the car. And I think that there will be so much detection and assistance technology in the car that it will be nearly impossible for that pilot to crash the vehicle. These vehicle pilots can work a full day, driving cars around for multiple people.

John T
Dec 2 2015 at 12:59pm

Thanks Russ for 501!

Very practical and down-to-earth guest.

I agree that although the vision is there that autonomous vehicles on average may result in fewer deaths in one-on-one accidents, the system has not been characterized and its real performance can’t yet be determined. So some caution should dampen enthusiasm. I can see the possibility of systemic problems that result in larger numbers of deaths occurring simultaneously per instance, much as airplane crashes cause more people to die in a single event. Overall, however, flying is safer than traveling by car.

If cars become auto-no-mous, who owns the system? Is it inevitable that such a system be taken over by the government because of liability and the infrastructure being supported by fewer vehicle owners, i.e., better utilization? If the Uber vision plays out, the automobile companies won’t have an incentive to support the system because they will sell fewer and fewer cars; and if Uber-like companies own all the cars, do they then own all the liability? Will they own the system (roads, radio road transceivers, etc.), or will the government? Will this make the cost per mile more expensive than now, or will utilization make it cheaper? What will insurance look like? If the government takes over the system, does this make us more free or less?

What will males do if they can’t drive?

Bill Flusek
Dec 2 2015 at 5:34pm

Another great episode.

The discussion of the Air France flight reminded me of a discussion of the differences in approaches to self-driving cars. One approach is to incrementally have the car pick up more functions and the other is to work toward having the car do everything from the start. The latter is the path that Google is taking. But what always worried me was the problem that you had with the Air France flight, where something is going wrong and the vehicle tosses control back to the person when they are unprepared to deal with the circumstances. And as vehicles do more of the work, we are likely to find ourselves more often unprepared for that event.

On the worries about robots and automation, there is a description of the problem that I rather liked some time back. As you try to have a machine do a very complex task autonomously, you can very easily end up with a code set where the designers do not (probably cannot) understand what will happen in every circumstance. By looking at the current state of its world and following the rules given it, the device can end up doing something the builders might not have expected. Engineers are often too close to their problems, and they build things to not go awry in the situations they can foresee.

Dec 2 2015 at 7:30pm

John T,

The proper question is: what will our wives do if they can’t criticize our driving?…

Mort Dubois
Dec 2 2015 at 8:15pm

I don’t see an easy transition to self driving cars, although it would benefit me greatly – my autistic son likes nothing more than to be driven around for hours, with loud music playing. I hate driving. But I do it anyway. Oh for the day when I can shut him in an autonomous vehicle and send him off.

But it probably won’t happen that way, because even though engineers consider driving to be a problem of navigation, as it’s practiced by humans it also has an aspect of negotiation (with other drivers) and conformance (to the rules.) Driving as currently practiced is a complex interaction of those three modes. And, as it happens, humans are brilliant at making split second decisions about how to interact with each other and whether to break the rules or not. Machines follow rules only.

I don’t see how a self driving car manufacturer can program in anything like the human mode of driving. And I also can’t see a day when all humans are, all at once, replaced by rule-following robots. So my prediction is that in 2025, and maybe even in 2055, driving will still be a human activity.

If there’s a place for robots on the road, it’s most likely to be trucks. Nobody wants to be a truck driver; it’s a miserable life. I have several friends who own trucking companies, and it’s well known in the industry that the driver shortage is getting worse and worse. The rest of the economy relies on cheap long-distance freight transfer, so I can imagine a day when trucks are robotic. It will happen first within large distribution centers, where the robot trucks can be deployed on private property. And then eventually they will be out on the road. Nobody will be bothered if trucks drive strictly by the rules.

Greg Maffett
Dec 3 2015 at 5:54pm

If nothing else, this episode made for a more thoughtful drive to work. There is an intersection near my house where crossing is a multistep process depending on the status of the two adjacent traffic lights. This morning I hit the perfect-storm combination. I got through it without creating a traffic jam or an accident. I think it would have been the kind of situation where a programmed car would have made the first type of error and created a jam by doing the ultra-safe thing, but maybe not the optimal thing.

I am an engineer and have written code and I will say my favorite line in the podcast was from the coder in the two seat airplane who responded to Russ’s question “How do we not hit other planes?” by saying “We look around.”

Dec 4 2015 at 12:09am

A couple problems with David’s argument:

His ideas about AI seem to have been developed before the recent rise of machine learning.

AI had a horrible track record until machine learning came along, and David uses this to basically argue “we’ve failed a lot at building fully autonomous things in the past, so it’s super hard and we should stop trying.”

For some reason David keeps asserting that AIs have to be perfect to be adopted. For instance:

“as long as that driverless car works perfectly under all conditions, everywhere all the time”

“technology that has no possibility of failing..”

“that approach is an approach where you have to solve the problem 100% perfectly to do it at all”

This makes no sense. All that is required for us to adopt fully autonomous vehicles is that the AI is better than human drivers.

Dec 5 2015 at 1:02am

Excellent conversation – thanks, Russ! From the perspective of my career developing technology, including autonomous systems, in the aerospace industry, Prof. Mindell is correct. It’s an eventful journey from the controlled conditions of the lab to the chaos and mayhem of the real world. While some elements of a design might not work at all, other features often provide unexpected new capabilities. It’s all about the interface between the human and the machine. And the curiosity and patience of the early users. There’s no way to test all logic paths in a complex code, and the real world seems to tease out paths that no one thought to test. 2025 will see the Google car sharing a lane with the motorcycle, the cement mixer, the school bus, and the 2003 ranch pickup hauling a trailer load of calves. Although I’m not expecting to send the six-year-old grandchild in the robotic car to pick up a quart of milk anytime soon, I’m certainly excited to watch the technology transition from the lab to our daily lives.
We’re in the manic phase, as in the early days of AI. Unrealistic expectations will be shattered, disillusionment will set in, and then eventually a mature understanding of autonomous systems technology will emerge. Thanks for the podcast!

Richard Fenton
Dec 7 2015 at 8:34am

I found this episode very interesting as I am a project manager in the robotics field. I thought the definition of robot was quite narrow and was a bit Hollywood driven. There are some quite subtle applications I’m involved in where the automation of data acquisition is now almost complete and huge amounts of user time saved, allowing users to do more imaginative tasks for customers. I could go on for ages about the specifics of these technologies but I think the constrained time frame of the conversation is more interesting to discuss.

Forever is a long time. Moore’s law has had a radical effect on the quality (and, through its influence on medical research and practice, the quantity) of our lives. When we imagine or talk about what autonomy might bring in the future, it’s worth setting it in the context of how far we’ve come in recent years. When I started my career in 1999, the internet was pretty much nonexistent. Now Google does a lot of my thinking for me. I have spreadsheets that automate calculations for me. I have business management systems that provide information only highly trained people could have acquired a few decades ago. A whole generation of skills has been removed from the workforce and replaced by computers.

I can go to LinkedIn and network with nearly anyone in the world. For the rest of history, this was about the highest level of business activity and thinking: “it’s not what you know, it’s who you know.” Whether they’d want to talk to me is another matter. The point is that computer technology has reduced the transaction costs. A generation ago these advances were not seriously anticipated by the mainstream.

When I look at the complex systems we now work with and build, the human role is really only filling in the gaps where the programming hasn’t yet reached. I think this is the useful part of the guest’s experience. He describes issues accurately, but they need to be set in context: 30 years ago, what we can do today was just in the dreams of a few pioneers. Often the guest states “not any time soon.” Well, that’s true. I’m feeling old at 42 saying I’m amazed at what I’ve seen change in my lifetime, but I can see robotics becoming cheap, ubiquitous, and eventually removing all our human toil, right down to the “butler” in our home. Will that be in a few years? I doubt it. 10 years, possibly. 50 years from now, probably getting close.

The more interesting question for this century will be how the computer revolution underpins the genetic engineering revolution. Without computers, genetic engineering would have been impossible. The question for the future will be how much we choose to alter our human DNA and our inherited characteristics, becoming the first species to redesign itself into something new.

I think in this context AI will be seen to be an interesting help along the way and may have contributions to genetic engineering. We are now in the century where we will start to choose not only what our technological environment is, but what we actually are. It doesn’t sound possible, but it’s already started.

Choices in genetic engineering would make an interesting podcast. Here’s a link to a BBC program if you want more detail.


Russ Roberts
Dec 7 2015 at 2:21pm

Eric Chevlen,

You are right, sort of. Octopuses is also right, evidently. From the Oxford Dictionaries:

The standard plural in English of octopus is octopuses. However, the word octopus comes from Greek, and the Greek plural form octopodes is still occasionally used. The plural form octopi is mistakenly formed according to rules for Latin plurals, and is therefore incorrect.

Meanwhile, here is a modest defense of saying “octopi” along with the perils of pronouncing “octopodes” correctly. Yes, it is four syllables but the accent isn’t where I’d have put it. So I’m sticking with octopi or octopuses.

Dec 8 2015 at 12:04pm

Russ–Great episode overall, but I felt like you somewhat mischaracterized Bostrom’s argument about the dangers of superintelligence. Bostrom doesn’t argue that a superintelligent machine would develop and pursue its own interests, but rather that it might behave destructively in pursuing the objectives it was programmed to achieve (which is much more consistent with Mindell’s ensuing comment about machines’ actions reflecting their creators’ intents).

Bostrom worries that it’s hard to program a machine to know what limits it should adhere to in pursuing its goals. He often uses the example of a machine programmed to create paperclips: as it gains intelligence, it might resort to diverting all of the earth’s resources to paperclip creation. Bostrom detailed lots of reasons, in his book and in his EconTalk episode, why it’s hard to prevent that behavior a priori and why the “just unplug it” solution could easily fail.

Robert Swan
Dec 8 2015 at 4:39pm

I’m with Mindell on this — fully functional, fully autonomous vehicles are a long way off. Google has been solving some tricky technical problems; there are plenty yet to go no doubt. It seems to me that fully automated trains and planes would be much simpler, and even they aren’t there at the moment. But let’s stick to the car and assume all the technical problems have been solved. I think the really big question is coexistence with human driven cars.

To me, the biggest benefit of such a vehicle isn’t that I’m freed from the chore of driving; it’s that it drives better than I could ever dream of driving. It gets me where I’m going, possibly joining close-formation convoys along the way to save fuel and road space, conceivably not even waiting at the automatically choreographed intersections (i.e., no traffic lights at all). Having dropped me off, my private car goes and parks itself; the public car picks up another passenger. It hasn’t just freed me to read a book in a traffic jam, it has eliminated the traffic jam altogether.

I think it’s clear that a human driver* navigating in this world would be terrified, and would badly affect the flow of the smart cars.

So we either separate the autonomous vehicles on their own road system, or we compromise (and likely never see) the greater benefits of the autonomous vehicles. That means an evolution with roads gradually becoming smart vehicle only. It’ll take a while.

In the meantime we are already seeing “autopilot” aids in dumb vehicles. Adaptive cruise control, traction control, collision prevention, lane departure warning, etc., all make the human driver’s job easier. Maybe this just feeds the arms race between the idiot-proofers and the better idiots, but it seems pretty close to all the real-world benefits of Google’s work so far.

(*)Thinking about it, the passenger in the autonomous vehicle might prefer to keep the blind down too.

Dave N
Dec 11 2015 at 8:28am

As usual, several posters have beat me to many of my points. So my extra 2 cents.

How long it will take is an obvious unknown, but it’s clear that at some point autonomous cars will be allowed on the roads. Then we will be faced with mounting public pressure to ban human driving as the death toll (and financial costs) become more and more obviously attributable to the human drivers. And remember, these are tremendously large numbers we’re talking about.

In the US alone, for just the first half of 2015:
Nearly 19,000 killed
2.3 million serious injuries
Over $150 billion in related costs

[In the last year alone, two people have died within a few miles of our house from completely preventable human error accidents. A cyclist was hit from behind and an elderly lady was killed in a head on near a narrow bridge in a 35 mph zone. And let’s not forget the trauma that the other drivers who survived almost unhurt will have to carry for the rest of their lives.]

Additionally there is the loss of productivity from so much time spent driving. Millions and millions of hours per year. Then add in the savings in infrastructure from more efficient traffic flow. And of course the savings from other things I haven’t even thought of yet, plus the unknown unknowns.

Mind boggling numbers and that’s only for the US. Let’s not forget Europe, India and China. Insane. However, it seems to me that many of these benefits are of the ‘public good’ type. Where are the profit opportunities in accidents that don’t happen? And some big industries will go to the wall with the change. Imagine not having to pay $1000 or more per year in insurance. Imagine your car out making money for you when you’re not using it. And imagine every new car coming with a lifetime chauffeur. Pretty sweet.

So not only will the taxi and truck drivers be out of a job, but so will most of the auto repair companies and insurance companies.

Seems like there should be some massive incentivizing to get this to happen sooner rather than later. X prize type or whatever we can think of. I reckon with a ‘man on the moon’ type effort we could make it so my 6 year old never has to drive and maybe even my 10 year old.

I’m tempted to agree that it’s wishful thinking. But then who’d have believed 10 years ago that I could walk into a store and for $20 walk out with a smart phone or a tablet for $200 with all those amazing capabilities. (Two cameras? I mean, seriously?)

TED talk from Google.


[Nick changed to Dave N with commenter’s permission. It looks like some of the numbers come from “U.S. Traffic Deaths, Injuries and Related Costs Up in 2015”, by Stav Ziv, Newsweek, Aug. 17, 2015. –Econlib Ed.]

Dec 11 2015 at 10:09pm

I think the more interesting question is not when cars will be fully autonomous, but when sufficient aspects of driving become autonomous so that a new way of travelling on land is possible.

For example, “convoy” technology alone may make it feasible to hand over vehicle control for the middle 90% of the journey, for a fee, to a “bus driver” (who drives multiple cars virtually hooked together), leaving the first and last mile to drive on one’s own, assisted by autonomous tools.

I see a possible quantum jump in different service models without requiring the quantum jump to fully autonomous vehicles.

Richard Berger
Dec 13 2015 at 9:28am

Fascinating talk, really changed my thinking about driverless vehicles and AI. I was especially intrigued by his reference to “scientism” near the end; rather than just an engineering question, this whole area has a large philosophical dimension.

I bought the book and have started reading it.

Dec 15 2015 at 3:55am

Will Baidu realize its goal, or is it deluded? One thing the Chinese don’t need to worry about much is class-action lawsuits.

Baidu plans to deploy autonomous buses within three years

Tuesday, December 15, 2015

Chinese search giant and online services firm Baidu has announced plans for its new automobile unit to put self-driving buses on Chinese roads within three years and mass-produce them within five years, Reuters reported, citing a spokesman for the company. The new unit also hopes to mass-produce autonomous passenger vehicles within five years as part of a partnership with BMW. The Baidu spokesman declined to give details on potential automaker partners for the bus project or investment amounts for the new unit. Rival Alibaba Group Holdings recently announced plans to launch its first car in collaboration with state-owned SAIC Motor Corp.

[Please give urls when you quote from other sources. This appears to be from http://www.chinaeconomicreview.com/baidu-plans-deploy-autonomous-buses-within-three-years –Econlib Ed.]


Podcast Episode Highlights
0:33Intro. [Recording date: November 16, 2015.] Russ: David Mindell... is the author of Our Robots, Ourselves: Robotics and the Myths of Autonomy.... Now, you open with the tragic story of Air France, Flight 447, and you use the crash and the recovery of the wreckage as symbolic of our interaction with autonomous technology and robots. Why that story and what does it teach us? Guest: Well, the Air France story is a story about a failed handoff, where the automation onboard an airplane found a relatively minor fault and handed control of the plane back to the human pilots, too suddenly and ungracefully surprised them. And they had lost some of their skills flying too much with the automated systems and lost control of the airplane. Which actually was a perfectly good airplane about a minute into the crisis. And so they went from tens of thousands of feet flying through the sky and ended up spiraling into the ocean, tragically losing all aboard. And that's a story about what can happen when automation is in a life-critical system, in an extreme environment, and the relationships between the humans and the machines are not properly engineered to exchange control in a graceful way. Now, interestingly, the wreckage of Air France was then found by another kind of autonomous vehicle, an autonomous underwater vehicle. And that vehicle was able to do things, still under the control of its human supervisors but that were difficult to do under other circumstances. Russ: On the crash: the pilots had thousands of hours of experience. Guest: Yeah. The pilots were experienced pilots. It was late at night. They were probably a little fatigued. They were probably maybe a little distracted. And all of a sudden they got handed this, you know, screaming airliner with lots of different alarms; hard to sort through what was really happening. One pilot pulled back on the stick; one pilot pushed forward on the stick. 
The captain himself was not even in the cockpit at the moment of the crisis. The accident report-- Russ: He got there fairly quickly-- Guest: cited total loss of cognitive control of the situation. Russ: And that was started--there was a set of protocols that were unleashed because of icing in a part of the airplane, correct? Guest: That's right. The engineers who had programmed the system told the computer that it could not fly if there was ice on the pitot tubes. Actually, there are ways an airplane can fly without the data from the pitot tubes--in fact, unmanned aircraft fly that way all the time, or at least they have the ability to fly that way. But the human programmers had said, 'If all the data coming in isn't perfect, then you've got to check out altogether.' And that's basically what the computer did. Russ: Which seems reasonable. Guest: Well, it seems reasonable, although not if you thought carefully about what scenario you were likely to hand the human pilots in a kind of distress situation without too much warning. Russ: So, I'm not an expert on aviation, but what struck me reading the story, which I had not read carefully before, is: Why doesn't this happen more often? Or does it, and people just recover sufficiently in that kind of situation? In other words, why aren't there alarms set off by computers--autonomous computer algorithms that hand off control of the airplane to humans with a lot of uncertainty about what's really going on? Guest: Well, at some level it happens all the time. Humans are very good at adapting to these small little errors. Air France 447, there's no question, was a kind of corner case, an extreme case. By and large, computerized airliners are very, very safe; they certainly have had a role in the tremendous decrease in accidents in commercial airline flight over the decades.
At the same time, the computers are not perfect, and the people are constantly fidgeting around with them, correcting small mistakes, you know, reacting to unanticipated situations. The FAA's (Federal Aviation Administration's) study of cockpit automation estimated that the number of commercial airline flights that go exactly according to plan is 10%. And in the other 90% of cases there's always some change--a change in routing, a change in circumstances--that the human pilots adapt to. Russ: So, I apologize to anyone listening to this episode who has downloaded it and is on a flight. But I guess even though it's only 10%, most of the time--an overwhelming percentage of the time--that handoff to human control goes fine. Guest: That's correct. And one of the things you can say is that an increasing proportion of airline accidents--it's true of automobile accidents, too--come from human error. And that's true, because the mechanical systems are becoming more and more reliable. But we tend to know a lot about accidents. You have to be very careful about studying these problems just by studying accidents. Accidents get a lot of attention. They get a lot of resources. They are studied very carefully, second by second. We know a lot less about normal operations, the sort of everyday things that happen in the 40,000+ commercial airline flights that happen every single day in this country. And in those normal situations, people are constantly preventing accidents. Again--correcting small errors, correcting small failures, responding to changes in situations. And without the people in those loops, you'd probably have many, many more accidents. Russ: Yeah. I recently took a flight where on the outbound leg, the landing was so spectacular that the flight attendant said, 'That's how you land an airplane.' And we all applauded in appreciation. On the return trip, the pilot, it felt like, bounced the plane. As close, as [?]
unpleasant a landing as I've had on a commercial flight. I've obviously not had many unpleasant experiences. But it was clearly--something had not gone correctly. We had no idea what it was. There was silence from the cockpit. I thought there'd be a 'Sorry about that, folks.' But it just--they taxied to the gate, and we dutifully, sheeplike, got off the plane. But something unusual happened there that we were unaware of. Guest: Mmmhmm. Yeah. You just don't know what that is. And you don't know--maybe there was an automatic landing system in one or both of those. Russ: Right. And as you point out--the phrase is 'autopilot'--an automated landing can be done now. And takeoff, of course. And all of it, most of the time, would work fine.
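The pitot-tube design choice Mindell describes--'if all the data coming in isn't perfect, then you've got to check out altogether'--can be sketched as a simple sensor-agreement check. This is a minimal illustration, not the actual Airbus logic; the function names, tolerance, and readings are all hypothetical:

```python
# Hypothetical sketch of the "hand off on imperfect data" design choice
# discussed in the episode: the autopilot disconnects the moment the
# redundant airspeed sensors disagree, rather than degrading gracefully.

def airspeeds_agree(readings, tolerance_knots=20.0):
    """True if all redundant pitot readings fall within a tolerance band."""
    return max(readings) - min(readings) <= tolerance_knots

def autopilot_mode(readings):
    """Return the control mode for the current sensor readings.

    Any disagreement (e.g. an iced-over pitot tube) causes an abrupt
    handoff to the pilots. A more graceful design might instead fall
    back to flying on attitude and power settings.
    """
    if airspeeds_agree(readings):
        return "AUTOPILOT"
    return "MANUAL"  # abrupt handoff -- the failure mode at issue

# Three redundant sensors, one iced over and reading low:
print(autopilot_mode([272.0, 271.5, 140.0]))  # -> MANUAL
print(autopilot_mode([272.0, 271.5, 270.8]))  # -> AUTOPILOT
```

The point of the sketch is only where the threshold logic lives: a programmer chose, years in advance, what "imperfect data" means and what happens next.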
8:00 And what we're going to talk about today, those of you listening, is the role that autonomy plays in the advance of technology and robotics, and we're going to get to driverless cars and a whole bunch of things. But I'd like to hear about your personal experience, which you cover in some detail in the book. Tell the listeners about your own involvement in these extreme environments with semi-autonomous robotics. Guest: Sure. The book starts where my career started, which is in the very deep ocean. And I began as an engineer in the 1980s working with early robotic vehicles in the very deep ocean. And at the time we thought they were going to replace the manned vehicles and move eventually toward fully autonomous vehicles that would just go out and spend months at sea doing their work. And one thing that surprised us right away was that the robotic vehicles weren't cheaper and they weren't safer than the manned vehicles--for a couple of interesting reasons. But what they did do was fundamentally change the nature of the work. Woods Hole, which is where I was working at the time, operates, still operates, a vehicle called Alvin, which is a manned submersible that takes 3 people in a 6-foot sphere several miles down to the sea floor. And when you operate Alvin, you have two observers or scientists, and then one pilot. They go down at 8:00 in the morning; they spend a couple of hours getting to the bottom; spend a few hours exploring the sea floor; and then spend a few hours coming back up. And then they return to the mother ship and tell everybody what they saw.
What we found with a vehicle called Jason--which descended as a robot but had a fiber optic cable connected all the way up to the ship, so everything you saw on the sea floor was relayed up to the ship in real time, into a large, kind of NASA-like [National Aeronautics and Space Administration] control room--was that in that control room you could have 20 or 30 people experiencing the sea floor all together. And these would be scientists from different disciplines and engineers and even people from the media. And the experience of exploration changed quite radically. It was more of a group experience. More of a social experience. And then often you could connect by satellite link to hundreds of people in an auditorium somewhere back in the United States or anywhere in the world. And the whole experience changed rather radically. And those were not necessarily things that we anticipated. Not the traditional kind of automation that replaces the people. What it did do is push the people to a different place, and change the nature of the work that they did. The robots didn't do any science on the ground. They didn't really do any exploration. But what they turned out to be really good at is digitizing the sea floor with incredible precision and resolution. And that allowed the scientists to explore the data from the comfort of a computer workstation, often months removed from the time the vehicle was collecting that data. Russ: Talk about how the scientists felt about that. Because that was a very interesting cultural phenomenon. Guest: Yeah. There was a lot of conflict over it over the course of the 1990s. A lot of scientists felt that remote science wasn't really science, that they really wanted to visit the sea floor. If you didn't actually physically inhabit the place you were studying, it wasn't really the right kind of science.
Which is interesting, because I've dived in Alvin and submarines; and you can see very well out the window, but you don't really feel like you are in the place you are looking at. You are encased in a titanium or steel hull that's protecting you from the elements. So, it's a remote presence of its own kind. But it took a long time for people to accept that using robots to do remote exploration might actually also be exploration. Russ: And that really changed, you suggest, when some of the robots were able to give us a peek at some aspects of underwater events and tell us things we really didn't know much about that were really powerful. Guest: Yeah. There were certainly things that the robots could do that we just couldn't do. I mean, just before I joined this group, they sent a small robot, Jason Junior, down the grand staircase of the Titanic, which was much too dangerous for a human-occupied submersible to go down. They could get very close to hydrothermal vents. They could also stay down for days and days at a time--which vastly expanded the amount of time you get on the sea floor. And so there's a whole set of phenomena that are sort of different. And what we also found was that the progress was not necessarily toward full autonomy, where robots were just going off on their own. You always tried to stay in touch with the robots as much as you could. Over the course of the 1990s and the 2000s, we did in fact get autonomous underwater vehicles that didn't have the cables connected to them. But you still always wanted to talk to them, even if it was only a couple of bits at a time, to keep an eye on them and let them know what to do. And then they would always come home and bring their data back, and you'd download the data, and again explore the sea floor by exploring the data. And those are the kinds of vehicles that found the Air France 447 wreck.
13:07 Russ: I find it fascinating that--I have to say that I like Tom Hanks as an actor, but Castaway is one of my least favorite movies ever. And one of the things I liked least about it is when he names the basketball 'Wilson.' Or whatever he names--it's not a basketball. Something he finds. It's kind of an ad for the sporting goods company. You know, if I wanted to pretend I had a companion, I'd probably name it something other than 'Wilson.' But I find it interesting that these devices that are not sentient have human names--Alvin and Jason. They have acronyms, also, of course, which you'll know by heart and I don't remember from reading the book. But there's a certain--I don't know--affection there. Is there? What's that like? Guest: Well, yeah, that's-- Russ: Do you think of them in an emotional way? Guest: Uh-- Russ: You talk about when ABE (the Autonomous Benthic Explorer) died--it got an obituary in The New York Times. So talk about that. Guest: Yeah, I think, speaking as an engineer who worked on these systems, I never found them sentient at all. And, you know, in fact they were really quite dumb and inert pieces of technology. And it was always a struggle just to get them to do the simple things you wanted them to do and nothing else. And at the same time, there is a tension in the kind of public conversation about these robots. They were named by their inventors, to be sure. It wasn't something that was added by the Press. But there is a kind of break between the way that the people who are most closely connected to them think about them and the way that the stories get told about them. And that's true with the rovers on Mars as well. The people who use them even describe them sometimes as robot geologists, even though they don't actually do any geology at all.
15:02 Russ: So, the book deals with different sets of extreme environments: air, space, water, and war. Let's talk about space for a second because you mentioned the Mars expedition. We have a tremendous--I think most people have a tremendous romance about space travel. But we've had some terrible accidents. People who have died. And so there's a natural impulse toward robotics rather than sending people to Mars. The movie The Martian has come out recently; I haven't seen it but we like the idea, thinking of traveling to other places. But, of course, they are extremely hostile. So, my question is: Do you see that continuing--the use of robotics for space exploration? And: How much autonomy is there on Mars? So, talk about the rover--it's not guided in the way that the submersibles are. So, talk about what it's doing that is somewhat autonomous, to the extent that it's autonomous at all. Guest: Well, so, with the rovers on Mars you have a 20 minute time delay between when the data is transmitted, either to or from Mars, and when it's received. And that translates to a 40-minute--more or less an hour--time delay, when you give a command, before you see the results. And practically speaking, for the Mars exploration rovers, that turned out to be sort of a once a day cycle--sort of, they would upload commands and then get the data back. Even given that, there's still a fairly limited amount of autonomy on the surface. The vehicles are not making much in the way of decisions on their own. They do some basic internal housekeeping--you know, if they lose touch with back home they'll go into certain predictable modes. And certain times the engineers who drive them from the ground will give them some autonomous features to maybe get around an obstacle in the short term. But for the most part, they are still guided from the ground with a fair amount of control. Even when they are autonomous, it's limited in time.
You maybe say, 'Go do this, and think on your own for an hour,' a few hours, or a day or so, but you really want those things always reporting back home and always under the control of their human operators. That's one of the themes of the book: the ways that autonomy can be very useful, but it's always constrained and it's always wrapped up in a human wrapper of sending instructions and receiving feedback or data. Russ: And on the moon--I think you said all but one or every one of the landings they turned off the robotics and did it by hand. Talk about what happened there. Guest: So, on Apollo 11, famously, about 200 feet above the surface, Neil Armstrong reached up and switched off the automatic targeting feature. The computer and the lunar module were perfectly capable of landing in a kind of fully automated, hands-off mode. Armstrong turned off that feature and landed it sort of by hand: he had his hands on the joystick. But it was still a digital, fly-by-wire system: all of his commands were going through software, and the computer was aiding him to a great degree in what he was doing. And then after Apollo 11, all 5 of the following commanders also turned off the automatic targeting at about that point. But they were still heavily dependent on the computer and heavily using these kinds of digital fly-by-wire modes. And what was really interesting about the Apollo story--it's one of the lessons of the book as well--is that it involved a very innovative, very cutting-edge digital computer, one of the earliest uses of digital computers in an embedded sense, the way we use them all over the place today. And that highest level of technology did not mean that the landing was fully automated. Actually, the Russian spacecraft of the time were very highly automated, because they had less sophisticated analog computers.
The more sophisticated digital Apollo computers were actually used to create this very rich way of working, where the astronauts could turn off the targeting but keep the other digital modes. And that led me to a conclusion that, throughout the book, in all these other environments, that the highest form of technology is not full automation or full autonomy. But, it's automation and autonomy that are very, very beautifully, gracefully linked to the human operator--where the human can call for more automation as the situation demands it, call for less automation when the situation may not demand it as much. And the sort of perfect balance between the human control and the automatic control--that's really the thing we ought to be shooting for. Not necessarily kind of closing our eyes and falling asleep while our vehicles drive us around. Russ: I don't remember if you told us in the book, but did Armstrong ask permission? Guest: Um, he did not ask permission. He did not have to ask permission. Nobody was all that surprised that he turned that automatic mode off, actually. It was something they had all anticipated. And he had the command authority to do that. Russ: Because of course he's controlling the module. You could argue that Houston's controlling him. But they can't literally control him. I guess they could. They could in theory have some sort of override built into the system that wouldn't allow him to do it--under certain conditions it couldn't be turned off. But he made the decision to turn it off. Why did he do that? You suggest it wasn't ego. He was uneasy[?]. Guest: Yeah. When I first started writing about this, I thought it really was ego. 
But the more I looked at it, the more I talked to people who have actually done it--landing the lunar module on the moon, landing the space shuttle, even landing current-day airliners which have auto-land--these operators, who are very highly expert, really believe that if they are more in touch with what the machine is doing, they have a better chance of responding to something if there is a failure or some anomaly at the last moment. And that again--being deeply involved in these control loops, still dependent on software, still with all the computer aids, still with all the benefits that algorithms can provide for us--keeping the person involved is something that greatly enhances the reliability and the safety of the system. There are always cases where the engineers who designed the system didn't foresee what might happen. You know, that's what's wonderful about the world--it always surprises us. And the best person to deal with that surprise is not necessarily a programmer working two years before, but the person whose rear end is on the line, who is physically in the environment, who can see what's going on. Russ: And in Armstrong's case, he was worried about the crater, the geography--not the right word--the topography of where he was about to be put. Right? At least that's what he said. Guest: Yeah. He could have actually still used the automatic targeting system to get over the crater. David Scott, who was the commander on Apollo 15, really put it well. He said, 'I came all that way, and I felt like I just needed to be involved for those last moments. It was my rear end on the line.' And again, the computer was beautifully programmed to really help the astronauts, even when they had their hand on the stick, in a variety of ways. The lunar module is physically impossible to fly in a purely manual way. It had 16 thrusters, and no human could command all 16 of those things in exactly the right way. So you had to fly it through the computer.
Russ: You need two octopi to do it. For some reason that reminds me of Guardians of the Galaxy. I'm thinking of like, you know, an octopus with some kind of genetically modified skills. Let's put that to the side.
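The Apollo arrangement Mindell describes--switching off automatic targeting while keeping the digital fly-by-wire aids--amounts to a ladder of automation levels that the operator can move up and down at will. A minimal sketch of that idea, using purely illustrative mode names (not Apollo's actual modes):

```python
# Hypothetical ladder of automation levels, illustrating the episode's
# point that the operator selects a level of automation rather than
# choosing all-or-nothing autonomy. Mode names are invented.

MODES = ["FULLY_AUTOMATIC", "AUTO_TARGETING_OFF", "FLY_BY_WIRE", "DIRECT"]

class ControlStack:
    def __init__(self):
        self.mode = "FULLY_AUTOMATIC"

    def step_down(self):
        """Operator takes more direct control (e.g. switching off
        automatic targeting while keeping computer-aided control)."""
        i = MODES.index(self.mode)
        if i < len(MODES) - 1:
            self.mode = MODES[i + 1]
        return self.mode

    def step_up(self):
        """Operator hands more authority back to the automation."""
        i = MODES.index(self.mode)
        if i > 0:
            self.mode = MODES[i - 1]
        return self.mode

stack = ControlStack()
print(stack.step_down())  # -> AUTO_TARGETING_OFF: targeting off, still computer-aided
```

The design point is that every rung keeps the human in the loop; even the bottom rung is still mediated by software, just as the lunar module could never be flown purely by hand.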
22:55 Russ: There is the issue of overconfidence there, and ego. Especially as technology advances, there is this--I don't know, a human hubris that the pilot can do it better. And sometimes I guess that's not true. I assume it's not true, sometimes. And that could be a problem. Guest: Yeah. I think in the--it's interesting--in the case of the Apollo landings, 6 of 6 attempts succeeded. So, it's hard to argue with that case. Space shuttle-- Russ: Small sample-- Guest: also had an automatic landing feature--small sample, right, but that's what you got: it is 100%. Russ: It's the maximum [?] level-- Guest: The space shuttle, too, had an automatic landing system that our taxpayers paid a lot of money to develop and that was never ever used, although all the space shuttle landings were successful as well. Interestingly enough, if you watch the first Star Wars movie, which came out only 5 years after the end of the Apollo program, the climactic moment in that film is Luke Skywalker flying through the-- Russ: Spoiler coming: if you haven't seen the first Star Wars film, you want to turn this off, because the next episode is coming soon and you might want to watch from the beginning if you missed any. Warning! Warning! But go ahead. Guest: Luke Skywalker flies through the trench of the Death Star and at the last moment he turns off his automatic targeting computer and trusts the Force instead. And you see that in a lot of movies--Space Cowboys, same thing: I think it's Tommy Lee Jones who turns off the computer and lands the space shuttle manually. That became a kind of narrative trope in science fiction after the Apollo landings. Russ: But it appeals to us deeply. I remember that vividly from the movie; in fact, when you mention it I get goose bumps, and I don't believe in the Force. So, I find it interesting how that taps into our desire--I don't know what you want to call it--our romance about our abilities or about things that can't be explained.
Yeah, it's kind of like I'm going to close my eyes and shoot: 'I made the game-winning shot when I closed my eyes because I just relied on my intuition.' There's a part of us to which that's deeply appealing. Guest: Yeah; I'm not really sure it's even-- Russ: Sometimes it's stupid-- Guest: so much about intuition as much as the--you know, any automated system is programmed by people. And those programs embody the people's assumptions about the world, their worldviews, their models of who they think their users are. And to claim that the person who thought the problem through, again, years in advance, from the comfort of a cubicle or testing lab somewhere, had imagined every possible scenario and perfectly pictured every possible thing that can happen, is just a false claim. There are, not always but very often, things that can happen in the moment that were not anticipated, and people are very good at handling those kinds of things. Not least of those is the other people involved in the system. Russ: Yeah. I can't decide whether the fact that you are an engineer makes that claim more persuasive or less. If you were a coder I'd be more impressed. But I'm trying to think, given your own experiences and background, how biased you are or not against it, because obviously these systems-- Guest: I've written a lot of code for [?] system, I'll put it that way. Russ: Yeah, I remember now. These systems are really the synthesis of code and engineering skill, and of course, as you say, they can't anticipate--the best engineer and the best coder, no matter how smart they are, can't anticipate every situation, and particularly the interactions that you might have with other people that the machinery or robot can't handle. That's for sure. Guest: Exactly. I think--if you want to talk about a fully automated aircraft that can take off from an airport, fly even through weather, even with an engine failure, and land at another airport--we solved that problem 20 years ago.
That's a solved problem. But to try to do that where you take off from an airport that other people are using, where other people are flying through the airspace, where you are flying over the heads of people who might be at risk if you crash, and landing at another airport where other people are--that's a problem we're only barely scratching the surface on: autonomy in a kind of embedded environment. I call it, in the book, situated autonomy. How do we apply these autonomous systems where they have to live in a world where people are? This is the problem the FAA is dealing with, with drones and other unmanned aircraft. This is the problem of driverless cars. And it's a very rich, interesting problem. But not a simple one. And not one that's amenable to a purely analytical solution. Russ: You remind me of the only time I've flown in a two-seater aircraft. We lifted off in about 5 seconds of taxiing; it was really exhilarating; we were floating through the air; it was a beautiful day. It was maybe a 150- or 200-mile flight. The person who was flying the plane was actually a coder. And as we got up in the air, I said sort of nonchalantly, 'So, how do we keep from crashing into other planes?' I was looking at the instrument panel, trying to figure out what he was using to avoid a collision. He said, 'Well, you look around.' I thought, 'Okay, I guess I'll pay more attention.'
28:43 Russ: But now, you argue that true autonomy is a myth. And we're living in a time when there's an immense amount of excitement--that we're on the cusp of good and bad kinds of autonomy. Why do you say it's a myth? Why aren't we going to get there? You suggest we are only going to get there--it's an asymptote, it's not a destination. Guest: Well, I didn't say true autonomy is a myth; I said full autonomy is a myth--full autonomy where there's no human involvement. Because we have yet to build a system that has no human involvement. There's just human involvement displaced in space or displaced in time. Again, the coders who embed their worldview and their assumptions into the machine, or any other kind of designer--every last little bracket or tire on a vehicle has the worldview of the humans who built it embedded in it. For any autonomous system, you can always find the wrapper of human activity: sending it out with instructions, coming back with data. Otherwise the system isn't useful. So, to begin with, full autonomy is an asymptote that way. But again, full autonomy as in the aircraft case--that's the easier case than autonomy in the human world--autonomy situated in and responsive to all the complexity of living with other people around. And I think that's really the ultimate goal we should be working toward. It's very challenging. The Air France crash gives you one of many examples of things that can go wrong in that situation. But we really ought to be thinking about achieving that perfect balance between the human and the autonomy. Because it's going to be there anyway, right? And there's a bunch of stories in the book, including the story of the Predator drone, where it was designed according to this sort of dream of full autonomy, at great expense and great difficulty for everyone involved; it ended up having to be embedded in the human world, like every other system. Russ: Yeah.
I think we have a lot of--well, I have some misunderstandings about what drones actually do, and I think part of it's the word 'drone,' which makes it sound like it's off on its own looking for things to strike and kill. One way to think about this is if you imagine building a killing machine and launching it to say, 'If you see Osama bin Laden, take him out,' and then setting it off--that's just not realistic. Not that it couldn't kill a lot of people. But we would be very uncomfortable doing that because of the uncertainty of it. And so you are suggesting--one of the themes of your book is the constraints we put on autonomy, especially when there are risks involved, and danger and safety and human life. Guest: Exactly. If you were to try to program a drone to go find Osama bin Laden, it would come down to a problem of watching people and interpreting their behavior. Russ: Yeah. Guest: And that's actually what a lot of the Predator and Reaper operators do: they spend a lot more time watching a house and seeing what people are doing. And it's a very tough problem, one that they're not all that well trained for: How do you interpret fuzzy images on video for intent? But people are still better at it than machines are. Because there's a context around it-- Russ: It's a human context-- Guest: and they have [?] or you may need the New York Times that morning to understand what the political situation is. And AI (Artificial intelligence) has always had trouble with decision-making within a human context. The Predator and Reaper drones--I tell this story in the book--again, they are really not drones like you say. The Air Force actually banned the term 'unmanned' to talk about those vehicles, because they take hundreds of people to actually operate them. And it's actually a problem, because they are so labor-intensive to operate. They are remotely operated; they do do different things than manned aircraft do.
They are really interesting and they raise a lot of interesting challenges, but the last thing they are is inhuman killing machines. Russ: I guess it's like a bullet. A bullet doesn't have a person in it: it's not a knife thrust or a sabre or a spear. It's the first killing at a distance, and it's obviously directed by a person who aims and fires; and things go wrong. We understand that. Guest: Yeah. And the bullet is actually not a bad--it's an extreme case in some ways, but it's a good illustration, because the bullet is aimed and pointed by a person; it has a certain amount of autonomy once it leaves the barrel, and the person doesn't have input any more. But it's very short in time, very limited. And when bullets go wrong is when they don't go where the person wants them to go. Right? So, the autonomy is the failure of the system. When bullets do the thing we want them to do is when they do exactly what the person wants. Same thing with the sort of smart bomb idea. When I--some of the early thinking on this book goes way back to the first Gulf War in 1991; there were images on TV of smart bombs selectively destroying targets-- Russ: with tremendous precision-- Guest: with tremendous precision. So all the computers and all the lasers and all that technology--those are not the smart bombs. Those are the dumb bombs. Those are the ones that are going only where we want them to. The smart bombs are the really scary ones, where you drop one out of the plane and you don't know where it's going to go, either because of the wind or some other failure of the system. That's what you don't want.
34:19 Russ: So, going back to this general question of full autonomy--you just mentioned the New York Times. Yesterday's New York Times had a feature on driverless cars. Here's a short quote:
[F]ull autonomy is on the horizon. Google's self-driving cars have logged more than a million miles on public roads; Elon Musk of Tesla says he'll probably have a driverless passenger car by 2018....
What's your reaction to that? Guest: I don't think that's a realistic vision. I think there's any number of ways that you can see there are going to have to be human interventions in driverless cars. There certainly will be all kinds of automated features. Those are good things. They'll potentially improve the safety of driving. But to have a car that you drive down the highway at 80 miles an hour and sleep in the trunk while your kids are strapped in the back seat--I think we're a long way from that. For good reasons-- Russ: Are we a long way? Or no way--it's never going to happen? Guest: Well, I hesitate to say 'never,' but we have 30 or 40 examples in the book of systems that very smart engineers imagined as being fully autonomous and fully unmanned; and as they moved from the research lab into the field, they gradually got human interventions. Just think about it this way: Are you going to get into a driverless car that doesn't have a big, red Stop button for you to stop it in an emergency? What's it going to feel like when you can see things out in the world that are happening that the car is not recognizing the way that you want them to? Russ: But the car is going to be so smart. It's going to be able to recognize a squirrel from a toddler who strays off the sidewalk. And it's going to be pre-programmed to run the squirrel over, because I'm not--I'm more important than the squirrel--but the toddler, it will consult some ethical treatise in real time on Google and know whether to run the toddler over versus kill me: my age, maybe my contributions to society. In fact, it will sample the toddler's DNA (deoxyribonucleic acid) from a distance, figure out whether he's going to be a criminal or not, and know whether to--these are the kind of stories we tell ourselves. Guest: Yeah. Ask an owner of a Volkswagen diesel what it's like to feel like the software in your car maybe didn't show the values that you have. 
And how good are car companies and software companies at being transparent in their decision-making? So, think about: when you get into a car, you make a tradeoff between a number of different factors. Take risk and performance. Maybe you're late and you're willing to take a little risk and you drive a little more recklessly in order to try to get somewhere fast. Russ: Never. Guest: Maybe you pick up your kids at school and you turn the risk knob down and you say 'I'm going to drive a little more conservatively and be on the safe side.' You make those kinds of decisions every time you get into a car. So does practically every autonomy algorithm. They work by optimizing cost functions: What is the balance between fuel efficiency and performance on this particular trip? And very often those values are in conflict with each other. Like performance and fuel efficiency: getting there fast is not the most fuel-efficient. So, I think what you really want to see is systems designed so that the user has input into those kinds of decisions, where you have the control. Those decisions are going to get made somewhere--either by a programmer back in a cubicle somewhere, or transparently, in a way that a user can have input into them, so that the car drives according to your values and according to your priorities at any given moment. Russ: Well, I was thinking about your points about autonomy and how things advance but not as far as we might think. I can drive a stick shift. None of my kids can. And it crosses my mind that maybe their kids won't learn how to drive a car at all. In fact--I have four children; my last child, at least so far, just turned 15. And I wonder: wouldn't it be nice if I could live in that driverless car world and I wouldn't have to teach him how to drive? So, my dad taught me how to drive a stick shift. It was an unpleasant experience for both of us. Teaching my three other children how to drive has been a challenge.
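[Mindell's point about cost functions can be made concrete with a minimal sketch. Everything below is illustrative--the function, the candidate plans, and the weights are invented for this example, not taken from any real autonomy stack--but it shows the structure he describes: an algorithm scores candidate driving plans with a weighted sum of competing objectives, and turning the "risk knob" (changing a weight) changes which plan wins.

```python
# Sketch of a weighted cost function for choosing among driving plans.
# Lower cost is better; the weights encode someone's values.

def trip_cost(plan, w_time=1.0, w_fuel=1.0, w_risk=1.0):
    """Score a candidate plan as a weighted sum of competing objectives."""
    return (w_time * plan["minutes"]      # how long the trip takes
            + w_fuel * plan["liters"]     # fuel consumed
            + w_risk * plan["risk"])      # an abstract risk score

# Hypothetical candidate plans for the same trip.
plans = [
    {"name": "fast",     "minutes": 30, "liters": 6.0, "risk": 4.0},
    {"name": "balanced", "minutes": 38, "liters": 4.5, "risk": 2.0},
    {"name": "cautious", "minutes": 45, "liters": 4.0, "risk": 1.0},
]

# "I'm late" driver: time matters most, risk is discounted.
late = min(plans, key=lambda p: trip_cost(p, w_time=3.0, w_risk=0.5))

# "Kids in the car" driver: the risk knob is turned way up.
careful = min(plans, key=lambda p: trip_cost(p, w_time=0.5, w_risk=5.0))

print(late["name"], careful["name"])  # fast cautious
```

The design question Mindell raises is exactly who gets to set `w_time`, `w_fuel`, and `w_risk`: a programmer back in a cubicle, or the person in the car.]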
And it would be great to-- Guest: As long as that driverless car works perfectly under all conditions, everywhere, all the time. Right? And there's no question that when you have good bandwidth and you are near a cell tower and the sensors are all working at their highest order and the car was inspected last week and there's no ice on the sensors or bird poop on them, you ought to be able to have access to great features. But you really are going to have to be able to move in and out of those features--maybe you are driving away from high bandwidth cell links. Maybe you are driving on dirt roads that haven't been mapped. Maybe you are driving in a lot of different circumstances. You need to be able to move in and out of these autonomous modes. And that presents you with the Air France 447 problem. Which is a problem we should be working on, that we can improve on. But it's very hard to imagine a world where you get in and you have no possibility of having any input into the system. Why would you want to throw away that human insight? Russ: Well, I guess--let me rephrase your point. Obviously, if that toddler comes off the sidewalk and the car says, 'I can't handle this. Your turn,' that's the Air France problem in an extreme. That's not going to go very well no matter what, I don't care how prepared I am. There's really no attractive way to deal with that kind of--there's no easy way to think about that handoff. If it's more, 'Gee, it's kind of a foggy day today,' or 'The cell service is mediocre, the tower is mediocre; why don't you drive?' That's a different level. But I guess when you talk about it, given how poorly we drive now, I'd be willing to take a pretty big tradeoff of autonomy--I'd be willing to accept some very flawed autonomy rather than letting my 15-year old drive that car. So, there is a tradeoff there. 
You are suggesting that that tradeoff will never be attractive--I think you are suggesting that tradeoff will never be attractive enough to give up full autonomy. And I think what Google and Tesla and others, and to some extent Uber are betting on is that we'll get so close that we'll save so many lives that it will be a huge improvement. Guest: Yeah. You know--there's no evidence that we're going to save lives yet. There may well be. But again, we know a lot about accidents. We know a lot about aviation accidents and we know a lot about car accidents. And it is indeed true that a high proportion of the lives lost and the accidents in automobiles are caused by human error. But what we know a lot less about is how people drive under normal circumstances. And people are extremely good at sort of smoothing out the rough edges in these systems: the stop sign maybe is knocked over or a traffic light isn't working; and people have a way to kind of muddle through those situations. And they do that all the time. Again, back to that--of commercial airline flights, 10% of them proceed exactly according to plan. That's probably about true of your car trips as well. So, again, the claim is not that--I think there's a lot of things you can do with autonomous technology that are going to benefit cars and that you'll certainly want to be able to relax on the highway a little bit and let the car drive; and you'll certainly want to take advantage of all the sensors and the AI and the different robotic algorithms and techniques that are being developed in all these different realms. 
It's just a matter of whether you are going to be sleeping in the trunk or whether you are actually going to have the ability to stay involved in the system, and whether you can think about technology that will keep you engaged in the world and expand your experience outside of the car, rather than push you back into a sort of rarefied cocoon inside the car, with 100% faith in technology that has no possibility of failing.
43:10 Russ: We're already somewhat down that road with semi-autonomy: you have collision warning, you have lane-change warnings. And of course that encourages people to text. Or to talk on their phone, eat--many things people do that are semi-cocoon-like but not totally, because they are still steering the car. But you suggest that Google has made a mistake--that might not be the right word, but they made a decision, at least in their public statements, that they are moving toward complete autonomy. Whether they get there or not, maybe we should be skeptical. But you are suggesting they should have tried a different model--maybe that's where they'll end up. How might that work? What are you--give us a little vision of that. Guest: Well, I think almost all the car companies are taking a different approach, right? I quote the senior leadership at BMW (Bavarian Motor Works) in my book saying, 'People buy our cars because they like to drive them. It would be crazy to get rid of that part of it.' And people like to be in control in different ways. And the automobile companies, who are much more familiar with what it means to engineer and support and operate a life-critical kind of system out on the road, are all taking a much more cautious approach to it. And I think you'll see it play out in the marketplace--these companies are all in competition. They are going to be vying with each other for the best position. There are certain components of all this. But again, the idea that you'll end up in a car that doesn't have a big red Stop button in it--it's hard to imagine that that's actually going to come to pass. That even regulators would allow that. And once you allow a big red Stop button, then you've got at least the beginnings of a handoff, and you have to begin to engineer that kind of handoff. Again, full autonomy is only going to work in that way, if it will work at all, with all of the perfect conditions, everywhere, all the time.
And we know that's not how the world is wired. It's just not fully wired that way yet. That in itself means you'll be driving in and out of various levels of autonomy. That's how it should be. Russ: So, one argument would be, 'Well, there will be a red Stop button; there will be some override possibilities; there will be some training necessary, maybe, to get your driver's license still. It will be a little bit different, and of course you'll be greatly aided by all those systems onboard. And maybe that red button gets pushed so rarely that it's just an uninteresting feature.' I think the question is, for those of us who are overly enthusiastic, which would include me, because of things like Google's self-driving cars "have logged more than a million miles" on public roads. In your book you are very critical. Guest: None of it in the winter, by the way. Russ: Yeah. So, in your book, you suggest that a lot of the "evidence" that it's near is exaggerated or misleading. Why? Guest: Well, to begin with, again, that approach is an approach where you have to solve the problem 100% perfectly to do it at all. And that's just generally not been the approach that successful engineering systems have taken. And, I don't think the evidence is exaggerated. I haven't seen evidence that driverless cars have saved lives. They've also heretofore been driven almost exclusively--that kind of car--on well-ordered streets in northern California. Living in Boston, just last winter, the 3-D (3-dimensional) topography of the terrain changed by 9 feet overnight. Because you get three feet of snow, plowed up into 9-foot snow piles. And the directions of the streets change; the very way that people drive was changing rapidly. The Google car still relies on essentially perfect maps in order to make its way through the world, and there's an awful lot of the world that's not perfectly mapped yet. And maybe never will be, because maps are always changing in that way.
Again, I think all the autonomous features are great things. I think they are going to come in. I think there's a good chance that some of them will improve the safety of driving, and they may introduce new risks as well. It's just hard for me to imagine that the person whose rear end is on the line, who is physically immersed in the environment and sees--has a situational awareness of what's going on, will never, ever possibly have anything to add to the situation. And we've never seen a system that's worked that way in the field. Russ: But then the question would be, as the general experience--let's say it's 2025, or 10 years from now, when we've just recorded the 1000th episode of EconTalk, which would be really exciting. And we're talking about, say, we're elderly--I don't know how old you are, but I'm 61, so I'll be 71--that's not good. So, let's talk about my parents. They are 85 now. They live in Huntsville, Alabama, and they are taking a drive to Memphis this morning. Which drives me crazy, because they drive themselves. And in 10 years, God willing, they'll be 95 and 93 years old. And they probably won't be able to drive their own cars. So they call on Uber to take them to Memphis. And will that Uber have a person driving them? Will there be a driver, or will they just get picked up by the equivalent of an actual drone car that will deliver them? I'm not suggesting they will go through the air; Amazon won't deliver them--which is the other thing we hear is imminent: that we'll have these things flying through the air autonomously. In 10 years, will old people--and non-old people--mostly be able to go from point A to point B without having to drive--being able to surf the web and eat and hang out and chat? Or will there be a driver, whether it's them or somebody they've hired? Guest: I've got to ask Uber about that, I guess. Russ: Well, they're hoping. I think that's one of the reasons they are worth so much money.
But if you were an engineer for them, do you think that's something you'd strive for? It would seem to be. Guest: I guess I would say: Whatever system they are involved in, they ought to have some ability to intervene if it's not doing what they want it to be doing. Russ: No, I think there will be such a system. I agree with you. But for the most part, most of the time, will we be traveling without any human direct intervention? Just like that plane you said--we've solved that problem, the takeoff and landing in an unoccupied-- Guest: Well, again: the thesis of the book is that we can learn about that future by looking at what people have had to do in extreme environments. And when you have a $150 million airliner with a very highly certified crew and a very highly certified system of maintenance and parts control and all the documentation and all that, we fly a great deal of those flights under the control of autonomy, and we still feel the need for people to be involved and monitoring, and fairly frequently taking over, when human lives are at stake. I think the mythology that the book really tries to tease apart is that we are moving from human to remote to autonomous, when actually I think what's happening is that the three modes are all converging. And so you will see cars that have autonomous features; you will see autonomous driving for certain times and certain places, for certain applications. But overall the driving system will be a mix of human, remote, and autonomous systems.
51:26 Russ: So, in the area of artificial intelligence generally--we've been talking mainly about robotics, but in the area of artificial intelligence generally--do you think machines are getting smarter? Is that a meaningful question? We've had guests on this program who think there's a real possibility, and there are very smart people who worry about this. I'm one of the less smart people who is not as worried, but there are very smart people--Elon Musk, Stephen Hawking, I think Bostrom, on this program--who have suggested that we have to really worry about machines getting so smart they become sentient or autonomous and pursue their own interests. Are you worried about that? Guest: I worry more about them pursuing the interests of the people who design them. However smart they may be. We have yet to build a machine that's not heavily influenced by its designers and the things they built it to do. I think you are much more likely to get killed by a poorly designed robot than by an evil-thinking robot. Russ: Ever? I mean, I agree with you. I'm on your side. What do you think is worrying those folks I mentioned? Why do they think there's--I'm always thinking, well, can't you just unplug it? Why would you code it so that it would be able to do that to you? It would seem to me--it's hard to--there's this worry that--and excitement for some people--that it will just cross this threshold where it will start, you know, automating itself and grabbing people's kidneys and harvesting human beings. It's hard for me to say it without laughing. But there are actually--they lose sleep over it. Smart people do. What are they worried about that we're not worried about? Guest: Well, I think that's a legitimate--as you say, they are smart people; they are legitimately worried about it. As an engineer who has built these systems, I always find them frustratingly dumb.
That's not to say that they will always be that way, but they are still fairly fragile, kind of brittle solutions; most autonomous systems that we make, when they succeed brilliantly, succeed brilliantly at a particular, well-thought-through, kind of narrow set of things. And it's very difficult for them to move outside of the context for which we've created them. And that's not to say we won't one day. But we still have a great deal of trouble building robots to do things beyond what they were designed for. Russ: Of course, they do get better. One of the interesting insights related to this question is that things we say are examples of artificial intelligence get dismissed by people once they are achieved, who then say, 'Yeah, but they can't do this.' And you make the point that we get deceived by linearity: that we just assume this kind of progress continues--we go from, you know, voice recognition, and then say, 'Well, that's just mechanical; they can't do facial recognition.' But they are getting better at that, too. But you think the linearity itself is misleading. Why? Guest: I didn't really say that. I mean, I think there's no question that, you know, we've made progress in a lot of realms; some of it's quite astonishing, and we can do much better at a lot of things than we could 10 or even 5 years ago. And one of the things that I talk about in the book is, again, robots working within social environments: How do they understand social relationships? How can they observe the people going in and out of a building and try to extract from that what those people's intentions are and what their plans are, and whether those behaviors are normal or abnormal? Well, that depends on what you mean by normal or abnormal. I don't see a whole lot of progress in the computer science world at really understanding social relationships.
There are a lot of smart people out there who study the social and the political worlds; and there's a great deal of knowledge there. I think there's still a lot of bridging to be done between the AI-robotics world and people who really richly understand human behavior and human relationships. And those things may all well be beginning; and when they do begin, I think we're in the--there's a lot of room there for progress. That's sort of what I argue in the book, again: If we can understand the social relationships between people and between people and machines, that's the road we want to march down. Again, I think some of the rhetoric around full autonomy shows that we're still actually quite primitive in the technical--that the technical community's understanding of the social world is still rather primitive. Russ: Well, the non-tech understanding of the social world is pretty primitive, too. There's nothing to be ashamed of, there.
56:19 Russ: Are there any areas where--you know, it's funny: you think about controlling a rover on Mars with a 20-minute time delay, or some incredible application you talk about in the book; and then down at the other end you have things where your thermostat learns what kind of temperatures you like--very mundane examples where humans and machines are sort of coming together. Are there areas where you think there's the most potential, or where it's being done well, that are exciting to you? Guest: Yeah. I think, in the world of robotics, the frontier is situating robots and autonomous systems within human environments. And when you say 'human environments,' that includes almost anything that's economically valuable and economically productive. And I think we're at the beginning of an era where we take these systems that have been engineered for full autonomy, or matched with full autonomy, and bring them into the human world and let them respond and react to human systems and human behaviors in entirely new and situated kinds of ways. To me, a lot of the elements are there for really great progress in that realm. We're not there yet, but it's a very exciting time in that dimension. Russ: What advice would you give a young person who wants to be part of that evolution or revolution of our relationship with inanimate objects? Guest: Study computer science and the social sciences at the same time. Russ: And do you have any worries about the folks who don't have the skills to do that? A lot of people--one of the themes of your book, which I love, is that even the smartest technology is the product of human creation. Of course, some people wonder whether robots and artificial intelligence should be able to innovate autonomously. But what about folks who struggle to do that? Or do you think we can all make a contribution? Guest: I mean, I think everybody can make a contribution in their own way, obviously.
I think--I've spent my career trying to educate engineers to think about the way their designs are situated within social and political systems; that's a way to design better engineering systems as well as a way to see them be more successful. So, I guess I would come back to an education question: We can educate people in certain ways and we can educate them in other ways. And there are certain ways that I prefer to see people educated--where, you know, we move away from what people are calling scientism, which is the idea that you can kind of calculate everything about the world in advance and that the user is an idiot and has nothing to add. That's proven not to be the way that successful systems have evolved.