Rodney Brooks on Artificial Intelligence
Sep 24 2018

Rodney Brooks, emeritus professor of robotics at MIT, talks with EconTalk host Russ Roberts about the future of robots and artificial intelligence. Brooks argues that we both under-appreciate and over-appreciate the impact of innovation. He applies this insight to the current state of driverless cars and other technologies people expect to change our daily lives in radical ways. He also suggests that developing truly intelligent robots and technologies will take much longer than people expect, giving human beings time to adapt to the effects. Plus a cameo from Isaac Newton.

RELATED EPISODE
Nick Bostrom on Superintelligence
Nick Bostrom of the University of Oxford talks with EconTalk host Russ Roberts about his book, Superintelligence: Paths, Dangers, Strategies. Bostrom argues that when machines exist which dwarf human intelligence they will threaten human existence unless steps are taken now...
RELATED EPISODE
Benedict Evans on the Future of Cars
Benedict Evans of Andreessen Horowitz talks with EconTalk host Russ Roberts about two important trends for the future of personal travel--the increasing number of electric cars and a world of autonomous vehicles. Evans talks about how these two trends are...
Explore the audio transcript, further reading that will help you delve deeper into this week's episode, and vigorous conversations in the form of our comments section below.

READER COMMENTS

Bill Brown
Sep 24 2018 at 1:19pm

Two other resources I’d highly recommend:

Rodney Brooks’ blog

Interview with Rob Reid

His blog has a great entry on self-driving cars and a great series on AI superintelligence fears.

philip henderson
Sep 24 2018 at 3:51pm

Thanks for another thought-provoking conversation. It’s worth every second spent listening.

One suggestion to consider on the subject of AI. You and your guest dwell on the point that some people vastly overstate the risks of AI by arguing the technology is on the cusp of "taking over" or "learning" how to defeat human intervention — "suitcase" ideas that are just wrong. You mention Elon Musk in passing.

The one discussion with Elon Musk that I heard on this subject focused on a different point, and I suspect (but don't know) that he should not be lumped into the category of doomsdayers. Musk argued (persuasively, I thought) to a group of governors that policy-makers should get to work ASAP to consider and assess whether regulatory oversight of certain AI uses might be appropriate. It seems reasonable to me, and not hyperventilating, to worry that AI combined with other current-state robotics might be ready for this sort of assessment. For example, transparency into the logic used by self-driving cars or buses to respond to pedestrians, or police robots with deadly weapons, which could be used by private actors (e.g., a robot to patrol a fenced-in car storage facility).

best regards, and thanks for the wonderful shows.

rhhardin
Sep 24 2018 at 4:10pm

Coleridge, in Biographia Literaria chapters 5-8, covers why AI can’t work, an argument that still works today.

https://www.gutenberg.org/files/6081/6081-h/6081-h.htm

Todd Kreider
Sep 24 2018 at 4:48pm

1) Here is the quote about Go that I remember from 1997:

"It may be a hundred years before a computer beats humans at Go — maybe even longer. If a reasonably intelligent person learned to play Go, in a few months he could beat all existing computer programs. You don't have to be a Kasparov." — Dr. Piet Hut, an astrophysicist at the Institute for Advanced Study in Princeton, N.J., quoted in The New York Times, July 29, 1997

Interestingly, A.I. beat the top Go player a year after another A.I. program beat the best human scores in 47 out of 49 Atari games from the early 1980s, and a year or two later a modified version mastered those games as well. Last year, a further modification of the A.I. technique at Princeton was able to master every game known on the internet.

2) It isn't obvious that Moore's Law, in the original sense of the number of transistors on a chip and the resulting performance, is dead yet, but it will be soon. But most of the new architectures that will keep the popular idea of exponentially faster computers going in the 2020s have been around for over a decade. It isn't as if the recent status of Moore's Law alone sparked the research on new architectures, although looking 10 to 15 years out and seeing that Moore's Law would likely end by 2020 or so was a factor.

James
Sep 24 2018 at 5:09pm

I don't think the dystopia where a small group own all the AI/robots and the rest of us starve without any redistribution is logical. It assumes that people have unfulfilled needs while at the same time all needs are met by the robots. As long as there is need there is opportunity. If there is no opportunity, then all our needs are met and there is no need for anything.

It also dismisses future emergent technology. Our goal should always be 100% unemployment, but assuming this is possible presumes there is a limit to knowledge.

Anson
Sep 24 2018 at 6:38pm

I enjoyed the Avengers: Infinity War comment. I told my brother after leaving the theater that the battle in Wakanda looked like it could have been handled by a modern military as opposed to a group of superheroes.

Robert Wiblin
Sep 24 2018 at 8:00pm

Brooks claims that of all the people worried about future risks from artificial intelligence “none have ever done any work in AI itself,” and so they do not understand how hard it is to get AI to function. This is false.

For example there are the authors of ‘Concrete Problems in AI Safety’: Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dan Mané (https://arxiv.org/abs/1606.06565).

Off the top of my head, other people very familiar with ML who are concerned about the future safety of AI include Daniel Dewey, Jan Leike, Steve Omohundro, Demis Hassabis and Shane Legg.

Stuart Russell – co-author of the standard textbook on AI, ‘Artificial Intelligence: A Modern Approach’ – has been one of the most vocal proponents of the idea that more advanced AI systems could act in undesired and dangerous ways.

Guests should hold themselves to higher standards of accuracy on important questions like this, in my opinion.

Todd Kreider
Sep 25 2018 at 1:38am

3) I could not believe Brooks gave the hyped potential-disaster story that he did on this podcast. He said there are "tens of GPS satellites" in orbit, all vulnerable to a terrorist attack. A quick Google search says that there are 24 GPS satellites. Brooks doesn't explain how terrorists would destroy one of those. Hijack a Musk rocket, perhaps?

Brooks: “if a non-state actor gets something up there and blows one up, and in the orbit, orbital planes, just fills that orbital plane, which is a sphere, actually, with debris, and they all start [?] out as they hit the debris. That’s going to be bad.”

Brooks is a computer guy, definitely not a physics guy. Basic physics 101 refutes his alarmist claim that if terrorists launch a rocket to somehow destroy a GPS satellite, the rest of the GPS satellites will all collide with the blown-up debris. I could not believe I was hearing this.

Miles
Sep 25 2018 at 2:38pm

Re the GPS satellite destruction scenario, wasn’t he just referring to the Kessler syndrome idea?
https://en.wikipedia.org/wiki/Kessler_syndrome

Not to say that it is a “today” problem but that it could become one fairly easily. And would it have to be a non-state actor, or just like a North Korea tactic for messing with the more developed world?

Todd Kreider
Sep 26 2018 at 10:09am

What would North Korea gain from destroying a satellite even if they had the ability to do so, which they currently do not? While it happens only once a year, on average, satellites do collide with each other already.

This engineer makes part of the point. I don't know about the numbers, but the "bullets" (very small debris) in the vast majority of cases are not traveling very fast.

"…So one "bullet" for every 21,000 cubic km. That does not sound like too dangerous a neighborhood! What happens if we start some sort of cascade? There is not much to cascade – 18,000 "big bits" – if each of them became 1000 "bullets" then we would have 18 million "bullets" + the existing 750,000 bullets. And that is erring on the generous side… That would be one "bullet" for every 853 cubic km AND most of the "bullets" will not actually be going very fast…

"Some time in the future when we have a lot more, as in 100,000 times as much stuff in orbit, then the Kessler Syndrome may be possible. If you are worried about communication satellites way up there in geostationary orbit then the situation is even better – there is a LOT more space up there and we have boosted a lot less junk up to those orbits." https://www.quora.com/Is-the-Kessler-Syndrome-disputed-by-some-scientists
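(A quick arithmetic check, taking the quoted figures at face value rather than verifying them independently, shows they are at least internally consistent:

$$750{,}000 \times 21{,}000\ \text{km}^3 \approx 1.6 \times 10^{10}\ \text{km}^3, \qquad \frac{1.6 \times 10^{10}\ \text{km}^3}{18{,}000 \times 1{,}000 + 750{,}000} \approx 840\ \text{km}^3\ \text{per fragment},$$

which matches the quoted one "bullet" per 853 cubic km to within rounding.)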

Brooks said: “But, now it is so completely intertwined with that infrastructure that if GPS goes down, we are going to have us some months at least of serious disruption to our country.”

We are less dependent on GPS than Brooks states, but it isn't going anywhere anyway.

Nathan
Sep 25 2018 at 7:24am

Quite an interesting episode.  Watch the first Tom Cruise Mission Impossible movie and notice how that movie has not aged well.  The “high tech” the spies use in the movie is archaic by today’s standards.

I re-watched it recently with my teen daughters and we got a kick out of it.

edgar
Sep 25 2018 at 10:38am

[Comment removed. Please consult our comment policies and check your email for explanation.–Econlib Ed.]

Madeleine
Sep 26 2018 at 6:49am

This was an excellent podcast! Ignore the low star ratings; Professor Brooks has just said a lot of things that futurist/sci-fi fans really don't like to hear.

I’m a software developer and although I don’t claim to have any special expertise in machine learning, I am skeptical of much of the doomsday hype for similar reasons as Professor Brooks.

I was especially impressed with Prof Brooks’ comments about how computers are able to expertly perform tasks that a human would not be able to do without an enormous amount of background and understanding, so when we see the computers performing these tasks we assume that they also have the background and understanding.  That is exactly what I have seen with my non-technical friends.

I also am pretty interested in linguistics and I've read a lot of Prof Lakoff's work on metaphors and language. (As an aside, I disagree with Prof Lakoff on a lot of things, but I do think he has a point that metaphors are inseparable from language/communication.) I don't think artificial intelligence is really capable of communicating on the same level as humans do in human language, which is inseparable from human embodiment/experience. And why should it be? I mean, I don't think that's the best application of our resources anyway. Better to build amazingly complex models that predict climate change, earthquakes, etc. These are also problems that, like language, involve an enormous number of unknown variables, but their output space is much more limited than what people need for an open-ended conversation. See this NYT op-ed about the difficulties and how Google has tried very hard and is still unable to do anything remotely resembling an open-ended conversation.

I think people are anthropomorphizing the output of these mathematical models way too much. We're definitely going to see absolutely terrible computer viruses based on AI, and it will be very inconvenient and obnoxious. That threat shouldn't be underestimated, but it's also not some sinister SkyNet or something.

Also, @Russ, I don't really think Elon Musk knows what he is talking about in this area. He's neither a programmer nor a mathematician; he is a businessman who has a bachelor's degree in physics. I'm never one to dismiss people for lack of credentials, because self-taught people are often some of the most creative and insightful, but in this particular area Mr. Musk has not done any work himself and he does seem to fall into a lot of science-fiction-novel type hype.

Dan
Sep 27 2018 at 7:45am

Madeleine, please check your facts before discounting another person’s input, especially someone who has successfully made a tremendously significant impact in three major industries. Elon Musk began programming at age 10 and sold his first program at age 12. In the first company he founded, Zip2, he was primarily a programmer, and personally made $22M when it sold. So yes, he knows a thing or two about programming. Also, he has proven himself adept at identifying gaps and opportunities where the best experts and biggest corporations were either incapable or unwilling to tackle the issue. Musk has his faults to be sure, but when he smells smoke, I would take it seriously.

Madeleine
Sep 28 2018 at 2:38pm

Dan,

Making a game or whatever he did in the 1980s is different from studying theoretical computer science, which I have done and Musk has not. It's my professional opinion that it's 100% safe to ignore his opinion on artificial intelligence, along with his views on many other things (such as securities fraud).

And you don’t have to believe only me. Very many exceedingly qualified and credible people (such as the one interviewed in this podcast) have the same view of AI that I do.

There are many dire problems in the world. I’m not at all an optimist about the current or future state of affairs. AI, however, is very very low on my list of things that give me anxiety about the future of human civilization. If I wrote all my top concerns out, I doubt it would make the top 100.

Jeremy
Sep 26 2018 at 12:27pm

Thank you for another excellent podcast! 

As someone who works with technology at a large enterprise, I regularly have sales folks (some at start-ups) reaching out to demo their products. The promises are regularly inflated and almost never equal the actual performance when an internal prototype/POC is actually implemented (of course, sometimes it's a win).

Actually, the volume of emails and sales calls seems to closely track the ‘Hype Cycle’ for emerging technology with the corresponding stages (Technology Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, and Plateau of Productivity). AI/Deep Learning is definitely at the ‘Peak of Inflated Expectations’ just like Big Data was years ago and Blockchain/Digital Currency was fairly recently. 

(Aside: It's really fun to combine the Hype Cycle categories: Quantum computing + Blockchain + Deep learning = Deep Quantum Blockchain? The next big thing? Don't forget to add social media components.)

It seems like a lot of what's going on in the Bay Area (Blockchain/Digital Currency, Driverless Cars, Deep Learning/AI, etc., etc.) is heading towards some type of consolidation and overall reduction, especially as the opportunity cost of funds increases (no more free money and decades to show profits). — J

André Pereira
Sep 27 2018 at 10:32am

Great episode. Thanks to the host and the guest.

With regards to the last point of the conversation, I’d like to suggest the work of Bret Victor, and in particular his presentation on “Media for Thinking the Unthinkable”, which is about this issue, and how we can invent new ways to understand new things – in essence, the craft of making these metaphors.

Robert Wiblin
Sep 27 2018 at 10:47pm

In my comment above I showed that Brooks was wrong to claim that of the people concerned about risks from advanced artificial intelligence “none have ever done any work in AI itself.”

To bolster that, just today the world’s leading AGI research lab at Google DeepMind put out a summary of the work they are doing to figure out how to reliably align the goals of ML systems with human intentions, and explained its importance (https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1).

Furthermore, surveys of top ML researchers show that they are worried about safety and value alignment.

Here are some results of a recent survey of authors who published at the top two AI conferences (you can read it here: https://arxiv.org/pdf/1705.08807.pdf). In short, these ML researchers think AI will probably be positive and improve gradually, but there is a non-negligible chance things could go very badly or that AI might advance very quickly, and it's worth investing more now to figure out how to avoid or navigate those scenarios:

“2. Explosive progress in AI after HLMI is seen as possible but improbable. … We asked respondents for the probability that AI would perform vastly better than humans in all tasks two years after HLMI is achieved. The median probability was 10% (interquartile range: 1-25%). …

3. HLMI is seen as likely to have positive outcomes but catastrophic risks are possible. Respondents were asked whether HLMI would have a positive or negative impact on humanity over the long run. They assigned probabilities to outcomes on a five-point scale. The median probability was 25% for a “good” outcome and 20% for an “extremely good” outcome. By contrast, the probability was 10% for a bad outcome and 5% for an outcome described as “Extremely Bad (e.g., human extinction).”

4. …Forty-eight percent of respondents think that research on minimizing the risks of AI should be prioritized by society more than the status quo (with only 12% wishing for less).”

Respondents were also asked: "Does [computer scientist and AI pioneer at UC Berkeley] Stuart Russell's argument for why highly advanced AI might pose a risk point at an important problem?" (This is basically the argument Brooks says nobody serious believes.)

The most common answer was ‘Yes an important problem’ (34%), with a range of attitudes around that (see S4 in the paper for other related questions).

Brooks should argue that AI poses no meaningful threats we should work on now, since that is what he believes; but he ought not to suggest there's a consensus behind his opinions when there clearly is not.

Madeleine
Sep 28 2018 at 2:50pm

From the article you cited, over 90% of the respondents said that the problem (of artificial intelligence safety) was less valuable to address than other problems in the field.

Oli S
Sep 28 2018 at 7:35pm

Madeleine,

Are you sure you’re reading that right? Table S4 gives 22% for “much less valuable” and 41% for “less valuable”. Still a majority (63%), but not 90%. Perhaps you counted the 28% who said it was “as valuable as other problems”?

Madeleine
Oct 1 2018 at 9:10am

Oops you’re right, my bad 🙂

Humberto Barreto
Sep 30 2018 at 1:35pm

On the ability to perceive radical future technologies, check out Forster's The Machine Stops – absolutely amazing.

Dr. Duru
Oct 2 2018 at 4:50pm

I think it would have been more educational to talk more about the policy and/or business errors that can occur from over-hyping the near-term potential of innovations and under-estimating their long-term impact. In the former case, it seems the biggest harm is malinvestment, which is a necessary function of the innovation and creative-destruction process. In the latter case, it seems the harm is much smaller, unless the eventual impact happens faster than the economy and/or society can accommodate the change. Are there good historical examples of this?

Also, I feel a bit of unease at a general critique of under-estimating longer-term impacts of innovation while hearing all the reasons why people thinking about the potential long-term harm of AI are wrong because of *current* limitations. I don’t think this conundrum was resolved in this interview.

Derek
Oct 24 2018 at 9:57am

Just started listening to the podcast. Must start by admitting that I'm a doubter on AI – it's been around at least 50 years, all vaporware, but as is always the case, this time is different (according to people who are talking their book). AI is hard enough that many types of ordinary software functionality (i.e., autocomplete) are now sold as AI. They're not.

My son watches The Grand Tour with Clarkson, Hammond, and May. These guys are gearheads, not AI experts or philosophers, but they've hit on a major issue with driverless cars. When (not if) the car is in a situation in which there is a choice between hitting and probably killing pedestrians or sacrificing the driver, what does it do? You can adjust the number of pedestrians to make the example more extreme – a dozen babies, if you prefer.

There is really no acceptable answer to this question and therefore, the technology should never be implemented in a broad manner.

NT might make the point that there is always tail risk. You cannot define it away, though you can ignore it and be Shocked!!! (in a Casablanca way) when it rears its ugly (beautiful?) head.

I'm sure there will be math and utilitarian arguments about fewer total deaths, but they don't really hold water. Who would accept that their mother or child lost their life so that, statistically, some theoretical lives were saved?

It's not easy. Probably belongs on the show Black Mirror.

Comments are closed.



AUDIO TRANSCRIPT
0:33

Intro. [Recording date: August 30, 2018.]

Russ Roberts: My guest is Rodney Brooks.... Our topic for today is artificial intelligence, AI, based on an extremely insightful article you wrote last year for the MIT Technology Review [MIT=Massachusetts Institute of Technology], which we'll link to: "The Seven Deadly Sins of AI Predictions." And we may get into other issues as well along the way. I want to start with Amara's Law--and I may be mispronouncing Amara. You write,

Roy Amara was a cofounder of the Institute for the Future, in Palo Alto, the intellectual heart of Silicon Valley. He is best known for his adage now referred to as Amara’s Law:
    We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

Explain.

Rodney Brooks: Yes. When we see some new technology, it's very surprising to us, and we immediately think of how it's going to be used; and we tend to think that it's going to be quite fantastic. But, in the long run, we sort of discount how much it is going to change the world. And I think computers are the prime example of that. If you go back to the late 1950s and early 1960s and look at movies that mention computers, they were all-powerful, going to do everything. But they were mainframes--built usually using vacuum tubes. They had the computing power of what you would find in a birthday card that plays a tune when you open it. An incredibly tiny amount of computer power compared to anything in our lives today. And, people were afraid they were going to take away every job instantly. The Desk Set, with Katharine Hepburn and Spencer Tracy, was about how we were going to replace librarians in companies--because companies, in the days before Google, had librarians who would find out stuff for the executives. That didn't happen right away. It took 50 years. But I think no one at that time thought that we were going to carry around the world's knowledge in our pockets. Which is what we all do today with our smartphones. If you look at the original series of Star Trek, the episodes made in the 1960s, it was set 300 or 400 years in the future. But their estimation of what computers would be like in that future was grossly less than what they were even in the 1990s. There were still mainframes with flashing lights. And if you asked them a question like 'What's the biggest prime number?' smoke started pouring out. It was an impossible question. That's not at all what computers are going to be like in three or four hundred years, because it's not what computers were like just 20 years ago. They underestimated, you know, the progress: for 400 years out they were imagining way less than what actually happened in 20 years. So, there's the underestimating of the effect in the long run. And I think, of all the science fiction from the 1960s, perhaps 2001: A Space Odyssey was almost accurate, in that it had small screens. It had speech interfaces. The sorts of things that we now have. But most people completely underestimate.

Russ Roberts: And that's because--and this, I assume, is not just computers--they just assume that whatever we have now will just be a little bit better and better and better. They don't--they can't--of course--make the conceptual leap to a smartphone. It's not going to happen [?].

Rodney Brooks: Yeah. And I think we see that this overestimation in the short term, underestimating in the long term: now with self-driving cars. People, you know, were thinking that we were just going to have plop-in replacements for driver--cars with drivers in them. And we were going to have driverless cars by--many estimates were saying 2018, 2019, 2020--

Russ Roberts: Any minute.

Rodney Brooks: Any minute. And, you know, I think it's going to not be like that at all. We see incredible problems with trying to deploy actual driverless cars. Just this last week there was a story: "Driverless cars are here. They are in Tokyo now." And then when you read the details, it's a driverless taxi that makes four round trips per day on exactly the same route. Oh, and there's a person in the driver's seat, just in case.

Russ Roberts: Yeah.

Rodney Brooks: That's hardly a replacement of a Tokyo taxi driver. But in the long run, just as cars totally transformed our cities and our countryside, I think driverless cars will transform our cities again. They won't look like what they look like today, with driverless cars. And we can't imagine exactly what it will be. But, the way we'll get there is we'll start having special areas dedicated to driverless cars, where the laws are somewhat different from where we have cars with drivers. And, over time, they will take over and restructure our cities in some way yet to be determined.

5:53

Russ Roberts: Let's talk about that for one second--just because it's an issue that's come up many times on the program. I had an episode with Benedict Evans where we mused about what those transformations might be. And of course, as you point out, I'm sure we are going to get that wrong. We are going to misunderstand what's coming. But, I'm curious what you think the mix of challenges are for driverless cars. You argue there are three challenges. One challenge is technology. Which is advancing slowly but steadily, as cars without drivers drive along and map out streets. And algorithms try to learn to deal with surprises. The second challenge, you could argue, is regulatory: Are the politicians and bureaucrats going to allow this to happen? And, are they going to have to potentially create the infrastructure that will make them succeed? If that's necessary, and I think it might be, do you have to deal with the fact that there might be a world for a long time where there's some driverless cars and some not, and how those are going to interface in the regulatory environment? Are you going to ban cars with drivers because they are too dangerous? Are we going to let that persist? All those regulatory issues. And of course there will be people like the taxicab business and the trucking business that will have a stake in keeping the status quo. And then, the third issue, I'd say, is cultural: just the idea of people getting into a vehicle that doesn't have a driver, and the norms that will have to evolve and change to deal with that. Of those three challenges, which do you see as the biggest? Do you feel that they're all going to be solved?

Rodney Brooks: They will be, but not in the [?] replacement form. Let me go to your third challenge there and relate it to the second challenge: the cultural norms. When you get a Lyft or an Uber today, generally the Uber or Lyft either double-parks to let you in, or pulls into a bus zone or somewhere they're not legally supposed to be. And there is a social interchange between you and the driver--I always say, 'Hi,' because I've got that from the app; and then I say my name so they can confirm that I'm the right person getting in the car--because a few times I've gotten in the wrong car. Which hasn't been good. And so there's that interchange. And then, as we're driving, I may change my mind, or whatever. Now, imagine that in the future a mother or father wants to put their kid in a driverless car to take him to soccer practice. And it's a 12-year-old kid. You can imagine letting that happen. Well, now this car is driving along; like, it's stuck somewhere. And it needs some help. Is the kid allowed to tell the car what to do, or change what's going on? If the kid is doing that, is the kid now, in scare quotes, "driving the car"? What's the regulatory environment for that? These are questions which don't come up with a human driver, but they will come up when there's no human driver there. The whole definitions of who is in charge and when should it listen to a person in the car. Should it listen to any adult? What if it's a dementia patient on their way to adult daycare? Should it listen to them? Lots and lots of edge cases that are just going to take a long time to get solved; and there's going to be horrible incidents along the way that will be really blown up in the press. And it's not going to be smooth sailing. And I'm not being a pessimist. I'm just trying to be a realist. These things are tricky.

Russ Roberts: Agreed.

9:53

Russ Roberts: So, we're overestimating the effect in--

Rodney Brooks: in the short term.

Russ Roberts: In the short term in that case because we think it's going to be any day now. And then we are underestimating in the long run because we don't really appreciate how it's going to be transformative. In the article you mention GPS (global positioning system), which is I think something people don't fully appreciate as a technology. I think most people, like myself, tend to think of it like, 'Oh, that's how I use Waze or Google Maps.' But as you point out, it was incredibly underestimated. So, talk about that.

Rodney Brooks: Yeah. It almost got killed many, many times when it was being developed by the U.S. military: people didn't think it was going to be good enough to essentially use for targeting bombs and other sorts of military equipment. But, now it is so completely intertwined with that infrastructure that if GPS goes down, we are going to have us some months at least of serious disruption to our country. And this is true of worse[?] countries. Because we've got these super, super, super-accurate clocks above us, available at all times, our electrical network, infrastructure, uses those clocks on those GPS satellites in order to synchronize the whole grid. If the GPS satellites go away, our grid is going to break, and we'll have to break up the grid very quickly into isolated sub-pieces and not be able to ship electricity across the country as we do. That's one for instance. But there are many, many, many uses of GPS. For instance, it's how we estimate how much ground water there is in vast swaths of the country to predict fire danger. As we've seen, fire danger is going up. All sorts of uses of GPS that no one thought of are now being built into our society, and we entirely depend upon them.

Russ Roberts: Can we talk about that electricity grid for a sec? Did I miss the science fiction movie about that? Because there could be an obvious one that dwells on somebody taking that down. I don't know how easy it is to take down the GPS system. I don't know what that would involve. It's a lot of satellites, right? It's, what--30? 20? 40?

Rodney Brooks: Yeah, but, you know, yeah--some tens. Less than a hundred. I can't remember the exact number.

Russ Roberts: If one of them goes, are we in trouble? [?]

Rodney Brooks: No. If one of them goes, we're not. But what if, you know, some adversary, probably a non-state actor--non-state actors would have it in their interest to do it for their own state's sake--if a non-state actor gets something up there and blows one up, and in the orbit, orbital planes, just fills that orbital plane, which is a sphere, actually, with debris, and they all start [?] out as they hit the debris. That's going to be bad.

Russ Roberts: Do we have a backup?

Rodney Brooks: Well, many of the systems are now using multiple--because there's more than one; the Russians have their own system, and the Europeans are building their system, and so some of the chips[?] will use multiple sources when available. But, you know, they could all go down. And by the way, the GPS doesn't just run. GPS is operated out of Colorado Springs with a--

Russ Roberts: Shhhh. Don't tell anyone.

Rodney Brooks: [?] U.S. Air Force team. If they stop work, in about a week your car GPS wouldn't know exactly what street it was on. And in 2 or 3 months it would get the town wrong--where it is. That's how much adjustment needs to be done to keep everything in lockstep at the moment.

Russ Roberts: So, that's just a side note. It's just really interesting.

Rodney Brooks: But we totally underestimated how it was going to pervade our lives.

Russ Roberts: Right. Of course.

Rodney Brooks: That's the key point.

Russ Roberts: Anything right now that's out there that you think is being underestimated in important ways, besides--you mentioned driverless cars, but I think people might have some idea what might happen there. Anything out there that--you know, drones, or nanotechnology, or something that you think is being underappreciated that you don't want to--that you can share, because you've already bought the stock?

Rodney Brooks: Yeah. I tend to think that--and you can see sort of megatrends driving this--I tend to think that indoor farming is going to be much bigger than people imagine right now. People are still thinking that farming is going to work like it's always worked for the last 10,000 years. And my characterization of the last 10,000 years of farming is: You go outside; you put some seeds down; you watch the weather; you complain about the weather; and then ultimately you harvest the crop.

Russ Roberts: If you're lucky.

Rodney Brooks: With--for a number of reasons, that's problematic. Climate change is one of them. Where certain crops grow well is going to change as climate, you know, upends things. The other is the--

Russ Roberts: It's expensive--

Rodney Brooks: if you look at meat production, that's a major contributor to CO2. I just saw a thing recently that the volume of carbon in livestock--just as a measure--the volume of carbon in livestock in the world is 1.6 times the volume of carbon in humans. And, humans have 9 times the volume of carbon of all other mammals. Which is incredible. We've upended[?] the balance so much; and meat production is at the top. We can't continue that. So, you know, we're starting to see synthetic meat companies. Starting to see them on menus--

Russ Roberts: yep--

Rodney Brooks: synthetic meat. I think we are going to have a transformation in our whole food supply system over the next 50 years. Indoor farming and synthetic meat. So, I think people aren't seeing that as much as they might.

Russ Roberts: Cool.

16:19

Russ Roberts: Let's move on to the next argument you make. Which is utterly fascinating to me. Which is: The dangers of treating technology as magical. Which is--I love it because at first it's ironic. We all think of technology as the opposite of magical. Magical is stuff we can't explain; and technology is this mathematical, engineering, analytical set of techniques that's the opposite of magic. But, you say people often misperceive it as magical. What do you mean by that? And then you say that it's an argument that can never be refuted: it's a faith-based argument, not a scientific argument. What do you have in mind there?

Rodney Brooks: Yeah, you know, well, I take this from Arthur C. Clarke, the great science fiction writer, who, by the way, invented the idea of the communications satellite. He was also a consultant and co-author on 2001: A Space Odyssey. And he's the guy who really drove the foreseeing of the computer power of HAL in 2001. But he's got this--he has Three Laws. And his Third Law is: Any sufficiently advanced technology is indistinguishable from magic. And, the argument there is that if it's a sufficiently advanced technology, you can't tell what its limits are. And I got to thinking about this because I would have debates with pundits who were saying, 'Oh, we're going to have superintelligence any minute now. It's going to destroy the world. It's going to do this, and it's going to do that.' And I'd try to argue against it. And they'd tell me, 'Oh, but you don't understand how powerful it's going to be.' Well, how do those pundits know how powerful it's going to be? So, as an example, I thought: What if we had time travel and we could bring Isaac Newton back from the past--the late 17th century--and transport him to today in, say, Trinity College Chapel, University of Cambridge? Trinity College Chapel had already been around for a hundred years when he was at Cambridge. So, we'd transport him to the Trinity College Chapel of today; and it would look much the same. We'd probably turn off the electric light and just have a few candles around.

Russ Roberts: He'd be very comfortable. He'd feel right at home.

Rodney Brooks: He'd be very comfortable. It's--you know. And then, you pull out an Apple--the Apple being an iPhone, this time. And you show him the iPhone. Now, remember, Isaac Newton is the guy who figured out optics, personally--how light through a prism turns into many colors. Now you show him the iPhone, and you show him this iPhone screen, which is bright in the darkness, with all these vibrant colors--that's something that he's never seen, anything--

Russ Roberts: a formal way--

Rodney Brooks: before.

Russ Roberts: That alone--by the way, if you just left it at that, he'd be so intrigued and happy to look at it.

Rodney Brooks: Yeah. But then you start using the iPhone for a few things. You play him a movie. Let's make the movie a country scene of England with common animals--badgers and, you know, English animals. But it's a movie, and he can see it. And so, the content is not surprising to him; but that it's a movie--that this little screen is showing these creatures and the sound--is certainly surprising. Then play some music for him--something that was around at the time that he was around, so that he'd know the piece of music. And it's coming out of this tiny little thing. It's amazing.

Russ Roberts: Little tiny musicians in there somehow playing little tiny instruments.

Rodney Brooks: And then you could go on the web. And you can find this--you go and find his personally annotated copy of his masterpiece, Principia. You know, he wrote Principia, and then in his personal copy he wrote notes about it in the margins. Well, you show him those pages. His copy. It's inside this little thing in his hand. But there's all this other stuff. Show him more stuff. Like counting his steps and, you know, the calculator and how quickly it can multiply numbers and stuff. You know, turn the camera on, and it would turn into a mirror for him. You record him and play it back to him. Now, what would he be able to say about the limits on this device? You and I know some limits. You and I know that you have to recharge it. If you keep using it for a few hours, it goes dead. He certainly wouldn't think of that. How could this amazing device not just keep working? He won't know what limits to put on it, what it's capable of, what it's not capable of. He'll have no reference. He certainly won't be able to explain it.

Russ Roberts: And if you hired him to work at Apple, what an irony--if you hired Mr. Apple to work at Apple, because he's one of the greatest minds of all time, right?

Rodney Brooks: Yeah; he was an incredibly smart guy--

Russ Roberts: Probably the greatest, if not--maybe you could say he's the second-greatest scientist of all time. But you can make the case that he's the greatest. So, you'd think he'd add a lot to the engineering team. But he would add nothing.

Rodney Brooks: He wouldn't be able to begin to explain this thing. So, when you are asking questions about what it can do and what it can't do, he has no way of knowing, because it's indistinguishable from a magic device for him. And by the way--he was very interested in the occult.

Russ Roberts: Yes.

Rodney Brooks: And, you know, he was very interested in transmuting lead to gold. Maybe this device can transmute lead to gold? It can build this out of magic. Why can't it do that? What are its limits?

Russ Roberts: But--

Rodney Brooks: So, I think that's a good example of--really smart person. Show the person something sufficiently advanced, and they are not going to be able to form a hypothesis of how it works, and not going to be able to know its limits. And I think too many of the arguments about the future of artificial intelligence today are made by people who just assume that it can do anything. So you can't have a rational argument with them. Because, when you say, 'Well, won't it be able to do x?' they will say, 'Well, of course it will be able to do x. And it will be able to do y and z, also, because it is going to be so'--

Russ Roberts: -squared--x, y, and z squared--to the n actually[?]--because I just want to add a couple of things about Isaac Newton, and then I want to return to the content. But, it was so stimulating, your example. One of the things that it caused me to wonder was whether Isaac Newton could get a 5 on the AP [Advanced Placement] Calculus exam. Now, you'd think he'd have a pretty good shot at that. My wife's an AP Calculus teacher. And I think he'd do well in the class. I think he'd get a good grade. But whether he could just sit down and get a 5--it's not obvious. Which is just fascinating.

Rodney Brooks: He might be really annoyed that we use Leibniz's--

Russ Roberts: exactly--

Rodney Brooks: you know, symbolism rather than his. That might really annoy him.

Russ Roberts: Yeah. Just for listeners who don't know--Newton and Leibniz get co-credit, to some extent, for inventing calculus. The other thing I have to confess--this is really embarrassing, Rodney--but I always thought Principia Mathematica was just a pretentious title he gave his work. And so, when I clicked through the link to actually look at the manuscript, as you suggested he could--where he could have an iPhone in his hand and he could read his margin notes, you could see the first edition--which is an extraordinary thing to be able to do. You would have to teach him how to pinch on the screen, as you point out. But, it turns out the whole book is written in Latin. I just thought--I just assumed it was written in English with a fancy title. So, that was very educational for me. Embarrassing.

Rodney Brooks: No, that was what was used for science in the 17th century. It was still Latin. Yeah.

24:08

Russ Roberts: So, on this issue of faith-based, it's ironic, because--we're turning now to the more serious content. When I had Nick Bostrom on this program, whose book--I think it's called Superintelligence--and he suggests that artificial intelligence will become so smart that it will be able to fool us into trusting it, even, because it will understand our brains so well, and our chemistry, it will know how to manipulate us, etc. And I suggested to him, actually, that this is a medieval religious view of God. It could do anything. Anything you think it can't do must be wrong. Because, by definition, it can do anything.

Rodney Brooks: That's why I can't have arguments with Nick, and Sam Harris, and other people because they always resort to that rhetorical flourish--you know, that it's more powerful than I can imagine.

Russ Roberts: But, they could be right.

Rodney Brooks: Well, is any technology we've ever developed more powerful--have no limits? There are limits on humans. There are limits on everything we have developed.

Russ Roberts: 'But they're going to be so smart. They are going to figure out how to get around limits.' I mean, I find that incredibly annoying. I'm with you on this one.

Rodney Brooks: If 1% of what these people believe AI is capable of today--if 1% is true of what they believe is true today--you know, I, as someone who has worked in AI for the last 40 years and led large teams--I would be so incredibly wealthier than I am. It just makes no sense to me. And I'm sorry.

Russ Roberts: But should we be worried at all? And I'm going to just preface that by saying Nick Bostrom's a smart person; but I think Stephen Hawking's smarter. And when he was alive he raised a flag about AI dominating humans. Elon Musk is a smart person. Those are the three I know of. I'm sure there are more. There are smart people who think this is an enormous threat to humanity.

Rodney Brooks: Yeah. Well, on Nick you should go and look at his general work, because his whole work is about how everything is a threat to humanity. And AI is just one of the 20 things that he believes is a threat to humanity. He's worried about us searching for extraterrestrial life, because then it will come and kill us all. He's worried about research into certain nuclear things--because it will kill us all. So, people in AI [?]--

Russ Roberts: Well, it's good to be cautious. It's good to think about the downside.

Rodney Brooks: Yes. But he finds the downside in everything. That's what he does.

Russ Roberts: Okay.

Rodney Brooks: It's not just AI. He's not particularly more expert on AI than he is on search for extraterrestrial life. But that's what he does. That's his schtick. So, as for the others--and including Nick--none of these people who worry about this have ever done any work in AI itself. They've been outside. Same is true of Max Tegmark; it was true of Lord Martin Rees [?]--

Russ Roberts: So, why does that matter? What are they missing?

Rodney Brooks: They are missing how hard it is. That's actually my next point in that article. They make a mistake of performance versus competence. Maybe I can explain that.

Russ Roberts: Yeah, go ahead.

Rodney Brooks: When we see a person perform some very particular task, we have a pretty good model intrinsically in our heads of what that means about their general competence. So, if we see a person--suppose we know that it's a person whose first language is not English--and we see that person taking picture after picture and writing an English caption, a pretty good description of what's in the image. People playing frisbee in a park. A child on a swing. So, they are writing the captions in English. We know that English is not their first language. But we then think, 'Well, this person understands English well enough that we could have a little conversation with them in English, most likely.' We could talk to them about the weather. At least they'd know about the weather. We could ask them how they got here today. We could look at that picture of the kids playing frisbee in the park and say, 'How big is a frisbee?' And we might expect them to answer in metric, 'Oh, it's about 30 centimeters in diameter.' But, we'd expect them to tell us how big a frisbee is. And we'd expect that if we said to them, 'Oh, frisbees are really tasty,' they would look at us as though we were a little nutty and say, 'What are you talking about?' But, when we see an AI system such as, back in 2014 I think it was, when image labeling got introduced to the world and got an article in the New York Times written by John Markoff, with a Google image labeler labeling images with 'Kids playing frisbee in the park,' etc., people, I think, saw that level of performance and assumed that the system had the same level of competence as a human who could do that work. But, the system didn't even know what a game was, what a frisbee was. Couldn't answer any questions at all: Could a six-month-old play frisbee? It wouldn't know. Can a person throw a frisbee three miles? It wouldn't know. So I think people make that mistake. And I think these pundits have seen performance and mistaken it for competence. And the AI systems we have today have only very, very narrow performance.

30:13

Russ Roberts: And the analog here would be the driverless cars--I forget which guest it was and I apologize to the person--but somebody pointed out on the program that they are not really driving. They are more like a train--a fixed track they kind of stay on. They can't really deal with surprises anything remotely like a human driver. They are not mimicking what a human does when a human drives a car. That's the most important point. And that's your point about the photograph. If you take a photograph because it looks interesting to you, and a foreigner wants to talk to you about it, and you speak their language somewhat, you can have a conversation about what's in the photograph. But, while the computer might be able to label the photograph, it doesn't "understand" it. But wouldn't the argument be that that's just a matter of time?

Rodney Brooks: Oh. Yes. That's the magic thing. 'Well, certainly we have to assume.' Well, those of us who have worked in this field for as long as I have--you know, AI has been around just over 60 years, and I've been working in it just over 40 years--know how hard each of these little steps has been; and how few of the steps we have towards the superintelligence that these people talk about--we've just got baby steps towards it. I often think, you know, maybe we're building ladders, and people are saying, 'Oh, yeah; we're getting closer to the moon. They'll get to the moon really soon.'

Russ Roberts: 'It's a matter of time.'

Rodney Brooks: Yeah. And it may not be just a matter of time at all. It may take hundreds of years. You know, we've had chemistry for 2000 years; and those great economic drivers of chemistry--you know, if only you could turn lead into gold--but then, more chemistry in everyday life. It's been 2000 years and there's still a whole lot of stuff we don't begin to understand about chemistry. And so, these things are not automatic.

Russ Roberts: Don't disillusion me, Rodney. I thought we had chemistry figured out. My son's a chemist, so I'm going to tell him that he's in trouble. He's got work to do.

Rodney Brooks: We've got some things figured out. But there's a whole lot--we're going to have research in chemistry for a long, long time, still.

Russ Roberts: But I think the reason for that--some of that over-optimism, or, I would call it inevitability, which, like you, I'm a little bit skeptical about--you know, it remains to be seen. But some of that inevitability comes from the skeptics who scoffed at the early days of AI and then were forced to recant. So, 'They'll never recognize faces. A computer will never be able to recognize a face. A computer will never be able to play chess well.' 'It's amazing, yes, that it can play chess.'

Rodney Brooks: I don't think [?]early I said either of those things.

Russ Roberts: 'A computer will never beat a human being in Go. I mean, Go is way too complicated.'

Rodney Brooks: No. No, no, no, no. No. What was said was: Using brute force search will never beat a person in Go. Whereas brute force search works in chess. And in fact, AlphaZero and AlphaGo don't just use brute force search. They use other techniques. So, I don't see--we didn't know when it would happen, but we were correct in saying, well, at least so far correct, in saying that brute force search isn't going to get you there. AlphaGo had to use other techniques. No. I think that statement is just wrong.
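(To make "brute force search" concrete, here is a toy, illustrative sketch--not how Deep Blue or AlphaGo are actually implemented: depth-limited minimax over a game tree, with a heuristic evaluation at the cutoff, which is essentially the scheme Turing proposed for chess. The game used is a deliberately trivial take-1-to-3-stones game rather than chess or Go.

```python
# Toy sketch of depth-limited "brute force" game-tree search (illustration only).
from functools import lru_cache

MAX_TAKE = 3  # each turn a player removes 1-3 stones; taking the last stone wins

def heuristic(stones: int) -> float:
    # Stand-in for a chess-style evaluation function: a crude guess about an
    # unfinished position when the depth limit is reached.
    return 0.0

@lru_cache(maxsize=None)
def minimax(stones: int, depth: int, maximizing: bool) -> float:
    if stones == 0:
        # The player who just moved took the last stone and won.
        return -1.0 if maximizing else 1.0
    if depth == 0:
        return heuristic(stones)
    moves = range(1, min(MAX_TAKE, stones) + 1)
    values = [minimax(stones - take, depth - 1, not maximizing) for take in moves]
    return max(values) if maximizing else min(values)

def best_move(stones: int, depth: int = 12) -> int:
    # "Brute force": examine every continuation down to the depth limit, pick the best.
    moves = range(1, min(MAX_TAKE, stones) + 1)
    return max(moves, key=lambda take: minimax(stones - take, depth - 1, False))

print(best_move(9))  # prints 1: leaving a multiple of 4 is the winning strategy
```

The cost of this kind of search grows roughly as the branching factor raised to the search depth, which is why it is workable for chess, with on the order of 35 legal moves per position, and hopeless for Go, with on the order of 250--the distinction being drawn here.)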

Russ Roberts: But I think the more important point, which I take to be your point, and it's the point that I, as an economist, am drawn to, is that: It's not happening tomorrow. Tomorrow is not going to be this quantum leap where a computer can not only solve a problem you give to it, but can figure out how to solve problems you haven't given to it. It will teach itself; it will learn, not just in the sense of accumulating examples and algorithms and search paths and branches of a decision tree, but will understand how to--I find it absurd--but it will eventually decide--'it will be wise.' 'It will understand tradeoffs and intuition. It will have human capabilities; and then it will use those capabilities to add even more.' So, I don't think that's going to happen. But, if I'm wrong, it's not going to happen tomorrow. It's not going to happen in a week. It won't even happen suddenly. And, the point I understand you making--and correct me if I'm wrong; and maybe I read it somewhere else--but, I thought you were saying as these things take time, we'll understand how to adapt and deal with them as human beings.

Rodney Brooks: Yeah. You know, if we want to continue through my "Seven Deadly Sins," the very next one is Suitcase Words; and you just said your understanding was that computers would be able to learn how to do these things. Well, "learn" is a great suitcase word. A suitcase word is a term that Marvin Minsky, one of the founders of AI, came up with for a word that has so many different meanings packed into it. So, you know, we say "learn"--you learn how to walk. You learn how to ride a bike. You learn a new language. You learn your way around a new city. You learn ancient Latin. You learn calculus. But all these learnings are done in very different ways. So, that word "learn" covers so many different sorts of techniques. Now, when someone working on an AI system gets it to learn something new, they may put "learning" in the title of their paper. But these days, more likely, the press office of the university, which is always looking to hype up what that university is doing, is going to put out a little press release: 'Our scientists at XYZ University have just made a breakthrough and they have computers learning.' Or some such--they use the suitcase word. We've seen in the last year claims about computers deceiving, computers cheating, computers this and that. But in each of those cases they use the word to describe, you know, reasonably, the thing the system is doing; but packed around it are all the other uses of that word, which have not even begun to be looked at. And it's a very brittle version of that word. And so these suitcase words lead people astray. We've seen what's called deep learning--and by the way, the "deep" doesn't refer to deep analysis or deep thinking. It refers to how many layers of network there are: 12 rather than 3. So just the use of that word "deep" leads people astray.

Russ Roberts: Yeah, that's a good one. I like that.

Rodney Brooks: So, we see systems learning to parse out full names, which is why we now have the Amazon Echo and Google Home able to understand our speech--you know, at least, when[?] I say understand, it will sort of take dictation and turn the speech into, you know, the typed words that correspond. Which we couldn't do 5 years ago. And that's--deep learning has enabled that. But, when people hear that learning was able to do that sort of thing, they think, 'Well, then the computer can learn anything.' And that's just not the case. It's only very isolated, specialized things, with a lot of individual work by a big team of scientists to get every new step. You know, when AlphaGo, which learnt to play Go, was playing the world Go champion, you had 200 engineers there worrying about helping it, and supporting it--

Russ Roberts: Cheater.

Rodney Brooks: and the world Go champion had a cup of coffee. That was his support. So, it's not the same sort of stuff.

38:44

Russ Roberts: Well, I agree with that. But, I think the deeper point--which I love; and I love that idea of a suitcase word. And I assume it's called that because you don't know quite what's in it. It's a bit of a mystery--

Rodney Brooks: Right. You can pack many things into it.

Russ Roberts: But, what I think is deep there--we probably won't get to it today, but you have another article about consciousness and what robots actually perceive. And also relating to your earlier point, we bring, we anthropomorphize, we bring our human understandings inevitably to these new technologies. And, when I learn something, I can learn, say--let's say I learn how to play a piece on the piano with one finger playing the melody, because I've learned how tablature and staff notation corresponds to a keyboard. But, I can't play the piano, obviously. And more importantly, I can't compose. And even more importantly than that, I can't fill my soul and heart and mind with emotion--I'm not flooded with emotion the way I would be if I could play something even fairly simple like "Moonlight Sonata" on my own.

Rodney Brooks: Or the audience is not [?]--

Russ Roberts: Or the audience. That's much better. And yet we assume that when a computer "learns" how to play the piano, we inevitably place on it these human meanings--the way I would. I make a distinction between learning and understanding. We can learn how to do something but not understand it. And I think there's a certain inevitability--at least at this stage; maybe it will change--but at this stage, computer learning does not have the richness of human learning. And yet we assume it does, or at least that it will. And that's not necessarily the case.

Rodney Brooks: I agree with you completely. It's, again, that performance versus competence; it's the suitcase word. They are variations on similar problems. Another thing these pundits say is, 'You know, it's just going to get faster and faster.' I think we've gotten trapped by the last 50 years of Moore's Law into thinking that everything is exponential, because we've had exponential growth in computer power--which, ironically, has led to improvements in AI without any further thinking. You know, Alan Turing had the essential ideas of how a computer plays chess back in the 1940s, but he had to simulate them by hand. Mac Hack, a program from MIT in 1965, embodied those ideas in a computer program, but it was easily beaten by people. And really there weren't any particularly new innovations through the 1990s, when Deep Blue beat Garry Kasparov. It was exactly the same algorithm that Turing had come up with back in the 1940s. And, by the way--

Russ Roberts: That processor [?].

Rodney Brooks: Yeah. And Garry Kasparov has now got a whole business around chess-playing programs. And he has reconstructed exactly the heuristic functions that Turing suggested. And they play a pretty damn good game of chess when you've got a modern computer. Turing got it right--his heuristic functions for limiting the search. But that's just an aside. You know, we tend to think everything is exponential. So, my friends who are economists, or others, say, 'Oh, but in the last 5 years we've seen such a big jump in artificial intelligence, due essentially to deep learning. Surely those improvements are going to get faster and faster now.' But what they don't realize is that the main technical ideas behind these deep learning algorithms were around in the 1980s: backpropagation is the technical term for how the learning is done. There was a big buzz about backpropagation in the 1980s. A lot of people thought it was the future. But then it ran up against limits, and almost everyone in the field decided, 'Ach, we didn't get it right. It must be something else.' There were just a couple of people--Geoff Hinton at the University of Toronto, and Yann LeCun, who was variously at Bell Labs, the University of Toronto, and NYU [New York University]--who kept pushing on it. People in AI would think, 'Oh, those guys. They are just pushing away. They lost.' But they had three little innovations. One was more computer power. One was a better mathematical form of the function used in the networks to relate output to input, which meant you could figure out the derivative with respect to the inputs just by looking at the output. And the third was something called clamping, where you pre-structure a deep network--12 layers rather than 3--into little segments of 3 layers, pre-digesting what the concept is going to be by getting each segment to reproduce its input as its output, and then you let the learning go. With those three things--suddenly, meaning since 2008--this backpropagation learning started to work a whole lot better. And in a mere 10 years it has become the dominant approach to machine learning. Ten years, after the 20 years of pre-work on it. So, it didn't just happen. There was a lot of work to get there. And there were maybe 100 similar things back in the 1980s that people decided weren't going to work. When people ask me, 'How come you didn't know that deep learning was coming?'--well, we couldn't tell, out of the other 99, that this was the one that would pop. Maybe one of those others is going to pop some day and we'll see some great new applications; we just don't know which one it will be. I'm pretty sure that a few years from now something else will be the hottest flavor in AI. I don't know what it's going to be, but I'm sure there have already been research papers written about it. We just don't know which of the thousands and thousands of ideas out there are the ones that are going to work out for particular, rather narrow, capabilities.
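To make that concrete: here is a minimal, hypothetical sketch--not from the episode, and not how Hinton's or LeCun's actual systems were built--of backpropagation through one hidden layer of ReLU units, the "better mathematical form of function" Brooks mentions, trained on toy XOR data. All names, sizes, and numbers below are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR of two inputs, a classic task a linear model cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 16 ReLU units, one linear output unit.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1

for step in range(5000):
    # Forward pass.
    h_pre = X @ W1 + b1
    h = np.maximum(h_pre, 0.0)        # ReLU activation
    pred = h @ W2 + b2
    err = pred - y                    # gradient of 0.5 * squared error

    # Backward pass: chain rule, layer by layer (backpropagation).
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = err @ W2.T
    dh_pre = dh * (h_pre > 0)         # ReLU derivative: 1 where the unit is active, else 0
    dW1 = X.T @ dh_pre / len(X)
    db1 = dh_pre.mean(axis=0)

    # Gradient-descent update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(np.round(pred, 2))  # typically approaches [[0], [1], [1], [0]]

The point of the sketch is the single line computing the ReLU derivative: because the function is piecewise linear, its gradient can be read off directly from whether the unit fired, which is part of what made training many-layered networks practical.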

Russ Roberts: Well, an analogy is: when I think about human longevity, it just was an assumption that we're going to live longer and longer and then eventually we'll have a breakthrough and we'll live to 200. Which could happen; again, obviously, it could. Or it's just a matter of time before we cure cancer; it's just a matter of time because we've made so much progress in the early days of pharmaceuticals. And amazing things have happened over the last 50 years. But it's not like Moore's Law. It's not like every year lifespan doubles because we've figured out better and better ways to keep people alive. It's a much trickier problem.

Rodney Brooks: And in fact most exponentials are not exponentials forever. Facebook was exponentially growing for a while, but it's sort of used up everyone in the world now. So, it can't exponentially [?]--

Russ Roberts: Going to Mars, any day now.

Rodney Brooks: Yeah. And even Moore's Law ran out: when the feature size got down to the point where you could count the number of atoms, you couldn't halve the feature size in the next two years, which is what you needed to keep Moore's Law going. It's run out. By the way, I think Moore's Law running out has been a great service to computer architecture, because for 50 years you couldn't afford to do anything but keep on the narrow path--someone else would beat you with Moore's Law. Now that you no longer have Moore's Law, we're seeing a flourishing of computer architecture--

Russ Roberts: Yeah, it's fascinating--

Rodney Brooks: for the first time in 50 years. And GPUs [graphics processing units] applied to deep learning are an example of that.
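As a rough back-of-the-envelope illustration of why the halving has to stop--my round numbers, not Brooks's--you can count how many halvings it takes to go from a 1970s-scale feature down to something only a few atoms wide, assuming roughly 0.25 nm per silicon atom:

feature_nm = 10_000.0   # ~10 microns, a 1970s-era feature size (assumed round number)
atom_nm = 0.25          # rough width of one silicon atom (assumed round number)
halvings = 0
while feature_nm / atom_nm > 4:   # stop once a feature is only ~4 atoms across
    feature_nm /= 2
    halvings += 1
print(f"{halvings} halvings -> {feature_nm:.2f} nm, ~{feature_nm / atom_nm:.0f} atoms across")

Only about 14 halvings separate a 10-micron feature from one whose atoms you can count; an exponential like that simply runs out of room.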

46:58

Russ Roberts: Let's talk about what you call Hollywood scenarios, which I like for aesthetic reasons.

Rodney Brooks: Yeah. So, when you watch a Hollywood movie involving technology, usually the world is identical to what it's been like, but one thing changes. And I think perhaps the best movie about predicting the future of [?] is actually Bicentennial Man, which is [?] movie. It starred Robin Williams as an intelligent robot. But the thing I love is that this family has this intelligent robot--it can talk, it can cook breakfast, it can drive. It can do everything. And there it is in the kitchen at breakfast time: the mother is doing some of the cooking, the robot is doing some of the cooking--just like the world out there. And the father is reading a physical newspaper; the kids are reading physical pieces of paper. These days, the kids are on iPhones or iPads, and I certainly never read a physical newspaper. I subscribe to lots of newspapers, but I read them all online. So, it was the world exactly as it is, with one change. But that's not how the world really works. Lots of things change along the way. And so, as we get these new AI systems, they get embedded in a world that has changed along with them. So, this idea that we're going to wake up to a superintelligent--an evil superintelligent--system, to me, is nutty. Because before we get to that, we are going to have really nasty AI systems. Before that, we'll have grumpy AI systems that [?] don't like people. And before that, we'll just have disdainful AI systems. And we'll change the world. We won't let it go that way. A great example of a Hollywood movie is the lone inventor who suddenly comes up with a device that can shrink people down to the size of ants. I liken it to the guy tinkering in the backyard who comes in and says to his wife--because it's always the man tinkering, and the long-suffering wife--'Martha, I accidentally built a 747 in the backyard.' It doesn't happen that way. It's not just this one thing that changes. You have to change a whole structure. And I think that's what has happened right now with self-driving cars. People thought, 'It's going to be just the same as today, except the Ubers and Lyfts are not going to have drivers in them, and that's what's about to happen.'

Russ Roberts: Yep.

Rodney Brooks: No. It's much more complicated.

Russ Roberts: Yeah. My favorite example--your example of Bicentennial Man reminded me of this; it always drives me nuts in movies. I don't know if it offends you when you see it or you just laugh, but in Avengers: Infinity War, the latest Avengers movie, which I found extremely disappointing, there are these hordes of armies fighting in hand-to-hand combat, like they're in medieval times, jousting with lances on horses. I just don't think it's going to be that way. But it makes for a good screen, filled with stuff.

50:40

Russ Roberts: I want to turn--I want to get to your last point, which was deployment time. Do you want to say anything about that?

Rodney Brooks: Yeah. You know, people sort of think that because a technology exists, it's going to be deployed very soon. But deployment, especially when there's capital involved, takes a lot longer. We're used to Google and Facebook rolling out new features, because their marginal cost for a new feature is zero: every time you use Google or Facebook you download all the code into your browser anyway. So, putting some new feature in--well, the next time you download all the code into your browser, it's just a different version of the code. But if it's a physical upgrade to something, it takes a lot longer. One of my favorite examples is the B-52. They were built mostly in 1962. They are still one of the mainstays--

Russ Roberts: It's an airplane [?]--

Rodney Brooks: The B-52 is a U.S. Air Force bomber. It's still used in missions today. We often see them flying in Europe or flying elsewhere, where there's something going on. And there are current plans to keep them flying till 2040. They were built in 1962, and there's talk of extending their life to 100 years. Now, if the Air Force is using 100-year-old airplanes, that says something about how long it takes to change things. You know, they've changed the avionics; they've changed all sorts of stuff. But it just takes a long time to change things over. Manufacturing uses something called Programmable Logic Controllers--PLCs. They were invented by a company in Bedford, Massachusetts in 1967, as a replacement for electromagnetic relays. Electromagnetic relays were essentially the technology of Morse code and the telegraph, where an electromagnet--a coil of wire--magnetizes something and pulls down a switch. And that was how automation and control of factory equipment was done up till 1967. In 1967, the PLCs were electronic; and then 10 years later came a microprocessor-based emulation of those electromagnetic relays. And the way you still program them, with ladder logic, is you build a virtual network of electromagnetic relays, using a set of rules which make them stable. So, we're using an emulation of a technology from the 1950s which was developed in the 1960s. Every factory uses them. When I was writing this article, I went online and looked at the Tesla job openings at their factory, and sure enough: they were advertising for PLC technicians--people who knew how to virtually wire up electromagnetic relays to control equipment. I talked to the major supplier of PLCs in the world--last summer, it was--and I asked them how often they upgraded their software. And they said, 'We aim to do three upgrades every 20 years.' When you've got capital equipment, you've got things running; you just don't change stuff the way Facebook and Google change it. And we sort of think, 'It exists; it's going to get deployed.' No. It takes forever to deploy, for good reason: you've got this system running, and it has to keep running. You can't afford failure, whereas you can afford failure on your Facebook page or your Google search--you can't tell the difference, because a person is in the loop. When other people are in the loop, it's got to work. So, it just takes forever to deploy stuff. And the tech--I call them tech bros, they [?] stupid people, you know, 'We know how to do it better'--I am reminded of a tweet Elon Musk put out not too long ago saying, 'It's my fault; we overestimated how easy it was going to be to automate the whole factory.' He took the blame for thinking he was smarter and that it was going to be easy to do. It's not easy to do in these real systems. Deployment, when physical stuff is involved, takes a long, long time. Houses last for hundreds of years. Even the driverful cars that we buy today are going to be around in another 20 years. People aren't going to want to give up an asset they just spent a lot of money on. So, changeover is going to take quite a while.
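For readers unfamiliar with ladder logic, here is a minimal, hypothetical sketch--my example, not Brooks's--of what "virtually wiring up electromagnetic relays" amounts to: a PLC-style scan loop that reads inputs, evaluates each rung as a Boolean expression, and writes outputs. The rung below is the classic start/stop seal-in latch that a physical relay circuit would implement.

def scan(inputs, state):
    """One PLC-style scan: evaluate the rungs, then update the outputs."""
    start = inputs["start_button"]   # normally-open contact
    stop = inputs["stop_button"]     # normally-closed contact (True = not pressed)
    motor = state["motor"]
    # Rung: MOTOR = (START or MOTOR) and STOP  -- the seal-in latch.
    state["motor"] = (start or motor) and stop
    return state

state = {"motor": False}
for inputs in [
    {"start_button": True,  "stop_button": True},   # operator presses start
    {"start_button": False, "stop_button": True},   # start released; latch holds motor on
    {"start_button": False, "stop_button": False},  # stop pressed; motor drops out
]:
    state = scan(inputs, state)
    print(inputs, "->", state)

Real PLCs run this kind of read-evaluate-write cycle continuously, many times a second, which is part of why the programming model has stayed so close to the relay wiring it replaced.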

55:26

Russ Roberts: So, that prompts, when I think about it, a set of meta-questions. And maybe we'll close on this. So much of this--as prompted by your remark that it replicated the previous coil technology--the PCL? PLC--

Rodney Brooks: PLC.

Russ Roberts: PLC replicated it. So much of our technology today--and we are living in a world that is--Isaac Newton would be shocked by it, but of course somebody from 1975 would also be pretty shocked by it. You don't have to go back--

Rodney Brooks: Yeah. Yeah.

Russ Roberts: so long. 1985 would be shocked by the iPhone. And that's just a small thing. We have a lot of things coming that we haven't talked about. Maybe drones. Maybe medical devices that go inside your body. There's going to be some pretty cool and shocking stuff coming. A lot of it--it's fascinating to me how much of it mimics what humans do. And one of the observations that comes to mind--I've mentioned this before on the program--is about Andrew Wiles, when he solved Fermat's Last Theorem and then found out the proof was mistaken. He was lionized as the greatest mathematician of his era, and then it turned out, 'Oh, it's not true. You didn't prove it. It wasn't right. There was a mistake in the proof.' And he agonized for a very long time--I think a year or so, maybe longer--trying to fix the proof. And then one day, it just came to him. And if you asked him, 'How did you do it?' he couldn't tell you. And the brain--maybe there will be a day when we understand how the brain solves problems. That day hasn't come. And the way we solve problems now with technology is, as you point out, often a brute-force version of what humans do, but not quite the same. And when we can, we just mimic what humans do. An obvious example of that is the Kindle. The Kindle is just a book in silicon. It's not a new technology. It's just words on a page. We have some recorded things as well. But, you know, we have the desktop. We have all these things that re-do, in better, faster, more reliable ways--sometimes less reliable, because the machine crashes--things we do with a pencil and paper. And I wonder if that's a lack of imagination. Is that going to change with AI at some point? Does this make any sense? And do you have any thoughts on it?

Rodney Brooks: Yeah, no. I think it gets back to--you know, if you go back to George Lakoff, who is actually having a resurgence recently--George Lakoff and Mark Johnson wrote, about 40 years ago, about metaphors we live by. Much of our language is based on physical metaphor: how our bodies move around, the flow of time, how we tackle a problem--'tackle a problem'--grab. Everything in our language is really based on metaphors of our physicalness in the world. And I think we're really good at using those physical metaphors; and that's what permeates our computers and our systems. But our brains and our nervous systems are not built on those same physical metaphors. And when we try to use physical metaphors and things get really complex, it breaks down. I think quantum mechanics is a great example of that: 'Well, is it a wave or is it a particle?' You know, those are the two things we understand. No, it's an abstract algebra. Stop trying to force it into this physical metaphor of what you understand from your everyday experience. And it's very hard for us humans to do that, which is why quantum mechanics remains this great mystery. So, yes, I think we are very limited as human beings. And that's another reason that, you know--you would be immensely surprised if you were at the beach and a robot dolphin came out of the water, functioning like an actual dolphin, and then it let you know that it had been built by actual dolphins out at sea. Well, maybe us humans are just like that. We can't build these AI systems. Maybe some aliens will come and say, 'Oh, look at those silly little people at MIT. They think they know how to build something as intelligent as themselves. They're not smart enough.' The same way we think about dolphins: dolphins are not dexterous enough; they are not smart enough. Maybe we're not as smart as we like to think we are; and I think that's true of many people. We're not as smart as we like to think we are.

Russ Roberts: Well, I think so much of it is that linearity that we presume. Right? When you say that--which I think is a profound insight into human limitation--a lot of people in the field--you're unusual--a lot of people in the field that I run into and talk to casually, when I'm out in Palo Alto in the summers at Stanford, have a utopian confidence that the future is imminent. And if you press them as to why, they would say, 'Look how far we've come.' And I often think of the Nassim Taleb proverb from Venice, the Venetian proverb he likes to quote, which I've mentioned recently: 'The farther from shore, the deeper the ocean.' And the question is: How deep is that ocean? What are the limitations, if any? What I'm kind of dancing around here, not very effectively, is that I don't think it's a coincidence that so many of our technologies mimic primitive technologies. One reason is that they are culturally understandable and acceptable--those metaphors you are talking about. They are deeply embedded in us in all kinds of ways we don't fully appreciate. But maybe that's a big deal, not just a small deal. I look at my kids. My dad is 88. When he was about 78, I gave him an mp3 player pre-loaded with EconTalk episodes so he could hear what I was up to. And I mailed it to him. And he called me, and I said, 'Dad, how's it going?' And he said, 'Well, I can't turn it on.' And it came with a manual--a really horrible, tiny, folded-up, thin-paper manual for a cheap mp3 player. And he couldn't read the manual, because nobody really reads the manual: they don't make any attempt to make it work well. But, of course, I had bought one for myself, the same model, because I knew this could happen. And I handed it to my 7-year-old kid and said, 'Turn this on.' And the kid, of course, got it on in about 20 seconds by poking around and pushing things. And my dad just didn't have that skillset. And so you wonder: as people grow up in worlds of modern technology with new, advanced stuff, maybe the metaphors will change and they will be able to adapt to new things in a way that we oldsters couldn't or can't? Or is it just something about being human and the way our brains work and the way we perceive the physical world that's inevitably part of it?

Rodney Brooks: Yeah, I think you're right there. There are two separate things going on. One is that we use these physical metaphors for everything because it's what we're wired to understand. The other is technological adoption. You know, if you took a smartphone today and handed it to someone from 30 years ago, they wouldn't have a clue how to use it. The reason smartphones worked when they came along--I think it was 2007--the reason they worked was that people had gotten used to the idea of a touchscreen. Because touchscreens had started to show up--big touchscreens where we selected a few things--in various ATMs [automatic teller machines] and airports, etc. But there were no touchscreens in the 1980s. I personally put the very first touchscreen at Carnegie Mellon University in a research lab in 1988. So, things build upon stuff that is there. The new thing that we had to learn on touchscreens was the pinch to expand--

Russ Roberts: It's sophisticated now.

Rodney Brooks: Yeah. But the other was just touching buttons. Oh, and by the way: that's just emulating the physical stuff.

Russ Roberts: Yep. It's the same thing.

Rodney Brooks: So, it builds, and it builds slowly. And, yeah--what could there be that we can't imagine, because we don't have the right metaphorical tools to deal with it? If we ever were to meet up with another intelligent race from some other planet, I imagine their technology would be indistinguishable from magic to us, and our technology might be indistinguishable from magic to them--because each would just be different in the way we thought about things.