Explore audio transcript, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.


Jan 4 2021 at 10:19am

Interesting episode.  I did not expect the conversation to go the way it did, but was happy with the result.

To the closing of the podcast: Matt mentions AT&T and large companies as bureaucratic entities in which there is no one to hold to account. But the point Russ made the whole podcast about competition and the exit option is ignored. I am sure he would acknowledge it, but the only place I don’t have an exit option is the government, and he specifically says it is not just governments. The federal level is probably the only one truly subject to no competition. At least I could leave my state or locality relatively easily.

Google is hard to leave, but if they started to falter on usability and value, people would flock for the exits, and Google knows it. They are desperate to stay ahead of the curve, and they pour millions into doing so. Competition is what holds these bureaucracies to account. Take your dollars elsewhere. This is far from a perfect solution, but the competitive force helps in the discovery of consumer preference. People who use Google know what the trade-off is. They may not be aware of its extent, but everybody knows their ads are tailored and that they are nudged into certain decisions. People seem to be OK with it. They continue to use the product. I would say that is a market test, and Google is excelling at it.

Jan 4 2021 at 11:28am

In case it is useful to anyone looking for it, the term I’m familiar with for the Catholic doctrine of addressing any problem at the smallest appropriate scale for that problem is “subsidiarity”. (Brought up by Matthew, “rhymed” by Russ with federalism.)

Jan 4 2021 at 5:47pm

Very interesting discussion.

I particularly liked the comparison of Google to someone who offers to repair your dishwasher for free but will take pictures of your house and sell the information to unspecified third parties. Not sure if that was Russ’s creation or Shoshana Zuboff but in any event it was striking and I think a good way to make Google’s invasiveness more real. So, thanks for that.

Russ, listening to the interview, I was struck that it seems your view of markets has evolved from a Panglossian view to a Churchillian one. That is, you used to think markets were the best of all possible worlds, while you now feel they are the worst except for all of the alternatives. This seems like a more reasonable assessment, but I am wondering if it has any practical consequence. That is, are there any societal problems for which you would advocate a non-market approach?

A related question for Matthew is what would he recommend be done to address his concerns? Does he just want people to withdraw from the Alphabet ecosystem and avoid driverless vehicles, or would he advocate any governmental measures? It seems to me that some of the problems Matthew highlights can only be addressed collectively, and I am curious how either Matthew or Russ [or any commenters who would like to express views] envisions these problems being addressed.

A long-winded way of saying—you’ve gotten me concerned, what can be done?

Jan 6 2021 at 3:11pm

I thought about what can be done too. I came away thinking that the best solution would be a mass sharing of this podcast with everyone we know; the resulting emergent order would correct this situation (Pollyannaish, I totally admit).

My fear is that any governmental action will only exacerbate the problem. There is too much temptation for governmental actors to use the power that comes from controlling people with technology rather than reduce or eliminate it.

David Henderson
Jan 14 2021 at 6:24pm


You write:

Russ, listening to the interview, I was struck that it seems your view of markets has evolved from a Panglossian view to a Churchillian one. That is, you used to think markets were the best of all possible worlds, while you now feel they are the worst except for all of the alternatives.

Those amount to the same thing: the best is the same as the least bad.

Dennis Drake
Jan 4 2021 at 9:44pm

What an interesting intellectual back and forth. Always a little (or a lot) over my head, but I was amazed that Russ Roberts interjected a song by Richard Thompson, “The Wall of Death.” It is on the only Thompson album that I own, Shoot Out The Lights. I’ve been listening to EconTalk since 2008 or so; for a moment this morning I felt I belonged.

Thanks for all the podcasts over the years.

Jan 5 2021 at 10:21am

In a fairly fearless moment, the author commented on the “egalitarian faith” and entitled, imparted “self-esteem” (26:00), which are aspects of what he calls the “feminized morality” (26:45) of everyone being equal, equal outcomes, etc. He then goes into Freud’s discussion of upbringing: the mother feeding the narcissistic needs of a child, but the father acting as the “image an indifferent other person would have of him.” [start countdown to Adam Smith comment by Russ at 29:50] In my view, this “feminized morality” comprises the core morality driving the SJWs and the left. Male things (values, orientation, even that there is maleness) are objectified and vilified. Then they discuss the “rule of law” being an outside (i.e., male-like) observer that is key to our societal health. When you get way out of balance, you get the consequences, which in our case have been dysfunctional societal disruption with nothing but a hyper-narcissistic utopia as its goal.

Mort Dubois
Jan 5 2021 at 9:32pm

I thought that this section of the discussion was a very odd way of describing the two main tasks of parenting: to love and encourage your child, while at the same time teaching them what the world expects of them. I don’t think it is a requirement that male and female enter into it.

Jan 6 2021 at 2:10pm

Thank you, Russ, for sharing this wonderful song. This podcast has spoken to my heart and soul more than any in a long time. It prompted me to really ponder what it is to be alive. Thank you.

Here are Richard Thompson’s lyrics for Wall of Death

Let me ride on the wall of death one more time
Let me ride on the wall of death one more time
You can waste your time on the other rides
But this is the nearest to being alive
Let me take my chances on the Wall of Death

You can go with the crazy people in the crooked house
You can fly away on the rocket or spin in the mouse
The tunnel of love might amuse you
And Noah’s Ark might confuse you but
Let me take my chances on the Wall of Death

On the Wall of Death
All the World is far from me
On the Wall of Death
It’s the nearest to being free

[comment elided for copyright violation–Econlib Ed.]

Richie Wohlers
Jan 7 2021 at 9:00pm

Ultimate (Frisbee) is absolutely a competitive game. Though the rules design it to be non-contact, contact will still happen. Much like basketball.


John Alcorn
Jan 9 2021 at 9:44am

Although the conversation took a while to find its sea legs, it then really charted the depths. The discussion of big tech and freedom couldn’t be more timely. I’ve found myself rethinking this episode of EconTalk every day this week, especially the issues around big tech.

It’s hard to make up one’s mind when sharper minds disagree among themselves.

Matthew Crawford offers an eloquent critique of the current model, which rests on an implicit exchange of software-platform access for individual user data.

E. Glen Weyl and colleagues advocate creation of systematic “countervailing mechanisms,” designed to (a) increase competition among platforms, (b) introduce user ownership of data, (c) create ‘unions’ of users for collective bargaining with platform firms, and (d) increase government regulation of platforms.

But compare Bryan Caplan, who argues:

“Lately I’ve been hearing numerous demands to regulate Google, Amazon, Facebook, and Twitter. I think these demands are overblown because countless excellent, viable substitutes remain. But the pro-regulation activists’ fundamental premise is not crazy. Regulation can and should rein in grotesquely abusive monopolies who act like they’re entitled to treat people like dirt… and every government on Earth readily fits this description. Yes, even Norway. Let’s get governments fully under control, then argue whether search engines and online superstores are in the same league as the monopolists par excellence.” –EconLog (18 September 2018)

John Alcorn
Jan 9 2021 at 9:50am

There is an elephant in the room in this otherwise wide-ranging EconTalk episode: Partisan censorship by big tech.

Todd D Mora
Jan 11 2021 at 9:32am

I wonder if the children of the current Gen Zs will see Google/Twitter/Amazon/Apple as “The Man” or “The Patriarchy.” The irony of the progeny of cancel culture cancelling cancel culture would be interesting, to say the least.

Hang on, it’s going to be a wild ride, much like the “Wall of Death”  🙂

Jan 11 2021 at 1:59pm

I would like to weigh in as someone on the Left — not to be confused with the Democratic Party — to say that the censorship looks quite different from our perspective, and we are not fans of it either — especially when exercised by corporate behemoths.

Jan 13 2021 at 3:49am

This was a great conversation, and as someone else wrote, I didn’t expect it to go the way it did, but I’m glad it did. The discussion about tech firms was fascinating. I am commenting because I felt you left a question unanswered, and I have my own answer. I do have to say that it is very rare that, as a longtime listener of EconTalk, I feel I have the answer to a question that stumps both Russ and the guest. But this was one of those times.

At the end of the interview, there was a discussion about algorithms ordering our daily life. Repeatedly the question comes up: what is really wrong with that? Both Russ and Matt express a deep discomfort but can’t articulate it. Russ finally turns to Hannah Arendt, who sheds some light on the topic. But the faceless bureaucracy is just a part of it. Meanwhile, I had to suppress the urge to scream into the podcast: how can you not see what is wrong with that?

The first time I wanted to yell at the podcast was in connection with the discussion about the Smart City. I was rather shocked that neither of you mentioned China, where this is already happening to a large extent. It seems obvious to me that this practice would be akin to fascism (suppression of our individual preferences in favor of the averaged preferences of the masses). But even without the scary totalitarian twist, the whole idea rests on the assumption that we are all just products of algorithms, that our actions are all predictable by algorithm, and that there is zero creativity in our daily lives.

Russ seemed to like the idea of having the algorithm shop for you. I know I’m being genderist, but is this a somewhat male point of view, because so many more men than women dislike shopping? Shopping is much more than buying something that you like. It’s looking at all the possibilities, imagining what they might bring, and then choosing among them. As an avid cook, food shopping for me is one of life’s greatest pleasures. Why would I want an algorithm to choose my food for me?

Then there is the idea that I just might not be all that predictable. I was trying to find my way to a meeting in Brussels once, on foot, because Brussels is a walking city, and twice a pop-up in Google Maps appeared to suggest that I stop at a nearby cafe. (This never happened to me before or since, but it was so annoying!) I had zero interest in this cafe (it was a Hard Rock, if you can believe it, a place I have never been, and it was 9 a.m.) or in stopping for a coffee. Declining the offer got me off my track, and because the network connection took its time recognizing that I had declined, I was lost for 10 minutes, which nearly made me late and cost me the coffee I had been looking forward to at the cafe on the ground floor of my destination before the meeting. And yes, at that time, I was frustrated with the faceless bureaucracy that got me off track. Even if the algorithm somehow got better at choosing a place to stop (e.g., not the Hard Rock Cafe), I don’t do the same thing every day or every trip, and I have serious objections to non-authorized algorithms making my choices for me. This is so obvious to me that I could not believe you guys could not see it. It reminds me of the financial traders in the old days who could not see how their algorithms could ever be wrong. (I was introduced to these as a young sales assistant at an investment bank in the 1980s, and I must say that their authors were mesmerized by their creation, just like the self-driving carmakers.)

I will also comment that the only other time I recall wanting to yell an answer into the podcast was during the 2017 discussion of self-driving cars. In that conversation, no one could come up with any possible safety risks, while I could immediately imagine a fundamental one that would probably be impossible to overcome: a human being does not think like a computer, and no matter how good they are, computers can never imagine themselves out of a new danger that a human has not precisely described to them. Yet on every drive we probably navigate a new permutation of a familiar scenario every minute. A computer will not be able to do that with 100% success. And there is a corollary: humans make machines, including computers, and humans are imperfect. The machines will eventually reveal those imperfections. It’s why, after decades of engineering improvements, we still have plane crashes and chemical accidents.

Jan 13 2021 at 4:01am

I reread my comment and realized that, in my inarticulate way, I was really just trying to say, and what if the algorithms are wrong?  Isn’t that the biggest problem with all this?


John Knowlton
Jan 14 2021 at 4:46am

Thanks for another great episode! I have already begun Shop Class as Soulcraft and just received Oakeshott’s Rationalism in Politics from interlibrary loan. Your podcast helps ensure that I have a rich mental life.

Jan 23 2021 at 6:04am

Matthew is grasping at some very important questions, though to me he’s coming at them from an odd angle. I can appreciate the metaphor of the choice of the open road, but that really doesn’t deserve to be the centerpiece of this much larger issue. Most people in the world do not drive, and far fewer drive for the sake of spontaneity, Google Maps or not.

A reformulation of the real issue as I see it is: there is no guarantee that any complex adaptive system will have our best interests at heart, be it corporate profit seeking, government, or a virus. The key questions then are:
1. How much control do we have over these systems?
2. How impactful are they in our lives?
3. How closely aligned are our collective goals as a species with the system’s goals?

On corporations, it seems like the answers are 1. Some, but decreasing pretty steadily; 2. Extremely; and 3. Hard to know, but I think many would say that it’s looking much less positive than it was half a century ago.

Comments are closed.



Podcast Episode Highlights

Intro. [Recording date: December 1, 2020.]

Russ Roberts: Today is December 1st, 2020, and my guest is author Matthew Crawford. He is a Senior Fellow at the University of Virginia's Institute for Advanced Studies in Culture, and author of Shop Class as Soulcraft. His latest book, and the subject of today's conversation, is Why We Drive: Toward a Philosophy of the Open Road. This is an homage to driving and cars, but it's about a lot more than driving: a deep meditation on how we might think about our relationship to technology and regulation.

Matthew, welcome to EconTalk.

Matthew Crawford: Thanks for having me, Russ.


Russ Roberts: Let's start with the role of serendipity--in life and in driving. You're a fan of it. Tell me why. Why is serendipity--and for maybe some of our non-English speaking listeners--what is serendipity?

Matthew Crawford: Well, serendipity is something that happens when you don't have a plan. Or, maybe things don't go according to plan. And, in particular, it's when things go well under those conditions--because of course things don't always go well.

And I think that's part of the meaning that's in there with serendipity, is that there's some kind of risk involved, and it involves hope. I like to quote my friend Garnette Cadogan, who wrote this beautiful essay--not about driving, but about walking. And he talks about stepping out onto an urban sidewalk not knowing who or what you're going to encounter. And he says that serendipity is a secular way of speaking of grace. It's unearned favor.

And, so, I try to tie that to the experience of riding a motorcycle through the woods on a trail where you're not encountering other people, but the trail itself is so full of surprises that it takes total concentration. And when I push the pace, you know, beyond my actual skill set, and it goes well, I feel somehow enlarged. I feel existentially energized.

So, the book begins with this hunch that somehow risk is bound up with humanizing possibilities. And I wanted to explore that.

Russ Roberts: Nassim Nicholas Taleb, who has been on this program many times, calls himself a flaneur, which is a French word which means, I think, to stroll aimlessly--to let your thoughts go where they'll go, to see what you'll encounter without thinking about what that might be.

Of course, a lot of our life is spent trying to reduce that uncertainty, that surprise factor. I know when I go out for an "exercise walk," I do a certain loop around my block. There's no serendipity except who else might be strolling. It's a pretty dull, safe experience. I think it's a great metaphor for how we can think about our lives.

We've talked a lot in the program about the need--I think many young people feel they have a need for a plan; and I understand that need. And for some people, that's appropriate. If you want to be a doctor, you got to start planning early: it's hard to get there. You can get there later in life, but it's really much, much harder. But, for most of us, we're not sure what we want to be, and part of life is finding out what that is. And that serendipity part is enormous. It's a whole different way of seeing life, less as an algorithm to be executed and more as an adventure to be experienced.

Matthew Crawford: Yeah, and I like that last formulation of yours, contrasting it with something more algorithmic. Of course, we've embarked on this grand social undertaking of rendering things algorithmically, of automation. I think one way you could think of automation is that it's an attempt to eliminate those moments of openness or serendipity and replace them with machine-generated certainty, on the supposition--well, usually safety is invoked. There also seems to be a presumption that human beings are incompetent or not to be trusted. Certainly in the driverless car space, the refrain is that human beings are terrible drivers. And, you know, it's hard not to agree with that, but there is a kind of consistent, I don't know, low regard for human capacities that seems to be operating there.

Russ Roberts: And of course, if we're not careful, we'll meet those expectations. I have to say, when I'm commuting--I don't really commute anymore--but if I'm on the Beltway here in the Washington, D.C. area driving 60, 70 miles an hour at high speed with, oh, maybe 15, 20 feet, 30 feet maybe of space between me and cars around me, maybe it's a little more than that, I do feel like I'm on a bit of an adventure. It amazes me that we don't kill ourselves every time we're out in that. It reminds me a little bit of the Blue Angels, the aeronautic team, how they can swoop and swerve in concert with such small margin for error is an extraordinary achievement. Of course, sometimes they fail. But, I feel that way driving: I do get a taste of that even without being on a motorcycle.

Matthew Crawford: Yeah, and if you think about it--as you are--it's an extraordinary trust where we do extend to one another a presumption of individual competence and paying attention.

You know, Tocqueville, when he came to America and traveled around as a Frenchman back in the mid-19th century, one thing that struck him was that Americans have this capacity to cooperate to achieve practical ends, maybe building a bridge or building a road. And he thought that these small-bore practical activities that require cooperation and coordination served as a kind of nursery for certain aspects of the democratic personality. He thought that these capacities were important for collective self-government.

So, that's interesting to think about if we're going to relieve ourselves of the burden of that kind of coordinated action that we see on the road: Does that have any implications for the democratic character and possibly an atrophy of the social intelligence that we are exercising on the road without really thinking about it?


Russ Roberts: I'd say the causation runs in the other direction, or at least in both directions, right? I think part of the reason that we have the government we have, and the relationship with the state that we have, at least had historically, is that it has something to do with American character and nature. Our willingness to give up control and autonomy to a nanny state, to a wiser artificial intelligence, a wiser political class of experts, is to me really the underlying problem here.

So, it's--once that skill on the road that you're talking about atrophies--I would call it the ability to look out for yourself and your own dreams and desires--and once you say, 'Oh, they,' whoever they may be, 'can take care of it better than I can, so I'll just turn that control over.' I think that's the source of the problem.

I can't help but think about that incredible scene in the movie Witness when they build a barn in a day. It gives me goosebumps just to think about it. And the idea of the kind of community it takes--it's an Amish community in this case--to make that happen, is just a magnificent example of the kind of cooperation you're talking about. In this case, it's an ordered cooperation that's intended. On the road, it's even more beautiful in a way because it's not intended. It just emerges from our care and self-care and our trust and expectations of the people in the other cars.

Matthew Crawford: Yeah. We have this exquisitely finely evolved capacity to predict one another's behavior. There's some great cognitive science on this that suggests the human mind is essentially organized as a prediction machine. And, where it gets interesting is in this loop of reciprocal prediction where you kind of stabilize your own behavior in order to make yourself more predictable to others.

To go back to Tocqueville, the ability to do just this without the supervision of the state or maybe some technology that does things for us--I think it's not always the state that is the eroding force here--that is eroding of our social capacities, but a kind of supervisory technocratic regime. And, yeah, it's interesting to think about.

Russ Roberts: Driving is just one example. There are many other aspects of our lives that have this structure, and this--I would call it a sort of creeping paternalism.

Matthew Crawford: It's interesting to think how--I mean, often the paternalism these days does proceed under the banner of technological improvement. With driverless cars, I think you can regard that as one instance of this wider pattern in our relationship to the material world, in which the demands of skill and competence give way to a promise of safety and convenience.

And, that safety element is interesting because--well, a couple of things. First, I think there's a dispositional evolution wherein the safer we become, the more intolerable any remaining risk appears.

But also, because of that change in our disposition, I think it makes us more susceptible to claims made on behalf of safety which are not always in good faith. We can talk about red light cameras shortly--

Russ Roberts: Yeah. We're going to talk about this--

Matthew Crawford: But, the larger point I want to make here, though, is that safety becomes a lever of moral intimidation that can be used to arrest criticism of some program that might be pursuing something quite other than safety.

Russ Roberts: Yeah. And of course, sometimes it's just well-intended. We're in the middle of COVID, in a pandemic--it's a perfect example of the kind of thing you're talking about. All kinds of things are claimed to be justified because they save lives--which often is worth doing, saving lives. But, always important to remember that--it's hard to believe, but most of us don't live forever. So, 'saving lives' is a bit--actually it's a bad phrase. It's not illuminating. It covers up what's actually happening--it's extending life is what we're talking about. Which is also a good thing; and I'm all in favor of it. But, I think there is a trade-off there as to what we're willing to give up to extend our lives.


Russ Roberts: Let's talk for a minute before we go back--we'll come back to this in more detail--but I'd like to hear you talk a little bit about the familiar.

So, I think we as human beings--somebody was asking me about bedtime stories for our kids, and I was reminiscing about how my kids liked quirky; I would occasionally tell them a quirky story. But, I had a whole set of stories that I told and read to them where everything was predictable, and they found that deeply comforting. They loved--and, the prediction machine is part of that story, the brain--they loved knowing what was coming next, anticipating it, hearing it reinforced. Occasionally I'd surprise them; they liked that, too.

But, I personally--I think I romanticize risk and surprise and adventure, but deep down I think I'm a pretty cautious person. I think I've been on a motorcycle once, Matthew--which is less than you've been on one.

So, I have trouble walking the walk, I think, in this area. But, I do think there is a side of our humanity that likes the predictable, likes to say, 'Take out the risk. Give me the technology. Give me the pill. Give me the regulation. It protects me: I don't have to think about it any more.' And, you're really advocating in this book for a more adventurous mode of living. So, defend that, or react to my point about familiarity.

Matthew Crawford: Yeah. Well, first, about children. I think children are inherently conservative. They like routine, and they like the familiar; and I'm very sympathetic to that. One of my favorite thinkers is Michael Oakeshott, who talks about conservatism as not the hankering after the past, but rather an affection for the present, for what actually exists and being kind of--feeling gratitude for the present.

So, now, what you just said about risk--yeah. I guess there are two things. You could say that this taste for risk is a decadent phenomenon of sort of late-modern culture where we're so safe that we have to seek out risks sort of artificially--bungee jumping or something, whatever. So, you could take it as an index of some gnawing, I don't know, self-doubt at the heart of the bourgeois experience where we're so insulated from the risk of physical harm.

I think for men, especially, it's like there's this question that goes unasked of you--and it's a question you want to know the answer to, which is: How am I going to fare up against a moment of real physical risk? So, there's that. So, that would be to cast it as a kind of, you know, decadent phenomenon.

But, you can flip that around. So, Nietzsche, as you know, talks about the last man, who is this character who appears at the end of history, who is only concerned with his own safety and comfort and convenience. And he finds this contemptible creature.

I think these are both true.

Russ Roberts: Which--what do you mean?

Matthew Crawford: Well, I mean, Nietzsche has a real insight there, that there is something less than human in a being who, you know, is that devoted to comfort and safety and convenience. And, also, you know, preoccupation with overcoming that can itself become, I don't know, a kind of, a form of perversity, maybe. That's what I took you to be gesturing at--

Russ Roberts: Yeah. No, I am--

Matthew Crawford: a kind of perversity there--

Russ Roberts: It reminds me of--wonderful folk song, not really a folk song-- but it's by Richard Thompson, who gets called a folk singer. I think he's the author. It's called "The Wall of Death." And on the surface, it's a song about an amusement park ride. And of course, amusement park rides, they are a little bit dangerous--they can be. In the old days they were actually dangerous. Now, they give the illusion of danger without much of the danger. But, the refrain, one of the lines of that song is, "Let me take my chances on the wall of death."

And I think that--I encourage listeners--we'll put a link up to the song. It's extraordinary. I love the song. The music and lyrics are amazing. But, it's basically making fun of the safer rides at the amusement park. But of course, it's a metaphor for life. And I think the question of being tested--the ideas written in the poem If by Kipling--they're not modern ideas anymore. You're very much a 19th century man, Matthew. I am, I have a little of that, myself.

Matthew Crawford: Yeah. I think--let's make, I think, an important distinction here. So, there's the amusement park ride that's scary, doesn't require any skill, right? It's entirely passive--

Russ Roberts: activity. True.

Matthew Crawford: I want to lump that together with, say, Russian roulette, which is risky, but similarly there's no skill involved.

Now, I want to contrast both of those with, say, riding a motorcycle on a canyon road at 11 where you are--the reason it's so--God, it's so awesome--is that you're at the outer edge of your abilities and you're pushing them further.

Nietzsche said that joy is the feeling of your powers expanding. And they only expand if you push it. Right? And that, I think, replicates the experience of childhood--where, initially a toddler, their body, they don't really have control over it; they're learning to walk and climb things; they fall and they get hurt. But they're gaining these new powers, and--I mean, it is a source of joy.

And I think when we start using things with wheels like skateboards and bicycles, it's a continuation of that process where these artifacts become almost prosthetics, sort of extensions of our bodily capacities.

So, I think we are a hybrid creature. We're not simply pedestrians. We incorporate machines into our embodied mode of getting around. And it means that there's almost an infinite kind of headroom of further skills that one can develop. So, there is a kind of athleticism that can involve machines such as riding motorcycles.


Russ Roberts: I might try a different approach on that. I love that--it's beautiful. And I confess--I think: What's the analogy for my life? It's, I guess, writing something under a deadline. When the words still sing and I'm writing under pressure, it is exhilarating. And it comes from a feeling of being able to write in a way that I didn't have, say, when I was younger. So, that's my closest--I have to think about it--but that's my closest thought to that.

But, wouldn't some people say that's just childish? 'I mean, come on, writing--sure, when you're 11 or 8, riding a bike down a dangerous path--but shouldn't you leave that behind? What's the point?'

When I read the book Into Thin Air--which is about a bunch of people who climb Mount Everest as sort of amateurs and push themselves to the limit--I thought, 'What?' Some of them don't come back; some of them are damaged irrevocably. It's a tragic book. But, part of the tragedy is, like: Why?

And, you're giving part of the answer. We have to push up against our physical limits; it's part of who we are as human beings.

But, couldn't you argue that's just kind of a--it's a mistake? Your book is extraordinary--there are so many things in there where I thought, 'I wonder if he's going to survive? Oh, I guess he will: he wrote the book.' Defend that--defend that level of risk-taking.

Matthew Crawford: Well, I don't know where you're getting that. I'm not a daredevil. I report on some daredevil stuff.

Russ Roberts: That's true. That's fair enough.

Matthew Crawford: So, I go to a demolition derby and check that out; and drifting, which is awesome. So, yeah, I'm intrigued by these arenas of, you know, sort of gladiatorial combat.

And, as you know, I have found this thinker who sheds enormous light on what I found in these venues--he's a Dutch historian named Johan Huizinga, who wrote a book called Homo Ludens. So, that's Latin for man who plays. And he found that play--the spirit of play--means that you're daring something, you're enduring tension and risk and uncertainty. And it also has a social dimension. He said, it combines friendship and hostility, which I thought was really interesting.

So, the examples he gives are ritualized combat, competitive dances--he was looking at different archaic societies. But, also you think of things like the rap battles of the 1990s, where it's stylized insult-trading and boasting matches. So it's a very macho kind of energy.

And, Huizinga finds play to be the basis of civilization. He thinks that it's in our games that you can find the origins of social order. Because, after all, you have to submit to the rules of the game. And it's usually some play community that sets itself off from the wider community and develops these internal norms and standards. And they push each other ever further. And it's in that pushing, that rivalrous pushing each other, that new forms of creativity emerge.

Russ Roberts: I've talked many times in here about the moment after a football game when the people on the field--who have experienced something like what you're talking about, that the rest of us really can't fathom--have that respect for each other. That, after they've tried to really hurt each other, they somehow manage to embrace and salute each other. We see that in hockey, too, which is an incredibly dangerous sport; and yet at the end of the Stanley Cup, there's a powerful camaraderie of shared warriorship that is part of what you're talking about.

Matthew Crawford: Yeah. And I think that is something that is acutely missing from the general run of life today. That experience of solidarity--a solidarity that's forged in--well, you used the word 'camaraderie.' I think that's perfect.

Russ Roberts: My dad used to make fun of Frisbee.

Matthew Crawford: Frisbee.

Russ Roberts: I think he'd make fun of Ultimate Frisbee, too--I can't remember whether he also made fun of it. But, Ultimate Frisbee is competitive, a little bit--you do score. Regular Frisbee is just pure play without the competitive part. Ultimate Frisbee has this competitive part; but it's been sanitized: the highest risk you generally face, as far as I know, in the casual games is that the frisbee might hit you. The frisbee is plastic. But there's no checking, or tackling, or rugby-like activity. And my dad viewed that--Frisbee generally--as an example of how we have tried to take competition out of modern life. So, talk about that a little bit, because it's related to what you're talking about--you just used that phrase. You defend competition as part of the human experience.

Matthew Crawford: Yeah. I think, you know, we--competition doesn't sit very well with the egalitarian faith. Right? Because people win and other people lose.

Russ Roberts: And we're all winners! Come on.

Matthew Crawford: Yeah. Well, that's just it: we have this kind of idea of self-esteem as something that is crucially important to a child, and that we're all sort of equally entitled to it. And, kids need to be imparted with this self-esteem, as though it is something one could impart without them earning it.

So, this is, um--yeah--this is a kind of feminized morality. It's a highly democratic morality.

Freud has some interesting thoughts on, you know, the different roles of mothers and fathers in the development of a child. So, the earliest infancy he called the stage of infantile narcissism--the mother is like an extension of himself, who takes care of all his needs. And, the child has no sense that the world is something separate from himself. And, throughout life, the mother--the, sort of, image--is that she provides this unconditional love.

Whereas, the father's role is to convey to the child the image that an indifferent other person would have of him. In other words, the father is the conduit for these impersonal norms and standards which, if you rise to meet them, the world confers on you a kind of recognition; and it's a powerful recognition.

So, if you kind of short-circuit that process of getting out of yourself--out of that self-absorption of the mother's love through this doctrine of equal esteem--it means that you're not going to achieve that kind of worldly recognition.

And, it's interesting that--we're talking about the superego now, these impersonal demands and standards that apply to everybody, a shared standard. That's really the basis for a democratic social order, to have a single kind of standard that applies to everyone; and you meet it or you don't.

And, I think there's a tension, then, between this democratic morality, which I'm calling feminized--of equal esteem--on the one hand, and on the other hand this requirement that we all answer to the same or some set of shared standards. Because, if you don't have that, you have something more like the [?] regime of special privileges and special exemptions [crosstalk 00:29:07].


Russ Roberts: Hayek writes about this extensively: the ability of each of us to anticipate and form expectations, based on the norms that we share of what is considered proper behavior, is what allows us to pursue our own desires in a way that doesn't encroach on others'--and often allows us to create cooperation through, say, the market, which has extraordinarily beneficial impacts on us both in material and non-material ways. It's very Smithian--very similar to what Adam Smith talks about in The Theory of Moral Sentiments. He invokes the impartial spectator as this observer of your behavior that forces you to step outside your natural self-interest. I don't think Smith read Freud, but it's possible Freud read Smith--I don't know. But, that's a very interesting observation.

Matthew Crawford: I haven't read The Theory of Moral Sentiments. Does Smith in any way introduce this idea of the maternal figure versus the paternal figure?

Russ Roberts: No, Smith doesn't have any of--he was very careful not to be politically incorrect, Matthew. More seriously, not that I remember. And, Smith never married, so he was never a father. Probably--he was very close to his mother, lived with her a good chunk of his life as an adult. This is not an area that his armchair theorizing probably would've lit upon, but it's a fascinating speculation that is--again, it's not considered politically correct, but I think we ignore some of that at our peril. That, our urge to treat men and women and fathers and mothers as identical, as interchangeable, is probably not consistent with our genetic evolutionary inheritance, is my guess.

Matthew Crawford: Yeah, and/or our social evolution of the rule of law, for example--I mean, what is that? But--the sort of binding standards that apply to all. Which of course was a signature accomplishment in the Western tradition--the rule of law, as opposed to the arbitrary will of a capricious sovereign.

Russ Roberts: I think about this a lot as a parent, actually. I don't know if it's necessarily male/female differences, but I notice how it spills over into other areas of life. Right? There's a form of parenting that says everything is negotiable--everything is an object of discretion. So: you have misbehaved; let's hear your excuses and maybe I'll forgive you. Versus: you broke the rules, these are the consequences.

And, the rule of law--I've never really thought about it--is the natural extension of that. It's rather powerful. It makes a lot of people very uncomfortable. They want to have exceptions: there's always "extenuating circumstances." And I think people forget that when you live in a world of extenuating circumstances, they become more common--that they're not a fixed, objective category.

Matthew Crawford: I think we're, in this conversation, replicating the whole history of the concept of law. There's an Old Testament idea of law, it really does speak in the imperative voice. And then, in the Christian innovation, there's more emphasis on mercy, I think. Well, I'm not sure that's true actually, that it's really a departure from Judaism.

But, in any case, the idea of mercy means a departure from the strict standards of justice. Whether it's extenuating circumstances, or I think more precisely it's not based on some claim of extraordinary or strange circumstances, but rather it's usually a sort of gracious grant by the political authority that stands outside of justice. It's a moment of forgiveness or grace that is not in any way justified. It stands outside of justice.

Russ Roberts: Sometimes it's cronyism--not quite as attractive.

Matthew Crawford: Sure; that's the problem.

Russ Roberts: Just a quick side-note on Judaism: I think Judaism gets a very bad rap. I think it's useful to have a concept of "the Old Testament God," but it's not the God actually in the Old Testament.

Matthew Crawford: Okay, sure.

Russ Roberts: The actual God of the Hebrew Bible is complicated. At one point God gives a self-description of [Hebrew 00:34:05 שְׁמוֹת]: compassionate, gracious, slow to anger, full of kindness and truth. We could really write a great book on this, Matthew--a Freudian interpretation of the Hebrew scripture, where God embodies both male and female attributes--which of course is the Jewish conception of God: probably both. None of it is politically correct. But let's just move on.


Russ Roberts: Let's talk about self-driving cars. A lot of the book deals with that. I think I was seduced by the claims of the advocates--it was about four or five years ago when they said, 'This is coming. It's just a technical problem. We'll solve it in the next year or two, and we're going to save 35,000, 40,000 lives a year in the United States and even more abroad once they can afford these technologies. It's going to be great.' Because--we did an episode with Benedict Evans on this; I've done many episodes on it. The idea that in my commute I can sit back, put on my headphones, read, work on my computer, have a drink--it's fantastic. What's wrong with that? What's wrong with self-driving cars?

Matthew Crawford: I totally get the appeal. I think if I had a daily commute through bumper-to-bumper traffic on the freeway, I would totally want one of these for just this reason, to sort of free you up to do something else.

So, I guess there's a few ways to approach this. One would be simply to note that this merely technical challenge, engineering problem, has turned out to be a lot more challenging than they were saying or thought it was going to be even five years ago. So, the horizon when this is supposed to happen has been pushed back and back. And there's been a lot of consolidation of these firms, kind of--investors starting to get skeptical about this. So, there's that.

There's also the problem of--which sort of maps onto the more general problem of automation--the disruption in the labor force that's likely to happen. So, as it turns out, in about two-thirds of the states in the nation, the number one occupation for men without a college degree is some form of driving--delivery, trucking, whatever. So, you're talking about--if this were to come to fruition--a massive dislocation in the labor market in precisely the demographic that is the sort of natural home of that middle American radical who stands behind the populist kind of moment. Right? So, you're talking about intensifying a political tension in a big way.

Russ Roberts: Well, there's millions of those people, too. It's--sometimes people don't think about how many cab drivers and truck drivers there are. It's millions of people. The economist and my youthful self would be prone to say, 'Oh, but they'll find new jobs. New technologies will come, will spring up to replace them--the jobs that have been lost--because people have more money and capital to--'

Matthew Crawford: Wait. Who has more money?

Russ Roberts: 'We.'

Matthew Crawford: The economy, the gross domestic product [GDP].

Yeah, I like that. It's funny how when you aggregate like that, all these problems disappear.

Russ Roberts: Yeah. I saw recently in Michael Blastland's book, The Hidden Half--I forget where I saw this--a defender of the European Union [EU] explained to a British working-class woman that GDP is going to go up if Britain stays in the EU and go down if it leaves, and she says something like, 'Not my bloody GDP.' My GDP is not ours. Whose is 'ours'? Mine is what I care about.

It is a very relevant point that the aggregation hides some strong distributional impacts.

And then the economist waves his or her hands and says, 'Yeah, but they'll get training.' And my answer is usually, 'And, their children will be better off. Their children will inherit a better world because they'll have all these new opportunities that will come along.'

And, there's some truth to that. I still believe that--mostly--when I look at technological change and trade.

But, there are some people that are getting left behind; and we either need a different education system to help them early on--you said these are the people who don't get a college degree; I don't think everybody should get one, by the way. I'm very much against that as a policy.

But, there are other ways to cope with this socially, other than to stop technological change. But, you're just--I don't know where you are on that spectrum. Do you think we should just think twice about it, or do you think we should stop it?

Matthew Crawford: I don't think the issue is even technology. I mean, there's a kind of techno-mysticism that talks as though all these things are inevitable. And then there's the hand-waving about we'll all be better off, and everyone will be retrained.

But in fact, what you're talking about is very particular firms with huge lobbying presence in DC [Washington, D.C.] arranging things to remake our infrastructure in ways that will result in massive new concentrations of wealth and transfer of wealth. That's not technology. That's political economy. And I think we very easily confuse the two.

And, further, there's a kind of program of inducing such confusion. And, one element of that is this assertion of the inevitability--'This is coming'--which kind of demoralizes any kind of political opposition to it.

So, this idea of technological progress as this inevitable thing, I think it does a lot of work on behalf of whoever has the kind of relationships with government often to bring about some vision and impose it upon everybody else.


Russ Roberts: So, that sounds pretty sinister. I'm going to defend the other side a little bit; and then I want to give you some credit for getting me to rethink some of my thoughts on this area. But, the part I want to push back on is that some of the things that we've achieved--just your smartphone, GPS [global positioning system], Waze--which is the anti-serendipity. The closest thing to serendipity is the voice of Waze telling you that it's recalculating, as you've made a wrong turn and it's now going to re-steer you back onto the right path.

But, it is an extraordinary human achievement of what we've been able to do and the collection of human knowledge or access to it via technology. And the fact that a--forget self-driving, let's talk about a self-parking car. I mean, that's a glorious thing.

I'm being facetious; but the part that's serious here is that our ability to mold the world around us with the combination of brains and bytes--digital technology--is really quite extraordinary. And, I don't think we should denigrate it relative to, say, the physical command that riding a motorcycle on a canyon road requires--because I don't think they're that dissimilar as examples of human flourishing.

Matthew Crawford: Yeah. So, in the book, as you know, I have a very concerted celebration of Internet technical forums as these venues where people share knowledge.

So, where I have most benefited from this is in the hot rodding scene, where people have pushed the state of the art of the internal combustion engine to places no one could have anticipated even 20 years ago. Right now, I'm building an engine that will get about six times the horsepower it was designed for. That's based on a couple of things: One is electronic engine management, so it's going to be all sort of digital--every parameter of the motor controlled--and there's a do-it-yourself platform for doing that.

But, more important are these technical forums where people are sharing knowledge. And, you may have come across this in the YouTube tutorials where it's so easy to find how to do something that would have completely stumped you before that.

So, there's an example of--I call this 'folk engineering.' It's this widely disseminated, quasi-collective effort of innovation that cumulatively can really take us to new places. And it's all made possible by the Internet.

So, this is not an anti-technological screed. I mean, I am, myself, a technologist.

But, I do bring, as you rightly detected, a jaundiced and cynical--a presumptive skepticism--to the re-making of things in this quasi-compulsory way by Big Tech, which I think doesn't quite deserve the mantle of progress that we just automatically grant it.

And I think people are waking up to this: that's no longer a presumptive thing we're willing to extend to Big Tech. And especially in Europe; but here, too, I think that honeymoon is over.

Russ Roberts: Yeah, and it happened over the last five years or so, for me. I used to extol the virtues and then I got uncomfortable with it, and now I realize: yeah, maybe I should be uncomfortable with it. It's not as--it shouldn't be celebrated, perhaps, as we are tempted to have it be.

I just want to mention to listeners that your remarks about folk engineering remind me of Matt Ridley's episode recently on how many of the great innovations of humanity came from people just messing around who weren't trained in the area, or just hobbyists, and that's the highest compliment you can give somebody sometimes.

Matthew Crawford: Not only that, but I'm very intrigued by the history of thermodynamics--the scientific theory. So, I was a physics major in college; and theoretical thermodynamics was just trapped in this dead end up until, I don't know, the mid-19th century, I guess?

Now, mechanics at the time were working out how to make steam engines. So, there are certain relationships between temperature, pressure, and volume that were observed and incorporated into the designs. And it was that development of the steam engine that caused a breakthrough in physics--in theoretical thermodynamics. It got them off the caloric theory of heat--the idea that heat is a fluid.

And, Aristotle, actually, points to this phenomenon, where he says it's people who 'are in intimate daily involvement with nature'--he's talking about craftsmen--who sometimes are able to see things more clearly than the, kind of, theoretician. So, I mean, obviously, it works both ways.

Russ Roberts: Yeah. It's just a beautiful thing. And I think our desperate attempts as creatures to alter our environment--there's something very heroic about it.

Matthew Crawford: And, it being widely dispersed and millions of people doing it--that's a key thing there. Right?

Russ Roberts: Sure. You were talking about the Internet as a way that knowledge gets collected and coordinated. The marketplace does that with prices in certain settings, where the knowledge I have in my local setting--of availability, of usefulness, of alternatives, of substitutes, their prices--that knowledge is incredibly dispersed. It's not collected in any way. And what the marketplace is able to do is utilize that knowledge despite the fact that it's not collected. What the Internet is doing for hobbyists and do-it-yourselfers and hot rodding and other things--knitting, of course; we had Virginia Postrel talking about textiles recently--is allowing that to happen without the mediating role that prices usually play in collecting information.

Matthew Crawford: Interesting. Yeah. So, what does that--we're sort of back to the early enthusiasms that surrounded the Internet: that information will be free. So, I think there are some of these cases that we're talking about now where those hopes have been realized.

But, you're the economist here. And, you're comparing the market and its mechanisms of discovery with the Internet, where there is no price, because there's no cost--there's no marginal cost of exchanging information. That seems like a fundamental difference.


Russ Roberts: Yeah, there could be such a cost, but often there isn't. There's no commerce, is how I would describe it. Often these things are exchanged as a form of--not barter--just exchange. You know, I willingly, freely--hoping maybe to get some reputational advantage--fix something on Wikipedia, enter something into Wikipedia, put up a video that I'll never collect any money for that tells people how to unclog their drain. That wouldn't be me; that was hypothetical. I'd be the one looking at that video and trying to figure it out.

So, they are different; but underneath, they're solving a similar problem.

The problem is that knowledge is dispersed in the brains of individuals. It's not collected and cataloged. This is one of the deepest insights in economics. It's not fancy, but it's still very deep. I associate it with Hayek; I associate it with James Buchanan: that the market is--I use the word 'discovery'--a discovery process. So, if I want to know the best way to deal with the high price of some product, I can't look it up. I have to discover it. And someone has to innovate it sometimes; and someone has to find out, 'Oh yeah, this material could be used for this purpose that wasn't available before.' But, the incentive to do that has been created by the scarcity of a different product.

Matthew Crawford: Yeah. I love the theoretical ideal of the free market as this epistemic kind of instrument. There's a profound insight there into the nature of knowledge, the limits of expertise when it's enclosed in a single cranium and all that.

If we return this back to our political discussion, some people say, 'Well, it's a beautiful idea; there's never been an actual free market.' I don't know about that, but it does seem to be the case that, let's say, over the last 30 years, there's been the party of the markets. In the United States it has been the Republican Party.

Russ Roberts: They talk a good game. They really don't do much.

Matthew Crawford: Well, yeah. It's like crony capitalism is what we've got. So, the distinction between monopoly pricing and/or an economy in which the relationship with the state through lobbying is an absolutely crucial element. That's very different than this idea we're talking about.

Russ Roberts: I don't think the fact that the "free market" is an abstract ideal is an ending argument. The people who make it treat it like, 'Oh, see--the whole thing is nonsense. It doesn't exist.' I think that misses the point. It certainly misses the point in the way that the economists that I respect talk about the market. They understand it's imperfect, flawed. The question is: compared to what? And, does it achieve the kind of things that we're talking about here?

To bring these two threads together, I would argue that there's nothing perfect or necessarily ideal about the workings of the emergent order that commerce creates. The desire for more, the desire for profit, the desire for better, the desire for cheaper, that we all have as consumers or as producers--these are things that yield outcomes that are often glorious; but they also often lead to outcomes that are not so attractive.

So, I think the mistake that economists have made is in gilding that lily: treating it as if somehow everything that comes out of these decentralized choices must be good.

You can make an argument like that. That's the flawed argument. But you don't want to then go the other way and say, 'And therefore, none of this orderliness can be created without'--fill in the blank. Yes, you do need property rights; you need some kind of--it's not an argument for anarchy. But, the question is: How much of that do you need? And I think often you need very little government supervision to yield outcomes that most people would agree are very attractive. The question is: Do you want to empower that government to then hand out goodies? It's hard--well, go ahead.

Matthew Crawford: No, I appreciate that. So, there are different forms that government involvement can take. One that libertarians worry about is the regulatory function. And the worry there is, again, an epistemic one: you don't know what you're doing, in the way that price has this revelatory quality about people's actual order of preferences.

So, there's that regulatory function, but then there's this other kind of backdoor function of state capitalism of picking winners and losers. And, it's interesting that the 'party of the market'--so called--became the party of big business. Maybe it always was. So, there, that's a matter of kind of trying to get on the right side of decision-makers in government.

But, we're not talking about regulation now, we're talking about setting things up in a way that prefer one kind of industry over another. I don't know, I've kind of wandered off there.

Russ Roberts: No, but I think that's a great--that's a very relevant point. I think the conflation of being pro-market with pro-business is evil. It's a terrible mistake.

Matthew Crawford: And yet it's so common.

Russ Roberts: Yes it is. And I think what you're saying is the reason it's common--and I would agree with you--is it's convenient. It's a useful myth to believe that what's good for General Motors is good for the nation. What's good for the stock market is good for the nation, we're told. These are not true. And the privileging of certain types of entities in the name of a grander outcome is the road to Hell.

Matthew Crawford: So, there's another tradition here that I think in some ways is quite parallel to the free market ideal, but has completely different origins, which is the distributivist ideal. I don't know much about this, but I associate it with Catholic social thought. But, the idea is that decision-making should be devolved to, sort of, the lowest or most local level.

Now, some things are appropriately decided at the highest or most common levels, but as a general principle to devolve decision-making down. Which, of course, fits this ideal of the market, where it's the exchanges between individuals--that is, the ground, the sort of epistemic ground truth. And, if you're going to sort of decide things in a way that abstracts from that, you need to be aware of possible unintended consequences, and all that.

So, we should maybe both go and read some of this--I think it's called 'Distributivism.'

Russ Roberts: I don't know anything about that, but it strikes me that the parallel is Federalism--the idea that political decisions should be devolved to the local canton in Switzerland, or the state or the municipality in the United States. There's a lot to be said for it, and it's got a lot of challenges.


Russ Roberts: But, I want to come back to this issue of Big Tech, because you have a lot of--I want to give you credit for making me rethink some of my thoughts on this. We had Shoshana Zuboff on the program--worried about Google. And, I pressed her and said, 'You know, they're trying to sell me a bunch of stuff.' I get it. It's a little bit like Google and these large tech companies--the analogy I have is the repair person who comes into your house to fix your washing machine and they say, 'Oh, I'm not going to charge you.' 'Oh, wow, that's so nice.' 'But, I did take a lot of photographs of stuff in your house so I could learn about what your preferences are and I'll be sending you some ads for those things because I've learned something about you. I sell these photos to companies that like that. Are you okay with that?'

Actually, they don't ask you if you're okay with it. They just say, 'Oh, it's a free repair.' You go, 'Oh, this is great.'

So, one of the ironies, I think--or, not ironies, paradoxes--is that Google is "free"--which is amazing, because I get incredible value from it in many ways. But, they are selling stuff, it's just not directly. I'm the middleman. I'm the middle person. I'm the product that they're selling. They're selling access to me.

And, one part of me says--this is what I said to Shoshana Zuboff,--'Well, if I don't want to buy, I don't have to. What's the harm? Don't I really want ads that are tailored to me?'

And, you got me to rethink that a little bit. You can take a stab at it first, and then I'll give you credit if you want.

Matthew Crawford: Well, I mean, I learned a lot from Zuboff's book, The Age of Surveillance Capitalism, and sort of take that up and ask--I have a chapter titled, "If Google Built Cars."

So, just to rehearse the basic logic that she lays out--so the cynics' dictum is: If you don't know what the product is, you're the product.

But, it turns out that's not quite right, on her account. What you are is a source of behavioral data--this raw material which is manufactured into a prediction product, which is then sold in real time on this sort of open exchange--a behavioral futures market, is what she calls it.

Now, so, the ideal in this surveillance economy is to be able to intervene in the very moment where your behavior is being analyzed in real time, and you're susceptible to being nudged one way or another. And, this is going to happen beneath the threshold of awareness. So, she talks about all the kind of subtle means of doing that.

So then the question is: Well, what if this, the economic logic of the Internet, were to slip the bounds of the screen and start to order the physical environment--where you don't have the option of unplugging?

And I think the best example of that is this idea of the smart city, where, you know, everything would be surveilled; and things like trash collection, police protection, deliveries, the allocation of scarce road surface at different times of day--all this would be managed by an urban operating system. And presumably, Google would make the trains run on time because they're good at that.

So, what's the downside? Well, it's something a little harder to articulate, but essentially you're talking about the city being run now not by a democratically-elected city council, but by a cartel of tech firms using sort of proprietary knowledge that is utterly obscure to you and inaccessible.

So, what you're talking about is a loss of any control over the institutions that you're living within. So, that doesn't sit very well with our liberal political tradition.

Russ Roberts: Yeah. We talked about that, she and I did. And my first thought was, 'Well, a smart city compared to the ones most of us live in--where the trash doesn't get picked up so well, traffic is hideous, you name it--there's a certain appeal to it,' I get it. You could choose not to live there, in theory; that would be the equivalent of unplugging. But, of course, the goal would be to have every city be smart: Who would want to live in a dumb city?

And I think there are no traffic accidents in this Oz-like place. There's also no man behind the curtain, by the way. It's just an algorithm that's moving the trains around and the cars and the groceries. I don't have to go to the grocery if I don't want to; I can shop instantly. It knows when I'm out of stuff--it's fantastic; I only have to log on and it just shows up in my box--and it's a physical box, all my stuff.

And I think that's a very appealing vision. So, tell me why we should be afraid of it. And I think maybe we should be. It's hard for me, but I think maybe we should be.

Matthew Crawford: Yeah. Right. So, there's this great quote from Eric Schmidt, the head of Google--or former head of Google, I'm not sure--

Russ Roberts: Former, I think--

Matthew Crawford: where he said, 'I think people who don't want Google to be answering their questions, I think they want Google to be telling them what to do before they even know that they have a question.' It's something like that: it's not the exact quote.

But, I thought it was a very revealing idea: that Google becomes our trustee. So, as opposed to a [?] utility answering questions, it's more nudging and steering thought into channels that seem desirable to Google.

And it's not simply a profit motive. If that were the case, then, you know: what we'd be talking about here is just a cynical exploitation or something.

But it's not that. If you look at Google's priorities in the realm of search, which is its core business, what you see is this quite paternal--maternal? I'm not sure which--mentality of wanting to create a choice architecture--this is the nudge idea--one that will be salutary and embody the right values.

So, you're not just giving people what they think they want. You're giving people options--choices that are highly curated.

And, as we've seen, that curation is a highly political thing. I mean, the tech firms are now--they've dropped the facade and are intervening in elections with perfect openness. Right?

So, it's quite breathtaking. The democratic pretense has been dropped. And we're now talking about full-blown technocratic paternalism.

And I think it's enraging people, really. It's really feeding this sense that our institutions are out of control with this kind of, this expertise that feels empowered to simply take things in hand and suppress dissent or even try to manage the information environment in such a way that other possibilities don't even show up.


Russ Roberts: Now, we're off the rails in so many ways, and I think that what you just said is, you know, deeply disturbing. I think economists--and I'll include myself in this, shamefully--think, 'Oh, they're just trying to make money'; and of course, there are constraints, there's competition. I mentioned they'll have trouble; they always have--

Matthew Crawford: Competition--

Russ Roberts: Well, there will be. There has been. There always is. When firms are thought to be invincible, it's surprising how often they turn out to have competitors. It always appears like this time will be different. Maybe it will be. Maybe they won't struggle to maintain their position. But, I'm thinking about IBM [International Business Machines], which was invincible. The car industry--the Big Three, they'll never be challenged; they've got effectively a cartel. OPEC [Organization of the Petroleum Exporting Countries]. We can think of lots of cases where competition seemed unlikely and yet it came along.

But, put that to the side. I think the deeper point you're making is that these folks, first of all, they have so much profit they can indulge in all kinds of things that have nothing to do with profit. And they do.

Matthew Crawford: They're states, right? [crosstalk 01:06:44] governmental entity--

Russ Roberts: I don't want to say that. I think that--you can say it, but I'm a little uncomfortable saying that.

I think the question is: What do we do about it?

And then the challenge there is that the traditional methods of antitrust, I think, don't work very well. These firms apparently don't hurt consumers the way old-school monopolies did, by jacking up prices. Google is still as cheap as it ever was--zero. But that's hiding what the real price is.

I want to read a quote from the book which I thought was thought-provoking about Google; and say a little bit more about it, and then we'll take it home.

But, you say,

Has anyone bothered to ask why the world's largest advertising firm, for that is what Google is, is making a massive investment in automobiles? By colonizing your commute, currently something you do, an actual activity in the tangible world that demands your attention, with yet another tether to the all-consuming logic of surveillance and profit, those precious 52 minutes [the commute], of your attention are now available to be auctioned off to the highest bidder. The patterns of your movements through the world will be made available to those who wish to know you more intimately for the sake of developing a deep proprietary science of steering your behavior. Self-driving cars must be understood as one more escalation in the war to claim and monetize every moment of life that might otherwise offer a bit of private head space.

Matthew Crawford: Yeah. I guess we could just stop there.

Russ Roberts: Yeah, we could.

Should I be afraid of that? Is it scary that they're monetizing that? Don't I like it when I show up in a town and Google knows I've got the plane reservation because they've read my email, and they know I buy coffee because they've seen my Amazon orders, and they tell me where the best coffee shop is? So, I think--go ahead.

Matthew Crawford: Well, is there--

Russ Roberts: They anticipate my question. My question is: Where's the best coffee shop? They just immediately send it to me when I land. As soon as I land, they know I've landed and they send me the best coffee shop.

Matthew Crawford: Well, I guess what we're talking about, even in this fairly benign version that you've just articulated, is still a fundamentally different way of inhabiting the world.

So, maybe--right--what is the source of unease about this? Somehow there's this benevolent entity surrounding me and presenting options to me that are optimized based on my previous behavior. It means that I'm--right--a determinate thing that's known. That Google knows me better than I know myself, because they have more systematically looked at my past behavior and found the patterns that I'm not even aware of.

So, right--I start to be maybe like a test particle in this sort of field of forces, being managed beyond the rim of my awareness.

I don't know, does that creep you out? I don't know.

Russ Roberts: It does.

But, your book--and we'll close on this--your book stands at the barricades and says: Stop. Your book says, 'This is not a world we were made to live in. We will lose something precious when we are those particles being pushed around by behavioral incentives. And we're going to be pretty lonely[?].' I'm very sympathetic to your view. I think that's because I'm something of a 19th-century person in a 21st-century world.

How are you going to get other people to join you? You wrote a book, which is great. And I think it's an eloquent defense of serendipity and giving up these things. But, I wonder if most people are on our side.

Matthew Crawford: Well, you just mentioned--you used the word 'lonely,' which prompted a thought for me.

So, the picture we've just laid out--of being a test particle in this field of forces, a kind of determinately known entity--it is a very lonely picture. And I'm reminded of Hannah Arendt, who talked about social atomization as one of the preconditions for totalitarianism. So, insofar as--I don't know if this case can really be made--the way we normally know about the world is by interacting with other people, and we're embedded in communities.

I think right now with the COVID pandemic, we're really feeling this heightened atomization. It's almost like a turbocharged version of the trajectory we've been on.

So, atomization is one element to worry about. But there's this other great bit in Hannah Arendt where she talks about bureaucracy as 'the Rule of Nobody'--that is the way she put it. So, what she means is that--

Russ Roberts: It's the administrative state.

Matthew Crawford: Yeah. But it's not just the administrative state. It's also all these commercial entities that sort of order our lives in very far-reaching ways, but which you can't address.

So, just yesterday, I got my first cell phone bill from AT&T (American Telephone and Telegraph) after getting a new phone, and it's wildly different from what I agreed to in the store. So, you know, the usual thing: I called; I was on hold for literally an hour before giving up. I'm going to try again today. But, it's this sense that there's no one you can grab hold of by the lapels and hold to account.

And she points out that, that is the definition of tyranny: power that is not accountable--right?--and is not operating with your best interests at heart.

So, that experience is endemic in modern life--interacting with bureaucracies that you can't even address yourself to. You can't get angry at the poor schmuck in the call center, right? So, this feeling of being subject to the Rule of Nobody--Hannah Arendt suggests that this is the source of the simmering rage that so many people feel. That's the context of what she's talking about: the protest movements of the 1960s.

And of course, we're living through a similar episode of rage now. And I have to think that this feeling of being subject to an arbitrary, unaccountable power that you cannot address is playing a significant role in this moment of rage.

So, that's obviously an element when we're talking about life being ordered by algorithmic firms that are utterly opaque.

Russ Roberts: My guest today has been Matthew Crawford. His book is Why We Drive. Matthew, thanks for being part of EconTalk.

Matthew Crawford: Yeah, thanks for having me, Russ.
