Can Artificial Intelligence Be Moral? (with Paul Bloom)
Dec 25 2023

It seems obvious that moral artificial intelligence would be better than the alternative. But psychologist Paul Bloom of the University of Toronto thinks moral AI is not just a meaningless goal but a bad one. Listen as Bloom and EconTalk's Russ Roberts have a wide-ranging conversation about the nature of AI, the nature of morality, and the value of ensuring that we mortals can keep doing stupid or terrible things.

RELATED EPISODE
Nick Bostrom on Superintelligence
Nick Bostrom of the University of Oxford talks with EconTalk host Russ Roberts about his book, Superintelligence: Paths, Dangers, Strategies. Bostrom argues that when machines exist which dwarf human intelligence they will threaten human existence unless steps are taken now...
RELATED EPISODE
Ian Leslie on Being Human in the Age of AI
When OpenAI launched its conversational chatbot this past November, author Ian Leslie was struck by the humanness of the computer's dialogue. Then he realized that he had it exactly backward: In an age that favors the formulaic and generic to...
Explore audio transcript, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.

READER COMMENTS

Matt
Dec 25 2023 at 8:53am

Thanks so much for having Paul on. He’s fantastic. I don’t always agree with him, but he is thoughtful and acts in good faith. (And recognizes his fallibility.)

Matthew H
Dec 26 2023 at 4:52am

Great episode guys, well done. Your discussion of evil reminded me of this classic comedy skit where a WW2 German soldier has a moment of realization: https://youtu.be/ToKcmnrE5oY?si=4vvzozOCXD2LVyqF

Shalom Freedman
Dec 26 2023 at 9:14am

This conversation focuses on morality and begins with the question of the possibility or impossibility of a moral AI. But it seems to me that from its beginning it reveals a certain kind of not confronting an immoral reality. Paul Bloom is a distinguished member of the faculty of Yale, the same university whose students have sided with those calling for Israel’s destruction and praising Hamas. It is a university which shares with other elite institutions of higher learning the policies of admitting students on the basis of their identity and not their merit, hiring and promoting people because of their political views, intimidating students into staying quiet and not expressing their views freely in class, and substituting the search for truth based on evidence with preconceived, close-minded opinion. As for the massacre of October 7, there was no condemnation of this heinous crime, but either denial that it ever happened or justification for it.

The talk also contains a major bit of moral fiddling in the middle. Two sides in a conflict are not always equally responsible and culpable. They may each have their ‘narrative,’ but one may be closer to the truth while the other is based on lies and historical distortions. In the course of the conflict the Palestinians have had justifiable grievances, but they have been the vicious, terrorist, peace-denying party.

In the last part of the conversation Paul Bloom points out the enormous difference in values between those under thirty and those over forty in America. Russ Roberts attributes some of this, perhaps, to their not marrying and not having children. But no connection is made with what has been happening in major European societies, and to a lesser degree in the United States, and no understanding of how the Western world, the free world, is endangered by radical Islamic minorities with high birth rates. Here the connection could be made once again to the ‘woke world’ of the universities and the loss of real academic freedom. It seems that far more worrying than what AI will do to deprive us of our humanity, at least at the moment, is the worry about what parts of humanity may do to undermine freedom in the Western world as a whole.

Joseph Palange
Dec 26 2023 at 10:19pm

So much of this conversation seemed to assume that there is morality without agency. As if a man who is locked in chains but has the desire to murder is moral because he has been barred from doing so.

It brings me to Frank S. Meyer’s argument with Russell Kirk. Without freedom and liberty, there’s no virtue and there’s no avenue to cultivate virtue.

Elad
Dec 30 2023 at 1:26pm

I don’t recall anyone claiming that limitations automatically result in moral behavior. However, limitations can potentially contribute to the overall well-being of society. For instance, if we prevent a murder from happening, it could save another person’s life.

David Gossett
Dec 28 2023 at 12:08pm

The NY Times once did a podcast episode revealing that it took 200,000 calls to reach 5,000 people who were willing to participate in a poll. Even within those 5,000 respondents, they did not get nearly enough Republicans, so they had to weight the overall poll.

The 5,000 who responded were at the extremes. They don’t care who knows they are woke or who knows they are MAGA. They have signs in their front yards and tons of polarizing social media posts.

The other 195,000 do not want to take a chance on their views getting leaked. They are all hiding in plain sight with diverse views and strong opinions, but nothing at the extremes.

Moral of this story: America is fine. All we see in polls, in news media, and on TV is the 2.5% who are perfect for selling ads and keeping people glued to the screen. But it’s these 2.5% “shouters” who give the other 97.5% (including Russ and Paul) the impression that the world is a mess. It’s never been better!

The key to a happy life is realizing this is all a scam job to sell us more stuff.

As a side note, Russ is the first person in almost every episode to say the world is better today. This is the first time I have ever heard him say the opposite.

Ben
Dec 29 2023 at 10:06am

One AI risk is the ability to dramatically increase the reach of those who want to control others. Perhaps the most common tools of tyrannical presidents, kings, and dictators have been the use of criminal codes and regulations to achieve their goals.

Lincoln said that to get rid of bad laws, they should be fully enforced. Perhaps AI’s ability to fully enforce hundreds of thousands of regulations will trigger some pushback on the number and scope of criminal codes and regulations. Or perhaps we will lose our ability to push back.

Loved Joseph Palange’s reference to Frank S. Meyer’s argument with Russell Kirk. Without freedom and liberty, there’s no virtue and there’s no avenue to cultivate virtue.

Robert W Tucker
Jan 1 2024 at 2:29pm

I look forward to these interviews and admire how skillfully Russ brings out the best in his guests. I found my attention wandering in this episode. The conceptual and empirical prerequisites of a moral act seemed not to be well understood by Mr. Bloom. At times it even appeared that there was confusion between programming a rule and acting on a moral precept or constraint. As a result, the episode’s subject was not well addressed.



DELVE DEEPER

EconTalk Extra, conversation starters for this podcast episode:

Watch this podcast episode on YouTube:

This week's guest:

This week's focus:

Additional ideas and people mentioned in this podcast episode:

A few more readings and background resources:

A few more EconTalk podcast episodes:

More related EconTalk podcast episodes, by Category:




AUDIO TRANSCRIPT
0:37

Intro. [Recording date: December 5, 2023.]

Russ Roberts: Today is December 5th, 2023, and my guest is psychologist and author Paul Bloom of the University of Toronto. This is Paul's fifth appearance on EconTalk, the most recent being in February of this year, when we talked about his book, Psych. Paul, welcome back to EconTalk.

Paul Bloom: It's great to be back. Thanks for having me.

0:55

Russ Roberts: Our topic for today is a recent piece you did in The New Yorker, "How Moral Can A.I. Really Be?" And, it raises a raft of interesting questions and some answers, but it raises them in a way that's different, I think, than the standard issues that have been raised in this area. The standard issues are: Is it going to destroy us or not? That would be one level of morality. Is it going to destroy human beings? But, you're interested in a subtler question, but I suspect we'll talk about both. What does that title mean, "How Moral Can AI Be?" What did you have in mind?

Paul Bloom: I have a Substack post which came out this morning that talks about the article and expands on it, and it has an even blunter title, which I'm willing to buy into: "We Don't Want Moral AI."

So, the question is--just to take things back a bit--a lot of people are worried AI [artificial intelligence] will kill us all. Some people think that that's ridiculous--science fiction gone amok. But even the people who think it's ridiculous think AI has the potential to do some harm--everything from massive unemployment, to spreading fake news, to creating pathogens that evil people tell it to create. So, there's a lot of worry about AI. There are different solutions on the board, and one solution--proposed by Norbert Wiener, the cyberneticist, I think 60 years ago--is: 'Well, what if we make its values align with ours? So, just like we know doing something is wrong, it will know, and it won't do it.'

This has become known, from Stuart Russell, as the Alignment Problem: build AIs that have our morality. Or, if you want, put "morality" in quotes--because you and I, I think, share a skepticism about what these machines really know, whether they understand anything--but something like morality.

And, I find alignment research--in some ways my article is like a love letter to it. This is the field I should have gone into, if it had been around when I was younger. I have a student, Gracie Reinecke, who is in that area and sometimes gives me advice on it, and I envy her--going to work at DeepMind and hanging out with those people. So, I'm interested in it.

And I'm also interested in the limits of alignment. How well can we align? What does it mean for these machines to be aligned? Because one thing I point out--I'm not the first--is: to be aligned with the morality that you and I probably have means not being aligned with other moralities. So, in some way there's no such thing as alignment. It's, like: build a machine that wants what people want. Well, people want different things.

Russ Roberts: Yeah. That's a simple but profound insight. It does strike at the heart of what the so-called deep thinkers are grappling with.

4:02

Russ Roberts: I want to back up a second. I wanted to talk about the Norbert Wiener quote, actually, that you just paraphrased. He said,

We had better be quite sure that the purpose put into the machine is the purpose which we really desire.

I just want to raise the question: Isn't that kind of a contradiction? I mean, if you're really afraid it's going to have a mind of its own, isn't it kind of bizarre to think that you could tell it what to do?

Paul Bloom: Yeah--

Russ Roberts: It doesn't work with kids that well, I don't know about you, but--

Paul Bloom: You know, there was a joke on Twitter. I don't want to make fun of my Yale University President, Peter Salovey, who is a very decent and warm and funny guy, but he made a speech to the freshmen saying: 'We want you to express yourselves and express your views, and give free rein to your intellect.' And then, the joke went, a couple of months later, he's saying, 'Well, not like that.'

I think what we want is for these machines to be smart enough to liberate us from decisions. Something as simple as a self-driving car: I want it to take me to work while I'm just sitting in back, reading or napping. I want it to liberate me, but at the same time I want it only to make decisions that I would have made. And, that's maybe easy enough in the self-driving-car case, but what about cases where I want the machine to be in some sense smarter than me? It does set up real paradoxes.

Russ Roberts: To me, it's a misunderstanding of what intelligence is. And, I think we probably disagree on this, so you can push back. The idea that smarter people make more ethical decisions--listeners probably remember, I'm not a big fan of that argument. It doesn't resonate with me, and I'm not sure you could prove it. But, isn't that part of what we think we're going to get from AI? Which strikes me again as foolish: 'Oh, I don't know what the right thing to do is here, so I'll ask.' I mean, would you ever ask someone smarter than you what the right thing to do is? Not the right thing to achieve your goal, but the right thing that a good human being should do? Do you turn to smarter people when you struggle? I mean, I understand you don't want to ask a person who has limited mental capability, but would you use IQ [Intelligence Quotient] as your measure of who would make the best moral decision?

Paul Bloom: You're raising, like, 15 different issues. Let me go through this[?] quickly. I do think that, just as a matter of brute fact, there's a relationship between intelligence and morality. I think in part because people with higher intelligence--smarter people--can see a broader view, and have a bit more sensitivity to things of mutual benefit. If I'm not so bright and you have something I want, maybe I could only imagine grabbing it from you. But, as I get smarter, I can engage--I could become an economist--and engage in trade and mutual benefit and so on. Maybe not becoming nicer in a more abstract sense, but at least behaving in a way that's sort of more optimal. So, I think there's some relationship.

But I do agree with your point--and maybe I don't need to push back on this--but the definition of intelligence which always struck me as best is the capacity to achieve one's goals--and, if you want to jazz it up, to achieve one's goals across a range of different contexts. So, if you could go out and teach a university lecture, and then cook a meal, and then handle 14 boisterous five-year-olds, and then do this and do that, you're smart. And if you're a machine, you're a smart machine.

And I think there's a relationship between smartness and morality, but I agree with your main point: Being smart doesn't make you moral. We will both be familiar with this from Smith and from Hume, who both recognized--Hume most famously--that you could be really, really, really smart and not care at all about people, not care at all about goodness--you could be a brilliant sadist. There's nothing contradictory in having an enormous intelligence and using it for the goal of making people's lives miserable.

That's, of course, part of the problem with AI. If we could ratchet up its intelligence, whatever that means, it doesn't mean it's going to become nicer and nicer.

And so, yeah: I do accept that. I think intelligence is in some sense a tool allowing us to achieve our goals. What our goals are comes from a different source. And I think that that often comes from compassion, kindness, love--sentiments that don't reduce to intelligence.

Russ Roberts: How much of it comes from education, in your mind? At one point you say, "We should create machines that know as humans do that it's wrong to foment hatred over social media or turn everyone into paper clips," the latter being a famous Nick Bostrom idea--I think--that he talked about 100 years ago on EconTalk, in one of the first episodes we ever did on artificial intelligence. But, how do you think--assuming humans do know this, though there's a lot of evidence that not all humans know this, meaning there are cruel humans and there are humans who work to serve nefarious purposes--those of us who do feel that way, where does that come from, in your mind?

Paul Bloom: I think some of it's inborn. I study babies for a living, and I think there's some evidence of some degree of compassion and kindness, as well as some ability to use intelligence to reason about it, that's bred in the bone. But then--plainly--culture, education, parenting shape it. There are all sorts of moral insights that have come up uniquely through culture.

Like, you and I believe slavery is wrong. But that's pretty new. Nobody is born knowing that. Thousands of years ago, nobody believed that. We might believe racism is wrong. And, there are new moral insights--insights that have to be nurtured. And then: I didn't come up with this myself; I had to learn it.

Similarly for AIs: they'll have to be enculturated in some way. Sheer intelligence won't bring us there.

I will say one thing, by the way, about--and we don't want to drift too much into other topics--but I do think that a lot of the very worst things that people do are themselves motivated by morality.

Like, somebody like David Livingstone Smith says, 'No. No, it shuts off. You dehumanize people. You don't think of people as people.' There is, I think, such a thing as pure sadism, pure desire to hurt people for the sake of hurting them.

But, most of the things that we look at and are totally appalled and shocked by are done by people who don't see themselves as villains. Rather, they say, 'No, I'm doing the right thing. I'm torturing these prisoners of war, but I'm not a monster. You don't understand. The stakes are so high. It's tough, but I'm doing it.' 'I'm going to blow up this building. I don't want to hurt people, but I have higher moral goods.' Morality is a tremendous force both for what we reflectively view as good and what we reflectively view as evil.

11:55

Russ Roberts: Well, I like this digression. Let me expand a little bit.

One of the most disturbing books I've never finished--but it wasn't because it was disturbing, and it wasn't because I didn't want to read it; I did want to read it, I'm just confessing I didn't finish it--is a book called Evil, by Roy Baumeister. And it's a lovely book. Well, sort of.

And, one of the themes of that book is exactly what you're saying: that the most vicious criminals--the ones almost everyone would say did something horrific, would say put them in jail--I'm not talking about political actors, like the world we're living in right now, in October--in December, excuse me, of 2023. Got October on my mind, October 7th.

They feel not just justified in what they did; they feel proud of what they did. And I think there's a deep human need--a tribal need, maybe--to feel that there is evil in the world that is not mine and is unacceptable. It is unacceptable to imagine that the people that we see as evil don't see themselves that way--

Paul Bloom: Yes, that's right--

Russ Roberts: Because we want to see them as these mustache-twirling sadists or wicked people. The idea that they do not feel that way about themselves is deeply disturbing. That's why that book is disturbing. It's not disturbing because of its revelation of evil--which is quite interesting and painful. But, the idea that evil people--people that we will often dismiss as evil--do not see themselves that way. We just sort of assume, 'Well, of course they are. They must be, they must know that,' but they don't. In fact, it's the opposite. They think of themselves as good.

Paul Bloom: There's some classic work by Lee Ross--I think it's Lee Ross at Stanford--on negotiations; on getting people together who have a serious gripe, Palestinians and Israelis being a nice current example. And, the sort of common-sense, very nice way of thinking about it is: once these people get to talk, they'll start to converge, start to appreciate the other side. But, actually, Ross finds it's often the opposite. So, you're talking to somebody and you're explaining, 'Look. This is what you've done to me. This is the problem. These are the evils that you've committed.' Then, to your horror, the other person says, 'No. No, no, no, you're to blame. Everything I did was justified.' People find this incredibly upsetting. I think there's this naive view which is: if only I could sit with my enemies and explain to them what happened, they would then say, 'Oh, my gosh. I've been evil. I didn't know that. You were totally right.' But of course, they think the same of you.

Russ Roberts: Which is hard to understand because you're not evil. They are.

Paul Bloom: That's right. It's a simple perspective-taking problem that they have.

15:01

Russ Roberts: There is, I think, a version of this which is somewhat true, in some settings. There's a famous episode in World War I, and it's captured in a song by John McCutcheon. It's a song I've always loved, even though the sentiment of the song, I think, is somewhat accurate and somewhat inaccurate. It's called "Christmas in the Trenches," and it's about a Christmas Eve in the trenches between England and France on the one side, and Germany on the other. Somehow a soccer ball gets produced and they play this rousing, clean game of soccer. They exchange pictures of their children and their wives. And they realize, 'Hey, without the uniform, we're just human beings.'

And I think there's a part of us that wants to believe that deeply. Of course, there's another part: World War I was a particularly foolish, tragically foolish war, where seemingly nothing moral was at stake.

I can imagine that those soldiers could relate to one another. And of course, soldiers are usually drafted. They're not choosing to try to slaughter the nationals of the other side. But, yeah. There's a sort of naivete about the human heart, and about what's at stake in many, many wars and situations: that if we just sat down, we'd realize we have the same values. A lot of times we don't have the same values: we have really different values. That's problem Number One.

Problem Number Two is: often those values conflict--say, about land, in the case of the Palestinian-Israeli conflict here. There are religious differences on top of that. So, it's a particularly tough problem.

Having said that, there are wonderful organizations here in Israel that try to bring Palestinian and Israeli children together. There's a choir, for example, that brings them together to sing. I think those are good things. I think finding our shared humanity is a really great idea, but unfortunately there are sometimes limits to what it can achieve.

Paul Bloom: Yeah. I think so. This kind of gets us back to the theme of the New Yorker article, which is: everyone's into alignment--make our machines as moral as we are.

Then there have been some naysayers, and one I particularly like is Eric Schwitzgebel, who is a very sharp philosopher. And he says: 'What a humble goal. What a kind of dispiriting, sad little goal--make them as moral as we are. We're not so hot.' All of this violence and cruelty we do--so much of it's motivated by morality. Why don't we give up on alignment and make them more moral? Use their super-intelligence to figure out moral issues, get them chugging away at these problems, so they come back and say, 'No. No, you're doing it wrong. You think this is good and this is bad. Nah, this is good and this is bad.' And then we listen to them.

I wouldn't have talked about it if I didn't admire Eric's work. I think part of his logic is right. My complaint isn't that he's wrong. My complaint is we would never abide by that. We would never abide by an AI that said, 'Oh, you want me to drive you to the bar? I don't think so, pal, because you promised to help your kid with his math homework. You want me to set up this military endeavor? No. No, you're on the wrong side. I'm going to make you surrender. What are you having for dinner? Well, it's not going to be factory-farmed meat. We're actually going to drive--' In these cases we say, 'No, no, no. I think what I'm doing is right. I don't want my tools to override me.' I think we would never abide by such a situation. We want AIs aligned enough not to kill us, and maybe other things--not to drive over pedestrians and not to be too racist, or whatever. But, beyond that, we just want them to be our tools.

19:12

Russ Roberts: Well, it raises a really good distinction, I think, that I don't think I've read anywhere, but I'm sure it's out there. Tools don't have morality, by the way. My hammer--I can use it immorally on your thumb, but my hammer does not decide one day to jump up and slam your thumb. We don't have to train it not to do that, because it's a tool. And as a tool, it's put in my hand. I buy it or I acquire it or I borrow it, and I use it as I see fit. As you say: my car--I expect it to take me to the steakhouse for the factory-farmed food, or to the bar to have a drink with my buddies instead of helping my kid. It's my tool. By definition a tool does my bidding. It may do it imperfectly. It may not be a well-made hammer. But, it does my bidding.

The whole debate here, in some sense, is this fundamental question of whether it will not do one's bidding--and therefore has to be inherently designed to be restrained from doing its own bidding (paperclips), or from doing a bad bidding: a bidding that a wicked person or a careless person would put upon it. Again, the idea of a hammer having to have some kind of announcement--just like a truck beeps when it backs up: if a hammer is about to be within a certain distance of a thumb, it would have to call out and warn the person to move their thumb. Isn't that really the issue here? If it can't acquire a mind of its own, do we have to be afraid of it?

Paul Bloom: We don't. We don't. But, of course, AI tools have something approximating a mind of their own, and then the issues come up.

I think the issues might even come up with simpler things. I was on Twitter--where I always just get enraged and drawn into other people's anger. And, somebody was complaining about cars--apparently people were talking about cars--why do cars go above the speed limit? The speed limit is the speed limit. Just because mechanically you can go above the speed limit doesn't mean you should; we should keep to the speed limit. So, why do we allow new machines to go faster--why don't we have them just not go faster? Give it some latitude for passing and for an emergency. But, do they have to go that fast? People are enraged. So, of course, people give these fantasy[?] scenarios: 'Yes. But, what if somebody was in trouble and I had to race there to save their lives?' But, that's not what's upsetting them. They want to be able to drive their car as fast as they want--they want to be able to choose to break the law.

And I think if the cops came and arrested them, they'd say, 'Well, that's fair. You're cops. I broke the law.' But, my car's not going to stop me.

I don't want my tax preparation software to take the plans of my house and say, 'Your home office isn't that big.'

And, we see this right now. I know you use ChatGPT [Chat Generative Pre-trained Transformer], and there are all sorts of instructions that it will not follow. And, my sense is: it's a compromise which, to some extent, we say is good. We say, 'I don't mind that GPT-4 won't tell me how to produce a deadly virus.' And that's good. But, there are all sorts of things--I don't want it to tell me not to write a story that's too unpleasant because the unpleasantness makes the world worse. It's none of its business.

Russ Roberts: Yeah. That's great. We had a version of that. There were cars--I don't know whether you had to pay for this feature or whether some people wanted it imposed--that would take your breath, do a breath analysis, and not start if you were drunk. And, there's a piece of me--the classical liberal side of me--that says I should be allowed to choose that feature for myself if I want. But, there's something disturbing about the state imposing that.

The same with the speed limit. The idea would be: your car knows how fast it's going, most of the time pretty accurately. And it would just have a--I forget the name of it; there's a name for it--that keeps you from going faster than that amount. We can do that, I suspect. I'm sure we could find some examples where we do that happily. I can't think of them right off.

Paul Bloom: I think it's fully compatible to say we shouldn't drive past the speed limit. But, we do not want a machine to force us not to. We value our autonomy more than our morality in this case.

Moral AIs have the potential of stripping away our autonomy. There's a certain dystopia you could imagine. And maybe we're not that far from it--this isn't as wacky as existential risk--where we all have AI plugged into everything. The AI is this endless nanny state wired in, where, you say something rude to me over Zoom and it cuts it out and replaces it with something more polite. Where it--

Russ Roberts: It may have done that just now.

Paul Bloom: Who knows? Who knows what you're trying to say and it gets all translated.

You know, where I have my tax software and it's all linked up to all this information about myself, so it just puts in honest stuff. There's certain points where you say that might be legitimate. I wouldn't mind a hammer that wouldn't let you bash people's heads in with it. But, I don't want a hammer that won't let me pick it up when I'm supposed to be preparing my lectures.

Russ Roberts: Or, in a moment of frustration, let you put a ticket[?] and swing it through one of your own walls. Not someone else's wall--

Paul Bloom: That's right.

25:24

Russ Roberts: So, you say, yeah, maybe it's not so far away. You can think of, sort of, maybe two different ways you get to that world, both somewhat alarming to me. The first is a top-down governmental social credit score--the kind people talk about China imposing--where you jaywalked or you cut off someone in traffic, or you were rude or too brusque with someone who needed your comfort, and you get knocked down a few points; and then you don't get into college. I think those of us in the West find that frightening.

And then--I think it's important to think about why it's frightening. There is some reduction of autonomy there. There is a certain combination of nanny state and Big Brother that's frightening. But there's also, I think, a moral case we could make for social credit scores--that we're flawed, and a benign dictator--there aren't any, but let's pretend for a minute--a benign dictator would have that kind of leverage over the citizens of the country, maybe, that would make it a better place.

And, that human urge, that utopian urge--which of course often turns into dystopia, but take it at its purest level: we want to use our tools to make the world a better place--then turns into the second kind of abuse of these kinds of tools, where, like you say, it's sold--this driverless car is sold--on the promise that it will not take you to a meat restaurant, say, because meat eating is immoral. And we could make a long list of things. It doesn't have to be as subtle as skipping out on your kid's homework session. A set of social causes: 'You want to go to a football game? Football is bad for human beings. It leads to brain injuries. You should be going to an Ultimate Frisbee match. I'll take you there. Or I'll stay in the driveway. Your choice.'

Paul Bloom: Or, 'I'll park half a mile away because you have not been walking very much and you want to be a bit healthy.'

Russ Roberts: Exactly. Even better, right? No, that's even perfect. 'When you sat in the seat, we registered that you are 2.3 kilograms above the BMI [Body Mass Index] you should be at. And, I won't--I'll take you to where you want to go, but not exactly all the way.'

Paul Bloom: Yeah.

28:07

Russ Roberts: So, why is there--now I'm way off track but I don't care--why is there this human urge to have such tools and impose them on others?

Paul Bloom: Right. I mean, I love that you're going off track, because it made me think for a bit that the AI problem I was concerned about is just the manifestation of another issue we always fight about: the tension between morality and freedom.

So, again, getting a bit current: you take the case where people are saying all sorts of really ugly anti-Semitic stuff at universities and so on, and there's an impulse to say, 'Well, this is terrible stuff. We should stop people from doing it. We should arrest them.' And many countries have that kind of model for certain speech. Same with blasphemous speech: very offensive. Same with speech that targets, you know, trans people or gay people: 'We don't like that. We should stamp it out.'

And then there's another movement that says, 'Maybe we don't like it, but we should give people the autonomy to do things.' And in part, this is based on humility, because we could be wrong--maybe the anti-Semites are right and we are blind. But it's also respect: give people the respect to be, within certain limits, wrong and even hateful.

So, you have that debate, which is played out all the time.

I think we're reliving this when it comes to how much power we want AI to have, where there's an impulse to stop AI from allowing us to do bad things. And you could really imagine it--ChatGPT already has tons of that. There are all sorts of instructions it won't follow.

And I'm not talking about causing a deadly pandemic. I'm saying: 'Write a hateful message.' It won't do it now.

And, the same impulse--the censorship impulse, though I don't want to put it in a bad way--says, 'Good, it'll make the world better. Strip the world of this hate and this nastiness.' And then the more libertarian impulse says, 'We might be wrong; and anyway, people have a right.'

And so, it occurs to me that the same issues that we're living at right now regarding personal freedom--and this isn't just speech, it's everything from seat belts to playing football--will come back again as AI becomes more and more of a tool that could constrain what we want to do.

And to answer your question, we have at least many--we have two impulses. One impulse is: we want to make the world a moral place. We want to act good. We certainly want others to act good. I don't like when people say terrible things to each other. I don't like when people foment hate. That's the morality part of it.

Then there's the freedom part of it. I think there we find a lot of variation. I think some effective altruists would rank freedom as a zero: unless it makes people happy, it doesn't have any intrinsic value. Other people say it's the absolute core of being a person and deserves enormous priority.

Is that how you would see it playing out?

31:27

Russ Roberts: Well, I think there's another piece to it. I've always thought of it this way--and maybe it's not fruitful, but I like it; get your reaction. We are born dependent. We come out of the womb: we can't feed ourselves, we can't walk. We're totally helpless. And, if you have children, what you watch is the birth of autonomy. You watch the birth of 'mine'--the phrase 'mine'--meaning: I will have that. Whether it's a banana or a stuffed animal belonging to a sibling, or whatever it is. We have this powerful urge for independence and autonomy.

And then if you're a parent, you also--

Paul Bloom: I heard a story--I'll interject; I'm sorry--of two toddlers sitting in the back of a car, fighting as toddlers do in cars. And the parents said, 'There's a line between the seats. You each have domain over your own half of the car.' And then one kid threw an utter tantrum because his brother was looking out his side of the window. So, yeah. Just to expand on your point.

Russ Roberts: Well, that's a perfect example of: you're free to do whatever you want as long as it doesn't interfere with my life--unless I don't like what you're doing. In which case, is that a negative externality, in the language of economists?

Anyway, so, kids: as a parent you watch your child grow independent of you, and take on attitudes, activities, etc. And sometimes you restrict those because you think they're not safe. And the child often rebels against those restrictions. And that continues into adulthood. As adults, we don't like other people to tell us where we can drive our cars, for example.

At the same time, I think there is this other impulse that is not moral--an authoritarian impulse. I think it comes from parenting. Right? We're all children, and many of us are parents. And, when you are a parent, for a long time you do run the life of your infant child and then your toddler child and even your pre-adolescent child.

And when they get into adolescence, all of a sudden this conflict becomes quite clear. The child wants to express its own autonomy and independence. And you, often--I think we're a little bit hardwired to want our children to do what we want. And we tell ourselves it's because it's for their own good. But I always wonder whether that's always the case.

And then when we move into the adult world and we have the political frame and the political issues that we're talking about--of paternalism, nanny state, and so on--I'm not convinced that the nanny state is merely motivated by the fact that, I want you not to smoke because I know what's best for you. I think some of it is: I want you not to smoke because I want you to do what I want. And I think the authoritarian impulse is a very unhealthy and destructive one, but I think it's in there.

Paul Bloom: No. I think that that's right. I was considering a sort of binary distinction between the autonomy you're talking about and the morality issues.

As a good parent, that's what you struggle with. You want to give the kid some freedom, but: 'No, we can't play in traffic, because it's too dangerous.' And that's an easy case.

But, there are hard cases. You have an eleven-year-old and he wants to puff on a cigarette to see what it's like. I don't know. Like, 'Okay. Maybe do it and get sick and see what happens.' But, these are complicated.

But, there's a third ingredient, which is--this comes under a million names--an authoritarian urge to power. And, I think we get a satisfaction out of controlling people. I think bad parents have way too much of it, because they have tremendous control over a helpless being and they abuse it. Not necessarily sadistically, but somewhat arbitrarily. It's still: 'You have to do it because I want it to be done.'

And, we like having that control over other people. And, you're right: this is the sort of libertarian complaint--the line from Ronald Reagan saying the scariest words in the English language were from the government: 'We're here to help you.' And the reason is--this is the concern--they don't really want to help you. They have their own goals and their own desires.

And, yeah, I take seriously the idea that some of what goes on in, say, the moralistic world of speech isn't merely that I think the speech you're engaged in is harmful and should be stopped. It's also: I kind of get a pleasure from telling you you can't say things. That's a lot of power for me to have over another person.

And we're primates; and it's an ugly part of ourselves, but being able to dominate people, being able to get them to do what you want them to do, is kind of heavy stuff that gives us a sort of rush.

Russ Roberts: Henry Kissinger passed away recently. I think he's the man who said, 'Power is the ultimate aphrodisiac.' I think he's onto something there.

37:10

Russ Roberts: I want to bring in a different theme. I think a lot about John Gray and the interview I did with him--we'll link to it. And I think it highlights a point in your essay--maybe we'll come back to your essay now, Paul--about how morality differs so much across cultures.

So, John Gray's point--he has more than one, but the one I'm thinking of--is that we who are children of the West, who are the result of the so-called Judeo-Christian values, have a utopian impulse that comes from Jewish and Christian sources. A messianic impulse, an impulse toward the end of days, an impulse toward perfecting the human experience, an impulse toward redemption of the world. And I think the tech world and its embrace of, or unease with, AI [artificial intelligence] is related to that Judeo-Christian morality. That--and I mentioned this earlier--why should we care what other people do, whether they're moral, or whether we think they're doing the wrong thing? Sometimes because it hurts us. But, a lot of times I think it's because of that Judeo-Christian culture--the water we swim in without even realizing it. Our culture takes as a given, almost never talked about, that the world is improving and that we are heading toward a destination. If you're a believer, you think it's the Second Coming; or, in the Jewish perspective, the coming of the Messiah for the first time. If you're a secular person, it's that technology will improve us--in fact, maybe we'll become perfected through human action, creating a superior being that would not suffer from the moral failings that we human beings have, the ones you alluded to earlier. What do you think of that?

Paul Bloom: I think Gray's diagnosis might be true, but I don't buy into the second part of it--that we are heading towards something, some sort of perfection. But, I will endorse the first part, which, I'm well aware, he disagrees with and mocks, which is: I do think we've been getting better. I've been persuaded by people like Steven Pinker and Peter Singer and Robert Wright that we are getting better. I'm much happier. I'm much happier as a Jew to be living now than in Egypt or at any other time in history. I'm much happier if I were gay, if I were trans, if I were a woman--these are just, on average, better times.

And, this leads me to the point about cross-cultural variation, which is: I'm a moral realist. So, if you ask ChatGPT what it thinks of two men getting married and holding hands, it says, 'It's fine.'

You ask, 'What do you think of killing these people for doing this?' 'It's terrible.' I say: good, you are now aligned with my values. I'm aware that many people in the world--I don't know if it's most people--would view those as disgusting and immoral claims. They'd say, 'Your GPT has failed the morality test. It should know that it's disgusting and horrible and destroys humanity to have gay men being together.' And, my stance on this is: Well, this happens to be a way in which my side has it right and your side has it wrong. And it's very hard to argue such things, to make such moral arguments. But, I think it is tied up with progress. I think a world which has these sorts of liberal Enlightenment values will, in the long run, do better--better for people, better for everybody--than a world that views homosexuality as punishable by death.

That was a lot. I'll let you respond to that.

41:18

Russ Roberts: No. That's all right. We've probably had this conversation before, and I'm always happy to revisit it, because I haven't fully--I'm uneasy with my position. I can defend it. I have a viewpoint that I want to hold, but I know that there's a nagging unease with my viewpoint. My viewpoint being that--I'm more John Gray than you are, I think, and more John Gray than Steven Pinker--that we've made progress in some things and not so much in others. And, some of this progress has come at a cost. Some of the attitudes that yield the outcomes we like maybe have other consequences that are not so straightforward.

You say you'd rather be a Jew now than at any other time in human history: not on October 7th in Kibbutz Be'eri, or at the Nova Music Festival. So, you have to take--the Holocaust. When we've talked about this before, I think I'd probably say, 'Well, what about the 20th century?' And I think the attitude of Pinker and others is, 'Well, we made some mistakes in the 20th century, but now we know those things are wrong.' You could argue that fewer and fewer people think it's okay to kill Jews, for example. I'm not sure that's true.

Paul Bloom: What Pinker would say, at least, about the 20th century is: Absolutely horrific. Terrible, terrible, terrible--

Russ Roberts: Sure--

Paul Bloom: Until you compare it to the 19th century. And that really sucked.

Russ Roberts: Yeah. I think he's wrong. I'm not sure. The 20th century is materially a much more pleasant place than the 19th, and that leads to many--there are many, many good things that come from that material bounty, which I concede.

But, the human heart hasn't changed. And the question is: is there cultural evolution that comforts us? In other words, the attitudes you're talking about--the changes, on average--many of them are very positive. So, should we say that that's enough? I certainly concede the material progress; obviously, as an economist, I've written about it and championed it. It's only in my later years that I've wondered whether the other parts of the human experience have not advanced nearly as much, or in fact have become worse. I'm thinking about despair, a sense of meaning, a sense of belonging.

I'm sitting here in Israel in December of 2023. There are many dark days here, but there's also a tremendous amount of purposefulness and solidarity and unity and love, as this country tries to cope with the aftermath of October 7th. Forget whether you agree with the military response: just the social response of the country, to help people who struggled through what has happened, is very uplifting. That also is part of the story, of course: there's the dark side of the human heart and there's the bright side. I, at least for now, tend to think there's both. I'm not so confident that either side has changed much relative to the past.

Paul Bloom: I agree. I like the line that the human heart hasn't changed, because I think that that's true. I am against slavery, and that is a good thing. But me, the same person--if I was raised in a different time, I would be entirely in favor of slavery and would benefit from it. Nothing of my fundamental human character has changed. It is just that culture has changed, in the same way that I'm more scientifically informed than an earlier version of myself--without being any smarter. I've just been lucky enough to get the accumulation of history.

I guess what I'm curious about--I have my guesses, but: you agree that things have gone a lot better in some ways, and you agree that maybe our basic character has stayed the same. I'm curious what you think has gotten worse over the last thousand years. And, I guess my guess would be a certain secularization, and some stuff that's lost through it?

Russ Roberts: No, not necessarily, actually. I want to add to your point about slavery, though--actually to reinforce it and go against my claim. I think it's more powerful than you make it. It's not only that we believe slavery is wrong and the people in the past didn't. The people in the past who were slave owners--and we did a wonderful episode with Mike Munger on this that we'll link to--it wasn't that they were slave owners who thought, 'Well, I can do this because I'm more powerful. I have the guns.' They actually thought they were doing something good. So, the revulsion against slavery isn't just: people used to do that because they could, and now they don't because they know it's wrong. They thought it was right before. I didn't say that very well, but I think you know what I meant.

So, I concede that, for sure. I think that's not unimportant; it's very important. And I think we've made progress in our attitudes towards many things.

I think what I have trouble with--it's not the secularization, although I am religious; I think many people find meaning in lots of things besides belief in God or communal prayer, a thousand other things. What I see around me that I find troubling is a rising suicide rate among young people in the West, which I think is a fact. Sometimes these kinds of so-called facts are statistical artifacts of the way data are gathered. But, I think it's true. I think there's a much smaller feeling of belonging. I think as human beings, our connections--I understand there are people out there who are introverts and don't need to connect to other human beings as much as others. But, for many human beings, connection and a feeling of belonging and a feeling of purpose are very, very important.

And I think many of those things have been lost--maybe because of the death of religion in many circles. But, I don't think that's all of it.

And what worries me, as a free-market capitalist sort--and this is the closest I come to something vaguely anti-capitalist--is that our culture has produced this despair and lack of connection and lack of purpose as a consequence. I worry about that. I'm skeptical of it, but I worry about it. But, I do think the underlying problem is real.

Another way I'd say it: the country I used to live in and love, the United States, seems to be pulling itself apart, as is much of the West. That doesn't seem good. I see a lot of dysfunctional aspects of life in the modern world. Am I being too pessimistic? Am I just an old person now?

Paul Bloom: Well, you're asking the wrong guy. You should ask a young person. I have some--I have sympathy for that. I was reading something about Donald Trump the other day, and I turned to my wife and said, 'Are these the end of days?' Between AI and what's happening in your little part of the world. And, the fact that the next American election has a reasonably high probability of having somebody refuse to concede and throw the country into some sort of tumult, it seems like difficult times.

I guess I have two thoughts. One thing, I agree with you, I think that within a period of years, you could say suicide rates have gone up, depression rates have gone up. Deaths of despair, that sort of argument. And, I think there will always be these ebbs and flows.

But, I'm more interested in--so maybe I asked the question poorly. Do you think there's been an overall decline over the span of hundreds or thousands of years? The claim Pinker made is that, on average, in the long run, we humans flourish more, we treat each other better, and so on--at least over the span of hundreds of years. That's compatible with saying, 'Man. The last decade has been a train wreck.'

Russ Roberts: Right. And I do think, of course, you don't want to over-weight--read[?] through recency bias--say, the last 10 years or last 20.

50:33

Russ Roberts: And, I think one way to put your question that brings it into stark relief with the alternatives is: when would you want to be born? Would you want to be born in the year 1000, with no dentists, etc., etc., etc.? So, in that sense, I think Steven Pinker is wrong about the violence part. I think his data on the violence--I'm a Nassim Nicholas Taleb fan on this--I don't find his claim of a decline in violent deaths over time persuasive. As Taleb, or others, would point out: one nuclear mistake will make all that look really bad.

But, I think the more interesting question is the one that you raise, which is: Do we treat each other better?

I think there's a lot of evidence that we do treat each other better. It's not just women, Blacks, gays, Jews--the obvious cases. I wonder if people just treat their children more politely, more humanely, as corporal punishment declines. Do they treat their buddies more thoughtfully? I like to think they do. Is it true? I concede gladly that the material well-being of the modern era brings lots of benefits beyond just toys. Are we more moral in how we treat each other, without the tribal issues of skin color, race, religion, and so on? I don't know.

Paul Bloom: It's an interesting question--

Russ Roberts: It feels like we do. I think I might concede that.

Paul Bloom: If I had to pick one thing that goes against my side, you might say that the notions of hospitality to strangers have faded. It used to be--at least reading over ancient texts and the Bible--it used to be a big thing. Somebody comes from another town and you really have to take care of them. You have this obligation to take care of them, to feed them, to treat them with respect. That is something I feel we've lost as we've gotten into much bigger societies.

I think there are things that we've lost. Still arguing against my side: I think there are people who miss the smaller communities, with all of their costs and benefits. The costs being savage gossip and enormous social control; the benefit being that nobody gets lost.

Russ Roberts: Yeah. I think about that when I think about the homeless problem in America.

Paul Bloom: Yeah, that's right.

Russ Roberts: I don't believe people should be locked up for mental illness. At the same time, the fact that many of the people on American streets have serious mental problems, addiction problems, and we walk by them--I mean, I used to give them a dollar in the good old days when I lived there, as long as there weren't too many of them. I don't know what I'd do now in some situations. But, there's something very beautiful about the fact that we allow them to live in the way that they choose to live. We don't arrest them. We don't lock them up and put them in what used to be called an insane asylum, which is a horrible place. At the same time, nobody cares enough about them to take care of them, help them. Many try, and it's a beautiful thing--but when you see the tent encampments in American cities these days, I don't exult in the freedom of the individual there. I feel great sadness. I don't have a simple solution for it either, by the way because I don't want to lock them up again. And people debate about how much of it is mental illness versus other economic pressures. But something doesn't seem quite healthy there.

The college campuses--the loss of honest discourse, the fear of saying something wrong that you'll be judged for--something's gone wrong there. These are trivial things compared to slavery. I concede that immediately.

So, I think the other issue that needs thinking about is this: I have no problem conceding that economic freedom writ large has helped change the standard of living of humanity by the billions. That's a good thing. I don't have any problem with the idea that there's cultural evolution, and that's a good thing--that much of it's been productive and means people lead more pleasant lives. I think the question is whether the so-called Enlightenment Project in and of itself is the source of all that. And I think that's a more complicated question.

Paul Bloom: Yeah. I think that is. And then there's also the question of whether the Enlightenment Project is coming to an end. I'm always shocked--you mentioned age--at the extent of the difference, at least in the United States but I think worldwide, between people, say, over 40 and under 30, on all sorts of things. On the Middle East issue, for instance: in America, people over 40 are pretty staunchly pro-Israel. People below 30 are pretty staunchly pro-Palestinian.

Russ Roberts: Correct.

Paul Bloom: And it's not a sort of subtle--it's really a thing.

Maybe related to this: there was some poll saying a lot of young people in America have no idea the Holocaust happened. It's just not part of their historical training. Something you and I probably had as our, like, bread and butter--like, this is the world we live in--is no longer present--for Jews, either. I know very few people of my age who identify as non-binary, but when you get below 30 you get--not a majority, but an enormous number of people--who eschew traditional gender categories. Attitudes about free speech show a similar divide. People say that the young people are going to become--they're going to rule the world. And, if it continues on this trajectory, it's going to be a very different world than the world that you and I ruled.

And, I could say, 'Well, okay. Things change. I want to watch this.' But I feel personally sad that certain virtues--negative virtues of open discourse, and of free speech sometimes superseding people's comfort--are going to become just an old-fashioned view. They will view that as we view people who were sympathetic to denying women the vote.

Russ Roberts: I don't think it's like factory-farmed meat, though. I mean, there are consequences--if it came to be the view that factory-farmed meat was evil, and many people hold that view--

Paul Bloom: I do--

Russ Roberts: and the world ate less factory-farmed meat, which meant that fewer cows were raised, and maybe none--that--you could debate whether that would be good for the cows or not, in the short and long run.

58:02

Russ Roberts: But that is very different, I think, than giving up on free speech. As you say, the Enlightenment Project might be coming to an end. And that's a very different thing than people having different preferences than you and I have.

And I think a lot of it, by the way--and I'm really going to show my age--a lot of it is when people say that, 'Well, young people become older and then they'll be more like old people.' Not if they don't marry and not if they don't have children. A lot of the reason that old people are the way they are, I think--it's a speculation--is because they've gone through the cauldron--not the cauldron, I don't know what the right word is--anyway, the experience of marriage and children, which broadens you in certain ways and changes your preferences in other ways.

It's this generation, the under-thirties, that you're talking about: many fewer of them are married. It's not the only reason they're not like you and me. But many fewer are married, and many fewer have children than in the past. And that's going to change the world we live in, in all kinds of interesting ways. It might be interesting to watch, or sometimes painful.

The other thing I think about is this: The smartphone. Which, when I look back to my old blog posts when I used to blog back in, say, 2007, I used to romanticize the smartphone--with great eloquence, of course. I don't know if it was eloquence or not. But I thought it was such an extraordinary triumph of human ingenuity and creativity. I still do. But, its social impact is quite complicated. And, I'm not sure it's good.

Paul Bloom: I'm not sure it's good, either. I don't have a settled view.

I know, like, Jon Haidt--my friend Jon Haidt--has very strong views about it. And then there's a lot of debate. And a lot of the debate is around whether the social pathologies that get associated with smartphones are worldwide--as the smartphone is--or only in some countries and not others. And that's how you would argue it.

But, just on a gut level, it can't help but transform us, in that for the most part we're never alone and we're never bored. And I love it. I go in a streetcar to work every day and I'm listening to a podcast. I'm not listening to your podcast while I'm doing it. And I'm also playing the Spelling Bee, because the podcast isn't enough. God forbid there's part of my mind that's free to wander.

And then I also check my email or texts[?], because what if--and revealed preference: if I didn't love it, I wouldn't do it. But, I can recognize that in the long run, I am not the kind of person they were making 100 years ago. I couldn't go into the woods for a long period of time and be happy. I think my mind has lost a little bit of its ability to make up its own entertainments.

I will say, however, it's not inevitable. My older son--I don't want to embarrass him, so I won't mention him by name--but in the midst of all of this, as a late teenager, he said, 'I'm going to take the summer--part of my summer--and read Russian novels.' He spent hours each day just sitting, reading Russian novels. He was undereducated in that way. And he's not online; he doesn't read things online. He somehow stepped out of it. He's a perfectly modern guy; he lives in the modern world. But there are people who don't have to be caught up in this.

Russ Roberts: Yeah. There is a bit of a pendulum. All right--it swings. There is a bit of a backlash. But, I'm shocked at how little there is in my own life. I enjoy it so much in the short run. And the inability to not pull my phone out and check x, y or z--it depresses me a little bit. That's my problem, it's not our problem.

Paul Bloom: I have a theory, which is: it's people like you and me who are most vulnerable. Those who are raised with it develop a sort of immunity, and so on. But those of us who started getting the full dose as adults--it's like we got accustomed to cocaine too late in life to develop immunity, and it just swamped us. Now life without cocaine is unimaginable. And so, I wonder whether we got hit the worst by it.

Russ Roberts: Yeah. Digital cocaine. It's like the parents who keep their kids from eating sweet cereal--

Paul Bloom: Exactly--

Russ Roberts: Then once they leave the house, it's Cocoa Puffs 24/7.

Paul Bloom: Exactly.

Russ Roberts: My guest today has been Paul Bloom. Paul, thanks for being part of EconTalk.

Paul Bloom: This has, as always, been great.