Will MacAskill on Longtermism and What We Owe the Future
Sep 5 2022

Philosopher William MacAskill of the University of Oxford and a founder of the effective altruism movement talks about his book What We Owe the Future with EconTalk host Russ Roberts. MacAskill advocates "longtermism," giving great attention to the billions of people who will live on into the future long after we are gone. Topics discussed include the importance of moral entrepreneurs, why it's moral to have children, and the value of trying to steer the future toward better outcomes.


READER COMMENTS

Ian
Sep 5 2022 at 9:12am

A great chat. I've heard a few podcasts with Will discussing effective altruism and the longtermist view, and you did a great job coming at things from a somewhat different angle, which allowed him to expand on some of his underlying views without simply rehashing talking points from the book.

One point I felt was a little unfair was where he gave the broken glass example and your question was essentially, "what if someone just doesn't care, why should they care?" In my view this is a bit of a red herring in discussing the value of future others, because caring about the future presupposes that we should even care about the present. Whether we should care about others is an important question, but I felt it doesn't relate to longtermism (or perhaps I misunderstood the context of your question).

There are two main reasons I personally am not convinced by a longtermist view, one of which you touched on briefly; the other I don't think was explored. First, the point you raised: even if we accept that future people who don't yet exist should be valued equally, as if they will come to exist, there is the problem that we just don't know much about how civilisation will unfold over the next 100 years, let alone beyond that time. Therefore any efforts we can reasonably make to protect or improve the position of people 10,000 years from now are the same actions we should take to preserve and improve our current world for the next 10-30 years. I'm not convinced we should be dedicating any resources to preventing future hypothetical problems when there will be so many intervening factors that would break the chain of causality and make our early contribution redundant. That said, I agree we need to dedicate more resources than we currently do to improving things over, say, the next 10-15 years, so perhaps my objection wouldn't result in many different actions/decisions from those of someone with a longtermist view.

The second reason I fail to be convinced on longtermism probably has more to do with my temperament, which is less optimistic than Will's. Perhaps this is unfair, but the picture painted for me is almost a utopian future of humanity; I'm simply more pessimistic, suspecting that human irrationality and tribal behaviour would prevent this from being realised. Will has some sense of this possibility where he talks about one future which could consist of a global authoritarian regime that we're locked into, to the detriment of those future generations. On the other hand, his ideal world would have important ideas win out on their merits, which is simply not how humans do things. However, it's likely that my pessimism partly stems from the perceived prospects for my place in our present world rather than a fair evaluation of humanity as a whole.

Rob Wiblin
Sep 5 2022 at 1:42pm

Hi Ian – past colleague of Will’s here. Just wanted to respond on two points.

>The second reason I fail to be convinced on longtermism is probably more to do with my temperament which is less optimistic than Will’s.

He didn’t get to discuss it in this interview, but whether you expect the future to be good or bad has little effect on the force of the arguments for longtermism.

If you think the future is likely to be good, then you’ll be particularly enthusiastic about ensuring it occurs (i.e. you’ll be excited by opportunities to reduce the risk of human extinction). On the other hand if you think the future will probably be bad if we survive, then you’ll instead prefer to do things that shift the trajectory humanity is on, so that we’re more likely to get a positive future and less likely to get a bad one (e.g. ensure there can’t be slaves or the equivalent in the future).

>Therefore any efforts we can reasonably make to protect or improve the position of people 10,000 years from now are the same actions we should take for preserving and improving our current world for the next 10-30 years. 

This is an important possible objection to longtermism as a distinctive project. Will spends a substantial portion of the book attempting to show that while many things improve both the short-term and long-term, the things that *best* benefit the long-term aren’t necessarily the same as the things that best benefit the short-term. In a sense this isn’t totally surprising as it would be a bit of a coincidence if the two were identical.

A key reason for this is that so little focused work is done specifically to make the very long-term future go well that compelling opportunities to do so remain untaken. As more people do what useful longtermist-specific work seems possible, we'll hit declining returns, and eventually the actions that benefit the short term and the long term will look more and more similar.

I can’t recapitulate the whole book here of course so if you’re curious you’ll have to take a look and see if you’re persuaded. 🙂

Have a great day!

Aidan
Sep 5 2022 at 4:56pm

Funnily enough, if a woman's life expectancy is 75 years, 9 months is 1% of her life. So if she is carrying a future person in her womb, it is entirely moral to ask her to sacrifice 1% of her life to bring a future human into existence. A newborn given up for adoption to a prosperous family would have a wonderful life--it's a no-brainer. I presume that is the kind of thing we are talking about when we think about effective altruism, right?
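For what it's worth, the percentage in the comment above does check out; a minimal arithmetic sketch in LaTeX, using no assumptions beyond the 75-year life expectancy the commenter states:

\[
\frac{9\ \text{months}}{75\ \text{years}} = \frac{0.75\ \text{years}}{75\ \text{years}} = 0.01 = 1\%.
\]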

Russ Roberts
Sep 6 2022 at 12:29am

I’d say that’s more of an issue with the morality of utilitarianism, the underlying philosophy of effective altruism. Upcoming episodes with Kieran Setiya and Erik Hoel will discuss utilitarianism more directly.

DWAnderson
Sep 6 2022 at 1:23pm

As a long-time listener I am moved to drop a note that this is one of the best EconTalks I have listened to. I thought the wide-ranging discussion of morality was excellent. The amount of ground covered really well by both Russ and Will over the course of the hour was quite impressive.


Steve
Sep 6 2022 at 1:55pm

I don’t think we know enough about how the future will unfold to do a good job tipping the scales towards future people in our present-day decision making. Trying to do good better, we are just as likely to do good worse.  I’m not even sure what those who come after us will consider a good outcome or what they will think constitutes human flourishing. Imagine that our ancient or medieval antecedents incorporated us into their decision making. They would have applied their values. At various times and places, this might have included ensuring the continuation of enslaved labor supporting a leisured aristocracy, an aggressive warrior ethos, misogynistic paternalism, or god-appeasing human sacrifice. If they had been successful in incorporating us into their decisions, would we be happy with the result?

Bert
Sep 7 2022 at 11:11pm

Yes, like that piece of glass somebody encounters 1,000 years in the future. Maybe they will consider it a great cultural artifact.

AtlasShrugged69
Sep 30 2022 at 11:54pm

What if no one EVER stumbles upon the broken glass? What if it just sits there forever and no one is actually harmed by it? What if the glass-breaker has cancer and will die in a couple of days--shouldn't the use of their limited and valuable time cleaning up the broken glass be given more weight than the small inconvenience of someone in the future getting a cut on their foot? Utilitarianism is such a joke.

AtlasShrugged69
Sep 6 2022 at 2:52pm

MacAskill argues we should spend some portion of our current resources to preserve a chance of human morality developing in his preferred direction (though he doesn't know what that is), and to prevent man-made catastrophic events from occurring. He loosely bases this on a utilitarian calculus (but goes on to state that you don't need anything that extreme, and that most theories of morality should support his argument...) and claims that the future lives of those not yet born should be given the same weight as those of humans currently alive; and because future generations will be more numerous than our current global population, future generations deserve greater consideration.

Where to begin…

What happens if a natural disaster (not man-made) takes out most of humanity (i.e., volcanic eruption, comet, etc.)?
What happens if future generations continue to reproduce at lower and lower rates until humanity's population actually declines?
How do you decide which programs will actually help future generations thrive? (Steve's comment above is spot on.)
Shouldn't we prioritize our resources on helping as many people currently alive who are suffering, rather than future generations which are NOT guaranteed to actually come into existence?

It’s possible MacAskill’s book addresses these questions, but after listening to him fail to distinguish why we should strive to emulate Benjamin Lay over Adolf Hitler or John Brown, I think I’ll give it a pass.

Umberto Malzone
Sep 24 2022 at 11:35am

Comments above heartily seconded. I’ve never read Ayn Rand, but I do read AtlasShrugged’s comments with great satisfaction.

"Where to begin?" really is the thing to say. To make any sort of plan for the future, many assumptions have to be made, and I doubt you'll get any massive consensus: most likely, what future planning you get will be much the same as it is now, and I suspect anyone who thinks through this problem will take AS69's option of helping people who are currently alive.

Because Russ Roberts isn’t much of a science fiction reader, I thought I’d add another question for AS69’s lineup:

MacAskill's longtermism is oriented towards future people, but why stop there? If he's OK with postulating a million-year run for humanity, why not orient towards a similarly intelligent species that may arise after us, like the mighty beetle civilization that may emerge post-humanity?

The oft-mentioned Hayek quote bears repeating:

“The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.”

Longtermism aside, I thought Russ Roberts was on fire in the second half of the episode, so I am glad I listened to the whole thing.

Shalom Freedman
Sep 7 2022 at 2:46am

Will MacAskill introduces the interesting concept of longtermism. He tells Russ Roberts how we should act now for the long-term future of humanity, which he says will probably include trillions of human beings. In doing so, however, he makes a number of omissions and presumptions which undermine his case. First of all, he never truly defines the situation of humanity and the priority of helping it now, or how difficult and even 'screwed up' that situation is. What exactly is the moral duty of each individual in regard to the war between Ukraine and Russia, the Iranian threat to destroy the state of Israel, the Chinese treatment of the Tibetans or the Uighurs? Or if he is speaking of the moral debt of humanity as a whole to the trillions of the future, then where is the decision-making power of that humanity?

He seems, too, to base the whole project on a simplistic utilitarian view of what well-being consists in, both for individuals and for humanity as a whole. He also takes for granted that human beings will in fact be what humanity develops into, when there are all kinds of scenarios out there about our hybridization with AI, or other intelligent, humanly created heirs. He seems to assume a kind of future for us similar to the life we have now, when what we seem to be living through is a much more uncertain time than ever before. He also ignores problems like the 'greying of humanity,' which is most likely going to produce, for a considerable time, too many of us living too long while too few of us are being born. Imagine a world with trillions growing feebler and feebler.

Despite not giving much credence to the fundamental idea of the book, I found the conversation lively and interesting. Russ Roberts raised the possibility of future generations being so much better off than ours that they do not need the kinds of sacrifices MacAskill would have us make for them. He also points out that most people do invest in future generations by investing in their own children and grandchildren. And he raises the question of the ultimate justification for the kind of moral argument the book is making.

I had a certain sympathy for Will MacAskill as his simplistic example of a kind of absolute moral goodness was taken apart by Russ Roberts, but I do not think good intentions are enough to really help a very conflicted humanity now.

[Minor fixes done to mistyped guest’s name and to formatting–Econlib Ed.]

Mike
Sep 7 2022 at 12:08pm

Their discussion about religion was very interesting.

Will and Ariel Durant were awarded the Pulitzer Prize in 1968 for The Story of Civilization. In their book, The Lessons of History, they state (and this is from their research as historians), "There is no significant example in history, before our time, of a society successfully maintaining moral life without the aid of religion." I think if you look at the situation today, the cracks are appearing. I'm not talking about someone with mental illness, but everyone has seen footage of someone shooting someone in some secluded alley that happened to have a video camera. Do you think the shooter lost sleep over it? Of course not. That person has no religion. Many of the current atheists were brought up in some religion and they are living those values. Their children or grandchildren won't have those values. It will be "all about me, whatever gives me an edge, as long as I am not caught by society." Once we reach the tipping point, that will be the end of the line.

Ben
Sep 7 2022 at 2:11pm

There was a book about creative thinking written maybe 40 years ago that related a thought experiment that car designers entertained. What if wheels were square? Supposedly this was a catalyst for ideas about sophisticated suspension systems that would instantly make adjustments based on the road surface immediately in front of tires.

This episode reminded me of that story. Have we closed our minds, or simply failed to challenge ourselves to find ideas that could make the world a better place in the near future, say the next 10-30 years?

Ensuring our survival seems like the best objective we can have for our descendants. When pushed by Russ, MacAskill focused on the importance of avoiding a nuclear war, which is at least partly under our control. I believe that we will keep adapting to our circumstances as long as we avoid nuclear war (or something worse in the future) and somehow survive.

A big concern I have with longtermism is that problem solving is done best when the root causes of the problem are identified and well understood. Problems that have been with us for 75 years remain unsolved in part because we do a lousy job of determining root causes. Politicians have been treating symptoms with poorly conceived public policies, and the problems remain or, in some cases, are getting worse. How can we hope to solve unidentified problems in the future when we can't identify the problems and/or the root causes of problems right in front of us?

And as Russ says, the older he gets the more he appreciates the phrase, “it’s complicated.” One complication that I like to refer to is the idea that we can hold 99 variables constant to determine the impact of a single variable. Obviously not compelling.

Plus, we live in a time where opinions are no longer changed by math. If people believe something to be true, most continue to believe it despite the math. They appear to believe the contradiction is bad math, either because of incompetence or purposeful manipulation for corrupt or ideological reasons. And it is often true, per the old saying, that figures don't lie, but liars figure.

Without faith in God, I would have very little optimism about the future.

Ethan
Sep 10 2022 at 8:51pm

Will: “Somewhere you have to draw a line in the sand”

This is the "god-shaped hole." Turns out God fits very well.

Why not embrace emergent morality and draw a line with god?

Ethan
Sep 10 2022 at 9:33pm

Put serfdom and the post-Dutch slave trade in a box marked "Cronyism" and let's call the rest of slavery a market mechanism. Progress comes from realizing there are bigger gains from trade than from usurpation. Slavery (communism, patriarchy, localism) fades due to self-interest. It's not inevitable; that's teleological and foolish. It's progressive in a Smithian sense.


Also Will says: “the dividing line between argumentation and brainwashing is a hard one to draw…we want to distinguish between brainwashing and rational persuasion” – as if the distinction can be made a priori without sneaking values in.

Ant
Sep 12 2022 at 3:22pm

Hi Russ,

Long-time listener here and I love your work. I was, however, a little disappointed by this conversation. You asked a lot of great questions, but it also seemed like you reflexively rejected Will's position because of its connection to utilitarianism. It would be great to hear some more engagement with his points, like the bit about doing a little bit more now to make the future better (even if it's not everything an extreme utilitarian view would demand of us).

I'm deeply worried about climate change. I don't think our politics is currently capable of dealing with long-term collective action problems, and I fear that a relatively catastrophic degree of climate change is far too likely. I'm worried about having children not because of their impact on the climate, but because of the potential for their lives to go very badly if a catastrophic climate scenario unfolds. It would be great to hear your thoughts on these issues further.

Thanks for all your wonderful thought provoking conversations 🙂





AUDIO TRANSCRIPT
Time | Podcast Episode Highlights
0:37

Intro. [Recording date: August 11, 2022.]

Russ Roberts: Today is August 11th, 2022, and my guest is philosopher Will MacAskill of Oxford University. He was first here on EconTalk in 2015, talking about effective altruism and his book Doing Good Better. His latest book, and our topic for today, is What We Owe the Future. Will, welcome back to EconTalk.

William MacAskill: It's great to be back.

1:00

Russ Roberts: Your book opens with a rather fascinating thought experiment that you took from Georgia Ray's "The funnel of human experience," and it goes--I'm going to read this--the opening of your book goes like this:

Imagine living, in order of birth, through the life of every human being who has ever lived. Your first life begins about three hundred thousand years ago in Africa. After living that life and dying, you travel back in time and are reincarnated as the second-ever person, born slightly later than the first. Once that second person dies, you are reincarnated as the third person, then the fourth, and so on. One hundred billion lives later, you become the youngest person alive today. Your "life" consists of all these lifetimes, lived consecutively....

Your life lasts for almost four trillion years in total. [Russ: "Aside, meaning there have been about four trillion years of human life on earth up till now."] For a tenth of that time, you're a hunter-gatherer, and for 60 percent you're an agriculturalist. You spend a full 20 percent of your life raising children, a further 20 percent farming, and almost 2 percent taking part in religious rituals. For over 1 percent of your life you are afflicted with malaria or smallpox. You spend 1.5 billion years having sex and 250 million giving birth. You drink forty-four trillion cups of coffee....

You experience, firsthand, just how unusual the modern era is. Because of dramatic population growth, a full third of your life comes after AD 1200 and a quarter after 1750. At that point, technology and society begin to change far faster than ever before. You invent steam engines, factories, and electricity. You live through revolutions in science, the most deadly wars in history, and dramatic environmental destruction. Each life lasts longer, and you enjoy luxuries you could not sample even in your past lives as kings and queens. You spend 150 years in space and one week walking on the moon. Fifteen percent of your experience is of people alive today.

That's your life so far, from the birth of Homo sapiens until the present. But now imagine that you live all future lives, too. Your life, we hope, would be just beginning. Even if humanity lasts only as long as the typical mammalian species (one million years), and even if the world population falls to a tenth of its current size, 99.5 percent of your life would still be ahead of you....

If you knew you were going to live all these future lives, what would you hope we do in the present? How much carbon dioxide would you want us to emit into the atmosphere? How much would you want us to invest in research and education? How careful would you want us to be with new technologies that could destroy or permanently derail your future? How much attention would you want us to give to the impact of today's actions on the long term?

I present this thought experiment because morality, in central part, is about putting ourselves in others' shoes and treating their interests as we do our own. When we do this at the full scale of human history, the future--where almost everyone lives and where almost all potential for joy and misery lies--comes to the fore.

This book is about longtermism: the idea that positively influencing the long-term future is a key moral priority of our time.
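As an aside, the four-trillion-year total in the excerpt is easy to sanity-check. The passage says one hundred billion lives have been lived; if the average life across history lasted roughly forty years (an assumed round figure, not one given in the book), the arithmetic works out:

\[
\underbrace{10^{11}}_{\text{lives ever lived}} \times \underbrace{40\ \text{years}}_{\text{assumed average lifespan}} = 4 \times 10^{12}\ \text{years} \approx 4\ \text{trillion years}.
\]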

Do you want to add anything?

William MacAskill: No; thank you for the reading. It was lovely to be able to hear that. And, yeah, I mean, I think that can get us into the core issues, which, a little later on in the book, I state as just the idea that future people count morally, there could be a lot of them, and we can really make a difference to their lives.

4:45

Russ Roberts: My first question is: Helping people today does help people tomorrow in many ways. So, in many ways we already take account of the people who will come after us.

William MacAskill: Yeah, that's absolutely true. So, when we innovate or build better institutions or have a better, more moral culture, all of those things do benefit future people, too.

It would be surprising though if that was the best way of helping people, the things that we all then[?] do. In particular, because there's things that negatively impact the future, as well.

So, most famously now, marginal CO2 emissions. But, I also think certain other forms of technology. So, advances in biotechnology, I think pose great risks for the present and for the future. And so, I think we should be attending to what we do today. What are the things that are really helpful, not just for the present but also for the future? What are the things that are actually most worrying when we take a longer-term perspective?

Russ Roberts: Well, I think the hard question--I think most people would agree that people in the future matter. The question is: How much? And, you're arguing, I think, that we don't take account enough of the future because there'll be so many people--unless there's a catastrophe--there'll be so many people and they will live for, presumably, a very long time. So, I think you're arguing that morally, they count for more than we count because they're more numerous. So, a sacrifice on our part that leads to a benefit in the future should be morally demanded of us because so many more people will benefit than are harmed today. You're very much utilitarian, I think, in this book. Am I right?

William MacAskill: Yeah. I'll clarify my view a little bit. Where that's close to correct, but not fully correct.

So, I think people in the future have the same moral worth or moral status as people in the present. I do think there are additional reasons that are not about people being present, but are about relationships we have with people in the present. So, I think I have a different set of moral duties and moral reasons with respect to my mom, than with respect to someone who I've never met on the other side of the world.

That being said, I still have a lot of moral reasons with respect to that person on the other side of the world. I can't willfully harm them. If it's easy for me to make their lives much better, I think it's morally important for me to do that.

I also think you're completely right, that I think the numbers really matter. So, if I can save one life or save ten lives, it's more morally important to save the ten. And that's, again, because all people count equally, so the interests of ten are just more important than the interests of one.

There's a tricky--you then mentioned sacrifice, and here there's just a tricky question of how much does morality require of one? The standard utilitarian answer is that morality is extraordinarily demanding. If I can sacrifice myself in order to save the life of someone who I think will do a little bit more good than me, or in fact would even be a little bit happier than me--perhaps they just have more wellbeing--then I'm morally required to do that.

That's the kind of most extreme view that one can have in terms of moral demandingness. And, it's not a view that I want to defend in this book. Because, at the moment we're, like, so far away from that margin--where, in terms of effort that we spend really trying to think about the long-term impacts of our actions and explicitly trying to positively guide the long term future--how much of world GDP [gross domestic product] is that? I don't know: 0.1%, 0.01% or something? It's very low. And so, on the current margins, if we get to 1%, I would be over the moon. And, that is a kind of far cry from having to answer the question of: Okay, if we did have to sacrifice 99% of the world's resources, would that be the right thing to do? Would we be required to do it?

Russ Roberts: So, you're saying if we had to sacrifice 1% to achieve something good for the future, that's a relatively easy case to make in your view--because of the magnitudes involved?

William MacAskill: Exactly. And, I think you don't need to have anything nearly as extreme as utilitarianism to justify that view. I think that should be true on a very wide level[?] variety of moral views.

9:51

Russ Roberts: Okay. Let me take the opposing view, which is the following. Those people in the future, they're going to be so much richer than us. They should be sacrificing for us. I mean, we are endowing them with a platform, a base level of wellbeing and intellectual knowledge, that's going to grow over time. So, 100,000 years from now, those folks are going to live such extraordinary financial lives. We can debate what kind of levels of real happiness they might have, and we'll get to that later--you deal with that in the book--but they're going to be wildly more materially better off than us at current trends, unless we mess up, unless somebody messes up in the next 100,000 years. So, why should we sacrifice anything for them?

William MacAskill: Terrific. Excellent question, and I think there's two answers.

So, the first is that, unlike the kind of assumption that's normally not argued for in economics, I don't think we should be certain that people in the future will be much better off than us. To take the year 2300, I would say it's like 80% that they're better off than us, and maybe a lot better off--you know, given kind of compound technological progress. However, there's a 20% chance that actually they're worse off than us.

I don't think that's crazy at all. I think there could be widespread catastrophe resulting from new technology, all-out nuclear war, worst-case pandemics and engineered bio-weapons, or even just, like, stagnation and then decay of society--a kind of global fall of the Roman Empire. All of those things are just on my mind.

The second thing is that this argument, that future people will be richer than those today, that only applies to what economists call marginal harms.

So, if I'm making myself a little poorer in order to make someone in the future a little bit better off--and again, I completely agree that if that's what's going on, then yeah, that financial benefit in the future matters much less. But, now we take something different, like: Is that person in the future enslaved? It really doesn't matter where--how rich they were beforehand in this thought experiment. If they're now enslaved, that's not like a marginal harm, a marginal financial harm. It's something quite different. Or, if the person dies altogether. Or, something that raises more philosophical issues is if they never come into existence in the first place.

But, in general, I am not looking at these, like, marginal differences in how well-off we are today versus in the future. Instead, I'm looking at either just, like, catastrophic events that would make the world much worse or much poorer, or changes to values where perhaps they have much greater resources than we do, but they're using them for bad ends. They're using them to purchase slaves or inflict enormous amounts of suffering. And, those things, I think the idea they'll be better off is either not true or isn't applicable.

Russ Roberts: Well, if nuclear destruction or a pandemic spread by bio-weapon release is what's on your mind, I suspect you don't sleep very well at night, Will--I'm somewhat, a little worried about you. But, I understand what you're saying, especially having read your book. You're focused on preserving the opportunity for these billions of people in the future to flourish.

William MacAskill: Absolutely.

Russ Roberts: And, you want to make sure not just that they will come into existence--meaning not just that we'll avoid a nuclear disaster or a global pandemic of unimaginably worse consequences--well, they're imaginably worse, sorry--with dramatically worse consequences than, say, COVID. You're also worried about the possibility--to take an example that's not exactly in your book, but could have been--that an authoritarian leader would take charge of the world and inflict enormous pain on the billions of people that will come in the future. And, those are the kind of things that we ought to be focused on: preventing that, and encouraging the survival of the human species going into the future to enjoy all these benefits.

William MacAskill: Exactly.

14:37

Russ Roberts: Now, one argument that comes to mind is that, you know, we're doing pretty well without longtermism. That is, the focus that you want to bring to the moral calculus--to take more account of these future folks--you could argue we're doing pretty well, right? Nobody's been focused on these issues. You're trying to bring that focus; but hey, here we are 250 years or so into the Industrial Revolution; lifespans continue to rise. We've had some blips lately, but lifespans are rising, standard of living dramatically--dramatically--higher for enormous numbers of people. And, one could argue that the kind of focus that you want to have us have, of more concentration on steering the future wisely, is actually either hard to do or ill-advised. Because we don't know how to do it. What's the case for why we should care about this? Isn't it kind of going okay?

William MacAskill: Yeah. So, the first thing I'll say is, like, I'm sympathetic to thinking, at least as a baseline: what are the things that we've been doing in the past that have just, like, worked really well, including in unanticipated ways? And, we don't have, like, maybe a great story with detail about how that will transfer into the future, but let's just keep doing more of this kind of good thing. I'm certainly sympathetic to that as a baseline; and I do think that supports kind of, you know, making institutions better, and a more trusting and open-minded and liberal culture, and of course, innovation as well.

However, this argument, like, 'Hey, we've been doing well, things have been getting better,'--it does seem a little brittle to me. So, here's a question. What was the probability that the United States and USSR [Union of Soviet Socialist Republics] would have an all-out nuclear exchange? And, if I had to guess, it'd be something like one in three. Could have been higher. And now, let's go to that world where there was one. Would we be having a conversation which is like, 'Well, we're doing pretty well. Things have been[?] getting better. We've been, like, sure, we've been growing a lot. We've been having a lot of technologies that give us greater power. But it's generally been fine.' I think we probably wouldn't be having that conversation.

And so, I would like us to be in a world where that risk of all-out nuclear war was not one in three: instead it was more like 1%, or 0.1%, or basically as low as we could get it. And, that's precisely because I think, in significant part, we have been doing what I would say is surprisingly well. I mean, if you just look at what human history is like--it's a dark place. People in the late[?] past had miserable lives. They lived--the majority of people were in some form of forced labor, is my best guess. The world was extremely patriarchal. There was an enormous amount of suffering and ill health.

So, I guess I kind of agree that, like, there's a certain set of culture and institutions combined with innovation that's, like, going pretty well.

And, there are just meaningful risks of that not going well.

And, we've seen the first warning signs in the 20th century on both the value side and the technological side. On the technological side, we saw nuclear weapons. On the value side, we saw totalitarian regimes, even arising out of democracies. And, I think it would have been very unlikely for the Nazis to win World War Two, given how history actually was. But, it's not crazy--you know, it's not crazy to imagine a different roll of history such that actually it was Nazi fascist values that took over. That they were successful in establishing a 1,000-year regime. And, again, if it were you and I looking at that world, I don't think we would be saying, 'Oh, well, things are going well.' So, I just want to reduce those risks down.

Russ Roberts: Well, they weren't going well really in 1945, for sure. We don't have to have the Nazis win or a nuclear war. Between Fascism and Nazism, I don't know, 100 million people died before their time.

19:12

Russ Roberts: Now, that raises a different challenge to your claims, which is: you want to push the importance of morality and putting it front and center. The problem is which morality? Certainly the fascist and the communist thought they were doing something that was good. They enlisted the support of millions. They still have the support of millions. There are many, many people who still think Stalin was a good guy. Not so many for Hitler, but there are a few. And, it's hard to know what's right. And, I don't know if philosophy--you argue that we've bent toward a more moral world over the last few hundred years, 500 years. There's a lot of evidence for that, I agree, but there's some evidence that we're--maybe not.

William MacAskill: Okay. So, the key distinction here, I think, concerns fairly narrow, totalizing moral views. And, basically I agree with you that moral ideology can be a very scary thing. The Nazis or Stalinists had some very particular vision of the future, wanted to implement it, and were willing to justify atrocities in its name. And, that's something that should really scare us.

Here's a different perspective, though. This is a perspective of saying: We don't know what's morally right. We're probably still very far away from the kind of morally best view, if there even is such a thing as a morally best view--such that more enlightened future people would think of us as maybe a little better than the Romans, but not enormously better. And so, what we want to do is build a society that can have a great diversity of moral views, and a kind of culture and institutional setup such that those views can debate and reason and experiment. And, we can learn over time which moral view is right. And so, the best ideas win out on their merits rather than via conquest, for example.

Russ Roberts: Well, I really like that. Although, as we know, things don't always win out on their merits, or which merits are chosen. But, I think that there is a decentralized aspect of the book, for a few pages anyway, where you worry about the lack of diversity. You make a very thoughtful point that the worldwide response to COVID [coronavirus disease] was quite uniform. There were variations in how much people were locked down, how much authoritarianism was imposed, but there wasn't a lot of experimentation. Most of the countries of the world did something very similar; and we lost an opportunity to learn a lot. And, I think that observation is very important.

William MacAskill: Absolutely. So, there's a theme in the book that I don't really make explicit, but here it is: there's a tension, where some of the major risks of catastrophe are kind of failures of global coordination. So, the risk of a nuclear war, or development of technology that could destroy us, or carbon emissions as well--where the push there is towards greater centralization of the world. However, what I want to say is there are risks on that end, too, where greater centralization could mean we stall moral progress. In the worst case, the simplest case, you've got a world government; it's a dictatorship; there's an ideology that's locked in forever. But, even if we just have this gradual homogenization and people stop really trying to make moral progress, because people think, like, 'Oh, we've gotten to the pinnacle,' in the way that maybe the Romans actually thought they were the pinnacle of civilization, I think that could potentially be a catastrophe, too.

And so, when we're thinking about what sort of institutions do we want, we want to kind of thread this needle where you can have diversity of moral views, experiments, best ideas winning out, while at the same time kind of mitigating the worst risks.

And I think we've seen some evidence. I mean, it's interesting. You know, I think of the U.S. Constitution as this bold and, by historical terms, remarkably successful attempt to thread that needle. Obviously, I'm not claiming it's perfect, but it shows that you can have both of these things at once. Though, it is notable that the forces, I think, push towards centralization. Because the United States--okay, well, it could have been this kind of laboratory of democracy, with all of these different states pursuing very different things, where each state has a large state government and at the Federal level it's kind of more minimal.

Historically that's just not what happened, of course: there was, like, consolidation. And, there's strong evolutionary reasons for that, because the United States is in competition with other nations: it'll be more powerful. And, again, I'm not making any comment on right[?], is this overall good or bad? But, it's something that we should notice as, like, a general trend. And, though it's less salient than a catastrophe that would wipe out 99% of people, the catastrophe that would result from a failure to make moral progress over the long term is, I think, just as real and something we should be talking about just as much.

Russ Roberts: And, it's an interesting side note how recent the rise of the Federal government is in U.S. history--I really think it dates to the 1930s.

But, that phenomenon is in some sense an irresistible force in a world of competition. Certainly Europe has moved toward a more centralized situation, or tried to.

Now, we also have to talk about the fact that it's not designed. It's not like people sitting around saying, 'Hey, it would be good if we were bigger.' Some of it is a grab for power, especially a grab for power without accountability, which I think is the reason the EU [European Union] has not gotten much, much stronger and more powerful than it otherwise is. And similarly, the United States: there is some pushback on that consolidation for reasons of fear of the kind of things you're talking about.

25:52

Russ Roberts: Let me read another--a short excerpt from the book.

William MacAskill: Sure.

Russ Roberts: You say,

Future people count, but we rarely count them. They cannot vote or lobby or run for public office, so politicians have scant incentive to think about them. They can't bargain or trade with us, so they have little representation in the market. And they can't make their views heard directly: they can't tweet, or write articles in newspapers, or march in the streets. They are utterly disenfranchised.

Now, I think that's not true. There's a sense in which it's literally kind of true, but it's not true in the effective sense, because of the way people come into the world. And, I felt you neglected this aspect of the human experience. Which is to say: those future people you're talking about are our children, our grandchildren--or they're somebody's children, grandchildren, great-great-grandchildren--and they're not disenfranchised. We care about them quite a bit.

Now, it's true I care more about my child than my grandchild if my grandchild is not born, but the potential for my grandchild to be born, which I have in mind, is not ignored. I don't ignore it. Now, you could argue: yeah, but 20 generations from now is so distant, I don't think about it.

But, the fundamental principle that the future is born out of the present through the family, seems to me to take care of some of the things you're worried about.

William MacAskill: I agree that it takes care of some of the things. If we imagine a world where people didn't care about their children or their grandchildren at all, I agree we would be in an even worse place. However, you are right that this drops off pretty quickly. And, as I mentioned earlier, the lifetime of a typical mammal species is about a million years; that would mean 700,000 years to go. Okay, let's take someone--

Russ Roberts: I just cheered. For those of us not watching the video, I just gave a silent cheer that Will reacted to. We have 700,000 more years--or fewer, or worse. We've already used up 300,000 of our million; there's only 700,000 left. Could go the other way.

William MacAskill: Yeah. I mean, we also--I think we could last much longer than a million years. Earth will be habitable for hundreds of millions of years, and I don't think the sort of natural catastrophes that typically kill off species necessarily need to kill off humans.

But yeah, so, people care about their kids and their grandkids, and that's an important force for some amount of concern for future generations. I don't think it nearly matches the scale of concern that would be morally appropriate, given that the vast majority of future people are not even people's great-great-grandkids, but live past that point.

And, secondly, my point about them being disenfranchised is that, if we take action for future generations, it's via the views and values of people who do participate in markets, who can vote.

So, an analogy could be with non-human animals. Now, you might say--I might say--'I'm really concerned because there's a lot of non-human animals. I don't think people take them seriously enough.' And, you might say, 'Well, people care about non-human animals. They care about their pets,' and so on. And, like, yeah, that's true, but they are also disenfranchised.

Let's just look empirically at what happens to animals, then.

Well, pets get treated pretty well. But the 80 billion animals that are kept, almost all in horrific conditions, and then slaughtered--they have really terrible lives.

And, I think there's an analogy between that and future people where, for sure, we have some amount of concern about animals--not nearly as much as we should have. And that means that we inflict enormous and unnecessary suffering on them.

And, I think the same thing kind of happens to the future where there is a certain amount of concern, but not nearly as much as I think there ought to be.

Russ Roberts: Well, the animals' suffering is a really interesting issue. You're a vegetarian--you talk about that in the book. I'm not, but I think it's a serious moral question. And, I think a person who pretends to be moral, as I do, has to confront this. I like to think of myself as a moral person, so what the heck--what am I doing eating meat? And, I could say, 'Well, I don't eat it that often.' But that's not a very good--'Yeah, I at least only torture or torment animals a little bit.' I think, like slavery--an issue we'll come to in a minute, which you talk about quite eloquently in the book--I think most people who held slaves--many--found ways to convince themselves that it wasn't such a bad thing. And, I think many of us who eat meat have found ways to convince ourselves. And, we might be very wrong about that, just as those people, I think, who felt morally comfortable with slavery were wrong, certainly with the benefit of hindsight.

So, I think that you make a good point: it could be that my concern for future generations is something like my concern for animals--that I have a story to tell: they'll probably be okay, they're going to be richer than me--and therefore maybe I'm just fooling myself and finding ways to do what I want to do rather than what is correct. That's very possible.

William MacAskill: Yeah. Well, perhaps you'll have concluded one way or the other by the end of this conversation.

Russ Roberts: Yeah. Well, I already read your book, Will, and you haven't won me over yet; but this marginal hour could be what puts me over the edge, for sure.

31:48

Russ Roberts: I'm going to ask you a tougher question, maybe. Maybe it's an easy one.

William MacAskill: Sure.

Russ Roberts: Let's say I don't have any kids. I actually have four and just had my first grandchild. So, I'm more focused on the future I think, than I was a month ago.

William MacAskill: Okay. Congratulations.

Russ Roberts: Thanks. But, let's pretend I don't have any kids, or I'm not a particularly emotionally connected parent or grandparent, and certainly 10 generations from now just doesn't have any salience for me. And, let's suppose we believe that, say, climate change is going to have a catastrophic impact on humanity. I'm a little more agnostic on that and--well, not agnostic, I'm a little bit skeptical of that, of the catastrophic part. I'm open to the possibility that it could be bad, but the catastrophic part I think is a low probability; but you could say, 'Well, but still, such a bad downside, you should be very focused on it.' Okay. Well, let's say I'm not a particularly nice person: I eat a ton of meat because I don't care about animals, and I fly everywhere because I'm not worried about carbon dioxide emissions.

And, you're telling me I should worry about somebody 700,000 years from now?

Why? Why should I care?

Let's say they're never going to come into existence. That's a deep philosophical question, which you look at in some length in the book.

But let's say they come into existence and their lives are worse than mine, because there's a lot of plagues and there's bad moral views and institutions have degraded and there's been a loss of civilization. So what? Why should I care about their happiness? I got my own. Why should I care about theirs? What's the argument there? They're not my kids. Because I don't care about kids or don't have any. Why should I care about them? Why isn't my happiness paramount? I think that's a repugnant view--

William MacAskill: So, I think it's first--

Russ Roberts: First, I should just say: I think it's a repugnant view, but I'd like to hear why you think it's repugnant.

William MacAskill: Great. So, I think that, I mean there's one view you could have where you just reject any sort of moral reasons at all. You're just a pure egoist. Put that to the side and we can come back to it if you want.

But there's a second view, which is: Yeah, I have moral reasons with respect to people who are in the same generation as me, that perhaps I interact with, but not with people in the future. And, I just think that's a very morally unintuitive view. So, imagine--I give this story in the book of my hiking along a trail. I brought some glass; the glass shatters. Should I clean up that [?] glass after myself? And, I think the answer is yes. And, why? Well, someone might come along and cut themselves on it.

And, supposing we know that that will happen, does it really matter whether that person cuts themself tomorrow, or in a year's time, or a decade--or even if it was 100 years or 1,000 years? And, I think intuitively: no. As long as you're certain that that's going to happen, then harm is harm, whenever it occurs. And, I think we can explain that on more theoretical grounds as well, where morality in part is about impartiality. It's about taking seriously the interests of anyone who you're going to affect, especially when it comes to potentially harming them. And, does mere location in time matter? That seems like a pretty weird thing.

Russ Roberts: Well, I agree with that 100%, but why should I care about today? So, I break the glass and it's a nuisance to pick it up. I'm in a hurry. Why should I care about those other people? Why do I have an obligation to them?

William MacAskill: Okay. So, I mean, here are the fundamental questions--

Russ Roberts: By the way, one answer would be: Because I'm going to feel bad. I'm going to feel guilty. But I'm taking a case, a repugnant case, where I don't feel bad. In fact, I think I'm a sucker if I stop and pick it up. I'm going to just go on and do my thing.

William MacAskill: Yup. Great. So, this is one of the deepest questions in philosophy, which is: Why ought I to be moral? And, I think ultimately there's no non-circular answer.

Why should you pick it up? Because you have reasons towards other people. And, I should note that there's an equal argument for: Why should I care about myself? What reasons do I have for acting in my own interest rather than acting in the interests of others? Supposing I take that position and am this, like, anti-amoralist or something. What reasons could you give me for saying, 'Oh no, you should really care about yourself'? I don't think there would be any non-circular reasons. You would have to appeal to things like, 'Well, going to the movies will make you happy.' And, then I could always ask, 'Well, why should I care about being happy?' Or, 'It will give you a feeling of accomplishment to achieve this task,' and I could say, 'Well, why should I care about that?'

Ultimately, if you ask this, like, 'Why should I care?' you'll always just at some point have to point to the reasons. So, if it's 'Why should I go to the movies?', you could say, 'Well, because I'd be happy.' And, I think that's a good reason. And, if it's 'Why should I not cut someone else?', you would say, 'Because they will suffer.' And, I think that's the kind of bedrock reason. And, if I ask, 'Well, why should I care about suffering?'--there's no further reason that one can give.

Russ Roberts: Well, there is if you believe in God. I mean, through most of human history--at least civilized human history, since the advent of monotheism at least--there was a feeling that you had an obligation to the Creator of some kind. Different religions look at obligation differently. But, I think without that, it's really hard to argue for why you should care about other people. And, I'm not saying it isn't--we've found ways to sustain--well, I wouldn't say that. It's not clear whether--and let me ask--well, go ahead. Sorry. So, I'll let you react to that.

William MacAskill: Well, I just wanted to briefly say, I think God doesn't save us from this problem.

Russ Roberts: Why?

William MacAskill: Because you could ask the same thing. So, okay: Why should I pick up the glass? And, you say, 'Well, ultimately, because God wants you to.' And, I say, 'Well, why should I care about what God wants me to do?' Or, 'Why should I care about what God says is right or wrong?' Or if it's like, 'Oh, I'll go to hell,' then it's like: 'Well, that's back to the kind of self-interest question: why should I care about my own suffering?' So, again, at some point you are just drawing a line, or the theist is just giving one additional level of explanation. But, the why question can be applied to that, too.

Russ Roberts: That's a great counter-argument.

38:39

Russ Roberts: So, let me try to pile on against my own position. You're saying that a person who embraces, say, the categorical imperative as a rule to live by--Kant's, how would you call it? Not advice. Kant's--what's the word I want? [inaudible 00:39:02] His prescription[?] that you should live and choose according to an imagining that this was a universal rule, that everyone would act this way. And, then you look at how that world would play out and say, 'Well, if it played out really'--my favorite example of this is you sample the grape at the grocery store when they don't want you to sample the grape. And, you say, 'Well, it's only one grape.' But of course, if everyone did that, grapes are going to get more expensive; and it could be they want you to sample it, and so on. But, embracing Kant is no different than embracing the God of the Old or New Testament or of the Quran. It's just a belief that you've decided to take on for yourself that has no--you can't justify it.

William MacAskill: Yeah. I mean, at some point you hit bedrock. And, this is true not just for moral beliefs, but for other sorts of beliefs as well. Let's say you are skeptical of climate change altogether--you know, is it even warming? And, I'm like, 'Oh yeah, of course, see these papers.' And, you're like, 'Why should I believe the papers?' And, I'm like, 'Okay, because of science and these experiments.' 'Why should I believe that?' And, I'm like--maybe it goes all the way till I'm doing experiments in front of you. And, I'm like, 'Why should this experiment be [inaudible 00:40:24]--

Russ Roberts: Just a magic trick.

William MacAskill: Yeah. Or, at some point I'm giving you reasons, and if you're not accepting them as reasons, there's nothing more I can do.

Or take an even simpler example. I'm like, 'Two plus two equals four.' And, you're like, 'Why should I believe that?' I'm like, 'Well, one plus one equals two. So, two plus two equals one plus one plus one plus one; and one plus one plus one plus one equals four.' And, maybe you're like, 'Oh, I got the first two, but just don't buy the second bit.' I'm like--what can I say in response to that? And, I think it's nothing. I've given you a reason that's a genuine reason for you to change your beliefs. If you're not willing to accept it, there's no way I can get you out of what we might call a kind of epistemic black hole.

Russ Roberts: I do think, though, that religion as a social construct--not as an intellectual experience along the lines we're talking about, but as a social, cultural phenomenon--has the potential to restrain some types of behavior while encouraging others. There could also be negatives. But we have not shown for sure that intellectually pleasant writing, say, by Kant and Hegel, can substitute for it.

William MacAskill: Is it sufficient--

Russ Roberts: Hard to know.

William MacAskill: Yeah. Yeah. Maybe this will surprise you, but I completely agree, actually. I mean, I think of religion as, like, a technology or a social innovation--where, in particular, the thing that appeared in many different religious traditions is what's called Big Gods. These are Gods who are watching you while you're alone, while no one else is around--

Russ Roberts: It's a big breakthrough--

William MacAskill: and they care morally what you do. So, no one else is around. You could steal that bit of food. You could steal that money. No one would catch you. God could, though. God is watching you.

Now, that's great as an innovation. If everyone believes that, then you get a lot less free riding. You get a lot less cheating.

And, you are basically right. Like, how long have we been in a kind of post-religious era? I mean, we're not even there yet, really. The world is 16% atheist or agnostic. And, honestly, I just do worry about it. Perhaps you just do get free riding coming back. I guess at the moment not: people are still morally motivated. Maybe [inaudible 00:42:45]--something we should worry about[?].

Russ Roberts: It's fascinating. Fascinating question.

42:47

Russ Roberts: I'm going to ask a different version of it now. You have a thought-experiment where all but, say, 80 million people are destroyed in a nuclear war or a plague. So, we have a core group of survivors. And, you raised a fascinating question about how much technology would we be able to recover. Why don't you talk about that for a little bit; then I'll give you my variation on that and see what you think.

William MacAskill: Oh, sure. So, I wrote this chapter because--what are the things that could impact, not just the present but the long term? Well, there's this enormously important question of, like: How fragile is civilization? If there was some catastrophe that really knocked us off course, killed maybe let's say 99% of the world's population, would we recover?

So, it's obviously something we should just 100% try and prevent anyway, just because of the sheer loss of life it involves. But, would that also just prevent civilization from ever returning in the long run? And, I think probably no, actually. I think humanity is kind of remarkably resilient.

And, there's a few reasons why I think this. One is just that if you look at enormous but still smaller-scale catastrophes--like the Black Death in Europe, or even the bombings of Hiroshima and Nagasaki--you see people having remarkable resilience in the face of them. There are enormous amounts of suffering, but people respond, they build things, they restore society.

And, then a second reason for thinking that we would bounce back is just how much knowledge would be preserved. So, there are tens of thousands of libraries in locations that wouldn't be threatened by nuclear war, and which are sufficiently dry that the paper would actually survive for a very long time. There's also just the evidence of the tools that we've made. So, even if we went back all the way to pre-industrial technology, it's much easier to invent something if you've got a prototype of the thing, because people will know, 'Oh, there used to be this more advanced technology, and now I've got this thing: it looks like a tractor. Like, what's going on there?' You've at least got the idea for it.

And then the final thing is, if you just look at resources: Are there any particular resources that could be bottlenecks--that would simply prevent civilization from coming back to this state? There could be resources that have been used up, such that we wouldn't have access to them. But I haven't found one yet, and so I think that's unlikely to be the bottleneck.

Russ Roberts: The only thing I would add to that as an economist is something I've thought about a great deal, which is: How many people do you need to have a successful division of labor if they trade among themselves?

And, the example I use: You put a hundred people on a desert island--not a desert island--you put a hundred people on an island that has really rich resources. All the minerals you might possibly want, really fertile soil. And, you get to pick who the 100 people are. You can pick the smartest, most talented; you can pick diverse people in terms of their skills and insights--the knowledge they would bring, of the kind you're talking about: technology and implementation.

A hundred people are going to be really poor. I don't care how smart they are. I don't care how rich the island is in titanium. You just don't have enough opportunity to exploit the Smithian gains from the division of labor.

And, one of the miracles, I think, of modern times that we don't appreciate is that trade allows seven--now eight--billion people to specialize and do lots of different things they couldn't do if we were smaller. It's an interesting question with 80 million: How much time would we have to devote to just staying alive and keeping a subsistence level of wellbeing if all we could do is trade among ourselves, just those 80 million? How many people would we need to devote to certain tasks--farming being one? At a non-current level of technology--and we'd have a much lower level to start with--well, in 1900, 40% of Americans were farmers. So, if that were true in this post-apocalypse world, you've got 32 million of your 80 million people just doing farming. And now you're down to 48 million. And, how many of them could be making software for your smartphone? I mean, you'd be a long way from--it would take a long, long time, I think, to get back to anything like a modern standard of living.
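
[To make Roberts' back-of-the-envelope arithmetic explicit, here is a minimal sketch. The only figures are the ones quoted above--80 million survivors and the circa-1900 farming share of 40%, taken as a proxy for the farming share needed at a post-collapse level of technology--Econlib Ed.]

```python
# A sketch of the division-of-labor arithmetic above.
# Figures from the conversation: 80 million survivors, and roughly
# 40% of Americans working as farmers in 1900, used here as a stand-in
# for post-collapse agricultural productivity.

survivors = 80_000_000
farm_share_1900 = 0.40

farmers = int(survivors * farm_share_1900)
remaining = survivors - farmers

print(f"People needed for farming: {farmers:,}")    # 32,000,000
print(f"Left for everything else:  {remaining:,}")  # 48,000,000
```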

William MacAskill: Yeah. I mean, absolutely. I was talking about when, kind of not how long and--

Russ Roberts: Whether. Yeah--

William MacAskill: In fact--sorry--whether. Yeah. Sorry. Whether, not how long.

And, it absolutely would--the situation, if it's 80 million people left on Earth, would be considerably worse than a situation where there's 80 million people all in one location with a kind of preserved civilization; in fact, it would be spread out around the world. Like, in principle, I think you could have a self-sufficient economy. You know: if the United States were aware that all of the rest of the world was going to just disappear in 20 years' time and had to reconfigure its economy, I think it could probably do that with 350 million people without a massive drop in wellbeing.

That's at a full level of technology, a full level of specialization. Perhaps that's not needed; perhaps that's not true. With 80 million people, maybe I start to become a little more unsure--and certainly, then, with 80 million people spread around the world.

I mean, yeah, I would just absolutely predict that living standards would drop enormously. I was even kind of assuming we'd lose out on industrial-level technology. I think that's probably not true, but we'd lose out on a lot. And it would be a miserable period of human history [inaudible 00:49:01].

49:03

Russ Roberts: So, your thought-experiment prompted me to think of my own. It's a variant of what we just talked about a minute ago. Your focus in the book is on technological knowledge, engineering knowledge, and the ability to innovate.

I was thinking: What if the 80 million who survived had no religion and no knowledge of religion? So, they didn't have that technology you talked about--the sense that someone is watching.

Would it matter if we lost, say, the Bible, the Quran?

And then I thought: How about The Iliad and The Odyssey? How about Plato and Aristotle? What if all we had left was--we'd lost literature, we'd lost philosophy, but we'd kept the technological knowledge that you're talking about? We have all the toys and all the knowledge to make the toys and to continue to make better toys--which is what human beings do. Would it make a difference, or is all this humanities stuff just--I don't know. I don't know; what would you call it?

William MacAskill: Well, yeah, I actually think that this is enormously important--maybe even the most important aspect of civilizational collapse in the longer term. So, again, emphasizing that we want to prevent this--merely the fact of so many deaths in the near term is more than sufficient. But, I think: if the world came back, how would it be, in terms of its values, in terms of its institutions, compared to the world today? And over time, I've come to the view that, in particular with this kind of egalitarian, liberal, democratic worldview and set of cultures and institutions that is prevalent today, we're at least somewhat lucky. I think there are certain forces that mean this makes more sense, given the current level of technological development and technological change, than it did in the past.

But, if you tell me that, kind of, there's a catastrophe, the world comes back, we get to this level of technological development, but slave-owning is very widespread, or the large majority of countries in the world are authoritarian rather than democratic, I'm not in the least surprised--as in, I'm not like, 'Oh, that's an impossible fact.' And, I think that could make the world considerably worse, basically indefinitely into the future.

Russ Roberts: You talk about contingency--what would've happened eventually. You raise a question I think is quite profound. Slavery ended in England in 1807, I think--it was outlawed in England.

William MacAskill: So, slave trading was outlawed in most of the British Empire in 1807. Owning a slave was outlawed in 1833.

Russ Roberts: So, that was an amazing thing, which we sort of take for granted because of course slavery is horrible. And, then the Civil War comes along in the United States and the North happens to win. Didn't have to turn out that way. It could have lost, or they could have sued for peace and kept the South as a slave-owning alternative. Some people have argued the economics would've ended slavery eventually, but you make the case, I think quite provocatively, that that's not necessarily true and slavery could have persisted. And, therefore we should be very thoughtful about those kinds of social changes and the evolution of morality.

So, talk about Benjamin Lay--a person who deserves to be better known; I'd never heard of him--talk about Benjamin Lay and this whole question of moral values. Because I think of the way I phrased it about the loss of the humanities--in other words, if we lost our whole knowledge of how the world had evolved up till now, our knowledge of history and the so-called lessons of history and philosophy, would we recreate it? And the answer you're suggesting is: Probably not. So, this whole idea that things are not necessarily destined, and that there are some individuals who push the path in a certain direction, is very thought-provoking.

William MacAskill: Yeah. And, in particular, I think some things are destined and other things aren't. So, if I'm trying to build some new technology, I'm like, 'Hey, I'm making the world better.' And that's kind of true in the short term; but in the long term, I think the intense incentives for technological innovation across many different kinds of cultures and moral views mean that, by and large, we'll get there eventually.

In the case of moral changes, though, I think that's much less obvious; and at least in many cases, things could go either way. So, there have been many, many moral change-makers in the world. I just kind of highlight this one particularly notable example because the story is so wonderful.

So, yeah, Benjamin Lay is a Quaker. He's a dwarf. He refers to himself as 'Little Benjamin who beat Goliath'--like David, who beat Goliath. And, he is among the earliest people that we have records of to be really pushing for the end of slavery in a way that looks to us now like a social campaign.

And, he was born towards the end of the 17th century, and most of his actions were in the early 18th century. And, he just harangued the Philadelphia Quakers in particular, at kind of every opportunity, about slave-owning, where he would engage in this kind of amazing guerrilla theater. So, he would heckle the people who stood up to speak: they would be giving some moral sermon and then he'd be like, 'Oh, there's another Negro master.' And, he would get kicked out of the church; and he would just lie face down in the mud, so that afterwards, when everyone had to leave, they had to step over his body. Or, he would just stand in the snow in bare feet. And, when people were like, 'What are you doing?' he would point out that slaves had to be out in the cold, just as he was, all winter long.

In his most famous stunt he brought a Bible that was filled with fake blood to the 1738 meeting of the Quakers, and said it was as great a sin to keep enslaved people as to stab the Bible. And so, he stabs the Bible, and fake blood spatters all over the audience.

And so, his direct causal influence is not exactly clear, although he was certainly influential on people like John Woolman and Anthony Benezet, who were then enormously influential--and that's kind of better-documented. And, his era coincided with the Quakers dramatically reducing the extent of their slave-owning.

And so, I kind of use him as a vivid story of--I mean, I call him a moral weirdo, but a kind of moral agitator. So, someone who really had this moral view that, in retrospect, we think was completely correct. It was heterodox at the time. He stood up for what he believed in and was willing to make major kind-of sacrifices.

So, you know, he boycotted all slave-produced goods. He lived in a cave, so that he was, like, further opposing consumerism. He was vegetarian at the time--ties into our earlier conversation.

And, ultimately he was part of this larger campaign that was enormously successful--maybe one of the most successful moral campaigns ever--in which Quaker thought became packaged as part of Enlightenment thought and convinced the British elites and the British public; the British Empire chose to end slavery and tried to basically bribe or threaten other colonial powers to end slavery, too. And, over the course of, ultimately, kind of 300 years, slavery went from utterly widespread--where, like I say, the majority of people in 1700 were in some form of forced labor--to now being kind of unthinkable, where even on a broad understanding of forced labor, that's only 0.5% of the world's population; and in every country, it's illegal.

And, that's just a remarkable thing. And, prior to learning about this, I would've thought, 'Yeah, this is inevitable. It's either just like the inevitable march of moral progress, or it's just the result of economic changes.'

And, I no longer think that's the case. I think it was largely a matter of cultural changes--such that if you could just re-run all of history and you told me that we had today's level of technological development but widespread forced labor, widespread slavery, I wouldn't be, like, totally shocked.

58:05

Russ Roberts: It's a fascinating thought-experiment. Now I'm going to give you the challenge of this kind of thinking. I'm going to read what you say about Lay, Benjamin Lay:

Lay was the paradigm of a moral entrepreneur: someone who thought deeply about morality, took it very seriously, was utterly willing to act in accordance with his convictions, and was regarded as an eccentric, a weirdo for that reason. We should aspire to be weirdos like him.

And I thought: it's kind of also true of Hitler. A moral entrepreneur, who thought deeply about morality, took it very seriously, was utterly willing to act in accordance with his convictions, and was regarded as an eccentric, a weirdo--until it suddenly became mainstream, if you were German, to believe that Jews were the source of the world's problems and that therefore it was okay to murder them.

And, my challenge, as a person who has embraced the motto "It's complicated," is that it's hard for me to be--what's the phrase you use?--"utterly willing to act in accordance with his convictions." For me, it's hard, because I'm aware that I could be wrong, and I try to be open to the possibility that I could be wrong. If you feel that way, you're not going to be Benjamin Lay--which is a shame--but you're also not going to be Hitler, which is a good thing.

So, it raises the question of: How do you know that your moral conviction and your eccentricity is headed in the right direction?

William MacAskill: Yeah. I mean, huge questions, and this stuff is extremely tough. There's this balance of, like: Okay, we want diversity of moral views. We want moral views to have air time. We should be aware that, in the past, moral views that would've been certainly laughable--and I think in some circumstances even repugnant: being against slave-owning, giving rights to women--we now think of as major moral advances. At the same time, some views just are morally repugnant.

One thing I should certainly say is that an enormous difference between Hitler and Benjamin Lay is a difference of means. So, Benjamin Lay was agitating. He was making arguments, he was engaging in peaceful public protest. It was via the power of reason and empathy that he managed to convince the Quakers. Then they managed to convince both the British elite and the public, and it was [inaudible 01:00:48]

Russ Roberts: That's just because he didn't have much of a chance. The early Hitler did the same things, too: wrote a book, held protests, started a social movement. Once he got power, then he could really implement his vision. And Lay never got that power, you could argue. So, he was insulated from that.

William MacAskill: Yeah. I mean, honestly, I'd just be really surprised if Lay, had he gotten power, would've implemented a dictatorship. However--

Russ Roberts: What about John Brown? John Brown, he was an angel with a scythe. He was happy to cut people down because he thought slavery was evil; and he might have been right. It's complicated.

William MacAskill: Yeah. But, the thing I should say is: Well, we want to set up society in the right way, where, firstly, you cannot use conquest or violence to achieve your ends, because that is not a method of getting to better moral views. Instead, it should be reason and empathy. But, here's the challenge: maybe that's not enough, because maybe Hitler was just this very powerful orator and was then able to convince people. The dividing line between argumentation and brainwashing is perhaps a hard one to draw.

And, then perhaps I am just able to [?]--perhaps I am just with you, like: Man, it's complicated. We obviously want to distinguish between brainwashing and rational persuasion, I would think--and between deception and power-grabbing. I would put Hitler as much more like the latter than the former. But how do you actually implement that in a society, such that you get the former but not the latter? And, I'm like: It's tough. I think it might just be this enormous, enormous challenge.

But, I do think we can do better. It's dispiriting[?] to me, but take politicians: we kind of expect politicians to lie. If they do, maybe they get a little bit of criticism. It's not kind of the end of the world. I'm like: this is terrible. We're trying to have a world where we get to better views, and the most influential people in society either outright lie, or willfully neglect the truth--they just don't care--or say things that are technically true but actually designed to mislead. I just think that shouldn't be allowed. I think that should be, like, an absolute scandal when it happens.

And so, I think we can at least move in a direction where the powers of non-rational persuasion are kind of muted and the powers of argument and reason are greater--are kind of winning out.

Russ Roberts: We cancel people for the wrong things, you're suggesting, maybe. I was just thinking on the personal level.

I read a very provocative essay this morning by William Deresiewicz, who argues that leadership comes through the careful study of how the world works: you come to have convictions, and you live by those convictions; and people follow you because you're authentic and you've understood something profound about the world. I like to think that's a good idea. So, I'm certainly not suggesting that you should have no principles. I think you should have very strongly-held principles, and you should come to them thoughtfully rather than just adopting whatever is in the air. And that's really what he's talking about in that essay. We'll put a link up to it.

William MacAskill: Yeah. There's this motto that I'm sympathetic to, which is: strong opinions, weakly held--

Russ Roberts: I almost said it a minute ago, yeah--

William MacAskill: Okay. There we go. Yeah. So, it's perfectly compatible for me to have some view and really defend it--I'm, like, 'No, it's got to be this,' etc.--but if I hear an argument that I think is decisive against my view, I switch. I'm like, 'Oh no, you are correct.' Or: I've got various strong views, but that doesn't mean I want to promote some violent revolution to get those views enacted. Instead, I want to achieve that end via rationally persuading people, and perhaps that involves going by a number of incremental steps. And, like I say, that's the kind of attitude that I'm gesturing[?] towards.

1:05:33

Russ Roberts: So, there's a rather lengthy section of the book where you riff on the old joke: two owners of pushcarts on the Lower East Side [of Manhattan--Econlib Ed.] are talking. And one says, 'You know, sometimes I wonder if it had been better never to have been born.' And the other guy says, 'Yeah, but who's that lucky? Not one in a million.' And, you ask the question: Would future lives--I don't think I've ever told that joke on EconTalk; it's hard to believe, but I don't think I have.

William MacAskill: It's very appropriate because I'm in the Lower East Side as you say this.

Russ Roberts: It's deep. It's deep. It's not, it's not--

William MacAskill: I'm going to use this one.

Russ Roberts: So, in your book, you ask the question: Is every human life of the future of value, and is it possible that there are people living lives of net negative value who would be better off never having been born?

And, I found that shocking in the following sense. You tried to find evidence: you looked at [?], you looked at [?], you surveyed happiness surveys, you surveyed evidence about people's self-reported happiness or satisfaction of various kinds. For me, the question is simpler than that. As an economist, using the idea of revealed preference: it's not that hard to end your life. Most people don't want to, which suggests that it's a net positive. Again, if you're a religious person, it's a relatively easy question to answer; but if you're not a religious person, I'd still say it's pretty easy. What do you think?

William MacAskill: Yeah. I think this depends crucially on your theory of wellbeing. So, economists normally assume what's known in the philosophy literature as a preference-satisfaction view, where what's good for you is, fundamentally, getting what you want. And, perhaps we can make that--I mean, well, caveat: philosophers normally want something more sophisticated, like: it's getting what you would want yourself to want, where you are ideally well-informed, in a cool, considered moment, and have had time to think about things.

At least something in that area is kind of what the economist assumes.

It's not obvious to me that's the best view. It's actually not my preferred view. My preferred view of wellbeing is that wellbeing is just a matter of conscious experiences. So, positive conscious experiences--like happiness, joy, bliss, meaningful moments--and the avoidance of negative experiences--like suffering, misery, depression.

If you have that alternative view--so, okay: basically, I agree that the world looks rosier if you have a preference-satisfaction view, because I think people are getting what they want. If they're not [?]--again, if they're not getting what they want to such a significant extent that they think their lives are net negative, they can end their lives.

If instead you have this other view--that it's about what experiences people have--then you think, 'Look, people have preferences. They correlate with what's best for them, but not perfectly. In some cases they can be significantly biased.' And, then you ask, 'Well, from the perspective of evolution, where would we be likely to be biased--where would our preferences not map onto what's best for our wellbeing?'

Well, one case certainly would be a preference against dying. Because, imagine if there were just some subspecies of humans that ended their own lives at very significant rates. Well, that subspecies would not do well.

And, I think it's a bit plausible that that's the case. So, that makes me give, like, some weight to this argument. But not enormous weight.

The second kind of argument is even from--yeah--cultural evolution as well, where it's pretty notable that in kind of major moral traditions, there are these very strong prohibitions against taking one's own life. Why is that? That's curious.

I think a good argument would be, like: Well, maybe people's lives in the past were just really bad, and rates of people taking their own lives would've been very high, or at least much higher, were it not for those prohibitions. And so, there was this cultural evolutionary force to say, 'Look, even if you really want to, it's morally bad to do so; you should not do so.' Because otherwise, the rates would be much higher. And so, basically, I think that's some evidence. It's by no means decisive.

1:10:23

Russ Roberts: You argue that one of the things you can do to make the future better is to have children. And, that flies in the face of many people's intuition. I think it doesn't fly in the face of mine, but I think many people would find that surprising. Make the case.

William MacAskill: Sure. So, let's just--the easiest is to start off with the counter-case. So, at the moment, in countries like the United States, people have less kids than they want to have. I think they want to have, like, 2.6 on average, and they have 1.8[?]--

Russ Roberts: They have fewer kids than they say they want. It's not the same thing. Just FYI [for your information].

William MacAskill: As a good revealed-preference economist--they have fewer children; and thank you for correcting my grammar, too.

They have fewer children than they say they would like to have. Sure.

And, that's for many reasons. But, one idea that's getting more currency is that it's immoral to have children because of the impacts on climate change. And, it is absolutely true that having a child will cause more CO2 to be emitted into the atmosphere, because of the existence of that additional person.

However, I want to say two things.

Firstly, you can nullify that harm by offsetting. And, in fact, you can nullify it 100 times over. So, the cost of raising a child--in the United Kingdom, it's like £10,000; that's probably like $13,000, $15,000, let's say, in the United States--per year. By donating to extremely effective climate nonprofits, you can avert a ton of CO2, in expectation, for about a dollar.

So, let's say you increase the cost of raising a child by about 10%. Instead of spending $15,000, you spend $16,000, and the $1,000 difference goes to highly effective climate nonprofits. Then you have offset the carbon impact of that child a hundred times over.

So, it's really playing it safe in that regard. And, you've not enormously increased the cost of having a child.
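
[A rough check of MacAskill's offsetting arithmetic--a minimal sketch. The donation and cost-per-ton figures are quoted above, but the child's annual emissions figure is an assumption for illustration, not a number from the episode--Econlib Ed.]

```python
# A sketch of the offsetting arithmetic above. Quoted figures: an extra
# $1,000 per year donated, and about $1 per ton of CO2 averted in
# expectation. Assumed (not from the episode): a child's footprint of
# roughly 10 tons of CO2 per year, a common rich-country per-capita estimate.

extra_donation_per_year = 1_000   # dollars per year, as in the example
dollars_per_ton_averted = 1.0     # quoted cost-effectiveness figure
child_tons_per_year = 10.0        # assumption for illustration

tons_averted = extra_donation_per_year / dollars_per_ton_averted
offset_multiple = tons_averted / child_tons_per_year

print(f"Tons of CO2 averted per year: {tons_averted:,.0f}")  # 1,000
print(f"Offset multiple: {offset_multiple:.0f}x")            # 100x
```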

So, that's the first thing.

The second thing is a kind of deeper thing, which is that if you're just looking at the carbon impact, you're only looking at one side of the ledger. Yes, people do things that are net harmful for the world--like too many carbon emissions. But they also do enormously positive things as well. So, they contribute to--and this is all kind of assuming that, like, you're able to bring up the child well, and it's able to flourish as it becomes an adult.

But, yeah, they contribute to society. They help build infrastructure. They pay taxes. They innovate. They can be moral change-makers, like Benjamin Lay, who, you know, improve the trajectory of civilization, too.

Russ Roberts: They could be a good friend.

William MacAskill: They could be a good friend. Exactly. They contribute in many ways.

And if--once you look at both sides of the ledger, I think the positives win out against the negatives.

I mean, the final thing is just that if the people will have sufficiently good lives, I think it's a benefit for them, too. Like, I'm happy that I was born--unlike the people on the Lower East Side that you referenced, who were wishing they weren't born. I'm, like, very happy to have been born. I feel very lucky.

And, you know, one way of thinking, 'Well, okay, how do the positives and the negatives weigh up?' is just to think: Well, suppose there'd been half as many people ever throughout history--where would we be? What if I was born as the one person after 50 billion, rather than the one person after 110 billion? Well, I would be a farmer. I would not have anesthetic. I would be working 12 hours a day. I would probably be in some form of forced labor. I would not have freedom--I would not have much freedom over who I marry. I would not be able to travel. It would be a really pretty bad life.

And, the fact that we have a world today where we have a high material standard of living and that we have made some moral progress, that's in significant part a numbers game: the fact that we've had so many people who have contributed in a net positive way to society.

And so, one thing I'm certainly not saying is that everyone should go off having as many kids as possible--and certainly not that the state should get involved. All I am saying is: 'Look, it's not a bad thing, morally. In fact, I think it can be a good thing, morally.'

There are many other good things you can do. You can donate to charity. You can volunteer. You can have a career that has impact. But this is one way, I think, of making the world a better place: to have kids and bring them up well.

Russ Roberts: My guest today has been Will MacAskill. His book is What We Owe the Future. Will, thanks for being part of EconTalk.

William MacAskill: Thanks so much for having me on. It was a really fun and interesting conversation.

Russ Roberts: Ditto. I agree.