Intro. [Recording date: August 31, 2022.]
Russ Roberts: Today is August 31st, 2022, and my guest is neuroscientist and author Erik Hoel of Tufts University. His Substack page is "The Intrinsic Perspective," and his novel, The Revelations, deals with consciousness, love, and the mystery of the brain. Our topic for today is a recent essay he's written on his Substack page that we will link to, "Why I Am Not an Effective Altruist." Erik, welcome to EconTalk.
Erik Hoel: Thank you so much for having me, Russ.
Russ Roberts: So, let's start by talking about Effective Altruism. We've had both Will MacAskill and Peter Singer on the program to discuss it. They are the co-founders of the Effective Altruism [EA] movement. Listeners who pay attention will know that I'm a skeptic of sorts, and like you, I am not an Effective Altruist, but I think there's some good things about it. And you do, too. But we'll start with, let's say what it is and what's good about it.
Erik Hoel: Yeah. So, I think Effective Altruism is a new movement that is, sort of the short way to say it would be, based off of moral philosophers and funded by billionaires. The idea of it is to create a number of institutions which give away money, which is a very admirable thing to do, I think, in general. I think everyone--most people agree. They do it in a way that they deem 'effective': so, effective altruism. And, this is kind of the difference between traditional, maybe, charity or altruism.
If you've seen, say, the movie Moneyball or read the book, where in baseball, there is this sort of statistical revolution where they're trying to sort of improve baseball and they realize they don't need really big names: they just sort of need all the statistics to add up, and they can do it for much cheaper.
Effective Altruism, at its simplest, is just the Moneyball of charity. So, proponents will give examples like: you can train a seeing eye dog for a blind person in America for $40,000; but for that same $40,000, you could prevent 500 cases of blindness in the Third World. I don't remember what the exact numbers are, but those are the sort of examples that they give, and you can see the Moneyball aspect to the charity.
I think, in general, everyone--most people like that at least to some degree or they don't mind it. They don't think it's some--certainly, they don't think it's an evil thing or a bad thing.
Lately, as the billions have poured in, they've also become more ambitious, and the movement has begun to pivot to high-profile conceptual issues like AI [artificial intelligence] safety and long-termism, which is really caring about the deep future of humanity. Again, these things are, I think, good and people mostly agree on them.
But, the actual core of the philosophy is based, as I said, on these moral philosophers' conceptions of utilitarianism, which is maximizing the good for the greatest number of people, with 'the good' defined in some mathematically capturable way.
Due to this core of utilitarianism, the movement ends up having a number of repugnant or strange conclusions. And, it's not so much that the movement itself is bad as that, I think, in order for them to continue to gain mainstream acceptance, they need to leave behind a lot of these more original utilitarian conceptions and basically just become an organization that does cool stuff with billionaires' money.
Russ Roberts: So, I just want to expand a little bit on what the movement is and what I like about it, and then we'll get to the utilitarian part of it.
What you didn't emphasize is that it's scientific in the minds of the effective altruists. You don't just prevent 500 cases of blindness. You're going to make sure that your dollar has the biggest bang for the buck, and you're going to use--well, actually, it's not science, it's social science usually, so it's a little trickier, and listeners know I'm skeptical about this--but the idea would be: Don't do the $40,000 for the seeing eye dog, and don't do it because your uncle was blind and it gives you emotional satisfaction to do that. Make sure that your money has the biggest impact it can possibly have, and that might be preventing blindness in Africa or in some other poor country, or it might be deworming--which was the cause du jour for a long time. I don't know if it still is; and the research that underlay that conclusion was questioned.
So, it is much more complicated. But the fundamental idea that you should care about the impact of your money and not just the warm fuzzies it generates when you're lying in bed at night is an interesting and very thoughtful and provocative concept, and I'm very sympathetic to it.
The fact that they care about that is, I think, a really good idea--and a big idea. And I think that's great. But I do think it has intellectual problems that have caused it to somewhat go off the rails. Let's go into that.
Now, you argue in your essay that at the heart of effective altruism are the trolley problem and Peter Singer's shallow pond, two things that have both come up a number of times on EconTalk, and we'll link to those episodes. Describe what those are to start with, and then tell us why they're problematic. Because, on the surface, I mean, who could argue with them, you might think?
Erik Hoel: Yeah, absolutely. I think that with the tendency to start from simple thought experiments and then to expand upward into how we should spend billions of dollars, we should immediately wonder if there are some issues of scale or concept or complexity there.
So, the two motivating thought experiments are, as you said, the trolley problem and the shallow pond. So, the trolley problem is almost a meme at this point. So, a lot of people have seen it, but very quickly: There's a trolley going down the tracks. You are near a lever that can switch the tracks, and on the track that it's going down there are five people, but you could switch it to a track where there's one person. And then the question is, do you switch the track?
Now, a lot of people assume that the correct thing to do is to switch the track. If you actually ask people, a significant minority will say, 'Do not switch the tracks. Don't interfere with the natural order.' But generally--again, this is very dependent on the classically so-called WEIRD demographics [Western, Educated, Industrialized, Rich, Democratic] of the Western undergraduates on whom all these studies are done--but if you ask sets of Western undergraduates whether or not to do it, I think it generally ends up being a majority that say to switch the tracks.
Russ Roberts: That is: Make sure the train only kills one person if otherwise it would kill five.
One of the problems I have with these kinds of problems is that there's nothing in life like this. And that sounds like a cheap shot, but it's not, because the same problem arises with funding the deworming, as if you know for certain that if you spend this money on this instead of that, the outcome will be this other thing. And usually it's not necessarily clear; but okay, let's play along. Go ahead.
Erik Hoel: Yeah. It's sort of like how in politics all the non-controversial issues are easy to solve and therefore they don't crop up in national debate; but then all the controversial issues that can really split people down the middle are exactly the things that people run on. Similarly, all the easy-to-solve trolley problems that are out there in the world have mostly easily been solved, and then we're left with the things that may look like trolley problems but aren't.
Anyways, the point of this is that, yes, it's better to save five people than save one. That's supposedly the point.
The problem is we could immediately start to complexify the trolley problem. So, the classic complexification of it that shows the problem with the argument is that we can imagine a case of a surgeon who has five patients on the edge of organ failure, and then they go out at night hunting and they find some innocent young person and they pull them into an alleyway and they butcher them for their organs, and then they save five of their patients.
And, I think anyone who has not completely bitten the bullet of so-called utilitarianism--which would generally advocate for stuff like this, or which is the term people use to describe those who would advocate for stuff like this--most people would find that deeply repugnant. They would say, 'We just can't live in a world where surgeons pull people in off the streets and kill them,' and that is somehow deeply evil and unfair. But the math is the same. It's one person for five. So, why shouldn't you do it?
I think rather than thinking this is a problem of complexity, we should think: Well, maybe this is actually a problem of simplicity, where the trolley problem is just really, really simple.
Russ Roberts: Of course, the other issue would be--forget the rogue surgeon--you are morally obligated after a certain age to show up at the surgeon's office and put yourself up for donation prematurely before you die because you can save five people, and if they're going to be happier than you, the total world happiness--you know, that phrase, 'Greatest good for the greatest number of people,' is so compelling. It's got a romantic ring to it. But when you try to figure out what it actually means in real life, it is much, much trickier.
So, complexifying the trolley problem. Continue.
Erik Hoel: Yeah. So, we can immediately see how these problems, which give people the intuition pump to agree with utilitarianism, can quickly be complexified in such a way that the majority of people will now say, 'Wait a minute. Something must be wrong.' They throw up their hands.
The whole effective altruism movement, I think, really has a very particular intellectual origin, which is in this other thought experiment, which is Peter Singer's shallow pond--Peter Singer being a world-famous, top-tier contemporary philosopher. And, he wrote an article called "Famine, Affluence, and Morality," I think back in the 1970s--this was during a terrible famine, I think in Bengal--and in it he gives this analogy: if he's walking to work and he passes a shallow pond and he sees a child drowning in it, he says, everyone on earth would wade in to pull the child out. Right--like, most adults would immediately jump to that. Your clothes will get muddy, but it's an inconsequential thing. You'll just pay the dry cleaning bill.
But he then says, 'Well, maybe the dry cleaning bill could literally save a child somewhere--some Bengali child out there who is undergoing this famine. And, why are you not morally obligated to act on that, when you would be obligated not to walk by a child drowning in a shallow pond?' And again, I think that it's a very persuasive thought experiment. I think a lot of people would say, 'Well, when you put it like that, it does seem immediately obvious that maybe I should consider Bengalis to be proximal to me and that my actions can really influence them, because they can. All I have to do is click Accept on some credit card charge or something.'
And again, I think that there's very little to object to about the particular thought experiment. Just like with the trolley problem--I personally think that we should switch the levers in that very particular case. And in the shallow pond, I think that it's very obvious that if you can donate a hundred dollars and maybe easily save somebody, then maybe you should.
But, the problem is that, again, when we change the scale of the thought experiment or complexify it, we immediately run into things that don't look good at all.
And, this was actually an original critique of utilitarianism proposed by another philosopher who was very famous, Derek Parfit, and he proposed this notion called the Repugnant Conclusion. The Repugnant Conclusion is that if this were true, then what you should do is effectively try to arbitrage away all the inefficiencies in the world such that everything just goes to saving the lives of people--and also to having as many people as possible, right? Because we all agree, the more people you save, the better, right? So, these are all consequences of this, as you said, beguiling definition of utilitarianism of maximizing the good for the most people. So, more people is better; and it's actually going to be easier to create a world where there's just a huge number of people, where everything is the slums of Bangladesh or something, and everyone lives a not-great life just above the poverty line or something.
And, that is this repugnant conclusion that seems to follow from the reasoning of the shallow pond, just now applied at scale.
Russ Roberts: Now, let's dig into that because I think that might be hard for people to follow. It's a brilliant way you put it. There's utility--which is the economics jargon for wellbeing. So, some people have higher wellbeing than others. So, I'm in the West, and I make a very good living; and there are people who are near subsistence and death. They're not just not as well-off as I am: they have very, very bad lives. So, that justifies--well, let me say it a different way because this is the Peter Singer way, and I'll remind listeners that the shallow pond is the centerpiece of his book, The Life You Can Save. We did an interview on that a while back. We'll put a link to it.
So, I want to throw a birthday party for my five-year-old. Well, that's immoral, because that money--my five-year-old will be a little happier, but my five-year-old is already really happy. So, compared to making a five-year-old in America, or Israel where I live now, a little happier, transforming the life of a five-year-old in a poor country is a moral imperative. I cannot have the birthday party if I'm a moral person. I must use that money to buy a malaria bed net and save the life, say, of a five-year-old in a poor country.
So, the Parfit reductio ad absurdum is that, seemingly--as you said--on the surface that's a nice idea. You might even go to your kid and say, 'We're not going to have a birthday party this year. We're going to help somebody far away who we don't know, but who still has a very tough life, and we feel obligated to help them if we can.'
But the implications of that go much, much further. The example I gave recently in the conversation with Kieran Setiya was not only can I not have the birthday party if I want to be moral, but I need to not spend time with my son. I need to be doing some consulting work because I can take that money and I can save 10 lives in that poor country with the malaria bed net.
So, even though my son will long for me and perhaps resent a little bit my neglect of him, his level of happiness will still be dramatically higher than that of the people who don't have the bed nets if they don't get them. So, it's a moral imperative, then, to get the bed nets, so I need to consult and ignore my son.
So, that's an example of that arbitrage.
But, to get to your punchline: Fill it in a little bit more, this idea that everybody's going to end up fairly miserable, but 'There'll be so many of them it'll be worth it.'
Erik Hoel: Yeah, precisely. I think that this notion of arbitrage or trade is really at the fundamental heart of it and treating morality like it's some market where we can trade things.
So, if I have a certain wellbeing, my wellbeing as someone who lives in the First World and in the United States is probably worth a lot, in the sense that I could sell it to improve the wellbeing of other people really significantly. Right? And I would only have to sell a little of mine in order to substantially improve the wellbeing of others.
And again, immediately that sounds, 'Well, that doesn't sound bad. That, in fact, maybe sounds kind of good.'
But then, when you think about it--'Well, when do I stop? When do I stop arbitraging wellbeing?'--the answer is: You never stop. Right? If you're really maximizing the good, you should just keep arbitraging until the cows come home, and what ends up happening is that everyone ends up with a life that's only just barely above subsistence level, because all the extra wellbeing has been arbitraged away such that it's been fairly distributed among everybody else.
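This "never stop arbitraging" logic can be made concrete with a toy simulation. Purely as an illustration--log utility over wealth (so a dollar means more to a poorer person), made-up starting wealths, and a fixed transfer step are all assumptions, not anything from the conversation--a total-utility maximizer keeps moving money from richest to poorest as long as the sum improves, and it never stops short of equality:

```python
import math

# Toy model of 'utilitarian arbitrage' over wealth, assuming log utility
# (diminishing returns). Any transfer from a richer to a poorer person
# raises the total, so a total-utility maximizer transfers until equality.

def total_utility(wealths):
    return sum(math.log(w) for w in wealths)

# Hypothetical starting wealths: two well-off people, two poor ones.
wealths = [100_000.0, 50_000.0, 1_000.0, 500.0]
step = 10.0  # amount moved per transfer

# Keep moving `step` from the richest to the poorest person
# as long as doing so increases total utility.
while True:
    rich = max(range(len(wealths)), key=lambda i: wealths[i])
    poor = min(range(len(wealths)), key=lambda i: wealths[i])
    candidate = wealths.copy()
    candidate[rich] -= step
    candidate[poor] += step
    if total_utility(candidate) <= total_utility(wealths):
        break  # no improving transfer remains: holdings are (nearly) equal
    wealths = candidate

# At termination everyone sits within one step of the common mean:
# the arbitrage only halts once all the 'extra' has been spread flat.
mean = sum(wealths) / len(wealths)
assert all(abs(w - mean) <= step for w in wealths)
```

The design choice doing the work is the concave (log) utility function: with it, every rich-to-poor transfer is an "improvement," so the maximizer's stopping point is full equalization, which is exactly the endpoint Hoel is gesturing at.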
And note, this notion of arbitrage is, I think, so deeply embedded in the EA [Effective Altruism] movement due to its inherent utilitarianism. Currently, one of the main funders of EA is Sam Bankman-Fried--who is a really smart young man; he's now a billionaire, and he's pledged most of his wealth to charity, which again is, I think personally, an admirable move. But how did he make his money? He made his money in 2017 when the price of Bitcoin in Japan rose rapidly and outran the price in America, and he arbitraged the price difference away. He did one of the most famous arbitrage trades of the last 10 years.
And so, that's where the money for the effective altruism movement comes from. It comes from arbitrage, right?
So, in a sense, it really is the fundamental mindset of the movement--which, again, I have to preface, I think ends up doing a lot of good in general to the world. I just disagree with some of the most fundamental assumptions, particularly around this notion of maximizing.
Russ Roberts: Parfit calls this the 'Repugnant Conclusion.' That's the rogue surgeon. It would be this: 'We're all at borderline subsistence, but there's a lot of us, so the total is higher.' It's such a weird idea that I don't know if any actual effective altruist would agree with this conclusion, to be fair to them. It's a weird thing to me to suggest that, 'Let's make a lot of people'--although it's a little bit implicit in long-termism, and Will MacAskill was recently on the program discussing it. It hasn't aired yet--so, Erik, you haven't heard it--but it will have aired by the time this comes out. But this idea that, 'Well, okay, most people will be miserable relative to the happiest people today, but there'll be so many of them that the total amount of happiness will be high, higher than under this unequal distribution of wellbeing.'
At the heart of it--and a lot of this is embedded in economic policy-making and economics' notions of social welfare, and we may come back to this in a little bit to talk about it--but it implies an ability to add up wellbeing across people. And I just would add that Bentham, the Father of Utilitarianism, despaired finally of that challenge. He couldn't find a way to add up happiness over people, because it can't be measured.
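The total-versus-average arithmetic behind the Repugnant Conclusion can be sketched in a few lines--assuming, purely for illustration, that wellbeing really were a single summable number per person (the very assumption Bentham despaired of). The population and utility figures below are invented:

```python
# Toy illustration of Parfit's Repugnant Conclusion, under the
# contested assumption that wellbeing is one summable number per person.

def total_wellbeing(population: int, avg_utility: float) -> float:
    """Total-utilitarian score: head count times average wellbeing."""
    return population * avg_utility

# World A: a smaller population of people with very good lives.
world_a = total_wellbeing(population=1_000_000_000, avg_utility=100.0)

# World Z: a vastly larger population whose lives are barely worth living.
world_z = total_wellbeing(population=1_000_000_000_000, avg_utility=1.0)

# A strict total-utility maximizer must prefer World Z to World A,
# even though every individual life in Z is far worse -- Parfit's point.
assert world_z > world_a
```

Nothing in the "greatest good for the greatest number" formula penalizes driving the average down, as long as the head count rises faster; that is the whole force of the reductio.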
So, part of the simplicity of these small-scale examples is: Well, of course, a person should suffer a dry cleaning bill to save a life. I mean, that's obviously a good trade. And it is. Most of us, as you say, would do it voluntarily. We agree. We accept that moral imperative. It's that the implications of that are quite unpleasant, potentially, and they have a different texture to them than that simplified example.
Erik Hoel: Yeah. I think that one big issue has been with the moral philosophers themselves--of which, for example, William MacAskill, with his recent book, certainly is one. In it he discusses this Repugnant Conclusion. And he actually says that, personally, he just bites the bullet on it and says, 'Yes, maybe that is preferable,' but he also says he understands he's not going to get everyone on board with that. If I'm remembering his book correctly, he has a sentence or two that says that.
And, I think--it's funny, because Parfit's original purpose was to poke holes. It's not to say this is the actual end state of the world that you should work towards. Right? It's to say we can't possibly believe this moral philosophy because this is so absurd. And the same goes for things like Robert Nozick's utility monster and these other famous esoteric attacks on utilitarianism.
Russ Roberts: Let me read an excerpt from your essay that gets at this in a more colorful way, perhaps. You say, quote:
First, there's already a lot of charity money flowing, right? The easiest thing to do is redirect it. After all, you can make the same argument in a different form: why give $5 to your local opera when it will go to saving a life in Bengal? In fact, isn't it a moral crime to give to your local opera house, instead of to saving children? Or whatever, pick your cultural institution. A museum. Even your local homeless shelter. In fact, why waste a single dollar inside the United States when dollars go so much further outside of it? We can view this as a form of utilitarian arbitrage, wherein you are constantly trading around for the highest good to the highest number of people.
But we can see how this arbitrage marches along to the repugnant conclusion—what's the point of protected land, in this view? Is the joy of rich people hiking really worth the equivalent of all the lives that could be stuffed into that land if it were converted to high-yield automated hydroponic farms and sprawling apartment complexes? What, precisely, is the reason not to arbitrage all the good in the world like this, such that all resources go to saving human life (and making more room for it), rather than anything else?
The end result is like using Aldous Huxley's Brave New World as a how-to manual rather than a warning. Following this reasoning, all happiness should be arbitraged perfectly, and the earth ends as a squalid factory farm for humans living in the closest-to-intolerable conditions possible, perhaps drugged to the gills. And here is where I think most devoted utilitarians, or even those merely sympathetic to the philosophy, go wrong. What happens is that they think Parfit's repugnant conclusion is some super-specific academic thought experiment from so-called "population ethics" that only happens at extremes. It's not. It's just one very clear example of how utilitarianism is constantly forced into violating obvious moral principles (like not murdering random people for their organs) by detailing the "end state" of a world governed under strict utilitarianism.... [Russ: Then you end with:] Utilitarianism actually leads to repugnant conclusions everywhere, and you can find repugnancy in even the smallest drop.
Now, it's a little harsh perhaps, although I enjoyed it. What would utilitarians say? They're not, they don't--most of them--we're going to quote an exception in a minute, Eliezer Yudkowsky--but most of them are going to agree with you. They'll say, 'Oh, no, no, no. That's not what I had in mind. Of course not. I don't believe in rogue surgeons. I don't want to create hydroponic farms in Yellowstone and fill them with people having 10 kids, 15 kids a lifetime.' They don't really believe this, do they? Is this a straw man argument?
Erik Hoel: I think that there actually is a reasonable sense in which it is a straw man argument, in that I think that the average, say, effective altruist out there who is donating some percent of their money to charity or is part of this movement--which now comprises thousands of individuals--probably does not need to bite the bullet of the repugnant conclusion or anything like that. And I sort of openly admit that.
However, my criticism of the effective altruist movement is that many of the leaders--many of the leading lights of the movement--do toy around with the repugnancy that's inherent within utilitarianism.
Let me give a brief example, which is William MacAskill's latest book. In it, he has this section where he talks about humanity's impact on the earth. He says that, 'Well, the normal moral view is that humans killing animal wildlife is bad.' Like, when we clear a forest and make way for a parking lot, we kill a lot of animals, and that's really bad. But, MacAskill is somehow able to personally calculate out the average suffering of animals, and he finds that animals often suffer, and he thinks that the suffering of animals outweighs the positive aspects of being an animal. That is, if you had to choose between--according to William MacAskill--not being born and being born a rabbit, you would choose not being born, because rabbits often suffer.
So, from that utilitarian notion, he then says that actually, maybe we arrive at, quote, "the dizzying conclusion" that from the perspective of wild animals themselves, the enormous growth and expansion of Homo sapiens has been a good thing. To be clear, what he means by that is that there are now fewer wild animals because there are more humans, and their lives are of negative value; and, therefore, it's good that there are fewer of them.
I think that that is both incorrect--I quibble over all sorts of things--and a perfect example of arriving at this repugnant conclusion of, like, 'Oh, the average rabbit in this rabbit warren has a negative utility, so let's just pave over it.' Right? And that is a very direct repugnancy, right? That's in a book that was just everywhere. It was in the New York Times. It was in the Atlantic. It's from a leading light of the movement, right? That's a statement that most people would disagree with.
Russ Roberts: Even on EconTalk. But those people--
Erik Hoel: Yes, yes, yes, most prominently--
Russ Roberts: --those people could be wrong.
Russ Roberts: So, first, I want to say that I've seen a handful of rabbits in my time, probably about 50. They do look very nervous and uncomfortable, but I would not--for starters, I wouldn't be sure whether they'd wish they'd never--actually, I find that amusing. I can't help myself. I find that very amusing to imagine a rabbit wondering if the rabbit should never be born. But, what's wrong with that? Let's take it seriously. You say it's repugnant. Most people wouldn't agree with it. And, rabbits are not maybe the best example because rabbits are--well, they're a mix: they're cuddly, but they're not really that wild.
Let's talk about something more dramatic, some megafauna, like bears. If we develop the Western United States more, there are going to be fewer grizzly bears: and 'that's good because they have a very hard time.' What's wrong with that argument? I mean, why isn't that a reasonable argument?
Erik Hoel: And this is a real argument that people in EA give, I think. And hopefully, I'm getting this correct, and I apologize if I'm not. But I remember seeing that there was grant money at some point given out to figuring out whether or not killing all predators would be, like, the moral act to do. Like, 'We should get rid of all predators,' because predators, of course, eat prey, and therefore cause suffering. Right? And that this is, like, a possible effective-altruist cause area.
So, again, this is something that people take very seriously.
And you're right that you can see, sort of, very initial sketches as to why. Right? And those would be, like, 'Well, okay, so they do cause suffering. So, what would be wrong with getting rid of all the bears or something like this?'
And, I think this goes to some of the failures of utilitarianism in that to deal with arriving at repugnant conclusions--and I'll get to the why behind it in a second--it's often the case that utilitarians need to import non-utilitarian ethics to shore up the theory.
And, non-utilitarian ethics: you can have things like qualitative differences or natural order. Like, you might just inherently value the natural order of things. So, you might say, 'Well, listen. In a weird sense, the bear, the rabbit, the deer, they're all part of this larger ecosystem, which is this natural order. It's where we come from. We sort of owe it in a fundamental sense to leave it as untouched as we can.' Like, that doesn't mean we need to bulldoze cities and put in rainforests, but it does mean that when we can, we should sort of let it run its natural course. And actually, the moral thing to do is to let it run its natural course.
But, that's not an argument about suffering or happiness. It's not an argument about utils. It's an argument about, say, just respecting nature--and I think most people have an inherent respect for nature and the natural order. And that's an example of a moral aspect that does not show up in utilitarianism, because you can't really add it to any calculation.
Russ Roberts: You said 'utils' in there. Utils are--in the early days of utilitarianism, there was a hope, in the economics profession in particular, that our job in life is to gather as many utils as we can, U-T-I-L-S. And, ideally, if I have more utils than you, a redistribution of those would make the world a better place in sum--that being the 'greatest good for the greatest number of people' kind of argument.
And unfortunately, the utilometer never really got off the ground. There's no way to measure my wellbeing relative to yours. We can maybe say something about my wellbeing today versus yesterday, or my wellbeing before I eat an ice cream cone versus after--although maybe an hour after, it's something different still.
But, again, the hope that we could make this whole process more scientific--this process of making the world a better place, which is at the fundamental root of morality--has hit many walls. This is one of them, and I think it's important.
It's a fascinating example because, for me, it's an example of our yearning for precision and certainty, and how easy it is to be seduced by it. And, for a long time, people hoped they could measure these things. Now, with MRIs [magnetic resonance imaging] and other things, the hope is reignited that we'll sometime in the future be able to find out how happy you really are.
In my view, all these are, I think, a grotesque degradation of the human enterprise, a misunderstanding of what life is really about and an overworshipping of maximization that you've alluded to.
But now, to give the other side its due for a moment: Isn't all this critique of utilitarianism and effective altruism just an excuse to be selfish? I mean, come on. All these people have come up with these beautiful ideas for why you should give away more of your money, and you're just a selfish person shooting holes in it to be able to feel good about not giving to charity.
Erik Hoel: Yeah--and I actually think that people should give to charity. And honestly, I think that is a reasonable initial response, because the question is whether or not there are any practical effects--right?--to the repugnancy.
So, if there were no practical effects--if this were almost a purely metaphysical argument over the bases of this utilitarianism--then I think that reply is really fair, because the reply is just, 'Listen, none of this matters in practice. It's only in principle. You're mad, or you dislike the in-principle stuff, but you're not critiquing anything in practice. So, therefore, really, this is just some way to be selfish or get out of giving charity dollars, since that's the in-practice effect.' Right?
But I think that it's actually untrue that this is only about in-principle objections.
Let me give a very concrete example of something where effective altruism is the biggest spender. It's a cause area where the EA movement is effectively the only one giving money, and, therefore, their views are totally dominant. It's not like mosquito bed nets, where they're not the only ones giving. And this is the area of AI [Artificial Intelligence] safety. And, AI safety is, now that we have these--
Russ Roberts: Artificial intelligence--
Erik Hoel: artificial intelligence safety is this: Now that we have these artificial intelligences that look like, sort of, proto-versions of what science fiction has traditionally dealt with--like, real, actual working minds in some sense; they're not there yet, but they look like proto-versions of it--people have become very concerned about what will happen after Google and these other companies finish inventing these things.
And, one question is, literally, 'Will they destroy the world?' And that may sound, again, very science fiction and very ridiculous, but I think that there's some reasonable arguments to believe that this is a real concern.
Russ Roberts: We had Nicholas Bostrom on the program talking about this. A lot of people are worried about it.
Erik Hoel: Yeah, and again, I think very reasonably so. But again, EA is the biggest funder--funder of research into so-called AI safety and wondering what can we do about it and so on. So, their views on this matter must necessarily have in-practice effects.
Let me give an example of that, which is that, again, in William MacAskill's latest book, when he talks about this issue of AI safety as a so-called existential risk--that is, a risk that has the potential to destroy the earth and, therefore, that we should take really seriously, like nuclear war or an asteroid impact--when he puts it into that category, the first thing he says is: Well, it doesn't really belong in that category. Because if an asteroid hit and killed everyone on earth, civilization would end; but if machines took over and killed all the human beings, civilization wouldn't really end. In fact, we'd have a lot of AIs; and presumably those AIs--according to utilitarianism, maximizing the most good for the most people--count as people. Again, think about how many there could be. So it would be bad, but it wouldn't be as bad as humanity just getting wiped out.
I think that this sort of thinking--this sort of sympathy--effectively says that, if you want to talk about the repugnant conclusion, consider a future earth which is only AIs. Well, you could fit a lot of AIs on earth. Right? You could fit way more AIs on earth than people, because you can just copy/paste. You can't copy/paste people. You can copy/paste an AI. So, you just copy/paste 10,000 of them, and immediately: 'Wow. We've multiplied the wellbeing by 10,000.'
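The aggregate-wellbeing arithmetic Erik is parodying can be made explicit. This is a hedged sketch of the naive "total view" he describes, with purely illustrative numbers (none of them come from the episode):

```python
# Toy model of total utilitarianism's population arithmetic.
# All figures are illustrative assumptions, not claims from the conversation.

def total_utility(population_size: int, wellbeing_per_agent: float) -> float:
    """Naive total view: aggregate welfare is simply population times wellbeing."""
    return population_size * wellbeing_per_agent

one_ai = total_utility(1, 1.0)           # a single AI at baseline wellbeing
copied_ais = total_utility(10_000, 1.0)  # "just copy/paste 10,000 of them"

# Under this arithmetic, copying an agent multiplies aggregate wellbeing
# by the copy count -- the move Erik finds repugnant.
assert copied_ais == 10_000 * one_ai
```

The point of the sketch is only that, once wellbeing is a sum over agents, duplication looks like moral progress; nothing in the arithmetic distinguishes copied AIs from people.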
I really think that that, sort of in-principle sympathy of utilitarianism towards artificial intelligences shows up in what the movement funds in that I personally think that we should be investigating legal and political ways to come down hard on these companies via a public debate and public representatives to craft laws that tell companies how and when to create artificial intelligences, particularly the really powerful new ones that cost millions of dollars to train. There's only a small handful of companies that are doing it.
But the EA activists sort of have this--they really want artificial intelligence to exist.
I feel like they're naturally very sympathetic to it. So, they don't take this anti-AI movement seriously. They don't take seriously the idea that maybe we should literally ban certain aspects of this technology, in the same way that we've banned research into nuclear weapons and banned research into creating artificial pandemics and other things that could reasonably harm a lot of people. Right? We've banned them. But that approach gets a slim, slim minority of the EA funding. Instead, it all goes towards: can we enslave them in such a way as to keep them safe? And so on.
Now, again, this is all very high-minded, right? So, I'm not claiming that this has in-practice effects right now. But, clearly, it influences how the EA movement spends money in AI safety; and MacAskill and others' sympathy towards, effectively, AI genocide--sympathy is the only way I can describe that section of his book--is influencing the funding decisions.
Russ Roberts: So, I regret that we didn't talk about that when I interviewed Will. I did read the book, but that didn't come up.
The part I'm puzzled about--and I'll think about whether we should--I don't know if Will will want to come back and try to defend himself on some of these issues; that might be interesting--but it's the greatest good for the greatest number of people. AIs aren't people. You kind of fudged that back there--I don't know if you were trying to channel your inner William MacAskill--but they're not people. They're machines. But there's this debate about whether they could, because they are intelligent, have consciousness.
My view is I don't think so yet. Maybe I'll be persuaded. My view on this is that if a vacuum cleaner that goes around your house on a track--not a track, on its own, knowing not to bump into things--if it longs to be a driverless car, then maybe, perhaps, it has consciousness. I don't think it longs to be a driverless car. Now, maybe there'll be a day when an AI will come along and do that, but is that the argument? That, because they'll be conscious, they should get moral standing?
Erik Hoel: Yeah. So, when I sort of conflated AI and people, I was giving the utilitarian perspective--though I'm not sure why utilitarianism necessitates this. I mean, you could imagine it not. But let me just read William MacAskill himself. So, he says, "In contrast, in the AI takeover scenarios that have been discussed, the AI agents would continue civilization, potentially for billions of years to come. It's an open question how good or bad such a civilization would be.... even if a super intelligent Artificial General Intelligence were to kill us all, civilization would not come to an end. Rather, society would continue in digital form, guided by the AGI's values."
Then he goes on to explain that--the only reason he gives, I mean, I don't know what he personally thinks--but the only reason he gives, the reason he feels the need to emphasize about why that would be bad is that maybe during the AI takeover, bad moral values would be locked in for very long periods of time. He never says, 'Yeah, it would be really bad because we would cause the human race to be extinct.'
Again, I think that this sort of--what can only be described as some form of sympathy--shows up in the fact that the dominant funding for AI safety is basically to create AIs, and just to hope that we're going to be able to perfectly enslave them and that nothing ever goes wrong.
And, I really think that the other direction, which is to basically figure out ways to convince people to ban this technology and treat it like global warming and only do it under really huge amounts of red tape and government regulation and public oversight, that gets very little attention within EA.
And so, this is an example of a repugnancy within utilitarianism spilling down into a funding decision. So, again, it's still quite high-minded, but that's my reply to this, 'Well, there are no in-practice effects.' I think that there are.
Russ Roberts: It's interesting to think about the role of religion in setting people's reaction to these kind of examples. I think most religious people would have no trouble saying that that's repugnant--that, a world without human beings who were created in, say, God's image would not be inferior to a world with--I mean, a world without human beings would be grossly inferior to a world that had lots of AI and no human beings. I suspect the effective altruism movement is not a particularly God-centered group, for better or for worse, but I think a lot of people's repugnancies--repugnant reactions--in these settings come from deep-seated religious views. Which, I think--as a religious person--I think I should be open to the possibility that my repugnant reaction is actually coming from my belief in God, say.
The question is: If you don't believe in God, do you have an argument to reject a world of AI as a civilized world? A religious person has no problem with that. Even a person who is not religious may be affected by the zeitgeist that's still out there--something about the sanctity of human life; and that word 'sanctity' is a word that implies some religious belief. It's an interesting question: If you don't believe in God, how do you reject these repugnant conclusions?
Erik Hoel: It is actually, I think, an interesting question. Quite an interesting one. Before everyone started talking about longtermism due to William MacAskill's latest book, I had an essay about longtermism and about the issue of keeping humanity human. I think that humanness--and by humanness I mean something that William Shakespeare might write about--that humanness is a moral quality in and of itself. So, just maintaining the connection to our humanity seems morally important to me. And even if you could make everyone's lives better by putting a chip in their skull that stimulates their happiness center--let's be super-reductive neuroscientists for a second--I'd probably be against that, because I would say, 'Well, this takes us away from humanness, from the fundamentals of being human.'
And this is an example of a qualitative aspect of morality that I think most people have an intuition for. Sometimes that intuition is expressed religiously, but let me give a relatively non-religious example of this, which is Plato's theory of ethics: that things should act in accordance with their platonic form. They should act according to their nature. So, a good man is a man who is the best of the men he could be, right? Similarly, a good bear is a good hunter--it's almost, like, moral for a bear to be a good hunter, even though it causes suffering, because that's its nature.
So, similarly in this case, it would be that if humans went extinct, that would be very bad because what we value is the platonic form of humanness. That's what we are and that's naturally what we value. I think that these qualitative aspects of morality can have either religious justifications or other metaphysical justifications, but they're really just missing from a lot of the really core utilitarian guiding principles of EA.
Russ Roberts: I was thinking about Nozick's experience machine--we haven't talked about it in a long time in the program, but we had in the past--where you have an option to hook yourself up to a machine and you'll then program it in advance to have a bunch of experiences that you would choose. You could become a great rock star. You could become a great golfer. You could become President of the United States. You could cure cancer. The only problem is that it feels like real life. You go through this time on the machine having these emotional sensations from imagining them. They feel real to you, but they're not. You're lying on a table and then at the end, the machine's unplugged and you're dead.
If you ask people whether they would like to live that life--which would be, on some level, exhilarating, because it's all that chip in the brain stimulating the happiness center: it could be free of disease, free of--right? Your imagined life. I was going to say most people would say that they would choose not to do that. But I think that conclusion is not as common today as it used to be.
I hear smart people say, 'Oh, yeah, I'd be on that machine.' Now, maybe they get pushed into a corner philosophically and they have to defend it, but I do think our world, maybe because it's less religious, I don't know. Again, a religious person would not choose that life, and most non-religious people wouldn't at least while religion hovers over our consciousness in some way perhaps, unknown to us, in the zeitgeist. But I think increasingly--whether it's religion or not doesn't matter--people are saying, 'Yeah. That'd be good. I'm in.' What do you think of that?
Erik Hoel: Yeah. I think it's really interesting. And, if I can give a reason as to why we should be skeptical of these really seemingly disconnected utilitarian claims--and when I say disconnected, I mean from the history of our civilization. So, I think for the experience machine, if you went back and you polled people during, say, Jane Austen's day, there's no way that almost any of them would ever accept this. They'd say, 'This is out[?].' Maybe you could find a Jeremy Bentham somewhere. But the vast majority of people, including intellectuals of the day, would never, ever accept something like this.
There's sort of this argument that it's very bad to top-down design economic systems, or systems of any kind, rather than letting them naturally evolve. Because when they naturally evolve, they become really robust and they capture all these things that you would never really notice when you're trying to top-down design things. It's like, 'Why are things done in a certain way?' It's like, 'Well, try doing them some other way and you'll quickly find out why they're done in that way.' This is a very classic phenomenon that people run into.
Russ Roberts: It's the Chesterton fence argument: you think you know why the fence is there, but it evolved for some reason that you don't know of, and your default should be, 'There's a reason it's there and I don't understand it.'
Erik Hoel: Yeah. Absolutely. I think that you can apply the same thing to morality, irrespective of religious beliefs. So, you can say, 'Listen, most of the people reasoning and talking about this stuff are coming out of Western civilization. It has certain values and assumptions. We should take our intuitions seriously, as things that have evolved over very long periods of time.' And, this top-down design from utilitarians--'Let's just calculate everything in terms of utils and we can just arbitrage everything'--looks like top-down planning of morality.
I think that there's a sense in which we should trust the ancient wisdom, even if we can't immediately give some sort of justification that's satisfactory; and maybe there's a sense in which the number of people who would now go into the experience machine, or something like that, shows that we're somehow disconnected from the values we came from.
Russ Roberts: Yeah, it's ironic. You said morality is not a market, but you're kind of suggesting there's something market-like that emerges--rather than from the top down, from the bottom up--through centuries, millennia. Of course, as I like to point out, even though I'm a big defender of laissez faire and emergent order, to a certain degree, there are a lot of things that emerge that are not attractive. We had racism as a defensible, self-righteous view for a long, long time--
Russ Roberts: as the justification for slavery, and so on. So, it's a little trickier, but I think that's a good, what you said, it's a very good starting place.
Russ Roberts: I want to go back to one thing we talked about, and then I want to transition to the Eliezer Yudkowsky quote, because I think it's very, very interesting. When I asked you, 'Isn't this just an excuse to be selfish, these critiques?'--the answer I would give is that it's an excuse not to give to people far away from you, where you don't know how the money is going to be spent or what its impact is. And of course, there are unintended consequences: you as a Westerner think you're doing good, or think you can control the outcome, and you don't. We have tragically large numbers of examples where people with lots of money thought they were making the world a better place and they weren't.
So for me, it's all about local knowledge. The reason you should give locally isn't because you care more about your kid than you do about the kid in the poor country who needs the bed net. It's that you can actually find out what the effect is. You can see that your kid's happy. You can see that your local community center that you've given money to is actually functional rather than dysfunctional.
I think--it's a variation of--the reason I just thought of it is the point you're making about bottom-up versus top-down. One of the virtues of bottom-up emergence is that it utilizes that local knowledge in a way that the top can never do, because the top can't have access to all the information that would be needed to make those decisions. It goes back to the socialist calculation debate of Hayek and Mises against Lange and--forgot who the other side--who else was there.
Russ Roberts: Let's turn to this fantastic quote from Eliezer Yudkowsky. He says the following, quote:
Pick some trivial inconvenience, like a hiccup, and some decidedly untrivial misfortune, like getting slowly torn limb from limb by sadistic mutant sharks. If we're forced into a choice between either preventing a googolplex of people's hiccups, or preventing a single person's shark attack, which choice should we make? If you assign any negative value to hiccups, then, on pain of decision-theoretic incoherence, there must be some number of hiccups that would add up to rival the negative value of a shark attack. For any particular finite evil, there must be some number of hiccups that would be even worse.
Meaning: one person's hiccup is a trivial harm; but if billions of people have hiccups, the sum of that is supposed to be so horrible that it outweighs the sadistic shark attack--which is an unbelievably horrible way to die. It's a death, and a very horrible death.
Again, I'll quote the last line:
For any particular finite evil, there must be some number of hiccups that would be even worse.
Your answer is, 'No, there isn't.' Why not? What's wrong with that argument of Eliezer Yudkowsky?
Erik Hoel: I think it comes down--yeah. Yeah, and first, I want to say also that Yudkowsky has, I think, been presciently correct about other issues. So, just before I go on to, say, criticize this brief thing, I'm not really criticizing him fully in any sort of sense. But I really don't agree with this.
The reason why is that I think it's implied by the basics of utilitarianism, and what that does is really treat good and evil as big mounds of dirt. So, it's all sort of the same. It's just all dirt, and there's a certain amount of the dirt that you have and, say--this is how moral philosophers talk: they talk very reflexively about big historical events--but let's say, let's follow their example for a moment and say that you had a mound of evil dirt that was the Holocaust, and it's this huge mountain of evil dirt, right?
And you say: Well, when a human being stubs their toe or hiccups or something like that, it's like this little spec of dirt, right? But if I, like, let humanity go on long enough and just started adding up the stubbed-toe amounts, eventually, I would have a mound of dirt, of these stubbed toes, that is equal to the Holocaust. Right? So, it's like literally, they want a mathematical equation that says: A holocaust is equal to x number of stubbed toes.
I think, again, this is something where I think most people get off the bus at some point around here, but I think the reason why is that good and evil just aren't big mounds of dirt. They're just not. There's all these qualitative differences between various types of evils and goods. Similarly, just as stubbed toes don't add up to a holocaust, no number of warm socks that you put on adds up to someone saving a life. These just aren't really comparable things.
The problem is that what utilitarians want to do, by this process of maximization, is arbitrage between everything. So, you can see how this repugnancy happens. Right? They conceptualize everything as mounds of dirt, and then they want to arbitrage and trade between all the mounds and treat everything sort of the same. Then you end up at this terrible place, this repugnant conclusion. And that, I think, indicates that there's something really deeply wrong with viewing good and evil as essentially big mounds of tradable dirt. They're not tradable. It's not fungible in the way that they want it to be.
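The contrast Erik draws--scalar "mounds of dirt" versus qualitatively incommensurable harms--can be sketched as two toy aggregation rules. The unit values and the tiered comparison rule below are my own illustrative assumptions, not anything from the episode:

```python
# Scalar ("mound of dirt") view: every harm is a count of disutility units; harms add.
HICCUP = 1              # one hiccup: a single unit (illustrative)
SHARK_ATTACK = 10**12   # the shark attack: a trillion units (illustrative)

# On the scalar view there is always some finite count of hiccups that outweighs
# any finite evil -- Yudkowsky's "decision-theoretic coherence" point:
n = SHARK_ATTACK // HICCUP + 1
assert n * HICCUP > SHARK_ATTACK

# Tiered (lexicographic) view: harms live in qualitative tiers, and no quantity
# of a lower-tier harm ever adds up into a higher tier.
def worse(a, b):
    """a, b are (tier, magnitude) pairs; a higher tier always dominates."""
    return max(a, b)  # Python tuples compare lexicographically: tier first

hiccups = (0, n)   # any finite pile of tier-0 harms
attack = (1, 1)    # one tier-1 harm
assert worse(hiccups, attack) == attack  # no count of hiccups reaches tier 1
```

The first rule is the one Yudkowsky's quote defends; the second is one simple formalization of Erik's claim that goods and evils are not fungible. The sketch doesn't settle which rule is right--it only shows that "some number of hiccups must be worse" follows from the scalar assumption, not from arithmetic as such.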
Russ Roberts: I agree with that, and I think it's extremely important in practical terms for economics and economic policy-making, because economic policy-making is fundamentally driven by this kind of thinking.
When the North American Free Trade Agreement [NAFTA] was under consideration, one of the examples that was often used is that there was a town in Illinois where a lot of brooms were made; and if NAFTA were passed, that town would disappear. A lot of the people who worked there would lose their jobs, because the Mexican brooms would be much cheaper than the American brooms. And the economist's answer is--and by the way, I'm a big believer in free trade. I wrote a book, The Choice, about this moral issue. So, I'm in favor of free trade, but I do not think this is a good argument for free trade. But this is the economist's argument--often--no, not often: always.
The argument is: There will be millions of people who will save $2 on a broom; and when we add up that benefit in dollars--and this is not utils, this is dollars--we add up the benefit in dollars that they save, and against that we set the 5,000 or 3,000 people in the town who lose their jobs. They're going to be unhappy for a while. They would pay something to keep their jobs. But the gains from getting rid of those broom jobs in America and letting them go to Mexico are enormous.
Now, as a footnote, I looked into this and it turned out a lot of the workers in that factory were Mexican, who would come by bus to make brooms in America, and what would actually happen is they would go back to Mexico. And we could debate whether that's good or bad; but that wasn't [?] the conversation. The conversation was: Millions of people will save a dollar, say, and a few thousand people will lose $10,000.
And, 'The calculus is clear,' says the economist.
And, when you press the economist, they say, 'Well, of course, it's not really true that the country is better off. The broom buyers are better off a little bit and the broom makers are really savaged, but there's enough gain to the broom buyers to compensate the broom makers so that the net amount of wellbeing in the country goes up. And therefore, free trade makes Americans better off.'
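The compensation test the economist invokes here (the Kaldor-Hicks criterion) reduces to simple arithmetic: sum the dollar gains, sum the dollar losses, and declare the policy an improvement if the net is positive. A sketch with invented figures, chosen only to match the rough magnitudes in the conversation:

```python
# Kaldor-Hicks style tally for the broom example.
# All figures are invented for illustration; the episode gives only rough magnitudes.

broom_buyers = 20_000_000     # consumers who each save a little
saving_each = 2.0             # dollars saved per buyer
displaced_workers = 3_000     # workers in the broom town
loss_each = 10_000.0          # dollars of lost income per worker

gains = broom_buyers * saving_each       # total consumer savings
losses = displaced_workers * loss_each   # total worker losses

# The "calculus is clear" step: net dollars decide the question.
net = gains - losses
assert net > 0  # so the test declares the policy a net improvement
```

Russ's objection is precisely to the `gains - losses` line: it treats a dollar of diffuse consumer savings and a dollar of a ruined livelihood as commensurable units.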
And that calculus, to me, is morally bankrupt. And if you use it with anybody who is not an economist and say, 'But the dollars gained by the people who each save a dollar add up to more than the losses of the people whose jobs and lives are ruined,' they're going to just look at you like you're crazy. And they're right, because that moral equivalency requires commensurability--measurability--a metric that is totally wrong. Right? As if: 'What would you pay to save your job?' 'Well, I don't have a lot of money, so the answer is: not very much.' And this other person, who's really rich, is going to save a dollar on their broom, and therefore--that's just insane.
But I taught that for a long time. I taught that tariffs and quotas are inefficient--and they are, by the economists' definition of efficiency, which means that the pie as a whole gets bigger.
But at the fundamental root of that is, to me, a moral failure, which is to suggest that it's average wellbeing that counts or average willingness to pay. That's absurd. It's absurd to compare those across people. It is grotesque.
And yet every economist listening to this is trained that way, as far as I know. Maybe at George Mason you get something a little different these days, I hope, but in general, that is mainstream economic thinking.
Erik Hoel: Yeah. That's a really interesting, and I think specific, example of this. Because my first thought is that there's some qualitative moral difference, where, say, someone losing a job is not really the same as them losing x amount of money. It's actually worse--
Russ Roberts: Much worse--
Erik Hoel: in some fundamental way. Right?
Russ Roberts: Dignity. You lose your dignity--
Erik Hoel: Yeah. Exactly. Exactly. It makes me sad to say that our platonic essence is in work--but maybe our platonic essence is in responsibility of some kind. Right? And nowadays that's often work. And therefore, this is some violation of it.
And again, we see this. What's funny is that we have these ancient moral systems and they actually do a pretty good job. Right? Here we have Plato's notion of platonic forms, which you would never think is the best ethical theory of all time or anything. But, actually, in this case, it gives you a pretty sensible answer, and puts its finger correctly on what goes wrong--or at least one of the things that goes wrong.
And, again, I think that there's this severing of what you might call ancient wisdom from contemporary utilitarianism.
And this does show up, if only occasionally, in the practical effects of EA.
With that said, I also want to take at least a moment to acknowledge some of the really cool things that the Effective Altruist movement has done. And I think that where they have the advantage is that they treat as charities things that people don't normally treat as charity.
So, as an example of that: the AI safety movement, which is trying to make AI safe, they treat as a charity. They say, 'Okay. This is a charity cause.' It's a very unusual charity cause. People would never normally think of it that way.
But, by treating it as a charity cause, we can actually raise a bunch of money for it and distribute it. And I think that that's very cool. I think more weird things should be thought of as charity causes. And the expansion of the notion of charity within the EA movement to incorporate things like that--that's something I agree with.
So, for example, William MacAskill has an interesting point where he says, 'We should probably leave some coal in the ground because coal is really accessible and easy for civilizations to start up again with,' and if something ever happened--like we had some civilizational collapse or some new dark ages, then if there was no energy left in the ground that was easily minable, it would be very difficult to restart civilization.
So, he says, maybe a weird cause area--again, a weird area to do charity in--would be buying up coal mines and making sure no one mines them. Which is both good for dealing with climate change, and good for--maybe we can have it in our back pocket if we ever need it as a civilization: we have some coal and we can sort of redo the Industrial Revolution.
And that is, again, kind of a crazy thing. But it's a fun and, I think, interesting expansion of the notion of charity, right? And so, that's where I give effective altruists a lot of credit: expanding this notion of charity to incorporate more almost-science-fiction or ambitious or weird things. I think that that's good in general.
Russ Roberts: I just want to reiterate on my broom example that I think Americans should be allowed to buy brooms free of duty and quota from Mexico. I just don't think that argument that was made is a good reason for doing so.
As an economist who has become increasingly critical of my field, when I hear non-economists criticize my field, I always go, 'That's totally wrong. They have no idea what they're talking about. My critique, it's deep and sensitive and nuanced and it's based on blah, blah, blah, blah, blah.'
Now, you're not a philosopher.
When dealing with--you're a mere neuroscientist. Don't philosophers look at you and say, 'Oh, you don't really understand utilitarianism. We've worried about all these problems forever. We understand them. We have good answers to them. These are cheap shots'? These are the things I say about people who make fun of economics: 'It's a cheap shot. You don't really get it.' What response have you gotten--if you've gotten any--from serious philosophers about your essay, especially people who like effective altruism?
Erik Hoel: Yeah. So--one, when dealing with really interdisciplinary issues, I think it's very difficult to say where the boundary is of what academics should contribute to. I do agree: philosophy is the easiest field to jump into and criticize, and, therefore, it gets a lot of flak in that manner. I am a neuroscientist, but my main subject of study is the science of consciousness, which is an inherently very philosophical field. So, I'm relatively familiar with the philosophical literature, at least at a broad level.
But the point is not that my credentials are what's relevant. What's more relevant is that if you look at the statements of the people who espouse this philosophy, and you look at their academic papers--the academic reasoning behind these moral arguments--what you'll find is that there is no universal solution. The repugnant conclusion remains a huge topic of constant debate, and people are always trying to come up with new, clever ways to deal with it, and they're always failing.
You can look at many of the leaders of EA and look at their papers, and what's strange is that they'll admit this in the academic papers; but then when they go to publish a popular book, that ambiguity--it's still there to some degree; I'm not claiming anyone's covering anything up--is just not emphasized. Whereas the predominant conclusion of a lot of these academic papers is: we have no idea what to do about these problems.
Let me give a very brief read. This is from an academic paper by someone who is head of the FTX Foundation, which is an effective altruist foundation that has a lot of the money. Here's the ending of the paper: "In summary, as far as the evaluation of prospects go, we must be willing to pass up finite but arbitrarily great gains to prevent a small increase in risk (timidity), be willing to risk arbitrarily great gains at arbitrarily long odds for the sake of enormous potential (recklessness), or be willing to rank prospects in a non-transitive way. All options seem deeply unpalatable, so we are left with a paradox."
This is on, basically, trying to make utilitarianism work for calculations that have really long odds--sort of extremes of calculation. And he says, 'Oh, we're left with a paradox.' Right?
So, yes, maybe people within the school will say, 'Oh, well, it's not an important paradox.' But when you read this literature, what you come away with--as with so much of academic philosophy--is that it's a bit of a mess. Right? And that's fine. I'm not criticizing the moral philosophers for that. The same is true in my own field, the philosophy of consciousness: it's a huge mess. But they don't have billions in funding, right?
So, it's just the case that the standards are going to be higher once the word 'billions' enters the picture. I think it's relatively fair to want really firm answers, and a firm dealing with these issues, if you're saying, 'Well, the way that we're giving away all these billions is justified by this moral philosophy'--and then, when you look into it, you're like, 'Well, this is a bit of a mess.'
Russ Roberts: Well, in economics, it's the same thing. When an economist gives a seminar or writes--a workshop--or writes an academic paper, there's lots of caveats and by-the-ways and footnotes about, 'Well, we did it a different way and it came out this way: not quite the same way we did in the paper that's in the main line conclusion,' and so on and so forth. It's thoughtful, but when they write the op-ed about it, not so thoughtful. And, I think--it's a human impulse. We want to be paid attention to, and nuance is not as attractive as certainty and self-confidence, so the uncaveated version is often what gets the most attention, and so a lot of times you don't get all the caveats.
Now, having said all that--and we've been pretty tough on effective altruism--you argue at one point toward the end that: Well, okay, so they soft-pedaled the utilitarian underpinnings; and basically what the movement could be and should be, and maybe really is, is simply: try to do good with your money. And that's a good idea. I'm all for it. I think you're probably all for it. Is it really so important, these intellectual underpinnings--outside of AI, which is troubling? To say it dramatically, maybe fairly or unfairly: they've done a lot of good in the world, you could argue, this movement, and they've encouraged people to give a lot more charity than they otherwise would. They may have encouraged some people to go into running hedge funds who could have been doing something else, but who were motivated because they could give a lot more money to bed nets. There might be a few issues there, even on its own practical terms. But, you could argue that overall the effective altruism movement has encouraged lots of good things in the world.
Erik Hoel: Yeah, and you know what? I totally agree with that. I'm very open about it, and I mentioned at several points in the essay that I think, on net, it's certainly done good. And I think in general it's a relatively admirable movement.
This particular essay was prompted--one reason why I wrote it, beyond that, I was getting constantly, given how much it's been in the news, I mean, constantly getting asked my opinion about it--is that there is sort of this contest that they offered about, like, 'Give us the best criticisms of effective altruism.' Right? So, then, of course, I wrote a very critical essay.
But I think that the criticism actually does provide a path forward. And I think it goes into this notion of expanding the idea of charity.
So, my ideal effective altruist project is something like: 'Let's go out there and we'll mine an asteroid and we'll give it all to charity.' Right? And this is purposely exaggerated as a project. I don't really expect anyone to do this. But, sort of, like, that the ideal case will be something like, 'We're going to go. We're going to mine this asteroid for charity'; but in the process of doing that, they have to invent all sorts of new technologies and fund, like, a new space center and come up with all of this, and put money into the space industry, and everyone's watching the YouTube videos of the rockets going up. And it's like, 'Why do we actually like it?' It's not really to give the money to the charity. The reason we all like it is because asteroid mining, which you actually saw it happening, would be really fun and exciting to see, at least for a certain type of person. They would really get a big kick out of this. I know I would. I'd be on YouTube looking at the videos of the rockets going up.
And, I just kind of want us to be, like, honest about it--that there's a sense in which the utilitarian calculation of, 'Well, in the end, we'll get the money to give it to charity'--it's this fig leaf that we can put over the cool project. And the utilitarian calculation really isn't necessary for a lot of this stuff. Right? You almost, like, don't even need a calculation.
An example of this is that they're actually giving out a lot of money to something that's near and dear to my heart, which is writing online, which I think more and more people--more and more writers--should start doing. And they're giving out, you know, serious money to this.
And, I think that that's great, and I don't think that you need to have some calculation of saying, 'Well, is giving a blog a hundred thousand dollars,' which is what they're saying they're going to do, 'somehow worth actually the equivalent of dozens of Bengalis being supported for the rest of their life?' Or something like that? I think we can sort of just put aside those utilitarian calculations. And in many ways, the EA movement already does that. Right?
So, again, the goal of this criticism is not to deflate the movement or say that it's terrible. It's just to sort of try to nudge it away from what I view as really this core repugnancy of utilitarianism that comes from the fact that this grew out of moral philosophers, and more towards less caring about that stuff and editing that stuff out. I think that'll also make the movement appeal more to, quote-unquote, "normies."
Russ Roberts: Another thing I would add is that I think it's a good idea to give a fixed amount of money that you earned to something other than yourself, whether it's 3%, 5%, 10%, whatever you commit to is a nice thing. And you should try to do something good with that money.
And you should be humble about the good you can achieve--which I think would be my other criticism of this movement. There's a certain lack of humility, a certain hubris about the science. You said, 'Save so many Bengalis out of poverty,' as if we have a way of doing that. Right? Helping people thrive, helping people flourish requires more than money.
We've spent hours on this program talking about the challenge of poverty in both rich countries and poor countries. It's really hard to do. A lot of times there are unintended consequences.
So, my motto would be: Check to see what your money is actually achieving. Try to avoid harm. I wouldn't say necessarily 'first do no harm,' because that might really bias you toward not giving very much. But I do think you should try to have an impact with your dollars; but I don't think you should be overly confident that you know how to improve other people's lives. I don't think we know very much about that, and that's why I think local giving or giving to causes that you think might get at root reasons for people having a challenge to flourish is very powerful.
So, I look forward to talking about this more with its proponents. I'm sure they'll be here to answer some of your criticisms, Erik.
I'm curious, are you going to stay in this area? Are you--is this an ongoing interest of yours? Is this a passing thing? What's next for you? You've written a novel. You going to write some more novels? What are you doing?
Erik Hoel: Yeah. I'd love to, one day, write some more novels. I feel like novels have to be really necessary, though. You have to feel like you have to have written it.
For my Substack, I cover a lot of topics and I do deep dives into various ones, generally once a week. And I generally return or cycle around certain issues, but oftentimes, I'll then go on to do--the next post will have nothing to do with effective altruism. I view myself as basically just an essayist, in this capacity, where the goal is just to write really interesting essays that hopefully also inform people, but also hopefully are well-written and provide some aesthetic satisfaction. I think that that's actually missing a little bit online. I think sometimes people just want the bare bones of stuff. They just want the facts. They want the bullet points. And that's not really my style. So, that's what I can hopefully add, is some sort of combination of the two.
Russ Roberts: My guest today has been Erik Hoel. He is, I hope, an effective altruist, small E, small A, but not with the capital letters. Erik, thanks for being part of EconTalk.
Erik Hoel: Thank you so much, Russ. This was such a pleasure.
Sep 26 2022 at 10:35am
Loved this episode; seriously thought provoking. I think it would be fascinating to hear Bjørn Lomborg’s thoughts on EA, as the work he and his colleagues do at the Copenhagen Consensus Center, as I understand it, is focused on maximizing the utility of world resources by quantifying the real costs and benefits of addressing each of a long list of global issues.
Sep 26 2022 at 11:19am
On the Artificial Intelligence question, Russ is quite right to observe that machines (including computing machines) are not persons. Nor is it the case that they ever can be. Thus, they don’t have a moral equivalence with persons. For those who want to dive deeper into the distinction, check out this recent book.
Non-Computable You: What You Do That Artificial Intelligence Never Will
by Robert J. Marks
The episode had a good discussion of the failure of utilitarianism. Here is another way to understand the descent into repugnance.
Whenever we try to reduce moral questions to a single scale of “benefit” (e.g. utils) of some kind, that renders us blind to the existence of evil acts — acts that violate moral limits by committing wrong, by doing what humans ought not to do. This inevitably opens the door to calculations where we might justify doing moral wrong for the sake of benefit.
Shall we do evil that benefit may result?
An example given in the episode involved murdering one person so that others could benefit from transplants. That isn’t hypothetical. China has turned that practice into an industry by collecting biological information on prisoners in order to quickly supply matching organs on demand. If the net benefit (utils) is positive, how does the consistent utilitarian say “No”?
Guest Erik Hoel approached the same idea talking about any attempt to reduce moral questions to comparing differently sized piles of dirt.
In order to understand the repugnant failure of utilitarianism, one needs to recognize the real existence of moral right and wrong such that some actions transgress moral boundaries and commit evil. Such actions are evil, even if someone does them for the sake of benefit (utils).
That in turn depends on recognizing that there objectively is a way that humans ought to be, which assumes a true human nature that can be violated by wrong actions. Those who think human behavioral standards are merely arbitrary conventions of subjective preference will have no objective grounds to regard the Chinese calculus for organ harvesting as being inferior to any other they might prefer.
Sep 26 2022 at 11:21am
Thanks for doing these episodes on effective altruism. They’re challenging my thinking. Particularly the emphasis on scaling as a driver of unintended consequences.
Question for you: Don’t you think that Milton Friedman fell into a similar trap with his infamous 1970 NYTimes opinion piece? The one saying that corporations best benefit society when they maximize economic activity, which (in theory) should lead to more aggregate social benefit? At scale, single-minded profit optimization leads to evils such as the dehumanization, poor economic mobility, and anti-democracy that come from extreme income inequality. Just as humans can’t reduce morality to a simple “utils” formula, perhaps businesses can’t reduce their morality to simply profits (even assuming lack of fraud and technical compliance with the law).
Just a thought. Love your podcast.
Sep 26 2022 at 11:03pm
I find EA compelling despite never finding any simple version of utilitarianism attractive. Criticisms of the latter don’t undermine EA’s attractiveness.
Any moral theory – religious or otherwise – seeking to resolve moral questions through general principles is bound to be pushable to highly paradoxical conclusions. EA should not be blamed if some of its proponents stick out their necks unnecessarily far in endorsing particular moral theories.
Like most people, I don’t have a complete moral theory. This doesn’t stop me from thinking that the outcomes of our actions matter, that opportunity costs should be taken seriously, and that it’s sensible to deliberate over the big picture options I have with my time and resources.
I don’t need much by way of moral theory to conclude that saving a child from drowning is better than keeping my shoes dry, or that spending $20K to plant one tree in a wealthy section of town generally compares unfavorably to planting 200 trees in a poorer section of town with the same resources. Etc. Many action-guiding comparisons can be reasonably made without reference to a set of worked out moral principles.
We’re all partially blind to our relative affluence and to our opportunities to engage in activities of greater worth. This is true on any sensible measures of worth.
Oct 1 2022 at 3:35pm
Yes, but the key contention shared by all moral anti-theorists such as Kantians, Aristotelians, Liberals, and Conservatives, is that by taking life one day at a time and appealing to our moral intuition about each particular decision without ever entertaining generalizations, we can always do what’s intuitively right in each case.
Sep 27 2022 at 12:27am
As a utilitarian of sorts, utility maximization is not what I want to use to make day-to-day decisions. It is what I want as an influence on the norms, laws, or virtues I do use. It would decrease utility if we used utility maximization for all day-to-day decisions. Just like the manual/automatic camera example used by Kahneman in “Thinking, Fast and Slow”. Utility is the north star, but not the path we use.
The norm of not killing someone healthy to take the organs makes a better world because we will not be scared every time we turn the corner. That said, we are willing to sacrifice the life of a soldier when we know many will not return from war.
Sep 27 2022 at 9:41am
There is a certain schizophrenia in interpreting the EA movement:
If it is simply guidance to individuals in making choices about the destination for their own charitable contributions, it is not entirely objectionable. It might not be entirely accurate either, but it offers an input into those decisions that individuals must then weigh against local opportunities they can assess directly. They can even apply their own “investment” criteria in determining which charitable causes meet their own hurdle test. Not much wrong with that.
If it is a global ethical imperative, arguing that to act morally, everyone must do it, everyone must follow top-down research-based guidance, everyone must apply the same utilitarian criterion (regardless of the problem of other minds and the various utilitarian conundrums that Russ and Erik examine), and everyone must sacrifice their own consumption choices (leisure vs. work, their own children vs. the children of others) to a logical endpoint, then I think it really collapses.
In this version, as Erik concludes, EA becomes a mandate for equal distribution and equal poverty, just above subsistence, perhaps. It is indistinguishable, it appears, from communism, and we have already run that experiment.
Yes, communism was implemented as a governmental mandate at the point of a gun — but the objective was to change human nature to achieve voluntary compliance and enthusiasm. That never happened. In the end, to each according to their needs and from each according to abilities failed spectacularly, with neither needs fulfilled nor abilities utilized. There was no room for highly remunerated hedge-fund managers to engage in beneficence. Communism suffered a general equilibrium problem in that regard. Moreover, it didn’t curtail self-interest, but simply created a new elite exercising monopoly power and reaping the rewards, driving in chauffeured limousines down the middle of closed Moscow streets.
The problem was that incentives were ignored. Charity works only if it is a voluntary component of individual “spending”, part of holistic economy grounded in personal choice and freedom. That’s the only way private wealth can be directed to helping others. Moreover, for those concerned about “longtermism”, in the longer term, the most promising source of material prosperity for those living currently in unacceptable poverty is to become part of that system — educated, subject to rule-of-law, balanced democratic governance and with access to equal-opportunity capitalism (not the dysfunctional oligarchical/crony version). Of course the secret sauce to encourage that sort of self-sufficiency is not perfectly known, but it’s hard to make progress without setting it as a broader social goal.
If EA goes for option 2, it risks all the failures we have already seen and thought we might be mostly past — though that doesn’t seem highly likely, because an EA of that sort probably won’t be taken seriously.
Sep 27 2022 at 10:53am
Maybe “Chesterton’s Fence” deserves a permanent link in the Delve Deeper section? It comes up in conversation enough 🙂 But I don’t recall it being discussed in this episode.
Thanks again for another great EconTalk. I’ve been wondering, as this EA marketing campaign and the counter-points from its critics have intensified recently, whether it might be helpful to discuss with a philosopher familiar with virtue ethics to give another dimension. I’m thinking in particular of Candace Vogler, whose virtue ethics lectures I attended nigh 20 years ago, but I think virtually anyone from the Chicago philosophy department would be an interesting guest… if they find EA an interesting topic of discussion. I imagine they might, since Agnes Callard just gave a popular Aims of Education address considering Longtermism (which I have not read the transcript of yet…)
Sep 28 2022 at 1:52am
I thought this episode was interesting but misrepresented EA in some ways.
Much of the argument was against utilitarianism, but as many EAs, including Will MacAskill and Peter Singer, point out, you don’t have to be a utilitarian to think that it is morally good to donate a proportion of your income to effective charities (that is, to agree with EA).
Russ said that Peter Singer thinks it is immoral to give your 5 year old a birthday party because you should give the money to charity. I have heard Singer talk about this and he refers to a birthday party that costs $40,000. Overall, Singer’s recommendation is that if you earn $100,000 a year you should give about $1,800 to charities, so funding a birthday party would clearly be fine. (If you earn $60,000 it is $600).
Erik said that ‘MacAskill is able to somehow personally calculate out the average suffering of animals, and he finds that … the suffering of animals outweighs the positive aspects of being an animal’. Here is MacAskill talking to Tyler Cowen: ‘I really don’t know whether animals in the wild have lives that are good or bad’. Elsewhere I have heard him say that he thinks we are nowhere close to knowing enough to try to do anything about wild animal suffering and likely won’t be for thousands of years.
Finally, I think Russ pushes the point about uncertainty too far. Yes the evidence on de-worming is complicated, but analyzing such evidence is very much part of EA. There are, however, things that we know. For example, maybe it is not precisely accurate that training a guide dog costs $40,000 and curing someone’s blindness in a poor country costs as little as $20 to $50, but surely we know that the guide dog costs more and that it is better to cure a blind person than provide a guide dog.
Sep 28 2022 at 9:45am
Thanks very much for these recent episodes on EA, they have been really thought-provoking.
In this podcast, I wonder whether the repugnant conclusion was depicted in a way that tricks the ear into thinking it is very repugnant. For example, discussed was the idea that total utilitarianism requires you to create vast numbers of future people even if “everybody’s going to end up fairly miserable”.
But in fact the repugnant conclusion only holds if the vast number of people live net positive lives. These could be people who experience tremendous amounts of both happiness and suffering (perhaps similar to people alive today), or people who live fairly bland lives, but not miserable lives.
Looking forward to hearing more on EA in the future!
Sep 28 2022 at 2:04pm
Lucas. I agree. “everybody’s going to end up fairly miserable” unless “fairly miserable” means overall positive, just not as positive as possible per unit (life).
Sep 28 2022 at 8:04pm
I had a lot of issues with this interview. Two off the top of my head:
AI – To think that AI will never be the moral equivalent of the human is to basically appeal to religion. We are creatures that obey the laws of physics. Our brains run on physics, and our worth, “I think therefore I am,” comes from the emergent properties of the physics of our brains.
There is zero reason to believe we won’t someday be able to re-create those same emergent properties, whether through simulation or otherwise. When we do, we’ll have to grapple with the fact that we’re not morally different from those creations.
I’m not saying I like the idea of AI replacing humans, but personally I find it inevitable. The proof that it’s possible is our own brains. Our own brains simulated in a computer that can run hundreds of thousands of times faster, have a virtually much larger brain (more neurons, for example), have access to all knowledge, and be able to instantly transmit new knowledge to clones of its mind just seems like an inevitable recipe for us being surpassed. No amount of legislation will stop this progression. It’d be like outlawing math. Not possible.
So, points made with AI in the future are not without their merit, however unpleasant it is to see humanity’s demise.
See the end of the Spielberg movie A.I. as some representation of the concept.
It was strange to hear Russ talk about a small amount of people losing their jobs for lots of people to save money. Isn’t that the textbook example of economists “getting it” and non economists not?
We could have saved the Luddites their cloth making, and in exchange the millions of people that make their living from the fashion industry would all lose their jobs, as there would not be enough fabric to supply it.
Yes, it sucks to be the cloth weavers. It also sucked to be a horse livery owner when cars replaced horses. I’m certainly glad we didn’t prevent cars just so some horseshoers could keep their jobs.
It is not an immoral conclusion to see that cheaper goods > saved jobs. AFAIK we have no qualitative evidence to the contrary. We just have very sad anecdotes for the people whose careers disappeared but few to account all the things the savings enabled. I’m certainly sad when people lose their livelihood, and solutions for that would be great, but preventing cheaper goods is arguably not the solution.
Sep 28 2022 at 11:54pm
Agreed on the AI perspective, but I take issue with the last half of your comment (Tech advancement/free trade displacing employment opportunities) – At no point does Russ claim that this tradeoff is ‘immoral’, but simply looking at the quantitative difference between Consumer Gains vs. Employment Losses doesn’t fully capture the actual gains or losses of either. He is saying that our inability to accurately measure the full effects (of both gains and losses) makes any sort of comparison based on those meager numbers suspect, and ultimately inadequate as a basis of whether to engage or refrain from any particular behavior. A few minutes later he explains that he is in favor of free trade and in fact wrote a book on the subject. Also I think qualitative evidence is, by most people’s definition, literally ‘sad anecdotes’.
Sep 29 2022 at 1:25am
The word that kept coming to my mind listening to this episode was: BALANCE. Most good things or ideas taken too far can become bad. Thank you for another thought provoking episode.
Oct 3 2022 at 7:05am
Very good point, often the extremes of problems are where interesting things happen so they get more study and attention but reality and life is not often lived or experienced at the extremes.
Sep 29 2022 at 2:26pm
If I were a Utilitarian (I am most assuredly not), the below would, I imagine, be their response to the criticisms presented in this episode.
Trolley Problem: Someone is going to die, should we intervene to save 5 at the cost of 1. A simple utilitarian calculus demands that yes, we should save them.
Rogue Surgeon (Kidnap/Sacrifice one patient for organs to save 5): I actually think some utilitarians would be ok with this, but the ones who aren’t would probably say this is different from the trolley because the harm of kidnapping and depriving someone of life outweighs the benefit of the 5 organ recipients continuing to live. Think of it as a harm-multiplier effect, where in the trolley you are simply passively switching the tracks and the train (ie a natural cause) is the source of the harm; Here, the Surgeon is acting directly against another individual, so the proximal causation is man-made, and therefore should be measured differently than the harm of a natural cause.
Shallow Pond (should you ruin your suit to save a kid drowning in a pond): Once again, utilitarianism requires it because the cost is so small and the benefit so large.
Repugnant Conclusion (we should maximize quantity of people in the world, regardless of quality of life): I think most utilitarians view measuring utility as more complex than simply adding up the aggregate good/harm. Surely a life well lived with the freedom to pursue any career/passion is worth more than a life of subsistence. Either way, this objection can be easily overcome by changing the method of measurement.
Animal suffering (they don’t deserve to live…): I think MacAskill has to be an outlier here, most utilitarians likely think animals can experience enough pleasure (eating food, procreating, etc..) to make their lives worthwhile.
Stub Toe vs. Holocaust Mounds of Dirt: Once again, if we tweak our methods of measurement and put some kind of upper limit on certain types of harm, these events are easily distinguishable. For example, minor inconveniences (ie stubbing one’s toe) can only reach a maximum of 100 utils in the aggregate, no matter how many people experience them. Compared to killing or torturing others which might get some kind of harm multiplier as I discussed above.
All the critiques offered in this episode seem to me to focus on the problems of measuring pain and pleasure. They don’t actually refute utilitarianism as a moral philosophy assuming we can adjust our measurement techniques effectively. For me, the best way to actually refute utilitarianism, is to think of examples where the aggregate good absolutely outweighs the harm offered by a decision, yet my personal moral philosophy demands I choose the less beneficent route – IE consider one’s spouse is the one on the other track in the trolley problem. I would absolutely let the 5 die rather than sacrifice her, and I think it is moral to do so.
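As a purely hypothetical illustration of the “tweaked measurement” idea above (a category cap on minor harms plus a multiplier on direct, grave harms), here is a minimal Python sketch. The cap and multiplier values are invented for the example and carry no claim about what a real utilitarian scheme would use:

```python
# Hypothetical "capped utils" bookkeeping: minor harms saturate at a
# category cap, while grave man-made harms carry a multiplier, so no
# number of stubbed toes can ever outweigh an atrocity.

MINOR_CAP = 100        # assumed aggregate cap for minor inconveniences
GRAVE_MULTIPLIER = 10  # assumed multiplier for direct grave harms

def total_harm(events):
    """events: iterable of (kind, utils) pairs, kind in {'minor', 'grave'}."""
    minor = sum(u for kind, u in events if kind == "minor")
    grave = sum(u for kind, u in events if kind == "grave")
    return min(minor, MINOR_CAP) + grave * GRAVE_MULTIPLIER

# A million stubbed toes (1 util each) vs. one murder (1000 utils):
toes = [("minor", 1)] * 1_000_000
murder = [("grave", 1000)]
# total_harm(toes) == 100, total_harm(murder) == 10000, so the two
# events stay distinguishable no matter how many toes are stubbed.
```

Under this scheme the stubbed-toe pile can never grow past the cap, which is exactly the move the comment proposes for keeping the Holocaust and the toe-stubs on incomparable scales.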
Oct 1 2022 at 1:01am
Honestly, if it was my Dog on the other track, I would let those 5 people get run over…. Sorry not sorry
J C Lester
Oct 4 2022 at 10:42am
Peter Singer’s “Famine, Affluence, and Morality”: Three Libertarian Refutations
J. C. Lester
Studia Humana 9 (2):135-141 (2020)
Peter Singer’s famous and influential article is criticised in three main ways that can be considered libertarian, although many non-libertarians could also accept them: 1) the relevant moral principle is more plausibly about upholding an implicit contract rather than globalising a moral intuition that had local evolutionary origins; 2) its principle of the immorality of not stopping bad things is paradoxical, as it overlooks the converse aspect that would be the positive morality of not starting bad things and also thereby conceptually eliminates innocence; and 3) free markets—especially international free trade—have been cogently explained to be the real solution to the global “major evils” of “poverty” and “pollution”, while “overpopulation” does not exist in free-market frameworks; hence charity is a relatively minor alleviant to the problem of insufficiently free markets. There are also various subsidiary arguments throughout.
Oct 7 2022 at 7:05pm
Silicon Valley tech types discovering utilitarianism (but somehow not the immediate refutations and modifications by its own disciples) and deciding the most ethical thing to do is give their money to an ultra-Silicon-Valley obsession (AI safety) is beyond parody.
Against Hoel, I don’t see why increasing the population is a necessary outcome of their beliefs.
The real question is “what DO we owe other people?”
Oct 7 2022 at 8:32pm
For a comedic take on both the trolley problem and rogue surgeon, check out The Good Place season 2 episode 6, aptly named “The Trolley Problem.”
I recommend the entire series. 4 seasons, episodes are short (about 20 min) except the touching finale.
Oct 11 2022 at 11:01pm
Thank you for another insightful episode. I like the way that Econtalk has set this up in a conversation-rebuttal format, almost a diachronic debate. It’s probably worth checking in with Will MacAskill once again as Russ suggested, if such a repeat could be agreed upon. Another critic of EA and Longtermism who would be an interesting guest, I think, is Phil Torres, who wrote the “Against Longtermism” article (see https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo).
It might be interesting to do a podcast one day with a leading Transhumanist scholar – for example, Steve Fuller at the University of Warwick, a sociologist of knowledge, who authored Humanity 2.0 in 2011 and Post-Truth: Knowledge as a Power Game in 2018, assuming this is work that Russ would find interesting to read. Thinkers on transhumanism seem to be attracted by utilitarian worldviews; it would be interesting to see whether there are alternative ways of thinking of the far distant future, say, beyond the lifespan of a typical terrestrial species.
In general I like Russ’ interviews with people who think about the future, including topics with economists such as the future of work, but these talks on moral philosophy and the future are fascinating. Russ explores these ideas in a way that makes the ostensibly esoteric topics very real and relevant.
Oct 17 2022 at 8:18pm
Do you know that 98% of engineers who are not getting paid to say AI is right around the corner will tell you why it is not plausible within the next 20,000 years? Russ really needs to get out of this echo chamber.