Do All Creatures, Great and Small, and Made From Silicon, Have Rights? (with Jeff Sebo)
Mar 31 2025

Should monkeys have the same rights as humans? What about elephants, ants, or invertebrates? NYU philosopher Jeff Sebo makes the case for expanding your moral circle to many more beings than you might expect, including those based on silicon chips. Listen as Sebo and EconTalk's Russ Roberts discuss to whom and what we owe moral consideration, how we determine a being's intrinsic moral significance, and why we have ethical obligations to others, anyway. They also discuss human exceptionalism--the idea that humans should be prioritized over other beings.

RELATED EPISODE
Paul Bloom on Empathy
Psychologist Paul Bloom of Yale University talks about his book Against Empathy with EconTalk host Russ Roberts. Bloom argues that empathy--the ability to feel the emotions of others--is a bad guide to charitable giving and public policy. Bloom argues that...
RELATED EPISODE
Patrick House on Consciousness
How does the mind work? What makes us sad? What makes us laugh? Despite advances in neuroscience, the answers to these questions remain elusive. Neuroscientist Patrick House talks about these mysteries and about his book Nineteen Ways of Looking at Consciousness...
Explore audio transcript, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.

READER COMMENTS

Shalom Freedman
Mar 31 2025 at 8:42am

There is a point in this very interesting conversation where Russ Roberts points out that not all human beings are interested in enhancing the welfare of themselves and others; some are in fact sadistically cruel and take great pleasure in harming others. This in turn leads to a different question that I believe was not considered in the discussion: given the sorry state of humanity as a whole, in terms of the injustice and cruelty in the world, shouldn't our moral focus be on working to improve that situation? This question of course implies that there is a definite moral hierarchy, and that human beings deserve far more attention and care than other species, biological or artificial. This does not mean efforts should not be made to limit gratuitous cruelty to other forms of being, nor that piecemeal efforts which improve the welfare of both humans and other beings should not be made. It does, however, imply that the feelings of ants should not be our first priority.

Tami Demayo
Mar 31 2025 at 7:02pm

Why does moral responsibility toward an individual depend on that individual’s moral standing? I refrain from harming benign insects, not because I believe that they might be capable of feeling harmed (I have no idea!), but because I do not inflict harm gratuitously, in principle.

Dr G
Apr 1 2025 at 10:32am

Tami –
In your version, the insect has no moral standing. If a fly is buzzing around and annoying me, I could choose to kill it—but I might also say, “I don’t like to cause harm, even to insects, so I’ll just tolerate the buzzing.” If that decision is purely about my preference, then the fly has no moral standing—an intuition many people probably share.
On the other hand, someone might say, “The fly’s life matters in its own right, so I ought not harm it, even if it’s annoying.” This view attributes moral standing to the fly itself.
If you accept that distinction, the next obvious question is: Who gets moral standing, and to what extent? I think Jeff is trying to sidestep that question. Instead of saying definitively whether the fly has moral standing, he suggests that since there’s some chance it has some measure of moral standing, we bear some degree of moral responsibility.
How much responsibility? How do I weigh the fly’s possible moral standing against my personal discomfort? That’s hard to say, so I think Jeff’s trying to focus on practical cases where we clearly treat animals as if they have zero moral standing (e.g., factory farming). Even if we don’t know the precise moral calculus, we almost certainly know the direction we should move in cases like this: toward less animal cruelty.
That feels like a reasonable answer from a public policy perspective, but it gets a little unsettling from a personal perspective. There’s some chance I’m doing the right thing… and some chance I’m morally monstrous. And I don’t know if it’s 50/50 or one in a quadrillion. This feels more than a little unsatisfying, but in fairness to Jeff, philosophers have been giving unsatisfying answers to these questions for a long time.

Ben Service
Apr 1 2025 at 3:47pm

I’ve realised I really love these more philosophical podcasts from Russ as they really make me think hard about things I don’t normally consider.

Take the sadist argument: what if you were the opposite of a sadist and just wanted to give others pleasure? Would you feel good submitting to the sadist and giving them pleasure, and is that then a win-win for both people (I guess this is the BDSM culture)? However, the fact that a sadist gets pleasure from doing harm doesn’t match your view of the world, so then things get complicated. I was thinking about how evolution might allow the “give pleasure to others” people to outcompete the sadists: giving pleasure to others means you give pleasure to everyone, including the sadists, whereas the sadist only personally gets pleasure. So the maths seems to work out that sadists should shrink compared to the rest of the population, but I guess there is some sustainable low percentage of the population that can stay sadists.

In terms of protein food based on living beings, I heard a good argument from David Duchovny on Peter Singer’s Lives Well Lived podcast that maybe factory farming insects is OK, as that is how they prefer to live their lives.

Guest request: Doomberg. He has a Substack about energy and holds different views from mine about energy (I suspect he is more correct than I am about a lot of things), but I’d love to hear a good interview with him, as he is pretty smart but tends to go on very sympathetic podcasts. I think Russ would be sympathetic to a lot of his ideas but might be able to tease them out better.

On tail risk and climate change: what is the tail risk of heating the planet up so much that the band, say, 20 degrees north and south of the equator is consistently >40C (104F), which is basically unlivable outdoors for most mammalian land-based creatures who can’t evolve more quickly than the climate is changing? I know it seems a bit climate-doomy, and maybe we’ll all be on spaceships to somewhere else by then, but it does seem like a bad outcome. It won’t happen in my lifetime, or my kids’, or my grandkids’, but the fact that it could does concern me. Maybe I just read too much dystopian fiction.

Earl Rodd
Apr 1 2025 at 5:17pm

While not central to the edifying discussion of the main theme of the podcast, I want to comment on what I think is a common mistake made in thinking about advanced forms of AI, whether called “general intelligence” or “self-aware” or whatever. I always go back to how current AIs work. Whether it is LLMs or a recently announced use of AI in weather prediction (which uses the same technology but is trained on weather data, not language text), what AI technology does is predict what comes next, based on training on a large body of information containing things in sequence (whether words/sentences or temperature/pressure, etc.). I think a discussion about whether an AI can be “self-aware” needs to either think about how “prediction” can lead to being “self-aware” or admit (as the guest mentioned) that some different technology is needed. Personally, I am bullish that AI will find amazing uses we haven’t thought of, but rather bearish that it will gain biological life’s features, and particularly human features. Sadly, history tells us too many of these amazing new uses will not be good.

Dr G
Apr 2 2025 at 11:54am

Earl –

I think the issue with your position is that it sets a pretty high bar for attributing consciousness to AI—one that, if applied consistently, humans would not even meet (I believe Jeff touched on this briefly). The truth is, we don’t really understand how consciousness evolved, or the degree to which non-human animals experience it. We don’t understand how consciousness arises from brain chemistry or biology, and we don’t even have a clear definition of what consciousness is.

So if we require a systematic, mechanistic explanation of how consciousness arises before we’re willing to attribute it to something, we’re facing a much bigger problem.

wade Baker
Apr 2 2025 at 2:43pm

If there is a 1/100,000 chance that there is a species on another planet, and that planet is in trouble, so that species is planning on coming to Earth as a lifeboat to save itself,

do we have a moral obligation to manage the Earth for the welfare of those beings?

Ben Service
Apr 3 2025 at 1:12pm

These are almost impossible questions to answer, and I think it comes down to trade-offs. Are we morally obligated to not let a species on earth die out? Probably not; this happens naturally all the time. Are we morally obligated to not let all life on earth, however you define that, die out? Probably yes, but then if you knew there was life thriving somewhere else in the universe, maybe it is not as strong an argument. Maybe earth is not the right environment to save that species anyway; maybe Venus would be more hospitable to them (the book Project Hail Mary has an example of this). I’m also halfway through the Three-Body Problem books, which investigate some of these aspects too.

Mark Maguire
Apr 10 2025 at 7:47am

I’m reading “The Future of Life” by Edward O. Wilson. I sense a lot of parallels in the thinking of Wilson and your guest, Jeff Sebo.

The final chapter of Wilson’s book is titled “The Solution”. Together with Mr. Sebo’s ideas, I feel much more grounded and informed about this topic. It’s worth reading.

From “The Solution” chapter, this opinion struck me as true:

“The juggernaut of technology-based capitalism will not be stopped. Its momentum is reinforced by the billions of poor people in developing countries anxious to participate in order to share the material wealth of the industrialized nations. But its direction can be changed by mandate of a generally shared long-term environmental ethic. The choice is clear: the juggernaut will very soon either chew up what remains of the living world, or it will be redirected to save it.”

Matt
Apr 17 2025 at 10:02am

Great and important episode. Two points, IMHO:

It seems to me that the first robots we think are conscious won’t actually be conscious.

The single thing we can each do to reduce our suffering “footprint” is to not support the factory farming of chickens. Numbers here.

Take care everyone.

J_S
Apr 22 2025 at 9:56am

“Now, for present purposes, I can say two brief things. One is I personally side a little bit more with the anti-realists. I think that value is a social construct and not an objective fact of the matter in the world.”

Frankly, the conversation should have just ended here. Once you admit that moral values and duties do not objectively exist and cannot be rationally uncovered or revealed, everything after that is just Jeff doing the equivalent of trying to convince people who like rap music that, if they really thought hard about it and had lots of tedious discussions with him, they would realize they like Baroque chamber music.

The entire conversation after that point is absurd and self-defeating. If I believe that it is okay to kill chickens in factory farms and have no serious moral qualms about it, I’m willing to listen to an argument that I am wrong. But why should I or anyone spend our time listening to some fool who says there is no actual moral truth of the matter and then insists I ought to spend an absurd amount of my time scrupulously considering the wellbeing of the pigeons simply because that is what his moral intuitions tell him to do?


“…the reason I push in this direction anyway is because we have to compare this decision procedure to the alternatives. And right now, the alternatives involve either totally neglecting this issue all together, or going with our intuitions alone and our intuitions are, of course, very biased, very ignorant.”

Intuitions can only be “biased” or “ignorant” if there is some actual fact of the matter for them to be biased or wrong about. I am not “biased” for preferring fruit-flavored ice cream, nor am I “ignorant” for not seeing the obviously greater value of chocolate flavors. I simply have a different preference. Jeff seems to want to impose a very detailed set of ethical values on everyone else without considering that he leaves himself with no logical ground on which to do so. He is left shouting into the wind, insisting, “But you ought to like chocolate!”

Well, I don’t. What are you going to do about it?


Comments are closed.



AUDIO TRANSCRIPT
Time / Podcast Episode Highlights
0:37

Intro. [Recording date: March 12, 2025.]

Russ Roberts: Today is March 12th, 2025 and my guest is author and philosopher Jeff Sebo of New York University [NYU]. Our topic for today's conversation is his new book, The Moral Circle: Who Matters, What Matters, and Why.

Jeff, welcome to EconTalk.

Jeff Sebo: Yeah, thanks so much for having me.

00:59

Russ Roberts: What is the moral circle?

Jeff Sebo: The moral circle is a metaphor for our conception of the moral community. So, when we make decisions, when we select actions or policies, to whom are we accountable? To whom do we have responsibilities? To whom do we extend consideration? Normally, we might extend consideration only to some humans, though many of us now recognize we really owe consideration at least to all humans and to many animals, like the mammals and birds affected by our actions and policies. So, this book is about: Should we go farther than that? And if so, how far should we go?

Russ Roberts: You start with a provocative, entertaining scenario that you come back to now and then, and maybe we will as well in our conversation. You've got some roommates. One is Carmen, the other is Dara. Or maybe you pronounce it differently; I don't know, it's in print. Tell us about Carmen and Dara, and before we know more about them, your initial introduction to the group. Then we'll talk about how that gets more complicated once we learn about their nature.

Jeff Sebo: Yes, absolutely. I should say, by the way, that this thought experiment is inspired by a similar one from the philosopher Dale Jamieson, but I take it a little bit farther.

So, imagine that you live with a couple of roommates, Carmen and Dara. You get along really well. Obviously, you have agreements and disagreements, and you have to sort through some tensions because you live together and have different preferences. But, on the whole, you have good relationships.

One day, the three of you for fun decide to take ancestry tests to learn a little bit more about where you come from. To your surprise--to your collective surprise--your roommate Carmen turns out not to be a member of your species at all: she turns out to be a Neanderthal. You thought Neanderthals were extinct, but it turns out a small population has still survived and exists to this day, and Carmen is one of their members. And, your roommate Dara, it turns out, is not even a being of your kind. Dara is a Westworld-style robot. You thought that, at best, these kinds of beings would exist only in the farther future, but it turns out that a small population already exists in data mode, and Dara is a member of their population.

The question that I ask in this thought experiment is: how does this revelation affect your attitudes towards, but more importantly, your moral relationship with your roommates? Do you still feel that you have a responsibility to consider their interests and strive to find a fair and equitable way to live together in your household, in spite of the fact that Carmen is a member of a different species and Dara is a being of a different substrate? Or, do you now feel that you have a right, as long as you can get away with it, to treat them however you like, and impose your own beliefs, and values, and decisions on them, even if it seems to be against their will?

Russ Roberts: I like the modest, undemanding example of playing music late at night or early in the morning, if we have different wake-up and work times. We could also imagine them having different kinds of ancestry than the ones you chose. One of them could have a parent who was a guard at Auschwitz; one of them could be the offspring of a founder of the Ku Klux Klan. We could ask whether that should change things. We could discover things about Carmen and Dara in their own past, not just their parents' past, that disgust us or that we think are morally reprehensible.

I think it's a very interesting idea to think about how we treat people in general. Often, I think in our conversation, we might go back and forth between how we think we ought to treat them versus what does morality demand of us. And, they may not be the same, for a variety of reasons.

5:23

Russ Roberts: But, let's start with the Carmen and Dara that you talked about. Summarize what you think is the range of responses people could have in that situation, and how you think we ought to respond.

Jeff Sebo: Yeah. There are a lot of options, even within modern ethical theory. And then, of course, in society people are going to have an even wider range of responses. Most people, at least these days in philosophy, would accept that you do still have moral responsibilities to your roommate Carmen. Carmen is the Neanderthal. Yes, Carmen is a member of a different species, but apparently this species has co-evolved with humanity in such a way that we now have broadly the same capacities, and interests, and needs, and vulnerabilities. And so, Carmen, you can presume, is still conscious: it feels like something to be her. She is sentient: she can feel pleasure and pain, and happiness and suffering. She is agentic: she can set and pursue her own goals based on her own beliefs and desires. And she still has all the same projects and relationships that she had yesterday, before you had this revelation.

The mere fact, in and of itself, that Carmen is a member of a separate reproductively isolated--but very close--species, is not enough to strip away any intrinsic moral significance she has and her interests have. And I think pretty much everybody would agree about that. There might be subtle differences now, in terms of what she wants and needs and how you relate to her, but fundamentally you do still have responsibilities to her.

Now, Dara is a whole separate question. Dara appears to be conscious, and sentient, and agentic, and to have projects and relationships. But, Dara is a product of science, not of evolution, and Dara is made out of silicon-based chips, not carbon-based cells. So in this case, you might have real uncertainty. Philosophers and other experts have real uncertainty about whether, for a sufficiently advanced, sophisticated, silicon-based being like a Westworld-style robot--like your roommate Dara--it really can feel like anything to be that being. Whether they really can experience pleasure and pain, and set and pursue their own goals in a morally significant way.

And so, while we might have broad consensus that you still have responsibilities to Carmen, with Dara, we might have a lot of disagreement and uncertainty. And then, you are going to have to make decisions about how to treat her, despite that disagreement and uncertainty.

8:00

Russ Roberts: So, before we go further on this, talk about the Welfare Principle that you write about and how that might inform how we deal with this new information.

Jeff Sebo: The Welfare Principle is a plausible and widely-accepted idea in philosophy that holds: if you have a capacity for welfare, then you also have moral standing. So, this is what that means. The capacity for welfare is understood as the capacity to be benefited and harmed, to be made better off or worse off for your own sake. My car could be damaged, but my car is not really capable of being harmed--made worse off for its own sake--so my car lacks the capacity for welfare.

And, moral standing means that you have a certain kind of intrinsic moral significance: that you matter for your own sake, and that I have moral responsibilities to you. I owe them to you.

The Welfare Principle basically holds that welfare is sufficient for moral standing. If you have the capacity for welfare, that is enough for you to matter for your own sake and for me to have responsibilities to you when making decisions that affect you.

Russ Roberts: Just two comments, the first whimsical. We will link to this clip--one of my favorite moments in Fawlty Towers is when Basil Fawlty, in a hurry to get somewhere, his car breaks down and doesn't restart at a red light or somewhere. And he gets enraged, and he gets out of the car. He goes and picks up a large branch by the side of the road and he starts hitting the car with it, saying, 'How many times have I told you?' It's funny. It's very funny. But, it unintentionally illustrates this principle. He could damage the car. He could dent it, he could hurt its paint, he could incapacitate it permanently with a set of actions, but he can't harm the car in its own sense of self.

Just to be clear, because we're going to turn to consciousness inevitably in this conversation: Going back to Dara, if I notice that Dara's batteries are running low and I boost her up, or vice versa--I unplug her or block her access to electricity, similarly to keeping Carmen from eating stuff out of the fridge, taking away her keys so she can't go buy groceries--we would be comfortable saying that it's cruel, it's harmful to Carmen. Dara would be, I think, more complicated.

So, you want to add anything, in terms of the Welfare Principle for Dara, in terms of suffering, or wellbeing, or happiness? Because in one of the formulations, I thought it might include this, but I'm not sure.

Jeff Sebo: Yeah. What I can add--and by the way, I love that example. The philosopher Derek Parfit has a similar example. He used to talk about how he would always feel the strong urge to hit and punish his computer when the computer stopped working. Then he would have to try to psychologically overcome that.

In any case, part of what is interesting and complicated about the Dara case is that it reveals disagreement and uncertainty, both about ethics and about science. Both about the values and about the facts.

On the ethics side, we could have disagreement and uncertainty about: what is the basis for welfare and moral standing in the first place? Do you need to be sentient, capable of consciously experiencing pleasure and pain? Or is it enough to be conscious without being sentient? To be able to have subjective experiences, even if they lack a positive or negative valence. Or, is it enough to be agentic without being conscious--to be able to set and pursue goals, even if it feels like nothing to be you? Philosophers disagree about that. And based on your answer to that question, that sets a different standard that Dara would need to meet.

And then on the science side, we might also have disagreement and uncertainty about what it takes to meet those standards. Is a sufficiently sophisticated silicon-based being capable of having feelings of their own?

Both of those are contested sets of issues. That is part of what would probably make you feel really confused if you learned that your roommate Dara is a silicon-based robot, after all.

12:26

Russ Roberts: You use the phrase--I forget exactly how you used it--the experience of what it's like to be you, something like that. That's a reference, I assume, to Thomas Nagel. You want to take a minute and step back, and give listeners and viewers a little bit of that background as an example of one way of thinking of consciousness and sentience?

Jeff Sebo: Yeah, thank you. So, Thomas Nagel wrote a famous paper called "What Is It Like to Be a Bat?" This was now decades ago. Basically, this paper was helping people to understand what we now call phenomenal consciousness. And, this is helpful because the word 'consciousness' can be used in many ways. Sometimes we can use it to mean being awake instead of being asleep. Or, being self-conscious, self-aware, instead of not having that kind of meta-cognition.

But, in this paper, Tom Nagel was focusing on a particular phenomenon, which he used 'what is it like to be you' to identify. The basic idea here is that our brains do a lot of processing. Some of it corresponds to subjective experiences and some of it might not. Right? So, when our brains have perceptual experiences or affective experiences--when I see the color red, when I hear the sound of a trumpet, when I feel pleasure and pain--those are all subjective experiences that feel like something to me. But then, when my brain helps my body regulate heartbeat or digestion, that might not feel like anything at all.

The question here is, first of all: What is it like to be a radically different kind of being? What are their subjective experiences, those kinds of conscious experiences like? And then second of all: What kinds of beings can have those experiences in the first place? How far does it extend in the tree of life, and then beyond the tree of life?

So, yeah: when we ask about consciousness in this context, we are focusing on phenomenal consciousness: What is it like to be a different kind of being?

Russ Roberts: This is sometimes called qualia: your perception of things. A lot of interesting papers, at least interesting to me, on this. Many of our listeners may not find it of interest. The Nagel paper, which we'll link to, is mostly accessible to a non-philosopher. If I remember correctly, there are some hard parts.

But I want to reference another work of philosophy, which I'm going to forget the name of, but you might remember it. It's by Harry Frankfurt. I looked it up a second ago. I think it might be "Necessity and Desire," but it might not be. In that paper, if I'm getting it right and we'll link to the right one, he talks about how we have desires about our desires. So, an animal might have a desire for shelter, reproduction, food, warmth, all kinds of things on a cold, rainy night. And, we have those things, too; so in that sense, we share a certain level of consciousness with animals.

But we also have desires about our desires. I might desire ice cream, but I might desire that I didn't like it as much as I do. And, this opens, I think--it sounds kind of trivial, but it's actually I think quite important--it opens a way that I think about this question of AI [artificial intelligence], of robots and Westworld characters. Do you imagine the possibility that Dara will have regrets? That Dara will wish her hair were a different color, or wish she had chosen, been assigned to someone other than me or you as her roommate. Or wishes she hadn't been cruel to Carmen unintentionally earlier that morning in an interaction over the volume level of the stereo.

For me, since it's going to be--well, you write a lot about the fact that it's hard to know what level of consciousness anything feels--what level of suffering and happiness anything feels--whether it's an ant up to a dog, say, for example. And, we already have the experience of Claude and other LLMs [large language models] that act in language the way humans do. And we presume humans are like us, and we feel suffering and happiness. So, we might assume that Claude does. But, if Claude does not have regret--if Claude doesn't have longing--like, I didn't use Claude, say, yesterday, does he sit there? He doesn't sit there. But, when I come back to him, he might say, 'Gee, I was so sorry you didn't talk to me yesterday.' But, does that have any meaning if he's a machine?

For me, at the current level, it certainly has no meaning--to me. You might disagree, and we might disagree on the probability that Claude will become something different. What are your thoughts on these issues of regret, desire, longing, sadness, and so on, other than their verbal manifestations, and whether that tells us anything about LLMs and other types of silicon-based things?

Jeff Sebo: Yeah. A lot there. One is about this concept of second-order desire--desires about other desires. Another is about these complex emotional states, like regret. Then a third is about the present and future state of large language models and other AI systems, and how these ideas all fit together.

So, briefly on each of these points, and then you can tell me which one you want to pursue, if any of them.

With respect to second-order desire, and then these more complex states like regret, there is no reason in principle why those should be unavailable not only for non-human animals in certain forms, but also and especially for AI systems. So my dog, for example, might not have desires about desires in the same kind of linguistic way that I do, and he also might not experience regret in the same kind of way that I do. But, he can have his own kind of meta-cognition and that can still carry some ethical weight.

So, for example, he can attend to his own perceptual experiences, as well as the perceptual experiences of others; and that kind of attentiveness can allow him to tune in to some sorts of mental states, and have different kinds of experiences, and make different kinds of decisions. And then that can affect his interests, and his goals, and what I owe him in order to make sure that I treat him well and promote his welfare.

So, that version of meta-cognition and its ethical significance can be available even to my dog. The same can be said about more complex emotional states. Perhaps not regret, because that really is tied into our language and reason. But, emotional states that are adjacent to regret.

Why does this matter for ethics? Well, there are two ways it might matter for ethics. One concerns our moral agency and the other concerns our moral patient-hood.

So, moral agency is when you have duties and responsibilities to others, and moral patient-hood is when others have duties and responsibilities to you. So, I do think that having sophisticated forms of higher order states, like belief and desire, and emotions like regret, is necessary for moral agency--for having duties and responsibilities to others. This is part of why my dog does not really have duties and responsibilities to me in the same kind of way that I do to him.

But, those complex types of higher order states and emotions are not, in my view, requirements for moral patient-hood. You can still have a life that matters to you, you can still be capable of being benefited and harmed, even if you lack the cognitive sophistication that ordinary adult humans have.

So, those are a few general remarks about I think the ethical significance of those states.

Russ Roberts: I totally agree with you on animals. We might disagree on where the--there might be a line I'd draw--I don't think you would draw; we'll talk about it perhaps--for animals, for non-human carbon life forms. I get in my Twitter feed videos about people tricking their dogs. Putting their hand over something and the dog makes a choice, and the dog is misled by the person. And the dog is troubled. You don't know literally the dog is troubled because the dog can't literally communicate, but the facial expressions, the behavior, the posture of the dog suggests disappointment, sometimes resentment. Of course, it could just be a passing state that looks like that, therefore the video gets popular on Twitter, but I'm open to that reality.

21:37

Russ Roberts: I think it's much harder with Dara, so I want to push you there. Then we'll talk about probabilities. But, start with the strong case for why I could imagine having to care about Dara's welfare.

Jeff Sebo: Great. Yeah. I think that is the tough question.

As a starting point: there is no reason in principle why AI systems in the near future will be incapable of many of the types of cognitive states that humans and other animals can have. So, we already are creating AI systems, not only with physical bodies in some cases, but also with capacities for perception, attention, learning, memory, self-awareness, social awareness, language, reason, flexible decision-making, a kind of global workspace that coordinates activity across these modules. So, in terms of their functional behavioral capacities, as well as the underlying cognitive mechanisms that lead to those functional and behavioral capacities, we can expect that we will, within the next two, four, six, eight years, have AI systems with advanced and integrated versions of all of those capacities.

And that can extend to cognitive capacities that play the functional role of desires about desires, of emotions like regret.

So, I think the only question is: Will these types of cognitive capacities in AI systems come along with subjective experiences? Will it feel like something for AI systems to have desires about their own desires, or to have the functional equivalent of regret? And: Does it need to feel like something in order for AI systems with those cognitive capacities to have intrinsic moral significance and deserve respect and compassion?

So, what I think about that right now is: We can expect that there will in fact be AI systems with advanced and integrated versions of these cognitive capacities, functionally and behaviorally speaking. And, we are not right now in a position to rule out a realistic possibility that it will feel like something to them. Right now, there is enough that is unknown about the nature of consciousness--about phenomenal consciousness--that it would be premature to have a very high degree of confidence that it will feel like something to be those AI systems, or that it will not feel like anything to be those AI systems. I think right now, we can presume that such systems will exist, and we should be fairly uncertain whether and at what point it will feel like anything to be them.

That is our predicament, when we need to make decisions right now, about whether and how to scale up this technology.

24:30

Russ Roberts: So the only thing I disagree with--the first part of your remarks about that--is the self-awareness. I don't know if we have any--I'm totally agnostic. Well, that's not true. I'm skeptical. I wouldn't say it's a zero chance, which is fun because we'll talk about the role probability plays in this. But, I'm skeptical that they'll develop self-awareness. I might be surprised and turn out to be wrong.

It's interesting, I think, to think about how I might come to revise that view. Right? So, if my only interface--you know, you put Claude into a physical body, a thing that looks like a human, and Claude could easily express regret. I talk to Claude in two places. I talk to him on my phone: He's not inside my phone. He's an app. And similarly, on my laptop on a webpage on a browser. But, if he was embodied in some dimension, in a physical thing called a robot, I'd be more likely to be fooled by Claude's claims of self-awareness. But, I don't know how I would ever assess whether those professions of self-awareness were real. So, I want to challenge you with that and see what you think.

But, again--and I also want to bring this back to this question of suffering and pleasure. So, it might be sentient. It might be conscious. I think the crucial question for our moral responsibilities is the one you identify, which is the Welfare Principle. Are you--is it enough that Claude has the kind of responses you talked about? Is that enough to invoke the Welfare Principle for you?

Jeff Sebo: Yeah. Those are great questions.

And by the way, I agree with you about Claude. I think that if we placed Claude in a physical body capable of navigating an environment, we might start to experience Claude as having not only self-awareness, but also morally significant interests of various kinds. And, that might be a false positive. We might be anthropomorphizing--

Russ Roberts: I have that already--

Jeff Sebo: Claude--

Russ Roberts: I have that already. It's embarrassing.

Jeff Sebo: Yeah.

Russ Roberts: I can taste it. I can't quite--

Jeff Sebo: We all have it. We all have it. Yeah. People had it two years ago. People had it four years ago with even much, much more basic large language models. So I agree with you that that could be a false positive. That could be over-attribution of these capacities.

It is worth noting however, that even near-future AI systems might not work in the same kinds of ways that current large language models do. Current large language models do generate realistic text, realistic language outputs based on text prediction and pattern matching. And so, when they say, 'I am self-conscious,' or, 'I am conscious,' or, 'I am morally significant,' then we should not treat that as strong evidence that they are and that they in fact do have self-knowledge.

But, it might be that, in the near future, AI systems not only produce realistic behaviors, but produce them via the same types of cognitive mechanisms that humans and other animals use to produce similar behaviors. So, representations that function like beliefs do, like desires do, like memories do, like anticipations do, fitting together in the same kind of way. And then when those AI systems profess having a certain kind of self-awareness, then we might need to take that a little bit more seriously.

Now it is worth also noting that self-awareness, as with animals, can come in different shapes and sizes, different kinds and degrees. It might not be helpful to ask: Do they have self-awareness, yes or no? It might be more helpful to ask: What kinds of meta-cognition do they have and lack, and what is the moral significance of those forms of meta-cognition?

But, one area where AI systems are going to outstrip animals is that they will, at least functionally, behaviorally, have human-like versions of all of these cognitive capacities, and then some. So then that goes back to your question: Is that enough for moral significance? My own personal answer is no. I really think phenomenal consciousness is a key ingredient for moral standing, intrinsic moral value. And so for me, a lot really does rest on that further question: Fine, they have language, they have reason, they have self-awareness. We can stipulate that for the sake of discussion. Does it correspond to subjective experience? Does it feel like anything to be them? Can they feel happiness and suffering? For me, intuitively, that is what everything rests on.

Russ Roberts: Yeah--

Jeff Sebo: Do I feel--sorry, go ahead.

29:20

Russ Roberts: No, no. My reaction to that is very common sense. I'm an untrained philosopher, which sometimes is an advantage. Most of the time, it will be a disadvantage, I concede. But, my first thought in this setting is: It's a machine.

Now, the fascinating part about that common-sense reaction is of course that maybe I'm a machine. I happen to be made out of flesh and blood, but I am at the mercy of algorithms, I'm at the mercy of my genes, I'm at the mercy of physical manifestations of my nervous system and endocrine system that maybe are analogous to what is going on inside of a Westworld-type robot. I don't think so, but maybe I'm wrong. Because when you said, 'Oh, it will have the same cognitive,'--I forget how you worded it--I'm thinking, 'No, it won't.'

It'll be vaguely analogous in that there's electric stuff in my brain as neurons fire, and there's electric stuff in Claude's responses in zero/one settings. And, I'm also kind of, maybe finishing sentences as I go along; I just don't realize it. I'm looking for the next word just like Claude does, etc., etc.

But they're not the same. I would argue that's an illusion. Do you want to agree or push back on that? Before we get--I want to come back to the Welfare Principle.

Jeff Sebo: Great. Yeah. I guess I would both agree and push back on that.

So, in terms of pushing back, I do think that there will be at least broadly analogous cognitive capacities in AI systems in the near future at the level of cognitive representations that play the same functional role as beliefs, and desires, and memories, and anticipations, and so on and so forth.

Now, as you say, that might not mean that there is an exact one-to-one correspondence between how it works in our brains and how it works in these silicon-based systems.

For example, Peter Godfrey-Smith and other really smart philosophers and scientists point out that our brains play all these roles by producing these very specific kinds of chemical and electrical signals and oscillations that at present are possible in carbon-based brains, but not in silicon-based chips. Right?

So that then leads to this further question: How fine-grained do these similarities and capacities need to be in order to realize the relevant kinds of welfare states and the relevant kinds of moral significance? Does it need to work exactly like it does in human, or mammalian, or avian brains in order to generate the relevant kinds of interests and significance? Or, is it enough for different kinds of brains to play broadly the same functional roles in different kinds of ways?

I think this is a real open question that is very difficult to answer. But I will caution us about racing to one extreme or the other extreme. On the one hand, it would be a mistake to be too coarse-grained. If we specify these in too broad a way, then any animal, any plant, any fungus, microscopic organisms can trivially satisfy these requirements. And that might be too broad. But, if we specify it in too fine-grained a way, then we might be ruling out even the possibility of consciousness or moral significance in reptiles, amphibians, fishes, octopuses; and that would be a mistake. We should be open to the possibility that different kinds of cognitive systems can realize broadly similar forms of value in different kinds of ways and not rule that out by fiat.

33:06

Russ Roberts: So, let's turn to the basis for the Welfare Principle--which you don't provide. Nobody does. It's not a personal criticism.

It seems self-evident that it's wrong to harm things and it's good to help things. But, I want to ask why. In particular--this is not a gotcha show and it's not much of a gotcha, and I'm sure you've thought about these things. I would suggest the possibility that our belief in the Welfare Principle--the ethical demands to be kind and not be cruel--comes from a religious perspective. A religious perspective that many philosophers, of course, disagree with or are uncomfortable with, either intellectually or personally.

I just want to raise the possibility--I'm curious how you'd react to it--that it's a leftover. It's a leftover from a long tradition of a few thousand years--3000 years in Western thought. There's parallels of course in Eastern thought, maybe we'll talk about those as well. It crossed my mind while I was reading your book that there are many elements of Eastern religion. There's elements of both in your ethical principles--meaning not yours, Jeff Sebo's, but the discipline's--philosophy's--ethical principles. And your book is a very nice survey of the different ways philosophers look at these questions.

But: Why should I care? If I don't believe in God, and I don't accept the so-called Judeo-Christian--or Buddhist, or Islamic, pick your choice--principles about how we should treat, say, animals, or about our obligations--if I don't accept those, why should I care about how I treat other people?

Carmen--forget Carmen and Dara. [?How about?] you? I'm your roommate, but you get on my nerves, Jeff. You play the stereo late at night when I want to sleep. And I don't like the smell of the food you cook. Whatever it is.

Now, I may try to impose my will on you and fail, but I'm more interested in the questions that your book is about. Which is: Why do I have an ethical obligation other than to my own pain and pleasure? I think I do, just to be clear. I'm asking a thought question. But, why?

Jeff Sebo: Yeah, great question. And I think we might make good roommates, because I tend to go to sleep pretty early, so I think we would get along as far as that goes.

Now this is a question in meta-ethics. So, meta-ethics is: What is the status of ethics? So, when we have ethical disagreement, ethical uncertainty, what are we doing in those moments? Are we disagreeing about an objective truth, or are we shouting our preferences at each other, and one of us will win and the other will lose through sheer force of will?

Some philosophers, not surprisingly, disagree with this. I will note that even if you do have a religious perspective, that is not necessarily a solution to this problem. Two thousand-plus years ago, Plato pointed out that even if you think that what is good is what the gods say is good, you still have to ask, 'Okay, is it good because the gods say so? Or do the gods say so because it is good?' Either way, you have further questions that you need to ask, further challenges that you need to face. So, this is a problem that we all face.

Now, in current secular meta-ethics, there are broadly two camps. I can briefly describe each one, and then say what I think about this.

Russ Roberts: Great.

Jeff Sebo: One camp is the moral realist camp. They hold that there is an objective fact of the matter about what is good, bad, right, wrong. Torturing innocent children for fun is bad and wrong, even if we all get together and agree that this is good and right. It is objectively true, whether we like it or not.

Anti-realists, however, think: No, values are a social construct. There is no objective fact of the matter about what is good, bad, right, and wrong. Instead, when we ask ethical questions or when we have ethical disagreements, what we are doing is talking about what each of us most fundamentally believes and values, and how we can live a life that is authentic, and examined, and that reflects and aligns with what we most fundamentally believe and value.

Now, for present purposes, I can say two brief things. One is I personally side a little bit more with the anti-realists. I think that value is a social construct and not an objective fact of the matter in the world.

But second of all, everything that I talk about in the book and everything that we talk about in contemporary applied ethics, I think you can have those conversations in roughly the same ways, whether you side with the theists, or the moral realists, or the moral anti-realists. If, for example, you were a moral realist, then you could take all of these arguments, and objections, and replies in the spirit of: I am trying to get at the objective truth. And, if you were an anti-realist, then you could take all of these arguments, and objections, and replies in the spirit of: I am trying to work with you to help both of us figure out what we most deeply believe and value, and what kinds of practices would properly reflect our most deeply held beliefs and values.

And my prediction is that if we really think hard about this together, and get full information, and ideal coherence, then what we will discover is that our own values commit us to a certain kind of respect and compassion for other individuals with interests. So, this is not a norm imposed on us from the outside; this is a norm that we discover in ourselves through sufficient reflection.

Russ Roberts: I think it's more a Kantian argument that--or you'll tell me a better way to phrase it. I think most of us imagine that we'd like to live in a world where people held the Welfare Principle. We would prefer it not apply to us, perhaps. But, when we're thinking about our ethical obligations, you don't have to believe in God to believe the world would be a better place if people weren't cruel to each other. I think the challenge is why I should accept your moral injunctions. And I think that gets trickier.

Jeff Sebo: Yeah. I think there is no shortcut to answering that question. I think you have to have a long series of conversations about science and philosophy. But, I think the upshot of those conversations would be that, if you are built like me and like most other humans at least, then you do have some combination of self-interest and altruism inside of you. We would identify the parts of you that are a little bit more self-interested and the parts of you that are a little bit more altruistic, and we would think about how to build a value system and how to live a life that properly balances and reflects your self-interest and your altruism. I think that there would be room for consideration of other welfare subjects and for an aspiration to consider welfare risks for them, and reduce harms imposed on them within that.

But, we would have to discover that through, again, a long series of conversations, thought experiments, objections and replies. I think there is no simple, single argument that can get us directly to that destination.

40:40

Russ Roberts: The other thing that's hard about it for me--and I agree with you. I think we're a mixture of good--actually, I'm going to say it the way you said it: self-interested and altruistic. I think most people, in my experience, which is limited obviously--very limited to a certain time and place, a few places and a few times, but limited. As I get older, I marvel in horror at the willingness of human beings to do horrible things to people. It's not that I am self-interested and I prefer to keep the last piece of food for me rather than for you. It's that I enjoy you not getting it. That part of our nature is hard to understand.

And I don't know how important it is for these kinds of conversations. Maybe it's not important. The simple term for it is sadism. The fact that there is a sadistic side to human beings that gets pleasure from the suffering of others is deeply disturbing; and it complicates, I think, these conversations.

Jeff Sebo: Yes. I think this is directly relevant. Because, we might have a lot of responsibilities to fellow humans, and then to non-humans of various kinds, but we also have clear limitations on how much altruism we can achieve and sustain individually and collectively. At least, right now. We can barely get it together to take care of even a fraction of eight billion humans at any given time. And so, once we start extending consideration to quintillions of members of millions of species, then are we signing up for way more than we can realistically achieve and sustain?

And, this is where I think it really helps to consult the different ethical traditions. So, there are some ethical traditions, like utilitarianism and Kantianism, that are about acting according to ethical principles that push us towards more respect and compassion. But, then, there are other ethical traditions, like virtue theory and care theory, that focus more on how we can cultivate character traits and habits that can naturally guide us towards more altruistic behaviors than we are capable of right now. And, how can we build social, and legal, and political, and economic, and ecological systems, and structures, and institutions that can likewise incentivize and pull out better behaviors from us individually and collectively?

And, I think a question that we need to ask over time through trial and error is: how much progress can we make--understanding the extent of both our responsibilities and our limitations--towards cultivating the kinds of character traits and then building the kinds of shared systems, and structures, and institutions that can help us get a little more mileage out of our altruism. And then, what will that unlock, in terms of our ability to achieve and sustain higher levels of care for other beings?

We may never be able to get fully there, but maybe we can get a little bit farther than we have so far if we think about it in that more holistic way.

Russ Roberts: Yeah. I like the enterprise; I like the realism of it. I think it's laudable. I think it misses the unintended consequences of some of those things. Maybe we'll get to that, maybe we won't.

44:08

Russ Roberts: You have a chapter called "Against Human Exceptionalism." Of course, the Biblical view, the Judeo-Christian view--I don't know enough about the Quran, but I know a little bit about Buddhism. "Against Human Exceptionalism" is closer to a Buddhist worldview and farther from a Judeo-Christian view. In a Judeo-Christian view, human beings are created in God's image. That privileges them in a way. It does not allow them to taunt a dog in a video, especially if the dog could realize it's being taunted. That would be unacceptable, I think, in Jewish/Christian tradition. But, it creates a certain hierarchy which your book rejects, and I assume most philosophers reject. And then it's a question, again, of how far you go down--the question of whether your concern for animals makes it harder to be concerned about human beings. Or, it might go the opposite way: It may make you more likely to be kind to human beings.

In certain calculi--calculoses--it doesn't matter. In others, it would be: since humans are privileged and exceptional, it should matter. What do you think about that?

Jeff Sebo: Yeah. Well, I can first of all note that my arguments, and conclusions, and recommendations in the book are compatible with a kind of egalitarian view about the moral circle, or a hierarchical view about the moral circle.

For example, if you think that elephants can experience much more intense pleasures and pains than ants, then you might have reason to prioritize elephants over ants to that extent. And that would be compatible with equal consideration of equal interests, and a rejection of pure species difference as a good reason to prioritize some beings over others.

But now, with that said, I do think that we can improve our treatment of non-human animals, and even AI systems, in a way that is good for humans. There are a lot of co-beneficial solutions that we might find, as long as we at least consider every stakeholder, everyone who might matter in the conversation.

For example, we could pursue food system reforms that are better for humans, and the animals, and the environment at the same time. We could pursue infrastructure reforms that are better for humans, and wild animals, and the environment at the same time. We can pursue ways of developing AI systems and approaching AI safety that are more collaborative and less adversarial with AI systems who will soon be about as powerful as us.

Also, it is worth noting that a lot of forms of human prejudice, and discrimination, and oppression are rooted in comparisons with non-human animals who are presumed to be lesser than. And so, advocating for non-human animals can also undercut these kinds of dehumanizing narratives that are also used to oppress marginalized human populations of various kinds.

So, there are all kinds of co-beneficial ways to improve human lives and societies, while still improving our treatment of non-human animals.

And I would love to see how much mileage we can get by pursuing co-beneficial solutions that are good for us, and non-humans, and the environment, and/or making modest, easily sustainable sacrifices in order to get big gains for vulnerable non-human populations. I expect that we can get a lot of mileage out of just that before we have to contemplate making more significant sacrifices as a species for non-humans of various kinds.

47:47

Russ Roberts: Would you be willing--what do you think of the morality of raising birds or mammals for food if they were treated not the way the current food industry treats them, which is mostly not so nice? Let's say we imagined a free-range, ethical herd of cattle, and free-range chickens. Chickens are trickier, because I don't think chickens do so well in a free-range situation. My understanding is that they're nervous creatures and they get eaten by coyotes a lot in the free-range world, which is incredibly stressful. So, let's put that to the side.

There's a bunch of animals we're going to bring into the world. They're going to have a very pleasant life by their standards. They will not break LeBron James' career scoring record ever, so they won't have that kind of thrill, or become The Beatles. But, within the world of their sentience and consciousness, which we don't fully understand, it will be a comfortable life. We will actually reduce the harm that's caused to them. We won't reproduce exactly life in the wild because nature is red in tooth and claw. So, they have a pretty good life and then they're killed painlessly for food. Do you view that as an increase in wellbeing in the universe?

Jeff Sebo: Yeah, great question. I guess I have two types of answer. I can briefly mention each.

First, as far as my own personal view is concerned, I do imagine that a version of that practice could add to the overall positive wellbeing in the world. However, my own personal view is that that is not enough to make the practice ethically acceptable. Because, first of all, it might still introduce less positive wellbeing into the world than alternative ways of creating and treating animals could.

Second of all, I think that we have a responsibility when we knowingly, willingly, foreseeably create someone who is vulnerable and dependent on us to not only ensure that they have more happiness than suffering in their life, but also ensure that they have a reasonable opportunity to flourish as the kind of being that they are. If we are bringing vulnerable animals into existence solely to kill them at six months old in order to harvest them for flesh that we can sell or that we can eat, then that is not really consistent with giving them a reasonable opportunity to flourish in life.

Now with that said, the second answer is that I think that this is all a little bit of a distraction from where we should be focusing when it comes to conversations about food. Because the reality is that this type of food production constitutes, at best, five to 10% of global meat production. And, it would not be scalable as an alternative to factory farming and industrial animal agriculture.

Russ Roberts: Not with eight billion people.

Jeff Sebo: It would take more land--yeah, yeah. It would take more land than the planet contains to raise an army of farmed animals, and treat them that way, and give them that amount of space.

And so what I would love is if we could all--those who reject free-range animal agriculture, those who accept free-range animal agriculture--if we could all work together over the next 25 to 50 years to end factory farming and industrial animal agriculture, which harms animals, harms public health, harms the environment. And then we can fight along the way about whether to sustain some amount of free-range animal farming as, like, a marginal five-to-10% part of a future food system that would be primarily plant-based. I think that that would be the right approach to take, as opposed to fighting about that now in a way that prevents us from building the coalitions we need to really go after what is the main problem here for animals, and public health, and the environment.

51:53

Russ Roberts: Let's go back to something I mentioned earlier that I think is very provocative in the book, which is the role of uncertainty and probability. I don't know quite how to frame the question to get you started, but I bet you could do it for me. It's something like: when you're not sure--when there's uncertainty, which there almost always is in many of these cases, either about the consciousness, or the suffering, or the happiness--how should you build that into your way of looking at the world?

Jeff Sebo: Yeah. This is really important because in general, in ethics and policy, we understand that we have a responsibility to consider non-negligible risks when making high-stakes decisions. Right? We do this all the time. People will consider even a quite low chance that a medicine will have a fatal side effect before they take the medicine. We do this in public health when thinking about risks for new pandemics, and the environment when thinking about risks involving climate change, and so on and so forth.

But, until recently, we have not really done this when asking questions about the moral circle and to whom we owe moral consideration. The question has been: Are they conscious? Are they sentient? Are they agentic? Do they deserve moral consideration? And the attitude has been: I will extend moral consideration only when you have proven to me that they meet those standards.

And, part of what I argue in the book--and Jonathan Birch and others have been making similar arguments recently--is that we should take the same precautionary approach to thinking about the scope of the moral circle that we take to any other high-stakes policy domain that involves disagreement and uncertainty. So, the question should not be: Do they definitely matter for their own sake? And, should not even be: Do they probably matter for their own sake? It should instead be: Is there a realistic, non-negligible, non-trivial chance that they matter for their own sake based on the best information and arguments currently available? And if there is, I think we should give them at least a little bit of moral consideration when making decisions that affect them in the spirit of caution and humility, as we investigate the matter further and try to collect more evidence and improve our understanding of the issue.

54:08

Russ Roberts: And, that leads to--you call it the risk principle, but it's sort of an ethical precautionary principle. It seems to me that it to some extent cuts both ways. Meaning if we're going to act on low-probability events because they might be real--and, of course, probability in this sense is tricky. There are different senses of probability. In reality, your AI robot is either conscious or not. You can talk about it having a one-in-a-thousand, or one-in-a-million chance of consciousness, but really that's for me to deal with in a world where we don't have full information.

What worries me about it is the idea that in my zeal to be open to the possibility that I need to take this group into account, I will fail to discharge my responsibilities elsewhere. I will take actions that are actually harmful to, say, people. By acting as if there is consciousness in this fill-in-the-blank, or sufficient consciousness--if it's a machine--I might do some things that actually harm people. I think you have a paragraph on that actually, to be fair. I think you do note that possibility. But that's what I worry about more.

Jeff Sebo: Yeah. I do discuss that possibility somewhat in the book. And I agree with you that this is a really important issue, because there are risks involved with false positives and with false negatives--with over-attribution of moral significance and with under-attribution.

So, with over-attribution, as you say, if we end up mistakenly treating objects as though they were subjects, then we might not only form inappropriate social and emotional bonds with objects, but we also might divert scarce resources away from humans, and mammals, and birds who really need them, towards entities that in fact have no interests or needs at all. And that would be really tragic.

Now, the harms of false negatives--of under-attribution of moral significance--are also, of course, very important. That can lead to exploitation, and extermination, and suffering, and death. Often at huge scales, often for trivial reasons. It can lead to industries like factory farming, for example.

One question is: Are these risks symmetrical, or is one much worse than the other? If one is much worse than the other, then that might give you a little reason to err in that direction.

But, even if you think these risks are symmetrical, I still think we should extend at least some consideration to beings who have a realistic chance of mattering; but we have tools that we can use to balance these risks. So, for example, you could have a threshold, a cutoff line for the probabilities. You might say, 'I will extend at least some consideration to every being with a one-in-1000 or higher chance of mattering, but not to beings with only a one-in-a-million, or billion, or trillion chance of mattering.'

Another tool is a kind of expected-value principle. You can multiply the probability that a being matters based on the evidence available to you by how much they would matter if they did. You can treat the product of that equation as how much they matter for purposes of decision-making. And that would allow you to assign a kind of discount rate across species and across substrates that would let you sort of balance the risk of false positives with the risk of false negatives, and give at least some weight to both of those concerns.
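For readers who want to see the arithmetic behind these two tools, here is a minimal sketch in Python. The beings, probabilities, and magnitudes are purely hypothetical illustrations, not figures from Sebo's book. It applies a probability threshold and then computes an expected moral weight by multiplying the probability that a being matters by how much it would matter if it did.

```python
# Hypothetical illustration of the threshold and expected-value tools described above.
# The probabilities and "magnitude if it matters" values are invented for the example.

THRESHOLD = 1 / 1000  # set aside beings below a one-in-1000 chance of mattering

candidates = {
    # name: (probability of mattering, moral weight if it does matter)
    "elephant": (0.99, 100.0),
    "ant": (0.10, 1.0),
    "chatbot": (0.001, 50.0),
    "rock": (1e-9, 100.0),
}

def expected_moral_weight(prob: float, magnitude: float) -> float:
    """Probability of mattering times how much the being would matter if it did."""
    return prob * magnitude

for name, (prob, magnitude) in candidates.items():
    if prob < THRESHOLD:
        print(f"{name}: below threshold, set aside for now")
    else:
        weight = expected_moral_weight(prob, magnitude)
        print(f"{name}: expected moral weight = {weight:.3f}")
```

This is only the decision-procedure arithmetic; as Russ notes next, the hard part is that the probability inputs themselves are subjective estimates.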

Russ Roberts: Yeah. That doesn't persuade me very effectively, mainly because those probabilities are quite subjective. They're not empirical. I don't even know how to describe them. Well, there's a name for them, I don't know what they are. But they're subjective: they're not objective.

58:20

Russ Roberts: Let me try something else. Is it moral to have children?

Jeff Sebo: Heh, heh, heh. Yeah. Well, one brief response and then I will answer your question.

Russ Roberts: I didn't mean to sandbag you there out of the blue. Take a deep breath.

Jeff Sebo: No, no, no, no, no.

Russ Roberts: That was a very abrupt change of focus, I apologize for that.

Jeff Sebo: I am down for abrupt changes of focus. I just wanted to also briefly address your--

Russ Roberts: Please--

Jeff Sebo: reason for not being persuaded. Because I sympathize with it, to be clear. I agree that these are subjective, imprecise, unreliable probability estimates. And, they could lead us astray; and we should be very careful about making them and basing big decisions on them, especially at this stage.

So, the reason I push in this direction anyway is because we have to compare this decision procedure to the alternatives. And right now, the alternatives involve either totally neglecting this issue altogether, or going with our intuitions alone--and our intuitions are, of course, very biased, very ignorant, and equally if not much more subject to all kinds of distortions.

And so, these kinds of probability estimates are imprecise, are unreliable. But, I think, especially if we can improve them and improve the frameworks that we use for generating them, they might still be kind of like democracy. Really bad, but maybe less bad than all of the alternatives.

So, anyway, with respect to having kids: I am not an antinatalist. Antinatalists hold that it is unethical to have children because you introduce suffering into the world: you subject them to suffering.

I do think that we should think about creation ethics very carefully because we do make a lot of decisions that either directly or indirectly determine who can exist and what kinds of lives they can have if they do.

And that extends not only to the very personal decision whether to have children and how to raise them, but also to these more collective political decisions about, for example, whether to pave a forest, or whether to reintroduce a forest to what was previously paved territory. That will indirectly determine the abundance and diversity of non-human life that might exist in that space, and introduce all kinds of non-human suffering and happiness into that space. And I think that we have neglected those sorts of creation-ethics decisions, and focused much more on the more direct ones like whether to have a kid or whether to breed animals.

So, I have no universal answer to that question. I think if you want to have a kid, great, have a kid. If you want to reintroduce other kinds of life into the world, then that is in principle perfectly fine to do. But, it gives you a responsibility to ensure that they can have a good life and that they can flourish as the kind of being that they are.

We all recognize that we have that responsibility when we have a kid. Very few of us recognize that we have that responsibility when we make decisions that cause other types of beings to come into existence. And that is where I think we should be focusing much more.

1:01:37

Russ Roberts: But I think the challenge--the reason I brought up the kid--wasn't that children are going to have some suffering in their life, as children and adults. It was, rather, this: thinking about your kid, there's some chance, not zero, maybe more than one-in-1000, that your kid will be a carnivore, even if you raised them to be, say, a vegan.

And as a result, if you start to worry about plants' consciousness--which you could, and some people do, because they strive and they are somewhat agentic; they turn toward the sun, and so on--you can't count that out. And you can't rule out that your kid will be a carnivore. So, by bringing the child into the world, you are going to cause suffering in the animal world almost 100%.

And thinking about the forest idea, and paving over or returning it to its natural state: You know, most people worry about that from a very human exceptionalist framework. Meaning: is this good for people? Obviously, it could include values of human beings that are complicated. A desire to have nature. Right?

It doesn't imply we should pave over the world. It doesn't imply everything should be manufacturing. Which would be horrific, obviously, for all kinds of reasons.

So, the human exceptionalist perspective is that we are of nature. We value nature in primal ways we don't fully understand. "The Lake Isle of Innisfree" by Yeats captures this beautifully. We have a connection to the natural world that is precious.

But it's a human perspective, that preciousness. And, with the creation of, say, a national park that, in the doing, kills bugs, for example--or ants, insects more specifically, which you talk about--most people would just say, 'Eh, that's not a big deal.'

By suggesting it should be a big deal--which is an interesting perspective--I think you make it harder to improve human wellbeing. But maybe you disagree.

Jeff Sebo: Yeah. I, first of all, am not too concerned with the ethical significance of having a kid in the grand scheme of things. I agree with you, that having a kid introduces more harm into the world. It also potentially introduces more good into the world. Not only because your kid themself might go on to experience a lot of happiness and flourishing in life, but also because if you raise them well, then they could do good works in the world--

Russ Roberts: I was going to say--

Jeff Sebo: They could contribute--

Russ Roberts: Could be a philosopher--

Jeff Sebo: in making the world a better place.

Russ Roberts: Could become a philosopher.

Jeff Sebo: Now, as you say, when you make the decision to have a kid, you are signing up for teaching them a certain way, attempting to instill a certain set of beliefs, and values, and practices. But then, ultimately, you know that they will be their own person and perhaps make different decisions than the ones you hoped they would make, and you have to be at peace with that.

Russ Roberts: Yeah.

Jeff Sebo: So, it is a little bit of a gamble, both in terms of their own happiness and in terms of whether they are a net negative or net positive force in the world. And you have to be willing to take that gamble.

In general--and this comes out towards the end of the book--while I think those types of questions and decisions do matter a great deal, I think that we have been overly focused in ethics and in advocacy on individual decisions, and individual behaviors, and individual consumption patterns when thinking about how to make the world a better place. We could make a lot more progress I think if, yeah, we still continue to ask those questions; but we also broaden our focus to include: How can we change our collective practices, how can we change our norms, our social, legal, political, economic systems and incentives so that we can have better incentives, better patterns of behavior from everybody? As opposed to focusing exclusively or primarily on our individual actions, our individual meat consumption, etc., etc. So, that is generally how I think about the kid issue.

With respect to giving a lot of weight to our impacts on animals, including invertebrates, including insects, I do think that it complicates our ethics and policies because now we have many more stakeholders to consider and a much wider range of stakeholders to consider. And, it forces us to confront the reality that the world is a kind of tragic place, and very little that we do is going to be universally good for everybody. There will always be tensions, there will always be trade-offs, there will always be winners and losers. We can work towards a better way of capturing all of that information and making decisions in a way that incorporates all of that information. But, that will be the case, that we will have to consider a lot of beings and do the best we can for as many as possible, and just accept that we will never be able to do everything for everyone.

But, I think that that is compatible with still investing in our species, improving human lives and societies. To go back to the religious perspective you were mentioning before: in part because that will empower our species to be good stewards of the planet, or better stewards of the planet than we currently are, for everyone else. So, first and foremost, we ought to improve human lives and societies specifically so we can empower our successors, the next generation, to be better stewards of the planet for other animals than we are. So actually, taking care of ourselves is one of the best things we can do to take care of other species moving forward. But, like a parent with their kids, we have got to hope that our successors follow through and do good works with the resources that we gave them. So, there is a gamble for our species there as well.

Russ Roberts: Well said.

1:07:39

Russ Roberts: I want to try to talk for a minute about the individual versus the collective thing that you referenced. I think a difference between philosophers, maybe, and economists--you know, economists are like poor philosophers, so we're potentially dangerous.

Jeff Sebo: We're also less empirical than economists.

Russ Roberts: Yeah.

Jeff Sebo: So I think it cuts both ways.

Russ Roberts: Exactly.

Jeff Sebo: But, yeah, go for it.

Russ Roberts: That was my cheap shot. I wasn't going to say it that way, but thank you.

I'm thinking about the fact that, when I think about why the world is a place of pain for many people, there are two dimensions to it. One is the genetic endowment we bring into the world as human beings. We're not necessarily made for this world in an easy way, and we suffer. We also have glorious happiness, love. I think it's a very mixed and complicated bag. I really hate the anti-natalist view that says it's wrong to bring children into the world because there's suffering. I find that repugnant. So, one form of human suffering is that it's just hard to be a human, but it has many good sides, and I'm glad I'm here.

The second part of it is the things we do to each other that I referred to earlier. We do terrible things to each other. Both in private and in public. We do things in the dark of night and we do things in broad daylight that are tyrannical, oppressive, cruel. As I suggested, and it's hard to say but I think it's true: it's not just because people want to get their own way. They actually enjoy it. They enjoy the exercise of power, they enjoy the exercise of cruelty.

So, when I think about your intellectual agenda--and again I don't mean yours personally but I think you're in this group--and I think about mine, they're very different. One of the differences is: when I look at what's wrong with the world, I don't see it as a lack of will. I see it as a complicated, really depressing set of incentives that's very hard to untangle. So, the reason that we have failed to, say, take care of the eight billion people on the planet, isn't because we're mean, or not nice, or selfish. It's because the complexity of the system is very hard to untangle and to push it in a positive direction. As we start to try to do that, I guess maybe this makes me something of a traditional conservative in the sense of the Chesterton Fence idea: that there are many things in the world that are troubling or puzzling, but it's not obvious how to make it better.

So, my interventionist impulses are inevitably tempered by that. I tend to focus on trying to be nicer to my wife, rather than trying to save the world from global warming. Partly because I'm open to the possibility--to take a cheap shot, Jeff--that global warming will actually increase the health and population of certain species. But, I'm agnostic on that. My first impulse would be that I don't need it to change; I don't want to engineer it, for sure. But, since it's so hard to get, say, global commitment to that goal, and since the normal ways to do that scare the heck out of me--meaning some kind of global governance that would lack accountability--I tend to push toward being nicer to my wife, and my kids, and my coworkers, and my podcast guests. I try not to get too worked up.

Why don't you close us out and give us your perspective, because I think you disagree?

Jeff Sebo: I actually agree with you about almost everything that you said, with maybe one exception. By the way, in my previous book Saving Animals, Saving Ourselves, I do discuss the possibility that climate change will introduce more insects and parasites to the world--possibly less diverse animals but more numerous animals. So, I consider the possibility that it might, at least in the short- to medium-term, be a net benefit for welfare for that reason, though there are many uncertainties and there are longer-term destabilization concerns. In any case, happy to discuss that more sometime if you like.

I completely agree with you, and this is a big part of why I try to shift focus away from individual behavior within existing structures to the structures themselves and the incentives they create for collective patterns of behavior. And why I argue for a theory of change that, as I put it earlier, takes seriously our responsibilities and our limitations in equal measure. I think we do have strong responsibilities to the non-human world and we should set a goal of making the world a friendlier place to all vulnerable stakeholders. And, we have significant limitations right now--limitations to our knowledge, limitations to our power, limitations to our political will--in part because of these shared structures that are perhaps changeable.

I agree with you: there is the conservative Chesterton's Fence concern. We would be mistaken to blunder into major transformative changes without appreciating why we have the systems we have in the first place, and that could easily do a lot more harm than good.

So, the kind of theory of change that I advocate for and argue for in the book involves marrying together really ambitious goals for inclusion and egalitarianism in the future with very modest, incrementalist, co-beneficial short-term policies that can do at least a little bit more good for non-humans in the short term, while also helping us get a little bit more knowledge, a little bit more capacity, a little bit more political will.

For example, with food system reform, we can pursue incrementalist informational, financial, regulatory, and just-transition policies to make plant-based foods a little bit more accessible, a little bit more affordable. That would be a little better for humans, and farmed animals, and the environment. With infrastructure reform, we can focus on interventions like adding bird-safe glass to more energy-efficient buildings, adding wildlife corridors to new green transportation systems. That then helps us learn whether those interventions are good. It helps us build institutional capacity for considering wild animals. It helps us build political will by normalizing the idea of considering them as stakeholders.

And so my idea is we can pursue these low-hanging fruit, co-beneficial, incremental interventions and learn as we go. Then that way, we can discover over time, each step of the way, where we should go next and how far we can ultimately go, rather than setting this bold agenda now, with such limited information, and then kind of fanatically pursuing that even though we are probably getting in over our heads. This is a theory of change that really tries to honor both how bold I think we should be, but then also, as you say, how incapable we are of making big changes all at once.

Russ Roberts: My guest today has been Jeff Sebo. His book is The Moral Circle.

Jeff, thanks for being part of EconTalk.

Jeff Sebo: Yeah, thanks so much. It was a great conversation.