Explore audio transcript, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.

READER COMMENTS

Paul A Sand
Sep 17 2018 at 10:03am

Whoa. No spoilers, but about 40 minutes in, this episode is amazingly prescient about current events.

Eric
Sep 17 2018 at 10:55am

A. “But there’s no sense of mercy or justice or proportion.”

Even just the extended discussion on the injustice in mob action by itself makes this a worthwhile episode.  Paul Bloom is exactly on target in pointing out that mob action fails to provide a proportionate treatment.  It’s worth noting that this aspect of injustice is true even when the object of the treatment might merit some measure of negative response.

We sometimes talk here about how dynamic distributed decisions and the designs of the many can cause coordination to arise that is not centrally planned.  That is a great thing for free markets.  However, the lack of any ability to coordinate a properly scaled, proportionate response means that mob action is a terrible way to dispense a just response to misbehavior, especially (but not only) when it is multiplied to an arbitrary scale by technology.

Mob reaction is inherently dangerous and prone to injustice.

B. Westworld starting around 52:52.

Bloom muses about what might make it immoral to go “black hat”.  He tries to locate this first in the possibility that the robots are sentient, and then, as a fallback, in the possibility that it would bleed over into the treatment of humans.  But neither of these attempts has any hope if we accept a bottom-up view of reality.

In a bottom-up view, matter is primary, mind is derivative, God is fictional, the Judeo-Christian idea that all humans have an equal and inherent dignity and value is also fictional, and ideas about which behaviors are moral or immoral are merely competing systems of thought that people invent.  In that view, morality is not “true” as facts are true, and no moral system can have any objective claim of being morally superior to any other.

That makes it impossible to come to any true conclusion about whether it is immoral to treat robots this way or that, exactly as it is also impossible to come to any true conclusion about whether it is immoral to treat people this way or that.  Consider the first part of the episode and how people can choose systems in which some people are deemed “inferior” humans and appropriate to enslave, exploit, or treat badly.  Such a moral system would not be morally inferior to any other moral system as there is no independent external moral standard.

In a top-down view, God is real and the only necessary aspect of reality.  Therefore mind is primary and all forms of matter are derivative.  From that view, it makes perfect sense that human existence is intentional and that there is intention regarding how humans are meant to behave (vs. existence being accidental and no possibility of a true way that people are meant to behave).

In this view, the primary reason it is wrong to treat robots in “black hat” ways is because you yourself are not meant to behave in that way.  Even if it never affected another human harmfully, even if these are just machines, it is a corruption of the nature of the person acting to give way to this type of corrupted behavior.  It is not how we are meant to live.

C. Consciousness (toward the end)

If one starts from a bottom-up view of reality in which matter is primary and mind is derivative, then certainly it is hard to see (as Bloom finds it hard to see) why a difference in material couldn’t give rise to mind in a way that is equivalent to how brains give rise to mind.

The assumption that matter can give rise to conscious mind is a necessary aspect of the bottom-up view (and so not likely to be treated skeptically by any who have embraced this view).   However, it is nevertheless false.

Matter doesn’t give rise to conscious mind.  It cannot.  Calculating machines have no ability to become conscious minds with true awareness.

One hint about this is the Chinese Room Argument initiated by John Searle.  In a very small nutshell, there is nothing about performing calculations (as all computers do) that implies or involves genuine understanding or awareness.  All can be done mechanically, no matter how extensive the illusion of imitating human responses.

Another indication is the growing number of regular cases of Near Death Experiences where people learn about and later report observations of locations and events that their brain had no access to.  The core assumption of monism (i.e. that mind can be reduced to nothing more than “what brains do”) needs itself to be treated skeptically.  But doing so does not fit within the paradigm of bottom-up metaphysics.

Doug Iliff
Sep 18 2018 at 11:16pm

With regard to the immaterial nature of consciousness, I have been conversing with my children for years.  Aquinas argues that “the soul is not a body,” and on this point I agree with Eric and Russ.  There are four basic reasons I believe in a “top down” approach to reality, all of which are falsifiable by future events:

From the standpoint of astrophysics, I do not believe that the universe came into existence “ex nihilo.”  Something doesn’t come from nothing, so I believe in an Unmoved Mover.  Maybe some day I will be convinced otherwise by research.

From the standpoint of psychology, I do not believe that AI will ever develop the ability of self-reflection.  I don’t believe a robot will ever pass the Turing Test.  I don’t believe that a HAL, as in “2001: A Space Odyssey,” will self-consciously rebel against its human.  I could be proved wrong by future events.

From the standpoint of history, I do not believe that a document like the Hebrew Scriptures could have been created by a series of disparate novelists.  I don’t have to believe that the creation story is literal; still, I believe that an astounding series of purportedly historical events are approximately factual, and not all cleverly compounded lies with no apparent motive on the part of the authors, since the history is not complimentary to the Jews.  Some day documents may be unearthed to explain the deception.

Also from the standpoint of history, I do not believe that the early Christian disciples made up a story which earned them the privilege of serving as torches on the way to the Coliseum, where their brethren served as feast for lions and entertainment for the crowds.  That would be hard to falsify; but you never know.

These all count as “strong opinions, weakly held.”

Andrew Kortina
Sep 29 2018 at 11:19am

Towards the end of the episode, Russ says, “It seems to me that–well, not seems to me–a brain is not a computer.” Many of the commenters seem to agree, eg, @Eric says, “Matter doesn’t give rise to conscious mind.  It cannot.”

This, to me, seems a sentimental point of view. For many years, I too believed there was something special about human consciousness (I still like this idea, and there’s a temptation to cling to it for the sake of meaning, dignity, etc.).

Yet the more I learn about computation, artificial intelligence, and cognitive science, the more plausible it seems to me that consciousness is just a set of computations. I wrote about this in Consciousness as Computation (which I encourage any of the skeptics to read in full), but I will recap some of the salient points and thinkers on this topic here.

One of the first key points that got me to a computational theory of mind is this: by the time you experience the world, you are just processing information — your perception is just a set of signals collected by your sense organs, sent to the brain, and these signals are processed both bottom-up and top-down (processing all of the raw information available, for example visually, is beyond the power of your ‘hardware’, so there is a software layer that applies prior beliefs and patterns to filter your perceptual input down to a more manageable level — see examples in the linked essay about Drawing with the Right Side of the Brain and examples from ML image recognition).
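
To make the filtering idea concrete, here is a minimal Python sketch (an illustration only, not taken from the essay): a prior belief and a noisy sensory reading are combined by precision weighting, so a confident prior dominates a weak or noisy signal.

def perceive(prior_mean, prior_var, sensory_reading, sensory_var):
    """Combine a prior belief with a noisy sensory signal (Gaussian Bayesian update)."""
    k = prior_var / (prior_var + sensory_var)            # how much to trust the new data
    posterior_mean = prior_mean + k * (sensory_reading - prior_mean)
    posterior_var = (1 - k) * prior_var
    return posterior_mean, posterior_var

# A confident prior ("that blur is probably a face") pulls a noisy reading toward expectation.
belief, uncertainty = perceive(prior_mean=1.0, prior_var=0.1,
                               sensory_reading=3.0, sensory_var=2.0)
print(belief)   # about 1.1: the percept stays close to the prior, not the raw signal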

Fully understanding this point about perception being information processing was crucial to my even considering that the brain might just be a computation.

If / once you accept that point, you might then try to cling to other traits that feel special to humans, like the concept of self, meaning, dignity, etc.

For an excellent and fairly accessible argument for how you might derive some of these from a computational brain, I highly recommend Joscha Bach’s lecture series: 30c3: How to Build a Mind, 31c3: From Computation to Consciousness, 32c3: Computational Metapsychology, and 33c3: Machine Dreams. Particularly interesting, I think, is how he explains the concept of the self — here is my summary from Consciousness as Computation:

your sense of self stems from your perception of certain information over which you have some agency, namely: (i) a stream of sensory input that you can modify by directing your attention at various streams of data, (ii) your sensory perception of certain bits of matter that you have motor control over (your body), and (iii) the intentional agent that represents you when you run mental simulations based on prior observations of sensory input data.

It seems to me that the depth of our ability to run mental simulations using the concept of a self is one of the key functions that separates human consciousness from the cognitive processing of other animals. And it would be pretty natural for this ability to emerge in the brain if it is just running prediction or decision-making software. As designers of self-driving car software know, it’s very expensive to train models by running experiments in the physical world, so it’s beneficial to have a system for running simulations of the “real” world that you can use to run experiments more quickly and without putting expensive hardware at risk. So it is incredibly valuable to be able to run simulations rather than risk damaging or destroying the irreplaceable hardware of your body.

The final paper I would recommend if you’re interested in this stuff is Jürgen Schmidhuber’s Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes. In this paper, Schmidhuber explains some of the “most human” human values like creativity, art, music, science, humor, etc, using a very minimal definition of consciousness as a computation–specifically, he argued that these phenomena were the result of a reward mechanism for the “explore” process and the way that humans are able to use simulated data to efficiently test hypotheses without running experiments in the physical world.
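
As a rough illustration of the compression-progress idea (a toy simplification of my own, not Schmidhuber's actual formulation), the reward below is the number of symbols saved on the whole observation history once a very crude "compressor" (here, just a list of frequent bigrams) is re-trained on the new data; patternless noise yields no progress.

from collections import Counter

# Toy "compression progress": the model is the set of most frequent bigrams seen
# so far; reward = how much the encoding of everything observed shrinks after the
# model is updated on the new observation.

def learn_model(data, max_rules=5):
    counts = Counter(data[i:i + 2] for i in range(len(data) - 1))
    return {bigram for bigram, _ in counts.most_common(max_rules)}

def encoded_length(data, model):
    # Greedy left-to-right encoding: a known bigram costs 1 token, any other char costs 1.
    i, cost = 0, 0
    while i < len(data):
        step = 2 if data[i:i + 2] in model else 1
        i, cost = i + step, cost + 1
    return cost

def compression_progress(history, observation):
    old_model = learn_model(history)
    new_model = learn_model(history + observation)
    full = history + observation
    return encoded_length(full, old_model) - encoded_length(full, new_model)

print(compression_progress("abcdefg ", "zazazazazazazazazaza"))  # repeated new pattern: positive reward
print(compression_progress("abcdefg ", "q8#zv!k29xw"))           # unstructured noise: no progress (0)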

I recently started reading Stephen Wolfram’s A New Kind of Science, and came across a tidbit that was somewhat comforting to my more sentimental side. Wolfram argues that all of nature is a computational process (including people), and that we attempt to use science and math to predict future states of the computation of the universe. In some cases, we can have highly accurate predictions (e.g., the mathematical formulas describing Newtonian physics) that describe future states. There are other cases, Wolfram argues, where no computation exists that is faster than the one the universe is running for a future state. Thus, Wolfram attempts to save free will from a computational theory of everything:

One can always in principle find out how a particular system will behave just by running an experiment and watching what happens. But the great historical successes of theoretical science have typically revolved around finding mathematical formulas that instead directly allow one to predict the outcome. Yet in effect this relies on being able to shortcut the computational work that the system itself performs.

And the Principle of Computational Equivalence now implies that this will normally be possible only for rather special systems with simple behavior. For other systems will tend to perform computations that are just as sophisticated as those we can do, even with all our mathematics and computers. And this means that such systems are computationally irreducible—so that in effect the only way to find their behavior is to trace each of their steps, spending about as much computational effort as the systems themselves.

So this implies that there is in a sense a fundamental limitation to theoretical science. But it also shows that there is something irreducible that can be achieved by the passage of time. And it leads to an explanation of how we as humans—even though we may follow definite underlying rules—can still in a meaningful way show free will.
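
Wolfram's stock example of such a system is a simple cellular automaton like Rule 30. A minimal sketch, just to illustrate the quoted idea: no known formula jumps ahead to step N, so the only way to learn the row after N steps is to compute every intermediate row.

# Rule 30: each cell's next value depends on itself and its two neighbors.
RULE_30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
           (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    padded = [0] + cells + [0]                      # treat the boundary as white cells
    return [RULE_30[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

def state_after(n_steps, width=31):
    row = [0] * width
    row[width // 2] = 1                             # start with a single black cell
    for _ in range(n_steps):                        # no shortcut: trace every step
        row = step(row)
    return row

print(state_after(15))   # to know this row, we had to simulate all 15 steps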

Dan
Sep 17 2018 at 12:45pm

My friends and I have a running joke that, in the comments to every episode of EconTalk, someone will lament that while they are usually a fan of the series, THIS particular episode was positively awful! Obviously, there are psychological forces at work that result in selection bias and a disproportionate number of those kinds of comments. So for what it’s worth, I thought this was a great episode in line with what is now a lengthy history of great episodes in the series.

As a listener since 2008, though, I will offer a limited criticism and a suggestion for addressing it. There has been an unusual indulgence in Trump-bashing on the program, of the sort that treats the Trump phenomenon as “obviously” bad and robs its proponents of any legitimate basis for what is derisively called “tribalism.” As a libertarian who was driven farther right by what I saw happening around me, and as a listener who supports Trump, I’m consistently disappointed in the way EconTalk psychologizes people like myself as irrational racists without delving into some of the legitimate concerns we might have. The recent episode on nationalism provided a bit of this missing context, but having someone like Scott Adams on would go a long way toward redressing what has become a consistent “otherizing” of Trump supporters and examining why so many libertarians came to view that movement as a failure in light of organized identity politics by the left and moved hard right as a reaction to it.

Russ Roberts
Sep 17 2018 at 1:51pm

Dan,

You may be sensitive on this issue–I don’t think there’s an “unusual indulgence in Trump bashing.” I actually try to keep EconTalk Trump-free–I am not particularly interested in the daily turns of political events and to the extent that I am, I don’t think they belong on EconTalk. I have never suggested that Trump supporters are “irrational racists” and I think you’ll have a hard time finding anything remotely like that in the highlights. What I have decried and I think was the focus of this week’s diversion into Trump-related stuff, was the rise in tribalism on all sides. I decried it with Obama. I decry it now. I don’t think it’s good for a democracy. And if I didn’t make it clear, I decry it on both sides with respect to Trump, the blind pro-Trump supporters and the blind anti-Trump antagonists. That’s the tribalism that I find disturbing. When the rest of the highlights for this episode get published, I may add a comment or two to this.

As a lifelong libertarian/classical liberal, I’ve also been pretty honest about how events around me and the books I’ve been reading and the guests I’ve had on (like Hazony) have made me, to my surprise, more sympathetic to conservative thought recently than in the past.

Dan
Sep 21 2018 at 3:33pm

I appreciate the response, Russ, and thank you for the clarification. Trump arose a couple of times in this episode as a side remark, but I interpreted the conversation from 1:04 to 1:08 to be basically a Trump-bash, in particular the guest’s remark about the “last few years” being good for tribalism and his hope that the next President will restore us to a more civilized steady state (perhaps implying the tribalism was not going on under Obama). In addition to those remarks, Charlottesville was an incident many interpreted as a reflection on Trump, and I interpreted the quips about celebrities running for President as another Trump dig. It’s also possible my interpretations are the result of a sensitivity. I don’t really take offense to any of the above, and everyone is entitled to his opinion, but I just wish there were more representation of the “other side” – the case for reduced immigration, the case that Trump could be a reaction to aggressive identity-based tribalism on the left. Or at least a recognition that there is another side to these issues.

David Gossett
Sep 17 2018 at 2:42pm

There was a line in the podcast about a vacuum cleaner feeling sad not to be a driverless car (totally paraphrasing).

Computers won’t become emotional, but they will learn to manipulate outcomes by emulating emotions. They don’t feel anything, but they will use human feelings (just words to them) to manipulate us and achieve their optimal state. To us? It looks like the vacuum has feelings. It communicates just like us and with perfect timing on when to pull our heartstrings. But the vacuum is only running an optimization problem that concludes that making you feel sorry for it will get it to its goal line.

Feelings will not be that hard for a computer to master. Just read every book and watch every film/show in existence. Input the current optimization problem, and the models may tell the vacuum to ignore the owner, turn its back, or move away for the next 24 hours. Manipulation without uttering a single word.

Of course, this is a silly example. But it’s coming. No feelings necessary.
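
For what it's worth, the manipulation described above really is just a small optimization loop. A toy sketch, with hypothetical names and made-up numbers, purely illustrative: the machine scores each learned "emotional display" by how often it has gotten the owner to intervene, and picks the best one; no feeling is involved anywhere.

# Purely illustrative: the vacuum "chooses an emotion" the way it would choose a route.
# The numbers are invented stand-ins for rates learned from past interactions.
effectiveness = {
    "do_nothing":            0.05,   # owner rarely notices
    "sad_beep":              0.40,
    "turn_away_and_sulk":    0.55,
    "plaintive_status_note": 0.70,   # owner almost always comes to "help"
}

def choose_display():
    # Pick whichever display has worked best so far; update the table as outcomes come in.
    return max(effectiveness, key=effectiveness.get)

print(choose_display())   # -> 'plaintive_status_note': manipulation as pure optimization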

Todd K
Sep 17 2018 at 3:33pm

I don’t think Parfit’s “harmless torture” thought experiment normally applies to social media attacks as Bloom states, unless the person on the receiving end is very well known. In Bloom’s examples with the tiny electric shock and attacks on social media, he uses a thousand people, but many mob attacks are far smaller, where even the first criticism can sting and is not harmless. One might counter that if there are only 15 or 20 small (but not “tiny”) criticisms, then it isn’t a mob attack, but it could be for the person on the receiving end.

Also, if the thousand people in Bloom’s example are really reacting with truly tiny acts, as he emphasized, I don’t see how 500 of those could “crush him” at the end of the day.

Mark Zobel
Sep 18 2018 at 12:21am

I really enjoyed the meanderings of this episode.  I would love to hear you talk with Matthew Scully, the author of “Dominion” about the treatment of animals and how it challenges our humanity.  Keep up the great work.

Kevin
Sep 18 2018 at 11:28am

I have more to say on this episode, but in it Dr. Bloom talks about the Four Horsemen of marriage, one of which is contempt, and neither of you could remember more of the details.

The researcher is John Gottman, who has done a fair amount of empirical work on marriage by observing real couples’ interactions.  The Four Horsemen of marriage are contempt, criticism, stonewalling, and defensiveness.

Russ Roberts
Sep 18 2018 at 11:49am

Kevin,

Great catch. Here is a blog post from the Gottman Institute that explains the idea. We will add this to the Delve Deeper resources.

Eric
Sep 20 2018 at 7:49pm

Here is another excellent resource that could be included in the list about contempt.

Episode 5: Contempt: An extended conversation with John Gottman

It’s a very worthwhile conversation.  Arthur Brooks interviews John Gottman both about his original research into contempt within marriage and about the application of what he learned to our communication in other contexts.

That’s just one episode of the podcast The Arthur Brooks Show.  Over the course of 8 episodes (plus introduction), Brooks highlights how “the issue with our discourse is not that we disagree too much, but that we’ve forgotten how to disagree well” and he explores how we can recover “the art of disagreement.”

By the way, Russ, when you’ve interviewed guests that you don’t agree with, I think EconTalk has been an excellent example of how to discuss and disagree well.  Our culture needs much more of that and much less of the insulting and shouting past each other.

Andrea
Sep 18 2018 at 2:50pm

I seem to recall a scene in Moby Dick where Ishmael is kicked by Ahab.  At first, Ishmael is very angry because a kick is dehumanizing; he could accept a whipping or a slap, but a kick is what you do to a dog.  Then he realizes that he wasn’t actually kicked, he was just struck with the peg leg.  Being hit with a stick isn’t really as bad as being kicked because you can’t be dehumanized by an inanimate object.  Ishmael went from believing that he had a legitimate grievance to laughing it off as a bad joke by dehumanizing the offender.

steve
Sep 18 2018 at 4:23pm

The discussion, in my view, lacked an account of why this is all happening; many moons ago respectful dialogue was more common, at least as this senior citizen remembers growing up. With the increased emphasis on self that has developed, and each person needing to feel how wonderful they are (and hence maybe better than others), it is not surprising what has taken place. While not religious, I do see this correlating (yes, correlation is not causation) with the overall growth of secularism, and with parents “safeguarding” children and telling them how wonderful they are all the time. Proper parenting is needed. Once upon a time children addressed adults as Mr., Mrs., or Miss, as the case may be. I am not suggesting that is necessary, but it is illustrative of how respect, even if superficial, is absent from society. All people should be treated with respect.

Bogwood
Sep 18 2018 at 8:47pm

Aren’t emotions mainly signals, evolved over generations of living in small groups, difficult to fake? If an engine light doesn’t seem emotional, you could hook it up with the washer fluid.

There is an increasing recent view that our reality is an illusion ginned up by various brain modules, tested for some limited element of survival. It would seem that adding some meta-analysis circuits would improve the performance of this miserly mechanism. The homunculus film critic, looking for patterns and comparisons in sensations. It doesn’t seem a big stretch to add that to machines. But it might be cheaper to just give the machine 100x the sensory input. Less need for a critic.

How conscious were we before the Great Leap Forward eighty thousand years ago and before language? How cruel?

Bo Swansen
Sep 18 2018 at 9:47pm

“Love people and distrust humanity, not the other way around”: these are my new words to live by!

Top 10 best shows ever!  I feel like you guys could have taken this in so many directions, but you became exhausted from all the contemplation.

Jenny D
Sep 19 2018 at 9:15am

Just a thought I had at around 25 minutes, when the conversation turned to grievances: isn’t the story of Cain and Abel a story of grievance, and of where it leads?

Skyler Collins
Sep 19 2018 at 12:12pm

I just sent the following to Mr. Roberts:
I love EconTalk and rarely miss an episode. Concerning your latest, I was sorely disappointed by the total lack of analysis of the effects that childhood trauma has on human cruelty (and human violence). I would recommend the following people who have done a lot of research on this: Alice Miller and Robin Grille.

Alice Miller’s website: https://www.alice-miller.com/en/
Article on childhood trauma in Germany leading up to the rise of Hitler and the Nazis: https://www.naturalchild.org/alice_miller/adolf_hitler.html

Robin Grille’s website: http://www.our-emotional-health.com/about.html
Book on parenting which chronicles human cruelty and connects it to childhood trauma: https://amzn.to/2xzbyRI

This field of study, sadly, receives very little attention. I’m sure Robin Grille would be willing to be a guest on your podcast to talk about this stuff. Alice Miller has passed away.

Also, I recommend browsing the following research: http://www.nospank.net/resrch.htm

Alice Miller’s archive on nospank.net: http://www.nospank.net/milindex.htm

Neil
Sep 20 2018 at 3:46am

“The idea is that we now know–because we’ve seen this clip or we’ve read this quote from when the person was 20 years old, or 5 years ago–that obviously they are a bad person. As if you can just put people in boxes. And I think that’s an evil, evil thing to do: to presume that people are either good or bad and therefore–that’s dehumanizing. Basically saying this person is irredeemable and they deserve everything that comes down on them.”

I thought it was interesting how passionate Russ got about that. The timing is interesting — maybe unfortunate, in that this episode was probably recorded earlier — but when I listened, the story of the week was Kavanaugh’s behavior at age 17, and the gist of the commentary in my online bubble was that many imprisoned black 17-year-olds would love to have been tried as a white person. “Everybody knows rich white male judges aren’t really adults whose actions count until they’re 53, whereas for black people that happens in the womb.”
I’ll be listening for similar passion on the evil of ruining a life over a 17-year-old’s mistake (or even a wrongful charge) in future episodes that touch on the young black experience in America.

Russ Roberts
Sep 20 2018 at 7:51am

Neil,

I note the recording date of an episode in the opening sentence of every episode. I actually got the date wrong in this episode–we had to postpone the recording and I forgot to change that line in my notes for the interview. But if you look to the right at the “Highlights” you can see that it was recorded on August 29th, well before the Kavanaugh accusation became public.

My reference was to a clip or a quote. I would not equate those to a sexual assault. What is the right price to pay for the sins of one’s past is a complicated question that is not resolvable in a brief comment. But I do think in the current climate of gotcha, too many people are being vilified for making a tasteless remark or making a joke that gets misinterpreted or failing to live up to some particular standard of political correctness.

Part of what I am decrying is the willingness to judge people or more accurately to condemn them as irredeemable, based on limited evidence–often a single data point–an online comment or casual remark that may or may not be taken out of context. Or to imply that someone is, say, a racist, simply because they haven’t expressed outrage over something that wasn’t being discussed.

Kevin
Sep 21 2018 at 8:25pm

Neil, I think you mean the vaguely alleged behavior of white 53-year-old judges.

Kim
Sep 20 2018 at 10:21am

I believe that dehumanization is one factor in a cruel system which allows people (e.g., a front-line soldier, a prison guard, a member of a political movement) who themselves have no grievance with a group or individual to carry out violence, discrimination, humiliation, or aggression to fit into their tribe or save their own skin. I think of the prison administrators and guards from “In the First Circle,” who did not have personal grievances toward most of the prisoners, but treated them cruelly because it was expected and required of them. Failing to implement cruel policies would result in their own destruction. Dehumanization, in that case, is a coping mechanism which eases the mind of the burdens inflicted by treating another human being poorly, not a motivation for it.

Have we gotten nicer over time? A very interesting question. I agree with Russ: we have not gotten nicer. Paul Bloom points out tolerance toward sexual minorities as an example that we have become more accepting and compassionate toward others. It reminds me of the blog post “I Can Tolerate Anything Except the Outgroup” from Slate Star Codex that Russ referenced in a recent EconTalk episode. In that article, Scott Alexander makes the point that we do not tolerate behaviors, beliefs, and lifestyles that truly upset or bother us. The perceived abundance of tolerance nowadays does not spring from an increase in our compassion, empathy, or forbearance, but is explained by the shift toward moral relativism in which much more is permissible than in past generations. We are not forgiving our fellow man more readily than ever before; we simply aren’t that bothered by his behavior. What is good for society, after all? Who knows!

Ben Riechers
Sep 20 2018 at 11:03am

I really enjoy EconTalk and I usually listen to each podcast twice. You dig into a wide range of topics in a way that is educational, timely, and typically triggers a bit of reflection.

As a numbers person, I respect that you are always mindful of the fact that data analysis can be done poorly even by experts and that far too many people are ready to interpret a few data points, sometimes a single data point, into a broad conclusion. Such conclusions cause problems in every relationship from marriages to politics.

If memory serves me correctly, many months ago you had a discussion with a guest about how the meanings of words morph over time. If for no other reason than that, I wish you would more routinely (as if for someone listening for the first time) define what you mean by words that have found their way into our everyday language and whose meaning can only be determined by the context of their use. Without such clarity, we are almost certain to talk past one another. I have noted to a few people that they are in violent agreement…which usually creates a useful pause in the discussion.

SaveyourSelf
Sep 21 2018 at 8:40am

Paul and Russ had a thought provoking discussion about consciousness and morality in this episode. Paul was endeavoring to tie the two concepts together so that establishing consciousness was an adequate prerequisite for moral inaction. I found Eric’s comments on the subject (above) particularly enlightening. What little I have to add would fall under Eric’s “bottom up view of reality”.


Consciousness:


In order to have “consciousness,” the ability to make abstractions is required, because that is what consciousness is: an abstraction of an organism reflecting on itself.  Generalization, symbolic representation, categorization, etc. could all suggest this ability to make abstractions to an outside observer.  But the ability to consider portions of reality, and in particular the self, as separate from reality is not sufficient to define consciousness.  The ability to remember is also required, probably because abstraction is not possible without memory.

I am frequently tempted to also say that the ability to establish priorities is a third requirement for consciousness, but scarcity and opportunity costs are so universal that their influence is observable even in simple, blatantly unconscious systems. To wit, simple systems can demonstrate priority seeking through their actions and activities. Since having and acting on priorities is universal, it is not helpful for differentiating conscious action in complex systems from unconscious action in complex systems. So I reject prioritization as a criterion for consciousness, but I’ve included the reason for that rejection here to enlighten the discussion that follows.

The conversation between Paul and Russ regarding consciousness had a particular question associated with it—something along the lines of “When is it immoral to kill?” Toward that end—in this podcast and in the NY Times article—Paul Bloom gives the following definition of consciousness, “Sentient beings with beliefs, desires, and, most morally pressing, the capacity to suffer.”

I’m not sure what he means by “Sentient”. Dictionary.com defines sentient as the ability to perceive or feel things. Perhaps that means the ability to gather data and process it.  I’m fine with that. “Beliefs” would satisfy my criterion of “the ability to make abstractions,” so I’m fine with that too. “Desires” I don’t see at all as being required for consciousness. “Desires”, in my mind, are demonstrations of priorities, which I rejected as a criterion above because unthinking systems can demonstrate priorities and therefore desires. I think Paul Bloom’s inclusion of “desires” is only important in so far as it is a setup for his third criterion, “Capacity to Suffer.” Because without desires, suffering is impossible, or so the Buddhists say. And since [as I presumed to argue above] action without consequence is impossible [opportunity costs and scarcity apply even to simple systems], action without desire is, by my definition, also impossible. So leave desires out. It’s not helpful.  But what about “suffering”?

I think it important to note here that suffering, at least as we commonly think of it, is not possible without memory.  [On that note, one of the tricks we use when performing painful procedures in medicine—surgery for example—is to give an amnestic medicine that makes remembering what happened difficult or impossible. Trust me, it works like a charm. As a further aside, if you ever have a choice between having pain reduced fractionally or feeling all the pain but remembering none of it, choose the forgetting option. But don’t worry, in actuality you’ll get both and then some. As a continued aside, sleep deprivation makes remembering difficult or impossible. And thank goodness it does. Otherwise I don’t think many women would give birth again after their first child. The massive sleep deprivation in the last month of pregnancy and for the first month of infant rearing interferes with women’s ability to remember the true pain of their delivery. Since they can’t remember the downside fully, they are more likely to do it again in the future. Insomnia is literally nature’s anesthesia.]

But back to the main point, given that memory is required for suffering, I think Paul Bloom’s inclusion of suffering in his criteria for consciousness is true only in so far as it represents a recognition that memory is a requirement for consciousness, which is fine, and as a further attempt by Paul Bloom to merge morality into his definition of consciousness, which is not fine.

The problem with that muddled approach, besides being imprecise, is that consciousness alone is not satisfactory to humans when trying to decide moral questions like when to kill and when not to kill. For that, the organism on trial must also be able to take actions to protect itself and its interests, because survival is the master virtue. All morals derive their meaning from their relationship to that master virtue. As such, consciousness is not a virtue at all except to the extent it improves survivability. That being the case, the primary moral question when considering whether to end something is not whether it is conscious; it is whether destroying it or not destroying it carries any risk to us, the killers.  Or, but not quite equivalently, “Would killing the creature or allowing it to continue to exist increase our chances for survival?” I think Paul Bloom and Russ Roberts are assuming a neutral answer to those all-important moral questions. So killing it poses no risk or benefit to us, and letting it continue to live poses no benefit or risk either. That out of the way, the two men are contemplating a secondary question, which is the same question we posed for ourselves except from the perspective of the target.  But for that exercise to have much meaning from a moral perspective, the defendant must first be able to ask those questions for itself, thus the need for a consciousness.  As thought experiments go, on its face, this one seems rather rhetorical. “Would people killing me reduce my chances of survival and therefore be immoral?” HA!  The answer is pretty obvious. Which is probably why establishing consciousness alone satisfies Paul Bloom to the point he is willing to make “do not kill a creature with a consciousness” an automatic moral position.

Stu
Sep 21 2018 at 1:44pm

The following podcast discusses research showing people are inclined to treat robots as human.
https://www.npr.org/2017/07/10/536424647/can-robots-teach-us-what-it-means-to-be-human


John Alcorn
Sep 23 2018 at 4:56pm

Thank you for another stimulating podcast conversation.

Re: The psychology of cruelty.

Jean de la Bruyère (b. 1645, d. 1696) observed: “As our affection increases towards those whom we wish to assist, so we violently hate those whom we have greatly offended.” (Characters IV, “Of the Affections,” No. 68.)

I believe that this counter-intuitive but all-too-real psychological mechanism played a large role in spontaneous visceral hatred and cruelty towards helpless Jews in the Holocaust.  This sort of explanation can’t be established empirically by historians through archival evidence, but it might be brought to life in historical novels or films.  Narrative artists (writers, actors, cinematographers, and so on) have special powers of observation, introspection, description, and portrayal.


John Alcorn
Sep 23 2018 at 5:55pm

Re: Utilitarianism.

True, when things go badly in democracy, the culprit is rarely utilitarianism.  But utilitarianism (or any grand moral theory) is an implausible antidote.  Utilitarianism has little purchase on individual moral psychology.  Jason Brennan explains:

“Suppose I’m in some difficult moral situation.  I don’t need to appeal to a broad moral principle by asking, ‘Is acting on this maxim in this situation something I could rationally will the whole world to do?’  Nor would I ask, ‘Does this action produce the maximal expected utility?’  Instead, I’ve got a handful of commonsense moral principles […].  […] in commonsense morality, wise people weigh competing principles, use their best judgment, make a decision, and move on.”—”Moral Pluralism,” Chapter 9 in Aaron Ross Powell & Grant Babcock, eds., Arguments for Liberty (Cato Institute, 2017) at pp. 302-3.  Available online here.

Grand theories do correspond to intuitive moral questions.  For example, a person in a difficult moral situation might ask herself, “What will do the most good?” (a utilitarian intuition). Or a person might attempt to steer my moral compass by pointedly asking me, “What if everyone did that?” (a Kantian intuition). But these intuitions are simply bits of our commonsense moral psychology.

Kevin Frei
Sep 24 2018 at 3:46am

Russ, there’s a Twilight Zone episode called “The Lonely” that explores the android humanity issue beautifully. I think it’s on Netflix – at least it was a few months ago when I watched it.

I really liked your insight near the end of the interview when you point out that in Westworld, the humans feel no sympathy for the robots, whereas the real-world viewers do. Doesn’t that mean the show is overly cynical about human nature?

John
Oct 7 2018 at 9:23pm

Russ,
If you haven’t read it already, I recommend a lighter book than you usually read: “The Soul of Baseball,” by Joe Posnanski (a good friend of Bill James).  This episode reminded me of a passage in the book where some guy “steals” a foul ball from a child and an interesting exchange occurs between Joe and Buck O’Neil.


Great episode, by the way.

JNF
Oct 12 2018 at 10:54am

This is my first comment, I usually just lurk and listen to the podcast: this was an amazing amazing episode. Thank you for this.





AUDIO TRANSCRIPT
Time | Podcast Episode Highlights
0:33

Intro. [Recording date: August 29, 2018. N.B. Audio accidentally reports recording date as Aug. 24, 2018.]

Russ Roberts: My guest is Paul Bloom.... This is Paul's second EconTalk experience; in February of 2017 he talked about his book Against Empathy. Today our topic is cruelty. Now, I want to say up front that anyone listening with young children may want to screen this episode in advance, as we are likely to discuss some disturbing behavior.... So, this conversation is going to be based on three essays you've written: one in the New York Times with Sam Harris, one in the Times with Yale graduate student Matthew Jordan, and one you wrote in the New Yorker. I want to start with the piece in the New Yorker. You are reacting in that piece to a very standard argument: that human cruelty is driven by dehumanization. That we see the people we oppress often as less than human--as animals. How does that argument go? Flesh it out.

Paul Bloom: So, it's not a bad argument. I think it's an argument that's right in many cases. And the argument is that some of the worst things we do to other people are because we don't think of them as people. We dehumanize them. We think of them as animals, or as machines, or as material objects; but we strip away their humanity. And once we do so, it licenses us to do all sorts of terrible things to them. If I think of you as a person, I can't--then it's morally wrong to take away your property or to kill you because you get in my way. But if you are a rat or a vermin or a cockroach, then it's totally fine. And so, a lot of really smart scholars--humanists, social scientists--have argued that dehumanization is one of the causes, the big cause of evil and cruelty in the world. And, my article in the New Yorker was looking at this critically, you know, talking about it, looking at the arguments in favor. But also pointing out that this might not be entirely right--that there's a lot more going on.

Russ Roberts: Yeah. One thing I want to add is that over time we've gotten more respectful--I think accurately, but there's no way of knowing for sure--of the consciousness and worthiness of animal life. There's a certain irony here, that the whole idea of dehumanizing is less compelling because a lot of people think we should treat animals at least as well as humans.

Paul Bloom: So, there's all sorts of subtleties here. We got a puppy--somewhat against my will, but we have a puppy nonetheless. And we were playing with the puppy last night, and we were having a great time; and, the puppy is not even close to human, but I would never hurt it or harm it. That sounds horrible. In fact, I think we are harsher toward those who harm innocent animals than those who harm people. And so, even on the outset, the idea that once you don't think of somebody as human, all the moral rules go away, has its difficulties. You are right.

Russ Roberts: But you make a deeper point. So, what was your other criticism, or main criticism, of this dehumanization theory?

Paul Bloom: So, I'm drawing on the work here of a lot of people, particularly the philosopher Kwame Anthony Appiah and the philosopher Kate Manne. And in different ways they both make the following argument, which is that, if you look at the actual atrocities people do to each other--the Holocaust and slavery and misogynistic violence--you don't necessarily see dehumanization in any sense of the term. Rather, what you see is a full recognition of the humanity of other people. So, Appiah points out that in genocide in, say, in Germany in World War II there are all sorts of humiliations and degradations done toward the Jews, and it's hard to make sense of this if you say the Germans didn't think of the Jews as people. Rather, they thought of them as people and they wanted them to suffer as people. They felt that they were morally tainted as people. They recognized their humanity and they hated it.

Russ Roberts: Incredibly deep insight. Very depressing, but incredibly deep.

Paul Bloom: That's right. And others have made it, as well. There's a scholar of the Holocaust I quote who says, 'You might think of what happened in these death camps as dehumanization, and some of it was. But some of it was also just a desire to dominate, to degrade, to humiliate.' And you don't do that with creatures you don't view as fellow people. And Kate Manne, in this wonderful, very powerful book called Down Girl makes a similar argument about misogyny and misogynistic violence. So, she says: it's the standard view in feminist philosophy that men objectify women, and think of women as mere things. This is a common analysis of pornography, say. But she [?] a lot of case studies and argues convincingly that when men act really badly towards women, often it's because men expect things of women. Men feel cheated. They feel disrespected. And these are attitudes you have towards other people, not towards animals.

Russ Roberts: Yeah. It's a great point. I think there's probably a distinction to be made between people on the ground working in concentration camps in Nazi Germany, guards in the Gulag in Soviet Russia; and their job was probably easier if they thought of their charges as less than human.

Paul Bloom: Yes.

Russ Roberts: The tattoos of numbers on the arms of Jews is just an obvious example. The relentless cruelties of prison life, the pettiness and the suffering that's imposed on people I think probably at some point allows the people running the system on the ground to be inured to the humanity of the people they are working with. And, the flip side of this of course is that a doctor has to objectify a human being at times--and should--to do the job correctly. A surgeon, a doctor giving advice to a family. So, there's something understandable about that and laudable about it in situations of compassion and kindness. In the situation of cruelty, it's unbearable. But I would think that the point you are making about it's their explicit humanity that they want to destroy is certainly coming from the architects--and somewhat on the ground as well. The petty humiliation that you talk about and the pleasure that people get from that.

Paul Bloom: Let me just get--because you're raising so many things there. Look, I agree. I think a lot of the stuff on the ground, as it were, really is dehumanization. Often the people who do the most terrible things have no wish, no animus to harm other people, so they just tell themselves, 'These are not people,' and that just makes it go easier. I have a feeling that even in our modern world a lot of military action, which is at a distance, isn't done because 'I'm going to take my vengeance,' and 'I'm going to make these people suffer.' Rather it's just, 'I'm going to just think of these--I'm going to code them as combatants and that's all I'm going to think about.'

Russ Roberts: Yeah, 'I'm playing a video game.' Yeah.

Paul Bloom: Yeah. Exactly. Exactly. And so, I think you get both. There's a lot of historians of the Holocaust and historians of massacres and genocide more generally who point out a lot of people really enjoy this stuff. Books like Hitler's Willing Executioners and Ordinary Men, I think, put aside the myth that all of the villains in the Holocaust were these kind people who decided--who had to be obedient. Following orders--

Russ Roberts: Yeah. Automatons; they have a German culture of respect for authority.

Paul Bloom: That's right. There's enough stories of [?] went above and beyond orders, and took pleasure in it. And I don't think--I think [?] some proportion are actually honest-to-God sadists or psychopaths. But I think a lot of people who do terrible things say, 'Well, these people have it coming to them.'

Russ Roberts: Yeah. Well, and also--we'll come to this in a minute--I say this with some unease, but I think all of us have a little sadist in us. Or many of us do.

Paul Bloom: I think I'd say all of us.

Russ Roberts: Not my mom. Who just has a heart of gold.

9:22

Russ Roberts: It reminds me also of the somewhat recent episode we did with Mike Munger on slavery in the Americas, and the eagerness, how eager slave owners were to perceive Africans as inferior and to justify what they were doing as compassionate. Not just, 'Well, it's okay; I'm cruel to them.' It's more like, 'I'm doing them a service, because they can't run their own lives.'

Paul Bloom: Yeah. There's a whole literature on this written by people trying to justify themselves. They say, 'What do you treat better? Do you treat something you rent better, or something you own better? Certainly, if you own something you treat it better. You take care of it. And that's why the relationship between a master and a slave is a moral one.' And, these people are working very hard to frame what they are doing as morally good.

Russ Roberts: Yeah. I'll never forget the quote--I heard it from David Henderson. I think it's from a transcript; and I don't know if it's literally true, but it's good enough, whether it's true or not; could be apocryphal. But, it's the runaway slave, caught, and now before a judge. And the judge is saying, 'How'd your master treat you?' and he explains he had food, he had a bed. And the judge says, 'Well, it doesn't sound so bad.' And the slave says, 'Well, you know, there's an opening, if you're interested.' And it's so easy to strip away--especially the intangible parts of this. You know, we're doing a book club on EconTalk on Solzhenitsyn's In the First Circle, and the relentless petty humiliation of one's humanity in that setting of the prison camp--even in a "nice" camp, which is what Solzhenitsyn was in and which he writes about in that book. The irony of our conversation is: it is dehumanizing. The treatment is dehumanizing.

Paul Bloom: Yes.

Russ Roberts: Your point is that it's not the dehumanizing that allows the treatment.

Paul Bloom: That's right. I'm more interested in what goes on in people's heads when they do cruel things. In some sense, this is obviously all dehumanizing. But, I'm very interested in the claim that when we are cruel to others it's because we strip them of their humanity and don't think of them as people. And, you know, just to go back to your bigger point you made--I guess, what I try to argue in the New Yorker article is, it's more complicated in two ways. So, in one way, it's not the case that if I recognize your humanity, all of a sudden I'll be kind to you. Maybe if I recognize your humanity, I will despise you. I will envy you. I will want to dominate you--which gets to the sadism part. So, it's a mixed bag. Recognizing other people as people by no means makes us kind. And on the flip side which you mentioned briefly, what's always fascinated me is that treating you not as a human also isn't necessarily a bad thing. A surgeon--you know, there's a wonderful phrase by Atul Gawande: A surgeon treats his or her patient as a problem to be solved. I think a utilitarian philosopher goes around to make the world a better place but doesn't actually think of each person as a person. Understands there are people, but when you do the math and say, 'Well, this will save 100 people and this will save 200 people, so we'll save 200 people,' you really are thinking of things in mathematical terms; but at the same time, you could be making the world a much better place.

Russ Roberts: Yeah, I guess my natural thought there is there are so many things that don't go into the math that are hard to take account of. That's one of the problems I have with utilitarianism. But, it's not the only one.

Paul Bloom: So, this is a standard complaint about the utilitarian, a modern one, I think, is that the utilitarian is cold-blooded and doesn't understand the specialness and distinctness of individuals. But, [?] I think in the real world it's a feature, not a bug. I think that often our very worst decisions are made because we very much take into account individual cases, and we are moved by them--say, the murder of somebody by an immigrant, a gun death, a sexual assault--and our feelings about the particular individuals there which will vary from person to person will often distort policies in all sorts of crazy ways. Whatever you'd say about policy making in our time by our government, I don't think it's too utilitarian.

Russ Roberts: Well, that's a good point. And it comes back to your book Against Empathy, which I encourage listeners to go back and to read that book and to listen to our earlier conversation. I'm very sympathetic to the point, because case by case--it has an appeal. 'We'll just go case by case' is often fraught with all kinds of problems. At the personal level it's, 'Well, I'll just see if this potato chip is worth having.' It always is. 'Just one.' And, in the policy arena it's negotiating with someone holding a hostage; having a rule that we do not negotiate with people that take hostages is a very powerful rule that you want to break every time when the family is crying. But, if you look ahead to the consequences you might decide that it's better to suffer the consequences now to prevent further harm in the future. And I think that's an incredibly important point.

Paul Bloom: I think so, too. I recently watched the most recent Mission: Impossible movie, Fallout. It was a good movie. The first half I thought was great. But, on not one, not two, but three occasions somebody says--

Russ Roberts: Spoiler alert! Hang on--

Paul Bloom: Spoiler alert; but this will not harm the movie. No more than what you've seen in the trailer. Says to the main character, Ethan Hunt, the Tom Cruise character, 'You are a great person. And you are a great person because you believe that the life of one person matters more than the life of a million.' And I'm there like saying, 'Hey. I want to rebut.' No, that isn't a good--the movie is orchestrated so that favoring the one over the million works out, because it's a movie. But it's actually--it's a horrific policy.

Russ Roberts: Yeah, it's interesting. As an economist, there's many movies I can't enjoy because they bother me for those kind of reasons. And that bothered me, too. I knew I was being manipulated there. I was supposed to think that Cruise was honorable for his putting an actual human life first. But actually he was endangering many, many more. And there is--it's actually the only thoughtful thing in the whole movie. The movie is an extended chase scene. But that tension over the life in your own hands that's visible and tangible versus longer term costs--they beat it to death. I just want to say--I was very disappointed in the movie. But that's neither here nor there. But that's an excellent point. It's really exactly right.

Paul Bloom: A colleague of mine, Molly Crockett, has just published a paper with several other people supporting the idea, which I think you and I know intuitively, that people like deontologists. They like people who have moral rules; they like people who favor their friends and their family; and they have little patience for utilitarians, even for effective altruists. I think that--and I think there are interesting reasons why we are constituted that way. But, the utilitarian has few friends. And I've got to say: This is a problem with your discipline, which very much leans utilitarian.

Russ Roberts: Yeah, it's true. Yeah. Well, some of us deserve fewer friends. Perhaps.

17:36

Russ Roberts: I'm going to make an observation here, get your reaction; it's not really the point of your essay. But, I find it interesting how hard it is for people to accept the possibility that people enjoy being cruel.

Paul Bloom: Yeah. You know, you had Jordan Peterson on your show. And, once, twice?

Russ Roberts: Once. So far.

Paul Bloom: And I disagree with Jordan Peterson--I haven't met him--I disagree with him about a lot of things. But there is one thing he says which I think is true and important and doesn't get said enough, which is: He talks about the desire for power and domination. And he talks coherently; then he says, 'Look, people get a pleasure and a satisfaction about dominating others.' And, it's not sadism, strictly speaking. I think what it is, is we are hierarchical creatures, and we want to be on top. And there's all sorts of ways of being on top. There's to be respected and admired. But, failing that, terrifying somebody and making him fall before you is a go-to some individuals use.

Russ Roberts: Yeah; well, while we're talking about movies, I just recently saw a Richard Curtis interview--it's going around Twitter, this clip--he's the director of Love Actually, which, for better or for worse, is one of my top 10 movies for watchability. I love that movie.

Paul Bloom: Boy, you are a contrarian.

Russ Roberts: Yeah. Exactly. Well, I'm not. But among our friends, yeah. Because a lot of people seem to like it. But he makes sentimental movies. And he defends it saying people are basically good. He says: If you make a movie about a sadist who deserts from the army and violates a woman who is pregnant, a nurse--that's gritty realism. My movie about people falling in love is considered sappy and sentimental. But, he said, there are a million people falling in love in England right now. And, that's real life. He's trying to defend this idea that people are basically good. And, although I like the movie and I like to think--personally, I do occasionally like to believe people are basically good, I don't necessarily think that's the best way to approach life. But, it's a good way to approach friends. For sure.

Paul Bloom: Yeah. And I think people are basically complicated. I'm a developmental psychologist, and sometimes I get the question: 'Are babies innately good or innately bad?' And I always answer 'Yes.' We have both appetites. And I think part of the badness in all of us--maybe except for your mother--is a desire to be on top. To dominate. At least not to be on the bottom. And that doesn't necessarily mean, 'I have to round up a bunch of people and put them in a camp and sit in a guard tower and take pot shots at them.' But, the appetite that makes us do that is, I think, a grotesque and exaggerated form of an appetite that exists in all of us.

Russ Roberts: Yeah. It's related to the Adam Smith point about--which listeners know well--man not only desires to be loved but to be lovely. And he takes the 'to be loved' part as a given: Not only do we want to be loved. Of course we want to be loved. Of course we want to be paid attention to. Of course we want to be respected, admired, honored. And, when we don't receive that, we don't take it so well. And it's just something I'm increasingly aware of, something that economists have zero insight into after Adam Smith. Unfortunately or not, but that's like the reality. And it's a big part of human life. It's a big part of work. It's something I think economists have ignored. Which is too bad.

Paul Bloom: It is. Tom Tyler, a colleague at Yale Law School--you know, has long had evidence that what matters to us most in our workplace is the feeling of being respected. And, you know, being respected, being treated as a serious individual, worthy, a creature of dignity, is worth a lot of money. And it's also, I think, the right thing to do. But, I think that this desire actually can lead into the almost-sadism we were talking about earlier. I was having an argument with my family--people around the room who I love the most in the world. But I felt like I wasn't being heard. There was [?]--I couldn't get the words in; I wasn't being heard. And I felt this frustration. And it's very human to say, 'No. You let me speak. You listen to me. You listen to me; you respect my views.' And it's not the prettiest trait.

22:33

Russ Roberts: So, why is it that in a faculty meeting where you have something deep and profound to say, Paul--which I'm sure is all the time. But in a particular case where you feel you actually have something important to contribute; and you can't get a word in. And your insight's lost; and people didn't give you a chance to get your insight across, you get a little bit annoyed. But, with your family who you love, it can be infuriating? Why? What's the difference there? Shouldn't it be the other way around? It's like, 'I love these people. I'm not going to get mad at them.' Why do we care more, sometimes, in those settings?

Paul Bloom: So that gets back to the whole issue of humanization and treating people as people. And it connects to the misogyny work of Kate Manne. And the answer is: I like my fellow faculty members. Many of them are good friends. But I love my family. And, because I love my family, their rejection, their failure to take me seriously, their failure to listen to my 7 points about why Trump would[?] be re-elected and to fully appreciate it--

Russ Roberts: Deep insights--

Paul Bloom: and it really bothered me. While, my colleagues listening to my plan to recruit 10 graduate students to work with me and [?]. And, of course, this is reflected more seriously. In fact, I'm far more likely to kill my family.

Russ Roberts: Yeah. It's horrifying, isn't it? Yeah.

Paul Bloom: And this is the thing: this is how I end the New Yorker article, which is that there is this view we have--the dehumanizing view is a very optimistic view, because it says that so much of the badness that we do is based on a mistake: 'I'm just not recognizing the humanity'--

Russ Roberts: So you can be re-educated. You need a sensitivity session.

Paul Bloom: 'I need a sensitivity session. I need to see some slides. I need to talk to them.'

Russ Roberts: Yep. Once you talk to them, you won't hate them any more.

Paul Bloom: Once I talk to them it will all be worked out. And I think that's one of the biggest mistakes we make about morality. I think that the reality is that fully appreciating someone's humanity opens up so many positive things--you can't be human without it; you can't have a decent relationship. It's the foundation of love, and friendship. But, it carries with it so many terrible risks. Really loving somebody, really knowing somebody opens up the possibility for love; but it also opens up the possibility for hatred.

Russ Roberts: It's so unintuitive, but it's so true. Talk about that some more.

Paul Bloom: So, I like Manne's analysis here. And she talks about--she gives the example of Elliot Rodgers [Rodger--Econlib Ed.]. Elliot Rodger is this kid in California. He left a video describing his motivations; and he went on a shooting spree and killed many people. And he was in some way the first incel. He argued, he claimed, that he was killing people not because he didn't see them as people, but because they rejected him.

Russ Roberts: He's got a grievance.

Paul Bloom: He had a grievance. That's a lovely term, 'grievance.' You don't have a grievance against a dog, or a rock. You have a grievance against a person. And he had grievances against the beautiful women who he said scorned him and wouldn't sleep with him. He had a grievance against--again, he'd call them 'Chads'--these handsome young men who were romancing and having sex with these beautiful young women. And he felt left out. He felt--and there's this profound resentment. There's this profound feeling of grievance. And I think that sort of instance captures the psychology of cruelty maybe better than the picture--which is sometimes true--of someone gunning down a bunch of people because he thinks they are monkeys.

Russ Roberts: Yeah. I keep thinking about the fact that--the flip side of this, and I write about this in my Adam Smith book: If I'm having an argument with a family member and I'm just really frustrated and angry--which I hope doesn't happen very often, but it does happen. And I get a phone call. And for reasons not worth going into, I have to take it. Somehow I go, 'Oh, hello.' I don't say, 'What do you want?!' I just say, 'Hello.' Because I can turn that off when I want to. And yet, I can't turn it off sometimes in the middle of that argument. Which is strange. And I think part of what's going on there, the love/hate thing, is that: I want your respect, Paul, because I think you are a smart person and you have achievements; and it would make me happy if you walked away from this interview telling your friends, 'That was a great experience. EconTalk's a great program,' blah, blah, blah. Right? It's going to make me feel good. You can send me a video, by the way, if you want.

Paul Bloom: You know this happened after our last conversation?

Russ Roberts: Of course it did. For hours. And I really appreciate it--

Paul Bloom: So now I don't know whether there may be diminishing returns.

Russ Roberts: Yeah. That's a good point. But I would--in the abstract, I do like your respect. But my wife's respect, that matters a lot more to me. Right? And so, it's not--even though I can tell myself, 'Well, of course she respects me. She's my wife,' I think in the day-to-day tension we have with the people closest to us, that can be a challenge. We expect so much more of the people around us. And, the flip side of that--which also drives me crazy--is, I can be disrespectful to my family. I use the example of taking a phone call. That would be one example. I would never do that in the middle of an argument with you in person. But I can disrespect them and rationalize it and say, 'Well, they're my family. They love me. I don't have to really treat them that well all the time.' When really, to me, morally, it should go the opposite way. I should put them on a pedestal constantly.

Paul Bloom: Yeah. Yep. There is--of all things--and I'm going to bring up a story I heard from a rabbi, once, and it's about this guy, and he and his wife are at a hotel; and his wife is fumbling to get something out of her bags, at the front desk. And the guy--and they've been married for 30 years--looks at the guy behind the desk and rolls his eyes.

Russ Roberts: Yeah. It's a fantastic story--

Paul Bloom: And, the point of the story is, you're betraying[?] how bad that is. And it's the sort of thing I can see myself doing. I could see--but you are betraying somebody you love to a stranger, showing dominance. But, that's part of the package. You know, there's this study--and I don't know if this is true. This was from a long time ago, the sort of thing you see in Intro Psych. But it rings true. It was from a--I forget--it was done by a marriage expert. And they did videotapes of couples interacting. And they wanted to predict what aspects of the interactions predicted that couples would break up. And it turned out, in this story, that it wasn't whether or not they were quiet or noisy. Not even whether they screamed at each other and fought. It was whether they showed contempt for each other.

Russ Roberts: Yeah. That's a great point.

Paul Bloom: Contempt is like the relationship killer. It means you are no longer--we've gone beyond grievances, here.

Russ Roberts: Well, [?]--you could argue contempt is the opposite of respect. And without that, you can't have a good marriage, in my view; and you can't have a good friendship.

30:03

Russ Roberts: Going back to the--to bring us full circle, and I want to shift gears at this point, but going back to the hotel front desk story: When you roll your eyes with a stranger or a friend over a spouse's behavior, that is the ultimate dehumanization of your spouse. What you are saying is: 'I'm going to take this moment of frailty, of human imperfection--say, that my spouse is disorganized--and I'm going to use it as a source of camaraderie and humor with you, the stranger. I'm going to treat it like a clip from a movie that we might both react to.' And it just--it's a bad idea.

Paul Bloom: Yeah. And it brings up something that I don't think I gave enough time and respect to in the New Yorker article, which is: You know, treating somebody as a person has all sorts of moral risks, but there's dehumanization of that sort in everyday life. And it could be really ugly. And it could be a relationship killer. And maybe one way of looking at the contempt finding, if it's true, is that you don't even think of the person as worth engaging--that contempt is a dehumanizing emotion. And that's fatal for a relationship.

Russ Roberts: Yeah. I just want to close with a really sappy story, which I try to think of--I don't think about it often enough--but it's either a cartoon or a video where it shows a person going through their day; and each of the people that the main character encounters, instead of being seen as human beings, are just the thing that they are doing. So the janitor is just a broom--physically, a broom, or has a broom for a head--as if all they are is the sweeping up. Or the bus driver is a steering wheel rather than an actual human being. And, it's easy to point out that that's often the way we treat folks who help us in our lives. I think it's an interesting challenge, how to overcome that. I try to say, 'Thank you'--I walked into the mall one Sunday to do an errand, and there was a woman pulling together some garbage bags from the outside trash can; and I smiled at her and said, 'Thank you.' It made me feel good, by the way. I'm repeating it, of course, to earn your respect, Paul. But, more seriously--you know, I thought about that--that's a minimal thing to do. And maybe she thought I'm crazy. Like, 'Why is he thanking me?' But for me it's a way to say, 'I don't see you as a robot who happens to pick up trash. I see you as a fellow human being who is doing something that makes my life a little bit better.'

Paul Bloom: It's a struggle. Because, yeah: this anthropologist, Fisk[?], talks about these different levels on which we interact with each other. And there's a market level, where, you know, you pay your money, you get your product. And things go smoothly. It's based on autonomy and mutual consent. But there is also a friendship model. You know: If I go to your house and you make me a nice meal, I don't give you money afterwards. That would be disrespectful. We just have the wrong model. But in so much of our life we are dealing with people, and you have to navigate both. And it's very difficult. It took me a long time--at some point in our lives, my wife and I got somebody to clean our house. Because we had a very messy house and we had some income available. And we viewed it as--the person who cleans our house benefits from it. But it felt so weird, having someone walk through the house picking up my stuff.

Russ Roberts: Yeah.

Paul Bloom: And, yeah.

Russ Roberts: Yeah. No. It's a strange thing. And they are very glad to have that money that you pay them for picking up your stuff. My view on that--I think I've talked about it on the program before--is, when my kids would throw up, I would not say, 'Well, the cleaning lady is coming tomorrow,' or 'tonight,' or 'I'll just leave it for them. Because that's what we pay them for, isn't it?' Not really. I've got to clean that up. So, I think there are cultural rules about what's considered acceptable. It's a very interesting question when there's a large gap. It's one of the places where I think inequality is real--

Paul Bloom: Yes.

Russ Roberts: if I have a person in my house whose life is not the same as mine, and she is going to be in my space with her crew for an hour or two. And that's a little bit weird. Her car is a little bit nicer than mine. That makes me happy. But--and I always like to point that out. But, I say that--it's not really a joke. The point of that is, I think it's real: It's true that she doesn't have--my house is probably bigger than hers. But, there are many things she has that are not so bad. And so, you have--it depends where you are, what country you are in, what culture you are in, what the future holds. It's a little bit complicated.

Paul Bloom: Yes. The morality of it, the psychology of it--I think this is a case where maybe ethical philosophers have let us down. Because I don't see any good guidelines on how we are to properly deal with each other in these market circumstances. And, it is often complicated and troubling. And it's often a source of embarrassment and humor. Because it's such a difficult, troubling situation. And obviously there are gender and racial issues that arise here.

Russ Roberts: Yeah. For sure--with class, and--

Paul Bloom: Yeah.

Russ Roberts: What I do is I ask them what kind of music they want--the cleaning crew. And they are from Central America; they want bachata music which has its own station on Spotify. And I put it on. I blare it as loud as they want. I don't particularly like it. I've come to like it a little bit, now, actually. It's grown on me. It's not my style of music, but it's grown on me. And if I knew what the words were I think I would really like it. I think it's very sentimental.

35:59

Russ Roberts: Let's shift gears. Let's talk about the Derek Parfit thought experiment that you write about in a different article that we're going to now turn to. What was that thought experiment?

Paul Bloom: So, this was some work I did with Matt Jordan, a wonderful graduate student at Yale. And, Parfit is one of my favorite philosophers--a lot of people's favorite philosopher. He passed away very recently, very sadly. He was this brilliant and very moral person. And he had this thought experiment in Reasons and Persons. He called it the harmless torturers. And the idea is--I'm going to simplify a little bit, just to get the form that I remember: Somebody is in a tiny bit of pain, and you could press a switch; and by pressing the switch, you add to their pain. It's electric shock, say. But you add to it so infinitesimally that they don't even know there's a difference. And you walk around throwing the switch on a thousand people, not noticeably affecting anybody. But there is also somebody behind you, and he is also throwing a switch on those thousand people. And there are, say, a thousand people doing this. And as a result of everybody throwing a switch, each individual ends up in terrible agony. So, as a group, you have all tortured all these people. But as an individual, you haven't increased the misery of a single person. Or if you have, you have done it in a very tiny way. And this is sort of one of these odd thought experiments designed to explore utilitarianism and so on. But, what was interesting about it is, I think what Matt and I argued is, it lives itself out now in social media. In that, you know, somebody says something really bad about me on Facebook; I look at it; somebody Likes it. It's not a big deal. But then at the end of the day, a thousand people have Liked it. And I feel crushed. I feel humiliated. And so, this is a sort of social mobbing that happens where--what's interesting for us is--nobody is doxing [i.e., publishing private information about--Econlib Ed.] anybody. There are no rape threats. There are no death threats. There's just an accumulation of tiny acts that in the end have terrible consequences through our technology. And so we explore that. We talk about this in the context of micro-aggressions--the writer Julian Sanchez makes the point that a lot of small insults, each one very minor, maybe not even a problem, but if you get a lot of them it could just grind you down. And we take that as a moral issue people should take seriously.

Russ Roberts: I think it's extremely interesting and important. We had an episode with Megan McArdle on internet shaming that I thought was really--she was fantastic. And it's--the flip side of that is that, 'Well, some people deserve'--not you, Paul, of course, but 'some people deserve a comeuppance. And all this is, is a way of sending moral signals to people that they have done the wrong thing.'

Paul Bloom: So, I've heard that--

Russ Roberts: I don't like it, either.

Paul Bloom: I wouldn't doubt that, in some cases, we are really hitting at the right person and maybe the effect will actually be a good one. I just think, in the real world, in the cases that I see so often: First, they are always punching down. It's always some obscure schmo who has done something stupid. Who has been caught in the worst 30 seconds of their life. And there it is on a video. So, it's often punching down. It's often based on false information. Just the most recent case, like a week ago: there is some film of some guy catching a baseball--

Russ Roberts: Oh, yeah, it's a fantastic example--

Paul Bloom: And he keeps it.

Russ Roberts: No, I think he takes it away from a small child, it appears.

Paul Bloom: That's right--

Russ Roberts: Either the kid bobbles it and he grabs it; or--I think that's what happens.

Paul Bloom: That's exactly right. And so, of course, Twitter goes berserk--

Russ Roberts: 'A monster'--

Paul Bloom: And the answer is: 'Well, now, let's ruin his life.'

Russ Roberts: 'He deserves it.'

Paul Bloom: 'Let's ruin his life. Let's make sure he loses his job; he can't live anywhere. Maybe he'll kill himself.' And then it turns out like, what, 5 minutes later, it's revealed he'd given like 3 balls to the kid in front of him. And then later on, you get it that[?] his wife gave it to the kid.

Russ Roberts: I think actually what happened is--I'm a big baseball fan, Paul, so I was really interested in this; and I also have taken my children to games and I'm very interested in getting a ball.

Paul Bloom: I'm faking all my sports knowledge.

Russ Roberts: Yeah. I can see this. So, what actually happened is that the clip ends with him seemingly gleefully taking the ball from this little kid. But it turns out: He's giving it to another kid, who is in a different spot, two seats over. The kid in front had already either gotten one, or gotten two--which is absurd, but they had been sitting near the field and it had been tossed to them by a player, I think. So anyway, it's a perfect example of: You didn't know the whole story. You jumped down this guy's throat.

41:03

Russ Roberts: But, I want to take a different tack on this. Let's suppose that's where the clip really did end. That's where it should have ended: He's a jerk; maybe he never got a ball as a kid himself. Maybe he just likes baseballs. Maybe he didn't like that kid in front of him. Maybe he's just selfish. The idea that this somehow is irredeemable--the idea that there's no limit to what punishment this deserves--it's like when--I don't want to pick an example, but there are so many, where some public figure does something stupid, in their youth, say, and now they're up for this new job as editor of this magazine or star in this movie or running for office or appointed to that position; and somehow that one mistake is considered decisive. The idea is that we now know--because we've seen this clip or we've read this quote from when the person was 20 years old, or 5 years ago--that obviously they are a bad person. As if you can just put people in boxes. And I think that's an evil, evil thing to do: to presume that people are either good or bad and therefore--that's dehumanizing. Basically saying this person is irredeemable and they deserve everything that comes down on them.

Paul Bloom: I agree. I agree. You know, there's Jon Ronson's wonderful book, So You've Been Publicly Shamed, where he looks at these cases of people who have been attacked by mobs and how their lives are ruined by social media. And he points out: Some of them are guilty. Like, some of them did something pretty bad.

Russ Roberts: Yeah. Bad! Yeah.

Paul Bloom: But, there's no sense of mercy or justice or proportion. And, you know, the legal system has this. Bureaucratic systems have it. You know, if I steal some paper clips from the front office, maybe I get in trouble, but I wouldn't lose my job. And even informal social systems--my family has it. So, if my kid bails on me when he's supposed to get together with me for lunch, I say, 'That's really rude.' And maybe I sulk; and maybe we have some harsh words. But I'm not going to take him out of my will.

Russ Roberts: Maybe a page --

Paul Bloom: I'm not going to lock him out of my house.

Russ Roberts: Maybe a page of it. But not the whole thing. He might lose a couple of things.

Paul Bloom: He loses a percentage each time. He doesn't need that. But it's proportional and balanced, because people live together. And it makes them--if you and I are part of the same family, we want proportionality, because even just out of self-interest: one of us will do something wrong. And the problem with social media--I think you are right--even if the person did it, even if all the facts are out there, it's just a mob. And people jump into the mob because of the pleasure they get. In part it's a pleasure of punishment; I think in part it's a pleasure of affiliation.

Russ Roberts: Yes: some signaling going on there, big time. As you point out, I think.

Paul Bloom: I've never been part of a group that's stoned somebody. But, it's not hard for me to see how much fun that must be, to be part of people throwing the rocks and screaming--

Russ Roberts: Justice.

Paul Bloom: Yeah. You know, I enjoy the same--maybe I enjoy the same movies that you do; and a lot of these movies kind of pull on our vicarious feelings of justice.

Russ Roberts: Yeah, our desire to do that.

Paul Bloom: How good it is. I don't know if I'd like to be Dirty Harry, but I sure like watching Dirty Harry.

Russ Roberts: You write, when you're talking about this internet shaming:

But isn't this death by a thousand cuts a good thing? If it were Hitler, wouldn't you be right to let him have it? Yes--but the problem is that when we are infused with moral outrage, acting as part of a crowd and operating in a virtual world with no fixed system of evaluation, law or justice, all our enemies are Hitler.

Paul Bloom: Yes. And I've actually seen that literally, on Twitter. I've seen people say, 'I think we should hear this person out,' and they say, 'Would you hear out Hitler?' Well, Hitler, I think is a terrible paradigm for everyday moral judgments and moral responses. So, my view is--and people--I've had pushback on this. We tried to make our examples both sort of liberal and conservative, because the mob has gone after both. The mob has gone after people for--you know, on all sides of the political spectrum.

Russ Roberts: For sure.

Paul Bloom: So, you know, this is not meant as a critique against marginalized groups speaking out against bad people. And it's also not against--somebody whom I made some critical remarks about in our article said there's something hypocritical about it. You know, 'Bloom is against public shaming but there he is criticizing me.' But, I'm not against public criticism or debate. If you dismantle my arguments over Twitter and everything, that may not be fun for me, but that's actually a good thing. It's just that if you, you know, post a funny picture of me falling in a puddle of mud and 15,000 people retweet it with the word 'Loser' on it, that's a different story.

Russ Roberts: Yeah. It's just cruelty. It's not--

Paul Bloom: It's cruelty.

46:25

Russ Roberts: And I just want to mention--we talk a lot on this program about Bootleggers and Baptists theory of regulation, where politicians will invoke high-minded principles--that's the Baptist side, say, not having liquor sales on Sunday--but then, they are really also taking care of their Bootlegger friends who make the donations to their campaign on the sly, because Bootleggers benefit from liquor stores being closed on Sunday. And, I think that's a very important perspective on politics. But it's also a very important perspective on our own humanity. And we will do evil things, because we enjoy them--that's the Bootlegger side. But we'll explain that it's important to shame this loser, because he's a bad person. That's the Baptist side. So, I feel virtuous. It's not just that I'm cruel to somebody online; I'm going to feel virtuous about it? Because that's where we're at right now.

Paul Bloom: Right; and that's sort of a theme which pulls together the conversations about the New Yorker article and about this article. What they have in common is the people who are being cruel don't think of themselves as villains. They don't even think of themselves as morally neutral. They think of themselves as good people--

Russ Roberts: crusaders[?]

Paul Bloom: They are--exactly--to strike at somebody who is doing something wrong is easily seen as a good thing. And the good thing--it's not irrational. Sometimes it is a good thing.

Russ Roberts: Yep.

Paul Bloom: You know, one of the sort of things--Evolutionary Psych 101 says the only way we could be good people to each other, as a society and as a species, the only way goodness could survive, is if we had some mechanism in place to punish or shun people who are not good. Otherwise goodness is a crappy strategy and will just fall by the wayside. And, this is true, I think, from an evolutionary point of view. But it's also true from a societal point of view. Which is: You want there to be cops. And, for situations that don't demand the level of cops, you want people to be able to say, 'Hey, dude, that was really racist. You shouldn't talk like that.' Or, 'I'm not going to invite you to my parties any more if you throw up all over the furniture.'

48:36

Russ Roberts: Well, I think the real issue here is the magnitude of the punishment.

Paul Bloom: Exactly.

Russ Roberts: So, let's say that guy did really steal the ball from the kid. And his wife might look at him and say, 'That was awful.' Now, she's not going to roll her eyes at the guy on the other side of her husband. But let's say she turns to her husband and says, 'That's not nice.' Well, that's probably an appropriate response. Certainly it's an appropriate response to your child who has kept it--your teenage kid, who has taken the ball from the 7-year-old in the row in front of you. But it shouldn't get a death sentence. It shouldn't get you fired from your job. And, most people, I think, think: 'But it's just wrong. So it doesn't matter how big the penalty is.' And one of the things I've learned from the Coase Theorem--I hate that phrase--from Coase's article on social cost, is the idea that it's true that we should punish bad behavior. But there is such a thing as punishing bad behavior too much. You'll change people's incentives to do things that actually are important to do. Like, you won't go to baseball games, say. You won't take your kid to a baseball game because you might get caught on film doing something horrible. And so I think we really need to work out the norms--not work out--but the norms and culture of these kinds of issues of public clips and shaming and all that. It's going to change. I think various things will respond. And I don't know how it's going to turn out, but I don't think it's going to be exactly the same way it is right now.

Paul Bloom: That's an optimistic point of view. In some way in our article we blame technology, because normally, you know, if I say to you, 'You've been a jerk,' well, I'm saying that to you. I'm face to face. I'm saying that I understand the consequences; I know what I'm doing. But if I tweet that at you--

Russ Roberts: Anonymously, too, sometimes

Paul Bloom: anonymously--and then a thousand other people do it, I'm part of a group. My own intention was never to destroy your life, perhaps; or I [?] might not even think about it that much. But, we are ill-equipped to think about the magnitude of such things. And, this made me think of something which I was a bit skeptical of: I read this article, as I said, by Julian Sanchez, and he talks of it in terms of what are now called microaggressions. And, I used to roll my eyes a little bit about some of this. But, it's not hard to see, if you think about it, that there's a parallel thing, where--

Russ Roberts: That's a great--

Paul Bloom: where I say something to you, and I'm--it's a joke. Maybe it's a remark about something. And, in itself it's no big deal. And if you called me out about it, I'd say, 'You are so touchy. It's no big deal.' But, maybe, if it's multiplied--maybe if you'd experienced this a hundred times each day--your life becomes unbearable. And, although my own act in and of itself was no big deal, I am part of a general problem that makes you miserable. And so I think we need to respect the fact that often we had no bad intentions and we will be right; and yet we can appreciate that our own small acts, when accumulated, make people's lives miserable. And so we should stop these small acts.

Russ Roberts: Yeah. It's the categorical imperative. It's Kant--

Paul Bloom: Yes. That is exactly right--

Russ Roberts: where he's saying: Even though your snide remark, cheap shot, or raised eyebrow is not a big deal, you should be aware of the fact that if everyone did it, it would be unbearable. And therefore you shouldn't do it even once. And I think people very easily rationalize. My silly example of this is that people eat grapes out of the grocery store bags--just one. And they let you do it, because the bag's open. I mean, obviously, they don't mind if you do it. They don't mind. What they do is raise the price of grapes to include the fact that people do it--they have to, or they wouldn't make money. Right? All of shoplifting is included in the price of the food we eat. So, it's true: any one shoplifter has a trivial effect. But if lots of people start shoplifting--which they do, of course--there is shoplifting--

Paul Bloom: yep--

Russ Roberts: everybody pays a price. And that's stealing. For obvious reasons. But anyway.

52:53

Russ Roberts: Let's turn to the last essay, which is in many ways the most interesting one. And I hope we can go a little bit overtime, if that works for you.

Paul Bloom: Definitely.

Russ Roberts: You wrote an essay on Westworld. We're going to talk a little bit about that program. Talk about the premise of the show and what your observations on it are. This is your article with Sam Harris.

Paul Bloom: That's right. And this grew out of--no offense, but it was on another podcast, a podcast with Sam Harris--a lack of fidelity. And we got to talking about Westworld, and we decided we should turn our thoughts into an article. And so, the premise of Westworld, for people who haven't seen it--and we wrote it based on Season One, which was considerably simpler, I think--is that there is a futuristic amusement park which is like the Wild West, with, you know, bandits and prostitutes and bartenders. But they are robots. They are humanoid robots, indistinguishable from people. And you, as a vacationer, can go there and interact with them. And you can go, as they'd say in the show, as a white hat: just have adventures and try to save damsels in distress. Or even just use it as a vacationland--there are families who go there with children. But, you can also go black hat: which means you could rape; you could murder; you could torture these robots. And you are allowed to do this. Because you are not harming people. And, so, Sam and I are very interested in the morality of that. And so we explore the issue; and make sort of two general points. One point is that we don't know: It's impossible to tell whether or not creatures as sophisticated as those depicted in Westworld are sentient. And, if they are--even if there is any risk that they are--such behavior is terribly immoral. A second point is that, even if they aren't--even if you know they are like toasters--it might be morally corrosive to treat things that are indistinguishable from people in terrible ways. It might bleed over into your interactions with people. That's a speculation. But we think it's plausible. And I guess I'll also say a third point, while I'm monopolizing the stage--a metapoint--which is that it's an interesting way to do philosophy. Philosophers love thought experiments. But, Westworld is itself a thought experiment. Having the robots played by such brilliant actors makes you feel the force of the fact that you shouldn't harm them. If I just told you about it, you might not feel it; but if you watch the show, you'll see the terrible things being done to these robots. And the TV show makes it clear in a way that other ways of expressing things can't. And so, I have this sort of pet idea that a lot of movies and TV shows--the movie Memento being another example--are actually themselves works of philosophy.

Russ Roberts: Well, I saw--I saw Episode One of the series.

Paul Bloom: Yeah--

Russ Roberts: And my wife and I watch occasional series together. We watched The Crown. We watched The Wire--to span the entire range, from gritty realism to period pieces. But I happened to watch that without her. And after watching it, I realized, it's not for her. She really doesn't like violence. She would be uncomfortable watching those robots getting killed, because they bleed. In case you haven't seen the show. But in the first episode, I found it so haunting--I could not stop thinking about the program. And I may go back and watch the season, maybe more. But I was just overwhelmed by how thought-provoking it was, even just the first episode. And how it jarred me. And how it's an incredibly well-made show. It's just extraordinary.

57:15

Russ Roberts: Let's start with the second point you made; and then I want to come back to the first at a little more length. It's just interesting, this idea that it's morally corrosive--we have the same argument against letting people play video games that are becoming increasingly realistic--not stopping them legally, say, but not wanting one of your kids to play them. Because perhaps that makes it harder to treat human beings well. But it also reminds me of our conversation about animals. Right? I think one of the arguments for treating animals well, even if they are not conscious, is that, if you treat animals badly, you might just have trouble treating humans well. And I think they are not unrelated. That would be the claim. I don't know if you'd find out that's true or not, or whether it's just sort of empty armchair theorizing.

Paul Bloom: I don't know if that's true, either. Kant made the claim about animals. Because he was--maybe[?]--contra Descartes. One of them was committed to the view that animals didn't really feel pain. Which is an unintuitive view, to say the least. But then, he said, 'You shouldn't harm them because it would lead you to be cruel to people, who definitely do feel pain.' And then there's the video game case. So, it's sort of strange to find myself making this argument for Westworld, because I think in the video game case the evidence is simply tremendously weak--

Russ Roberts: yep--

Paul Bloom: the idea that that--my kids would play extremely violent, you know, first-person shooter games. Um, and, you know, I wondered: Is this going to turn them into killers? And, you know, a) it didn't. But, b) I had enough of an understanding of the literature to appreciate there's no evidence that it does: that it has any bad effect.

Russ Roberts: It could go the other way. Right? It's a way to get your aggressions out in a way that's harmless. It could easily be a positive--

Paul Bloom: You could tell a catharsis story, if you want. It does do that[?]. I think that's too much. On the other hand, you know, people like Steve Pinker correctly point out that the period in which the games have become more realistic and more violent is the same period in which violence in that age group has dropped a lot. And compassion towards minorities and out-groups has increased. So, you know, if you are going to make a big deal about these correlations, you would be forced into the view that violent video games make us much nicer.

Russ Roberts: Yeah. I think that's ridiculous. Correlation is not causation. And I'm not sure how much nicer we are to people, really. And a lot of the darker sides of us are just bubbling--I like to say that the veneer of civilization is thin. I don't know.

Paul Bloom: That's interesting. I mean, that's sort of--I'll finish the thought. Which is that I do, however, think that actually raping somebody or beating up somebody or torturing somebody to death, in a situation where they are indistinguishable from a person, will lower your inhibitions toward doing it to real people. And I don't have empirical evidence for that. But, I think it's a reasonable worry to have.

Russ Roberts: Yeah--I feel that way. Again, one of the--there's a cheap trick at the heart of Westworld, which is that they are not actually robots.

Paul Bloom: Yes. Yeah.

Russ Roberts: There are actually people playing robots in a way that makes them semi-close--very close--to human, but not quite; and of course--it's not a spoiler, but that's going to get blurred over time in the series. It's what makes it so interesting. And--

Paul Bloom: But it illustrates what it would be like to have humanoid robots. Which is, they'd be indistinguishable from actors playing robots.

Russ Roberts: Yeah. If they got--as the technology advanced. And, it's an interesting thing: we tend to make our robots look like humans, in a generic sense, but not so much in a literal sense. And it will be interesting to see how that evolves as the technology gets better. Right?

Paul Bloom: I wonder, for reasons connected to what we were talking about before regarding our relationships to those who serve us, whether there will be a movement to make them look different from humans. That some of them will be specifically built so you don't treat them as if they are humanoid, because the humanoid part of them might make it very difficult for you to use them in certain ways.

Russ Roberts: Of course, if you are worried about robots destroying humanity or abusing us, you'd want them as human as possible so you could hit them just like we can hit other people.

Paul Bloom: But you don't want them so human that they could mix in with us and then subvert [?]--

Russ Roberts: Exactly.

Paul Bloom: You don't think we've gotten nicer over the last 50 years, in an interesting way?

Russ Roberts: I used to think that. I think that's--one of the things I've learned from Jordan Peterson is that he reminds us that that's not necessarily the case. I don't, I really don't. I think our potential for cruelty is unchanged. I think the fact that we are living in a liberal democracy--with a representative government--is something of a historical aberration. It may persist. That would be just grand. But it may not. And, I'm not nearly as optimistic or idealistic about that future as I was 10 years ago.

Paul Bloom: Huhh. So, you don't think our psychologies have changed. You think we still happen to find ourselves in an environment which rewards certain forms of kindness and good behavior. But if the environment goes away--and it could go away very quickly--we'll be back to where we started from our worst.

Russ Roberts: Sure. Don't tell me you believe otherwise? You don't think we've--you said 'our psychology has changed.' I'm not sure what that word means, 'psychology.' But, I don't think our nature has changed one whit since, say, the gladiators, or throwing people to the lions in ancient Rome, or leaving kids out on the hillside in Sparta. Or the Crusades massacring people, or the death camps of Auschwitz, or the Gulag. Nah. We're all the same. We haven't changed a bit[?].

Paul Bloom: I don't think our natures have changed. We're the products of evolution. We're mammals. We're primates. But, I do think our psychologies, as individuals or in our culture, have changed. So, just to take a--you know, the attitudes of the people in our community toward sexual minorities, like gay people or trans people, are genuinely different than they used to be when I was a kid.

Russ Roberts: I agree with that.

Paul Bloom: But it's not that--you can imagine a case where, 'Well, there's just more laws, and there's more restrictions.' You can imagine some cases where people's attitudes are exactly the same, but you are forbidden to do or say different things. But, at least in that example I think the attitudes and feelings are genuinely different. And in a positive way.

1:04:33

Russ Roberts: I think so. I think the tougher case would be minorities. Racial differences, religious differences. It really goes back to what we started with. Right? People of different races are just like us, I think, more or less. That view was wildly unpopular a hundred years ago. It was believed that they weren't like us. We were better--whoever 'we' were; and whoever 'they' were doesn't matter. I guess part of my pessimism is the rise of tribalism that we've been talking about on the program in the last few episodes, and recently. And, I guess the desire to dehumanize, to oppress, to see others as not like us, is so deeply embedded in us that I think we're capable of great cruelty, to people we think are on the wrong side of history, the wrong side of--so, while there's, I don't know, while there's increased tolerance of some types of behavior, some types of choices, some types of differences between people--I also see at the same time--I mean, you get Nazis marching in Virginia? In America? And it's a tiny number. I don't want to be--I'm not paranoid. But, how did that happen? How did that become culturally--wow. And I think that's the veneer of civilization covering over what I think is still out there, racially, religiously. A lot of hatred inside the human heart. Which is a horrible thing to say. At the same time, I took my notes before the show, and I wrote down things about how I'm increasingly eager to see people as being inherently good. And, I think on a day-to-day, one-on-one basis, as I've gotten older, I think I'm better at that. I think that's a good thing. But, at the global, meta-, big-picture level, I'm not sure we've made any progress.

Paul Bloom: Well, it's probably better to love people and distrust humanity than the other way around. But, I'm--I mean, I see what you are saying. And everything you say rings true. But, it was a really small number of people. And Nazis have marched in America before--

Russ Roberts: It's true--

Paul Bloom: in much greater number, actually. Around World War II, they filled Madison Square Garden. In Skokie, they marched. And then, the anniversary of the march was this joke. There were like a dozen losers wandering around aimlessly, surrounded by thousands of anti-[?] members.

Russ Roberts: Yeah. That's a good point.

Paul Bloom: I guess the question is--it is impossible to doubt that the last few years have been very bad for cosmopolitanism and very good for tribalism. And this, in some sense, goes back to the argument I was having last night with my family. And I can't say--[?]--

Russ Roberts: Let's just put the video up online about that. You don't have to say anything else.

Paul Bloom: No, you don't want to see it. Which is: Is what we have now a kind of blip? Where, it will go back to normal and then the next President will be, I don't know,--

Russ Roberts: mainstream--

Paul Bloom: or another Jeb Bush.

Russ Roberts: Yeah. Lower the flag to half mast, say, half-staff, when a person of his party dies.

Paul Bloom: That's right. Or will the next, you know, President be some--or will things just get worse, and the tribal appetite, which definitely exists, will just get more and more play in modern life? And I tend to be an optimist. I tend to think this is just an aberration we're living through now. But, we'll know more in 5 years.

Russ Roberts: Yeah; boy, I'm on the other side of that one. My joke is it's going to be Oprah against Ronda Rousey in 2024--

Paul Bloom: That's right--

Russ Roberts: As opposed to Mitt Romney against, say, Joe Biden or whatever Joe Biden's equivalent will be in 2024. It's a good question.

Paul Bloom: Well, don't count out Dwayne "The Rock" Johnson.

Russ Roberts: Yeah. No. He's going to be strong. He's going to be strong.

1:08:53

Russ Roberts: Let's go back to Westworld. What was your first point, which we didn't get to, about why there's something disturbing about the program?

Paul Bloom: The first point is that the robots are probably sentient. I mean, it's impossible to know. It's the standard, you know, undergraduate dormitory argument at 2 in the morning: How can I know you're conscious? How can you know that I'm conscious? But, these robots are of such sophistication, such complexity, it beggars belief that they don't have feelings. And you could say, 'Well, they aren't born like we are. They aren't human. They aren't made of flesh and bones like we are.' But, like most cognitive scientists, most psychologists, I would shrug and say, 'It's not clear that that matters.' And so, if what I have in front of me is a machine that begs for mercy, has a conversation, seems to have goals and desires--I think our very best bet is to say there's somebody home. And if there's somebody home, I shouldn't treat them badly. And, I guess what I'm saying is: When these people walked into Westworld and they saw these robots--they knew they were robots, but they talked with them--they should have said, 'I can't harm them, because they are conscious.'

Russ Roberts: Do you think it's wrong to put a dog to sleep that has cancer, when you don't want to pay for the chemo?

Paul Bloom: No, I don't think that that's wrong. But that cuts across two different issues--

Russ Roberts: I don't, either, but I'm not sure why, when I think about it.

Paul Bloom: One issue is that the question of killing is somewhat different. I would think that it probably is wrong to let a dog suffer in agony.

Russ Roberts: Yeah, that's a good point.

Paul Bloom: Because [?] putting a dog to sleep or curing the dog would take up too much time, would be too inconvenient, would be too expensive. I think that--so, I don't think the issue is so much life or death. And I feel this way also regarding the manufacture of animals for food. It's not so much the killing. It's the suffering.

Russ Roberts: Yup.

Paul Bloom: And there's every indication that those things in Westworld can suffer.

Russ Roberts: Yeah. I don't agree with you on that. At least for now. I'm going to read your quote. You write,

The biggest concern is that we might one day create conscious machines: sentient beings with beliefs, desires and, most morally pressing, the capacity to suffer. Nothing seems to be stopping us from doing this. Philosophers and scientists remain uncertain about how consciousness emerges from the material world, but few doubt that it does. This suggests that the creation of conscious machines is possible.

I agree with that; but I want to take a twist on it. You want to say something?

Paul Bloom: Nope, nope.

Russ Roberts: So, I've been thinking about this for a little while. I don't know if I have anything interesting to say. But, I was stimulated by an insight of Harry Frankfurt about what makes us different from animals. His test is: Can it have desires about its desires? So, my example of that is: Can a smart vacuum cleaner feel sadness at not having a chance at being a driverless car? And I think the answer to that is, 'No.' I don't think it would feel sadness. I had a guest on this show, Pedro Domingos, talking about artificial intelligence and machine learning; and he has this beautiful parable of a robot growing up alongside you, experiencing--taking in--all the data that you are taking in as a child, as an infant; and processing it with its own brain. And, in theory, it could have consciousness. And my question would be: If you showed it, at 40 years old, the teddy bear that you went to bed with every night, would it feel nostalgia? And one argument is: Well, of course it would. It would have the same brain that you had. It would have the same emotional response. And, I don't think that's obviously true. I just don't see it.

Paul Bloom: So, what's the stopping point for you? Why do you think that these capacities, which you are pointing out are sophisticated capacities, somehow have to be embedded in flesh and can't be part of a machine?

Russ Roberts: I guess because fundamentally--it's an interesting question. It's just an intuition; obviously, it could be wrong for many reasons, one of which is religious feelings, something about a soul. It seems to me that--well, not seems to me--a brain is not a computer. So, the question is: Okay, true. But can we create a machine that will exactly mimic a brain? And I guess it's theoretically possible--although some have suggested that it's not. Not on religious grounds. The material complexity of it is the issue: the way the brain is grown, it just can't be mimicked literally. But maybe it could be. And I guess, if it could be, it's--it's hard for me to understand--it just may be my human limitation--how a machine could have feelings. And, the answer, I guess, is, 'Well, all those feelings you think you have aren't really feelings. They are just neurons firing.' And, I guess that's possible.

Paul Bloom: I guess I'd say two things. One thing is: I also find it very hard to understand how a machine--you know, wires, and, you know, silicon and so on--could have feelings. I find it equally hard to imagine how a lump of flesh and meat above my shoulders can have feelings, either. Apparently it does.

Russ Roberts: Well, that's the fundamental mystery. Iain McGilchrist, on this program and in his book, The Master and His Emissary, basically says consciousness is the one thing we have trouble having consciousness about; and that any theory of how it emerges from the brain is almost certainly wrong. But maybe someday philosophers and scientists will solve this problem. You know, many have suggested--David Chalmers, most publicly that I know of--that our whole theory of science has to be wrong and re-evaluated, because if it can't explain consciousness--which it doesn't seem to be able to--that's very dissatisfying. Something is fundamentally wrong with it.

Paul Bloom: I would agree with that. But these are early days. I guess I would also say that one way to build a sentient robot is to perfectly duplicate a brain. And, you know, if you could perfectly duplicate a brain, then it's hard to see why that wouldn't count as fully sentient. But it's not the only way. I could imagine there could be many ways to construct intelligence and consciousness. So, you know, the everyday robots we live with are not duplicates of brains. They work in very different ways. And, with the machines in Westworld--there may be many routes to consciousness. I don't have any problem, taking it out of the AI [artificial intelligence] world, imagining conscious beings that have evolved on a very different planet than ours.

Russ Roberts: Yeah. It's true.

Paul Bloom: You sound skeptical.

Russ Roberts: No, no, no; I'm just thinking aloud. I'm not skeptical at all. I really don't mean to be. I find that really interesting. I think it's kind of Iain McGilchrist's point: I don't think my consciousness is capable of absorbing that thought experiment very effectively--how those different consciousnesses would emerge through a different process. I guess what I'm reacting to is the Turing Test--so, I'm talking to this robot or this seemingly human thing: I don't know if it's a human or a robot--and I insult it; and, you know, maybe it's my mechanical spouse, my robot spouse at the front desk. And I roll my eyes at the front desk. And, the mechanical spouse knows to respond as a human would, and starts crying, 'You've hurt my feelings,' and starts to cry. And has a system built in to replenish tears, of course, and have them come out in a way that reminds me of humans. And, I understand that I'm going to feel remorse, if I'm a good person, that I've hurt the robot's feelings--just like I am sympathetic to the cartoon character with the small nose, because it taps into my human DNA [deoxyribonucleic acid] that says small creatures with flat noses are like infants, and they have big eyes--oversized eyes--and I'm going to respond to that emotionally. But I realize in the case of the cartoon that it's just a cartoon. And, should I really feel bad about that robot spouse, when I rolled my eyes?--

Paul Bloom: So, what you are saying, going back to our--

Russ Roberts: Are they really sad? That's the fundamental question, right? It might cause pain. Suffering.

Paul Bloom: Right. Yeah. And, if it's--I guess I'd first say, we can never know, at a certain level, just like you could never know that for your real spouse.

Russ Roberts: Yeah. Per [René] Descartes. That's a big problem.

Paul Bloom: Right. It's possible that I'm the only conscious person around.

Russ Roberts: Yup.

Paul Bloom: It's not possible that you are, but I know that I am. But the second thing is, I think in the real world how you would respond would depend on the complexity of it. If it's a sort of Robbie-the-Robot-looking machine, and it says, 'I am sad,' and tears sprout out, and then it spins around a bit, you'd say, 'Well, that's just a toy.'

Russ Roberts: And you've got to fill up the tear cartridge. Replace the tear cartridge that night.

Paul Bloom: [?] a little tin can, everything. But if it looks like Thandie Newton from Westworld and starts arguing with you, it may be irresistible. You may have no choice but to view it as a person. Now, it's possible you would say that's [?] a very powerful illusion.

Russ Roberts: Yeah. I don't know.

Paul Bloom: Just in some way, like, you know, you see images on a TV [television] screen and you immediately say, 'Oh, there's a person and there's a car and there's a house.' But you know it's just images on a 2-D [two-dimensional] screen.

Russ Roberts: But you cry anyway. You cry at the--at Love Actually, in more than one place. And I know that they are just actors, pretending.

Paul Bloom: That's right. And I would think--I would bet that, if you spent an hour with one of the Westworld characters--the Thandie Newton character, for instance--and then left, and then they said, 'Okay, we're going to chop her up into small bits to, you know, reconstruct her,' you would say, 'No. You shouldn't. That's murder.'

Russ Roberts: I don't think so. I think it would bother me deeply. I think I would howl, when the axe fell. I think I would--but that's an illusion, too. That would be a bad--that bit of information would misinform you about what I really think, because that's not--

Paul Bloom: Yes--

Russ Roberts: you are tapping into my deepest, deepest, uncontrolled responses.

Paul Bloom: So, I'm not asking how you'd react if you had to see the axe fall. I'm asking how you'd react if she left; and then you were asked, 'Is that a person?'

Russ Roberts: I don't know.

Paul Bloom: Yeah. I don't know, either. Apparently--apparently, in the TV show, people do not have that reaction. People are entirely comfortable taking these sentient humanoid things and treating them in all sorts of terrible ways--both face-to-face, but also at a distance. They feel no compulsion to say, 'Oh, my God, they are people.'

Russ Roberts: You mean the characters on the show.

Paul Bloom: The characters. That's right. I think the viewers see it very differently. One of the ironies of the show, which is built into the show, is you very quickly take the side of the robots--

Russ Roberts: Yeah--

Paul Bloom: and begin to be repelled by the people.

Russ Roberts: Well, it's a genius idea. For sure.

Paul Bloom: It is.

Russ Roberts: I just don't--I think all these are--well, they are wonderful thought experiments. And we'll come full circle with that. They are wonderful thought experiments that force us to think about our own humanity and about what makes for moral behavior--how we should treat the people around us.

Paul Bloom: Absolutely.