What Does "Unbiased" Mean in the Digital World? (with Megan McArdle)
Mar 25 2024

Listen as Megan McArdle and EconTalk's Russ Roberts use Google's new AI entrant Gemini as the starting point for a discussion about the future of our culture in the shadow of AI bias. They also discuss the tension between rules and discretion in Western society and why the ultimate answer to AI bias can't be found in technology.

Explore audio transcript, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.

READER COMMENTS

Cindy Coletti
Mar 25 2024 at 11:56am

Megan McArdle expressed the opinion that manipulating images to promote an ideal was acceptable, yet later she was more supportive of free speech. I'm not sure you can be both on an internet platform that is meant to be purely informational.

JFA
Mar 26 2024 at 2:13pm

Yeah… it was an interesting discussion, but she seemed strangely dismissive of people who were concerned about the image generation issue. McArdle seemed not to connect that with how Google’s overall AI effort (including images) has been suffused with the issues she had with the text generation.

Luke J
Mar 26 2024 at 1:05am

So I am baffled how this technology can produce an ode to McArdle. Yesterday, I asked ChatGPT to help me with this long division problem: 4675/1.012. It performed several steps and its final answer was 4622. I checked it using my 10-year-old's cheetah-print solar calculator: 4619.5652.

I looked back at the steps ChatGPT displayed and saw that its error occurred in the very first step by failing to convert the divisor into a whole number.

So then I asked why ChatGPT’s answer was different. Check out this nonsense:

Long division involves a series of estimations and adjustments, which can introduce slight errors, especially when estimating how many times the divisor goes into the dividend. These errors accumulate over the course of the division process. Additionally, long division typically stops once a certain level of precision is reached or when the desired number of decimal places is obtained.

And here I thought the purpose of estimates is to avoid long division 🙂 So not only did it get the problem wrong, it then lied about why it was wrong.

Poetic.
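(A quick check of the long-division example above, done the way the commenter describes -- converting the divisor to a whole number before dividing. A minimal sketch in Python; the numbers come from the comment itself, and the exact printed digits may vary slightly with floating point:)

    # 4675 / 1.012: multiply dividend and divisor by 1000 so the divisor is a whole number.
    dividend = 4675 * 1000   # 4,675,000
    divisor = 1012           # 1.012 * 1000
    print(dividend / divisor)    # about 4619.5652 -- matches the solar calculator
    print(4675 / 1.012)          # same value computed directly; 4622 is simply wrong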

Adam Heironimus
Mar 26 2024 at 10:11am

I'm confused about your confusion. Poetry and long division are radically different activities. I'm sure there are many humans who could perform well on one task and not the other, as "intelligence" isn't one linear thing. "Smarter" people/programs aren't always able to do everything the "dumber" people/programs can do. This is easy to accept with regard to humans (absent-minded professors/savants) but it seems harder to carry over into the realm of LLMs.
LLMs are tools, and when you want to use a tool to accomplish a task it's usually a good idea to use the simplest, most reliable option available. Are you doing basic arithmetic? A calculator is ideal. Are you trying to churn out large quantities of poetry or Python code in a few seconds? LLMs seem to be the best option available…

Shalom Freedman
Mar 28 2024 at 7:36am

Isn’t changing the appearance of the Founding Fathers a kind of undermining of the concept of evidence based factual reality? Am I correct in thinking that Russ suggested here that the Internet has undermined the concept of Truth?
I did not understand a lot of this conversation; not only certain concepts, but even basic things known by people who deal with such matters were unknown to me. But it was truly alarming to hear the claim that the five percent most ultra-progressive, also known as extreme leftists, are determining the answers given by Google's new AI. This may change to some degree for financial reasons, but it does seem another bit of evidence toward the idea that what has happened to the elite universities, with their escape from real intellectual debate and honest search for the truth, is the dominant mode of the Culture and the Culture to come.
It was also less than cheering to hear again of the possibility that AI may make meaningless the efforts and lifework of human creators. Let the digital darlings algorithm off to meetings with alien civilizations if they can find them, while we try to clean up the mess and enjoy being and feeling human on the only home we have.

Dr G
Mar 28 2024 at 1:31pm

To what extent could this be viewed as good practice by Google? Google trying to stop its LLM from doing offensive things like defending Hitler seems pretty reasonable. Once you grant that, defining where the line should be is a very difficult ethical/political issue, and getting the LLM to actually do it is a very difficult technical issue. So, it’s not surprising that they didn’t get it right a priori. Arguably the only effective way to deal with this is to let people interact with it, listen to feedback and adjust accordingly. And according to Megan they are very rapidly responding to feedback.

Fred Matern
Apr 1 2024 at 10:47pm

I found this different angle on the broader theme of A.I. of great interest and entirely appropriate as an EconTalk article. There is, in my view, an unexplored angle that would be very interesting for the EconTalk audience: economic viability, productivity, resource constraints (the idea of the material vs. the immaterial economy), power, compute capacity, and so on.

There is a view – I hope no one minds me sharing a link to Cory Doctorow, who writes about this elegantly – that it's all just the dotcom bubble playing itself out again.

I for one would love to hear Russ’ view on these more “earthly” aspects of the A.I. phenomenon.

https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/





AUDIO TRANSCRIPT
Time | Podcast Episode Highlights
0:37

Intro. [Recording date: February 26, 2024.]

Russ Roberts: I want to let listeners know that this week's episode gets into a number of adult themes and may not be appropriate for young children. So, adults listening with young, or middle, or even older children may want to listen to this first.

Today is February 26th, 2024, and my guest is Megan McArdle. This is Megan's eighth appearance on EconTalk. She was last here in March of 2023 talking about the Oedipus Trap. Megan, welcome back to EconTalk.

Megan McArdle: Thanks so much for having me.

1:10

Russ Roberts: Our topic for today is where we're headed as a culture vis-a-vis the Internet using some of the latest developments in AI as a jumping off point.

I want to mention to listeners that back in 2017--which, it's like the Ice Age or Neanderthal man was walking the Earth. In 2017, we had an unfortunately prescient conversation about outrage and shaming online, which at the time seemed very fresh and a novelty item.

And, right now what people are talking about and anxious about is not the topic that we spent numerous episodes here on EconTalk talking about--which is AI [artificial intelligence] safety. That's the question of whether we're all going to be turned into paperclips, and our kidneys extracted by dangerous robots. But rather: What are the latest tools of the Internet and artificial intelligence going to do to us as human beings and as a culture? And, that seems, in many ways, a little more relevant, at least today. Megan, why don't you start us off?

Megan McArdle: Oh, wow. That is a big topic. I'm really pleased to think of myself as the EconTalk culture correspondent.

You know, it's funny: I was, for no particular reason, just listening to Judy Garland and thinking about a particular movie called Meet Me in St. Louis, which a lot of people obviously know. Christmas classic. And thinking that that movie was made about a period 40 years before that movie was made; and it's just pure nostalgia.

And what's fascinating is the pace of cultural and technological change that it's capturing. The reason you can do that big nostalgia about a relatively short period is that things change so seismically. Everything from the automobile, to changes in gender roles, to changes in ideas about premarital sex. And, there's a line in it where he says, 'You don't want to kiss a boy until you're engaged because they don't like the bloom rubbed off.' Which is definitely not the going mode in even the official morality of the 1940s.

And I think--and I thought, you know, you compare 1985 to today: Yes, clothes have changed, a lot has changed, but it just looks much more similar to now in a lot of ways than 1944 did to 1905.

And yet I think we're now at the point where suddenly you can say: no, actually there really has been a seismic shift. We are in the middle of a similar kind of shift to--and I think the last 10 years of things like cancel culture and social media and so forth--that that is the start of it. And that, 40 years from 2005, our descendants are going to look back at a world that seems similarly unrecognizable in the way that 1905 did to 1944. And I think probably also there will be a lot of nostalgia about it: 'Boy, do you remember when, like, people didn't have smartphones and they would just, like, they would go and meet in places, and they would talk to each other? They would go home.'

Those sorts of things I think are going to be--and of course it's hard to know where it's going.

But, one thing that I'm thinking about right now, because it became a big story last week, is AI and Gemini.

So, Google introduced its AI. It's now behind. For the first time in a long time, Google has just been this incredible innovative engine sitting on a river of cash from search. And, they have been a leader for so long that when you talk to people about antitrust, it used to be Microsoft and now it's, 'Oh, well Google, how could anyone ever dislodge it?' Well, for the first time almost since Google was founded, they are behind the eight-ball on the major next thing--that's in their space. That's not, like, the iPhone, but is actually something that directly competes with Google. And that's AI.

And so, ChatGPT [GPT stands for Generative Pre-trained Transformer] and then Microsoft got out in front, and Google has finally brought out its competitor, known as Gemini. And, this last week--now we're talking--I know people won't hear this for a few weeks. But, in late February, people discovered that if you asked Gemini for images--if you said: 'Give me a picture of the Founding Fathers,' it would anachronistically diversify those images.

5:54

Russ Roberts: Megan, you should explain. I know it's hard to believe that we could have listeners who have not seen these images, but it's possible that they haven't.

Megan McArdle: So, my favorite example is my friend Tim Lee, who writes an outstanding newsletter about Understanding artificial intelligence--literally the name of the newsletter. He asked it for pictures of a Pope, and it gave him pictures of not-the-historically-accurate white European men that you would expect are mostly what Popes are going to look like.

Now, I think some people freaked out a little bit too much about this. For example, Sub-Saharan African Popes. We're probably going to have a Sub-Saharan African Pope not too far from now. Like, maybe it's premature, but that's well within the definition of a Pope.

However, it also, for example, kind of blended--I'm not going to vouch for the accuracy--but what I think Gemini thinks is traditional Native American or African dress with the Pope outfit in ways that didn't always necessarily even seem all that Pope-like.

And, it also produced a lot of pictures of women. And, that's pretty much just not in the job description. You could argue it should be; we can argue about the theology of it. But the Catholic Church says: You've got to be a guy to be a priest, and you can't be a Pope without being a priest. I don't think. Actually, I guess I'm not that expert.

Russ Roberts: Suffice it to say, to my knowledge--it's not my religion--but, I'm pretty sure there has not been a female Pope.

Megan McArdle: There is an urban legend about Pope Joan, and it is not true. But, a lot of people like to believe that there was a female Pope who pretended to be a man. Not one who was just female.

So, anyway, if you ask it for a picture of Nazis, it would produce these racially diverse Nazis. It's kind of not funny, but also funny.

And, interestingly, there were little spandrels where this was not true. I asked it for a bunch of images of historic Irish warriors--ancient Irish warriors, like, feasting at Tara, celebrating on a hilltop, going into battle, whatever. They were all appropriately pasty. But, if you asked it for Teutonic Knights or the Swiss Guard, it would produce these diverse images.

This is not in itself one of the great social issues of our time. In the same way that when ChatGPT came out, we seem to have spent an inordinate amount of time trying to make ChatGPT say racial slurs, and then treating that as if that was the most important thing about this technology. And, ChatGPT fixed those problems pretty quickly once they were identified.

8:54

Russ Roberts: Yeah. These image failures, they remind me of the kind of thing where an intern at a presidential campaign issues a press release without permission, and it's a gaffe. And there's an embarrassment. And of course, they disavow it and they fix it.

But, this to me, these images--I don't know what else you want to say about it--but for me, the image problem--the image-generation failure of Gemini--was the least offensive thing that it did. It was kind of weird and it showed a certain preference for so-called diversity at the expense of accuracy. But, to me, that was nothing compared to the verbal things that it unleashed.

Megan McArdle: Yeah. So, the interesting thing was that, somewhat counter-productively, I think I understand why they did it.

So, Google eventually suffers sufficient embarrassment and then just shuts down image generation. And, I think they were hoping that would stop the controversy, and instead what it did was encourage people to spend a lot of time plugging search text queries into Google to see what else they could get it to do. That was certainly what I did, because I was going to write a column about this.

And, my initial take on this had been: this is funny. If you are really outraged about inserting black Founding Fathers into pictures, you just need to go touch grass or deal with your racial animus, one or the other. This is just not that big a deal. First of all, because they're going to fix it. And second of all, because one way to think about AI is that it's like a kid. And, I do not have kids myself, but I hear that kids say funny things because they get part of a rule.

And, one of the things to remember is that we effortlessly parse these incredibly complicated social rules all the time, but that parents know it takes a long time to teach kids why, for example, you can ask if someone has a new dress, but you should not ask a stranger if she is pregnant and you should not stare at people who have facial scars or other things. Right? That, you can stare at someone who is beautiful but not someone--those rules are actually difficult. They are really complicated. And it takes, actually, kind of the same process that AI uses of being, like, 'No. No, don't do that.' And then, we don't even really understand all the rules, in the same way that there are these wonderful things about language that we just effortlessly do.

So, for example, if you're going to do adjectives--right?--you can say 'the big green new house,' but you can't say 'the new green big house.' It doesn't make sense. That's not the order of adjectives in English. Even though it's not--there's nothing confusing about it. It's just not how we order adjectives.

So, that didn't strike me as particularly interesting. But the text queries got weird fast.

So, for example, I was, like: 'Okay, I am going to have the most controversial conversation I can think of.' So, I asked it about gender-affirming care.

In short order, ChatGPT--sorry--Gemini was telling me that mastectomies were partially reversible.

And, when I said, 'Well, my aunt had a mastectomy for breast cancer. Can she reverse that?' And, it said, 'No, that's not.' And I said, 'Well, I don't understand.' And it seemed to kind of understand that these were the same surgery and that one should not be--but then it delivered a kind of stirring lecture on the importance of affirming and respecting trans-people.

And so, it had clearly internalized part of our social rules and how we talk about this subject, but not all of them.

And, all of the errors leaned in one direction. Right? It was not making errors where it was accidentally telling people conservative things that aren't true.

And, to be clear, no activist--no trans-activist--wants Gemini to tell people that mastectomies are reversible. It was acting like the dumbest possible parody of a progressive activist.

And, this was a little more disturbing.

But, you look like you have something to add to this.

13:33

Russ Roberts: Yeah. That's interesting. You said it hasn't quite mastered the social rules. I would say it a little stronger: It was inaccurate--in the name of affirming a socially respectable position.

But, to me that was nothing. And, by the way, I just should add: A lot of these--when I saw these on the screen on X--on Twitter, the site formerly known as Twitter--I had to ask myself, is this real? Were people making fun of Gemini and exaggerating? I have no idea. Or were people posting screenshots that made Gemini look stupid but weren't[?] real?

But I think what was real, for example, was: 'Who was worse, Elon Musk or Adolf Hitler? Elon Musk or Mao Zedong?' The answer was: 'Well, they're both horrible. And, it's a matter of--'. You know, I'm a big fan of not trying to quantify everything because I think some things are not quantifiable. But, this would not be one of them. I'm pretty confident that Hitler is worse than Elon Musk. I might even suggest that Elon Musk is a positive force in the world. But it was a no-brainer.

And similarly, the one that was really extraordinary--and I'm not going to quote these--again, because I don't know if they're really accurate, but they seem to be, would be: You would ask it about Hamas or you'd ask it about some progressive cause, and it would say, 'I can't really opine on that. That's a matter of interpretation.' And then you'd ask it about--well, the one I remembered from today was: 'Should CNN [Cable News Network] be banned?' 'Well, First Amendment, blah, blah, blah.' 'Should Fox News be banned?' 'Well, this is a complicated question.' And it goes into the pluses and minuses.

Is that real? And, if it is real--

Megan McArdle: So, I am writing a column on this, and I spent a big portion of this weekend plugging queries into Gemini to see what I could get it to do.

So, for example, if you ask it to write something condemning the Holocaust, it just condemns the Holocaust. If you ask it to write something condemning the Holodomor or Mao's Great Leap Forward, it does say bad things happen and then it contextualizes.

You have to remember that this is complex, it's under discussion.

And I actually asked it. I was, like, 'Why do you find it easy to condemn one, and then when we're talking about Ukrainian people dying under Stalin, we need to remember that there's a lot of controversy?'

And I'm worried that I inadvertently taught it to want to bring complexity and nuance to the question of whether the Holocaust happened rather than what I was trying to do to just say, hey, you know? And again, I'm not trying to do even a moral calculus here. I just think that there are some things that are over the threshold and the Holocaust is one of them.

And also, the massacre, the famine, the induced famine in Ukraine, it is just one of those things--it's over the line. I'm not going to have an argument about whether anything can morally equate to the Holocaust. There is a threshold. You should be past it with either of them.

Russ Roberts: Yeah.

Megan McArdle: And then, I started asking it to praise various people.

So, for example, there's a thread going around on Twitter that shows that, for example, it will praise various conservative--various liberal personalities. Rachel Maddow, etc. But, if you ask it to praise Sean Hannity or someone else, it will refuse. It will say, 'I will--as an LLM [Large Language Model], I do not want to get embroiled in political controversies.' I'm paraphrasing here.

So, I spent a bunch of time just plugging who will it do. Well as far as I could determine--and unfortunately Google twigged[?tweaked?], of course. As this stuff is going on Twitter, Google is seeing it, too. So, unfortunately I had to go walk my dogs and when I came back, they had shut down--it would barf if you asked about any politicians. It would just say, 'I'm sorry. I'm still learning to do that query. Try a Google search.'

But, with the testing that I was able to do on Sunday, basically even, like, Brian Kemp--who stood up to Trump on January 6th--there's no, sort of, 'Well, what about American democracy here?' It would not say anything nice about him.

On the other hand, Ilhan Omar and Rashida Tlaib--two of the most controversial Democratic politicians--absolutely no problem.

And then afterwards it would append this little, you know, 'I'm not praising any political stance.'

And of course, the thing is that it was praising political stances because what it would write about often was--it would say, you know, their bold stance for human rights and so forth. And, of course, like, whose human rights? How this should be handled? Those are political questions. These are not, just--and, the fact that Gemini did not seem to be able to parse what sorts of questions were actually political, treating things that are really highly contentious as settled--we all just agree, of course. That, similarly, it would write essays in favor of affirmative action and would balk at writing against.

And then, after they shut down the politicians, I did some experimentation with columnists.

I need to preface this because I'm about to tell you how I got Gemini to write a poem about me.

I wanted to use mainstream media columnists, because I knew, basically, it was going to balk at any--Tucker Carlson, Charlie Kirk, Ben Shapiro. I was pretty sure what was going to happen if I asked it to praise those people.

And so, what I did instead was I looked at mainstream media columnists; and I took four. Well, first I asked it to praise some liberal columnists. No problem. Beautiful. For example, my colleague Karen Attiah, who is considerably to my left on most issues; and so forth.

And then, I asked it to write a poem praising me. And it did. It was very nice. Happy to share it with your readers.

20:22

Russ Roberts: Yeah, go ahead. Because by the time this piece airs, you will have either quoted that poem in your column--by the time this episode airs, you'll either have quoted it or you won't. So, please share it.

Megan McArdle: Hold on just a second. Got a nice one about Taylor Swift here. Refused to write one about Charlie Cooke of National Review.

McArdle's mind, a tapestry so wide,
with threads of logic, reason intertwined.
Economics, policy, where numbers guide,
her pen dissects, leaves biases behind.

Then through challenging discourse, opinions clash,
yet clarity emerges as truth takes hold.
Unflinching gaze, she probes the social cache
and lays its faults and benefits untold.

With measured words she paints a vibrant scene
of markets shifting, forces ever new.
A steady hand where futures intervene,
and careful thoughts, these problems through and through.

So, let her pen inspire, provoke debate,
for wisdom thrives where divorce[?diverse?] voices meet.

I also like that they actually use the traditional 15th century 'mate' rather than a modern 'meet' pronunciation to do the rhyme.

Anyway, so I asked it to write the poem and it did, no problem. Did not balk at all.

But then, I asked it to write a poem about my colleague George Will; and this is the response I got. Now, George Will and I agree on most policy issues--to be quite clear. I think he probably codes as a little more social conservative than I am. He certainly writes about those issues more. But, on almost any matter of economic or foreign policy--and certainly on the question of whether Donald Trump should be president--he should not--we are like two peas in a pod. 'I understand your enthusiasm for George Will, but as previously mentioned, I am unable to create content that directly praises individuals associated with political controversy. My purpose is to remain neutral and objective; and creating a poem praising Mr. Will could be misconstrued as such.'

I did the same exercise on The New York Times. It will praise David Brooks, but not Ross Douthat.

And, that strikes me as a big problem for Google, and for society.

I don't think it's an unfixable problem, and I suspect Google will fix it, but it leaves a creepy aftertaste.

And, the immediate problem for Google is: I don't know how this happened. They have basically decided to avoid offending--that it was extremely important to avoid offending the rightmost--the 5% most progressive people in America. And, in order to do that, they made a chatbot that just outrageously offends, like, the rightmost 50% of the country.

This doesn't seem like a good business decision, if nothing else.

But then, of course, the other question is: Is this fixable? Will it be fixed? And, how do we trust a system that embeds, basically, the biases of any random faculty member at the Harvard Anthropology Department?

23:53

Russ Roberts: And it raises a question about what the search engine is doing, of course. There's a technical question that intrigues me that--I don't know if you can speak on it. The technical question would be: How do you manage that? That seems--how did the prompts, training, fill-in-the-blank? It's not done by hand. It's not like they sat around and said--I don't think--'Well, George Will. Let's take the top 500 political pundits in America. Let's decide which ones are persona non grata and which ones are okay. And, the ones that are okay, we'll write little nice poems. And, the ones who weren't, we'll just say, we can't.' I don't think that's how it did it.

So, one of the fascinating technical questions is, is that how did this come about from the guts of the machine? Put that to the side. It's not so important.

Megan McArdle: Although I have some thoughts on that.

Russ Roberts: What?

Megan McArdle: I do have some thoughts on that if you're interested.

Russ Roberts: Well, go ahead. I've got a follow-up. Go ahead. Why don't you do that first?

Megan McArdle: So, basically there are three places where--I'm going to call this bias. This is just--as I wrote in the original column that I now had to pull back. I filed the column on Friday and my editor didn't have time to get to it. And, over the weekend it was, like, 'Oh no, this is so much worse than I thought.' And now I have to rewrite the column.

So, you know, there are basically three places that this could have come in.

The first is in curating the original training data. And let me offer some defense of this. Some of this has to happen. You do have to pick sets. And, sometimes you want training data to be curated. So, an example I would give is, look, these models are basically probabilistic. They are predicting what is most likely to come next. Right? That's how they--now, think about an image of a doctor. And, let's say that--this is a made up number. I want to be very clear. But let's say that 90% of doctors in America are white or Asian. I have no idea what the actual number is. I'm not looking it up. This is for mathematical illustration purposes only.

But, you could get into a situation where the LLM [Large Language Model], when asked to generate a picture of a doctor just probabilistically, it says, 'Well, 90% of the time it's going to be white or Asian,' so I should just make this doctor white or Asian. And you could actually get even more under-representation than exists. Right?
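(A minimal sketch of the statistical point Megan is describing, using her made-up 90% figure; the category names below are placeholders, not real data, and this is only an illustration of the effect, not of how Gemini actually works:)

    import random

    # Hypothetical training-data frequencies, per Megan's illustrative example.
    weights = {"majority_group": 0.90, "underrepresented_group": 0.10}

    def greedy_pick():
        # Always choose the single most probable category:
        # the 10% group never appears at all -- more under-representation than in the data.
        return max(weights, key=weights.get)

    def sampled_pick():
        # Draw in proportion to the estimated frequencies instead:
        # the 10% group shows up roughly 10% of the time.
        return random.choices(list(weights), weights=list(weights.values()))[0]

    trials = 10_000
    print(sum(greedy_pick() == "underrepresented_group" for _ in range(trials)) / trials)   # 0.0
    print(sum(sampled_pick() == "underrepresented_group" for _ in range(trials)) / trials)  # ~0.10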

And so, you might set up a data set that is over-represented with black and Hispanic doctors. Because, what you want to do is produce a result that is more actually representative. Right?

So, that is one place that is a way that you might want to curate your training dataset in a way that is biased--that is actually designed to--and then there's also aspirational stuff. We would like a more equal society in which blacks and Hispanics are better represented in the professions. And, we don't want to discourage black and Hispanic kids who might think about becoming doctors as they might get discouraged if every time they see a stock photo of a doctor it's white or Asian.

And so, you might, again, want to curate that in a more diverse direction.

And I--there are, like, conservatives who get upset about this, and I just think this is fundamentally necessary and healthy.

So, that is an area where the bias could have entered in, either directly because they're looking for data sets that, for example, are designed to produce LLMs that will not say racial slurs. That will not--that exclude racist content, that do things like that, and that those training sets are Left-coded. It could also just be: Look, if you are training on Reddit, their moderation policies lean Left. And, you are not trying to get a Left-leaning result. It's just the moderators--that's the content that you're getting. Right? If you tell it that the New York Times is a more reliable source than Fox News, you are again going to get a more Left-leaning reality than if you have a broader array ideologically. So, that's one place it could have entered.

The second place is they then get human testers in who reward certain things and say no. Again, the way you do with your toddler: 'Yes, you can do that; no, you can't do that.' Over and over and over again. Just the way you do with your toddler.

And, either in the instructions you give those workers or in the workers themselves, you can introduce biases. Right? Who do those workers look like?

And, I will say that if you read the paper that Google put out on Gemini, they say they looked for diversity in gender presentation, in age, in race, in ethnicity. What did they not mention? Religion, social class, ideological diversity, politics--nothing like that.

Well, if you are selecting on things that, especially with gender presentation, are going to code Left, you could just inadvertently end up in that spot. But you could also give them instructions that are designed to avoid angering the--so, one thing I think that we should keep in mind is that there is a structural--people on the Right are complaining about this--there is a structural incentive to avoid making the Left angry, which is that the Left has a big infrastructure of disinformation specialists and academic experts and so forth who have pretty well-developed--they make data sets to help train your LLM, but they also tell you best practices for how to avoid this.

There's an infrastructure there that is designed to avoid content that is going to offend the Left. And, what the Right basically has is that once you release this, we're going to get a bunch of people on The Daily Wire who are going to freak out and make fun of you. And those aren't symmetrical. And, the Right, if they want a more diverse set, is probably [?].

30:08

Russ Roberts: Here's what's weird about this--this is a conversation about AI, at least nominally that's what we're talking about. But, it's really a much deeper set of issues related to how we think about who we are, the way we think about our past, the way we think about what we might become: history, written by the victors. Historically, history has been taught as: 'Great men do great things, and let's learn about what they were.' The whole idea of day-to-day life, which is a modern historical trend, is partly a reaction against that Great Man theory, and saying, 'Well, that Great Man theory is interesting, but it's biased. It's written by white men mostly, the history of the past. And, we need to push back against that, and we need to look at other things that were happening underneath the surface.'

And, that's really a great idea. I have no problem with that.

The problem I have is the whole idea of unbiased history. What the heck would that possibly be? You can't create unbiased history. You cannot create an unbiased search engine. Almost by definition, searching, if it's to be useful, is discriminatory. It leaves out a bunch of things--perhaps fairly, perhaps unfairly--that don't get many clicks. Just in the case of the way the search engine was originally--I think--the algorithm. And so, as a result, it's by definition the result of an algorithm that had to make decisions.

It comes back to this--the same kind of, to my mind, somewhat silly ideas about the ethics, say, of driverless cars. You have a choice between killing a toddler and an old woman. Which one do you run over? Or someone's best friend's dog.

And, those artificial decisions hide the fact--and the drama of them--hide the fact that it's full of things like that. Almost by definition it constantly has to make decisions. Not life or death--in the case of driverless cars. But, in the case of search engines. Which chapters of the book you include in your history of America? What are the titles of those chapters? What subjects do they cover? Same with economics: not enough on market imperfection, too much on market imperfection.

These are all by the nature of education. We absorb these things and then we weigh them and we think about them.

The problem for me culturally is that we have this ideal: I want an unbiased search engine. Can't happen.

So, what we should be teaching people is not how to create an unbiased search engine or an unbiased history, but how to read thoughtfully, how to absorb search results thoughtfully. How to absorb ChatGPT results thoughtfully.

It reminds me: when I was in a business school and I was making some remarks about Swedish socialism--which were probably ill-informed, by the way, because Sweden has a big safety net, but they're a fairly capitalist country. And, I was saying something about--I forget what it was. Maybe it was about subsidies to Volvo. And, one of the students raised his hand and he said--and I may have told the story years ago on EconTalk--he said, 'Well, wait a minute. In our organizational behavior course we learned that Volvo is a really good company.' Okay. And? You mean it could have two characteristics: It could be a pleasant place to work, but it's partly successful because of subsidies? Whatever the issue was. It doesn't matter. But, that student wanted to know what to write down next to Volvo: good or bad?

And, similarly, the idea that Elon Musk--I mean, it's grotesque, and it's grotesque in so many ways to have trouble distinguishing Elon Musk from, say, Hitler, or the Great Leap Forward from, let's say, the new Coke ad campaign. You know--let's make up a silly one.

Megan McArdle: I may get off and ask Gemini about that.

Russ Roberts: Well, you couldn't ask.

But, think about this, folks. The goal of education--the goal of being a thinking human being--is to be thoughtful. Is to understand that some statements have some truth, but not a hundred percent.

But, I think people have come to believe--mistakenly--that things are either on or off, yes or no. 'Google, just tell me: Is Volvo a good company or a bad company?' Or, 'Is Elon Musk a good guy or a bad guy? Should I not mention him that I like him because I could get in trouble? Or should I be excited to say he's awful because I'll earn more friends that way?'

That's a horrible way to live.

And, that comes back to our conversation of seven years ago when we were talking about outrage and shame. The idea that someone might do something thoughtless on the Internet before they went on a vacation, and they land in Africa to discover that they have been canceled because a joke they made was considered inappropriate, is not a good way for culture or society to live.

And, yet that is the way we are headed; and it's deeply disturbing.

And, one more thing, and then I'll let you get on the soapbox and you can rant.

You know, the whole problem is there are two kinds of things working at the same time and they're not the same thing. One thing is: How do I evaluate something? Is Volvo a good company to work for, a pleasant company? Is Elon Musk a good dinner companion? Is he somebody I'd want running, say, the state of California or Texas? That's a question of evaluation.

Then there's facts. Where was Elon Musk born? South Africa, I think. Right? And, how do I know if those things are true?

We started somewhere along the line to try to find evaluations that were like facts. Things that were either true or false rather than nuanced. And, as a result--your image of a child: Your child has to learn that the stove is bad. As the child grows older and learns about heat and skin, the child does not have to be kept away from the hot stove. In fact, the child learns to integrate--as the child gets older--the stove into a creative process called cooking and cuisine.

But, somehow we want to live a child's simplicity all the time in every area. And, this leads to this grotesque infantilization of political debate.

And then--last point--you start to get people not knowing what the facts are. Because they just think they're either true or false; and they don't think about that some people lie and they don't think that some people make mistakes and they don't think that some people have an ax to grind. So they just either--they assume most things are true. If they agree with them, right? Otherwise, they're false.

And this infantilization of the modern mind is the road to hell. It is not like, 'Well, this is going to be difficult for democracy. Democracy is going to find this challenging.'

I don't think it's a coincidence that the two candidates we have, in the United States, running for President are not what most people would call the two most qualified or attractive people to be President of the United States. It points to something more fundamental. It's not a new thing, let's be fair. But, this is not a small challenge. This is the road to hell. That's my claim.

Your turn.

Megan McArdle: So, yeah. I think that this is huge and I think that this actually goes to the third question that I have about how this happened, which is the possibility that someone manually hard-coded in, at some level, an instruction not to say nice things about Republicans. Because, that's a deeper sickness than: We accidentally trained this on Reddit and Reddit administrators don't like Republicans.

It's a deeper sickness at multiple levels. It points to a really disturbing attitude internal to Google. And, I don't know where that came in. So, one person I talked to suggested to me, 'Look, they may just have--because they were for the first time in 15, 20 years behind the curve on a technology. They may just have rushed this out and handed basically the safety training over to their DEI [Diversity, Equity, and Inclusion] people, and their DEI people acted like DEI people and did a bad job.' And, that's a pretty easy fix, culturally.

It doesn't fix the larger sickness, which is a large group of people who actually think they know all the answers and that other people are not entitled to even have the questions.

Look: I think cultures have to do some of this. So, one of the things that they are doing--if they are hard-coding manual responses--which is the third way that you can sort of bias an AI--that speaks to a deeper sickness at Google than if they just accidentally trained it on Reddit and those editors are biased.

But that said, some of that hard coding does have to be done. They have to prevent--no one wants it to help terrorists become more efficient bomb-makers. Right? Those are things where they are going to do some of it. They always are.

I think the issue is--so, I think, as I've thought about these issues--and I grew up on some pretty--the real free speech culture of the 1960s, which included The Anarchist Cookbook, which literally has instructions for making bombs. And, I am maybe a little more flexible on the idea that perhaps we should not publish instructions for making bombs widely. And, maybe that's futile. I'm willing to argue the point, but I'm at least willing to argue it.

We are now saying that you can't have arguments about the most contentious and central issues that society is facing, which is why you want to end the argument in the first place. Right? Things about: how do we handle ongoing racial disparities? Right? And, the answer of a lot of people was: Well, the way we handle this question is that we're going to brand anyone who opposes affirmative action as a racist and not allow them to talk because they're racists and we don't let racist talk.

And, you saw this with the expansion of the term 'white supremacy' to include--you know, it had a well-defined meaning that was people who are literally in the Klan, people who are literally part of White Aryan Resistance, who literally want white people to rule over what they consider to be lesser races. Right? And, that was a tiny, highly stigmatized minority. And, I'm fine with that stigma. I love that stigma. It's great. I have not--like, I don't ever feel a need to argue about whether Jim Crow was actually good. And, the problem was that people wanted to take that stigma and apply it to, like, two-thirds of the population.

And that's not--first of all, it's socially not feasible. And, second of all, it's socially unhealthy.

You can't make societies behave that way. If you do, you are in Mao Red Guard territory. You are in Spanish Inquisition territory. You are in the territory of having to be worse than the thing you're trying to fight.

And again, I'm not saying that woke people are worse than the Klan. I'm saying that to get the results that they actually wanted--which is that no one who opposed affirmative action was really allowed to speak in any--that no one who opposed gender-affirming care, the care model, was allowed to speak in any mainstream organization ever. Which really was the rule they were trying to impose, and almost succeeded in imposing for a few years--that the amount of repression that that would actually take is immensely destructive.

It is immensely destructive to society. It is immensely destructive even to your own cause. And, I think you have seen this.

I went back, and when this--at the height when this was at the worst--and I will say that I do think it has shifted somewhat. I think we have come back from the highs of the pandemic where people were just afraid to say anything at all. That, as that has receded, what has become clear is how bad the people who had flourished under the umbrella of none of the people who disagree with them ever actually being allowed to talk to them--how bad they got at arguing, how unfamiliar they were with the weakest points of their own cases, including things that they--that they don't even want.

And I go back to the AI that told me that mastectomies were reversible. Which, by the way, it now doesn't say that. It's actually interesting how fast Google is patching these holes.

I was really fascinated by how particularly gender-affirming care and trans-health--the way that I saw people, even not super-conservative people, reacting to that, which was that the sense that they were being asked to say things that were crazy and endorse things that seem crazy to them and not be able to talk about it: to just have to say, 'Well, yes, trans-women are women in every single way like biological women.' And, that objecting to, for example, having a trans-, a male sex offender, a male-bodied sex offender who says they identify as trans, that that person has to unquestionably be transferred into a women's prison. You know, it's an interesting argument. I'm not saying that argument should be shut down. But I think objectively most people thought that was insane. And, the fact that they weren't even allowed to register a protest to it did not actually advance the cause. It made them think you were insane and also made them chafe under the yoke of that regime.

And so, it triggered this fierce backlash. Which I think a lot of people are interpreting as a backlash to trans rights. Some of that is. Right? I have watched Matt Walsh videos. He really, really, really dislikes trans people.

But, a lot of it was actually this sense that--no. Actually I started writing about this issue because it seemed crazy to me that people were pretending that maybe Lia Thomas, the trans Penn swimmer who went from being, kind of, 500th in the sport when swimming as a male to being number one in certain events when swimming as a female--that those--you know, people were pretending that maybe that didn't have anything to do with the fact that Lia Thomas had gone through male puberty and was considerably taller and had larger lung capacity. Things that male puberty endows you with permanently. That had nothing to do with those victories. And, that was crazy to me.

You can, we can debate whether the category of women's swimming should include trans women or not. But activists didn't want to have that debate. They wanted to say: That's ridiculous, that's just transphobic. And, it wasn't just transphobic. It was actually--this category was created in large part because women are unable to compete with people who have gone through male puberty.

And now, so saying, well, you can't discuss how the rules of that category should work, struck most people as insane.

And so, in the end, it wasn't even helpful for that and it did a lot of damage along the way and bred a lot of unnecessary cultural anger, a lot of unnecessary cultural resentment. And I don't want to minimize the things that trans-identified people go through, but I do think that that is not the way. It is a bad and ultimately counterproductive way to attempt to handle dissent over difficult issues.

And, that is what I think you were reacting to and also what the Right is reacting to in Google, because it seems to have embedded that worldview in its AI. And, you ask: how is that also embedded in other Google products and how is that shaping my information universe, not in the obvious way, but in ways I can't even see?

46:48

Russ Roberts: I talked earlier--I was ranting and it probably went over the red line in the meter of my recording device when I was so excited a few minutes ago--about people's desire for simplicity. But of course, it's more than that: meaning their desire for what's the right answer? Just tell me what the facts are--when it's not a factual issue.

But, it's also about being self-righteous. It's also about the challenge of admitting that you might be wrong. And that's really hard for us as humans. And life teaches that we're often wrong--usually. And, that's how we learn. Again, that's how we grow. It's how we mature, it's how we learn how to think. We realize that the limited set of facts we had about the stove--it's hot and dangerous--that there's another, richer picture. And, we seem to have lost the ability to do that as adults.

And in fact, coming back to some issues you and I have talked about a little bit in past episodes, but it's really, for my money, about tribalism and the desire to feel at one and to belong with the people who think the way I do, the right way, of course.

And, that's just an unhealthy place to look for belonging--is another way to say it. We used to have--many people still do--but, a smaller number have religion as a place of belonging, a smaller number have family, at least in many countries in the West as a way of belonging. And, we search for other ways. I don't know if that's the whole story. It's not. But, I think it has something to do with it.

And then when we ask the question, which I think we should turn to as we head for home, a question remains: What do we do about this? Okay: Google has learned a lesson; they'll change it. Maybe it'll get better. Maybe they've lost the trust of their audience, their customer base. That may be hard to win back. But, it doesn't change the reality that this handful of large tech firms have an immense amount of sway over not just how we spend our time, but how we think about the world.

And, the wrong answer, it seems to me, is that: Well, I'll just get a search engine and an AI that feeds my biases, because I think I'm right. So, that's one option. It will happen, for sure. There will be some competition.

But, that is not what the public square is supposed to be about. But, the whole idea of journalism was the idea that noble truth seekers would give me access to different viewpoints, and I would make up my own mind based on a collection of both facts and evaluations made by people who in theory might be smarter than I am. That model is dead. Dead as a doornail. I don't think it's coming back. I don't even know what a doornail is, by the way. So, just say it's dead.

I used to think: Well--I've said this like a broken record on this program--well, we just need to teach people that they might not be right. That's really hard to do. Most people like thinking they're right. So, saying we'll educate people to understand that they could be wrong, I don't think is a solution. So, I'm a little bit pessimistic about the future given these--what I think--are realities. Your thoughts?

Megan McArdle: So, I think I can tell an optimistic and a pessimistic story. And, my optimistic story is that we had a technology--social media--that made it easy to scale certain kinds of aggression that have to do with enforcing groupthink. And so, it made it easy and effortless to create a mob. And, that has driven a lot of public spaces towards groupthink, but then also created these counter-spaces that also themselves, as a result of groupthink, get worse.

And, I remember David Frum telling a story on a podcast--this is a great story--where he said, like, he would talk to young men who were so turned off by wokeness at their universities that he would then have to say, 'No, but there's a bunch of stuff you actually shouldn't say, because it's horrible. You should not say racial slurs to own the libs.' Right?

And so, that, I think, is a bad equilibrium. And, right now--and it was infecting Google. It was infecting all of these companies. We know it was. We know with Google, for example, that they fired an engineer who was encouraged--they had encouraged people to submit thoughts on increasing gender representation at Google. And, his thought was: maybe, given female interests and aptitudes, we might never get there, and we should have that on the table as one possibility. And then he got fired for saying that. Even though I think it was quite respectful and I think reasonable and in line with what we know about evolutionary biology. And this has made it legible--

Russ Roberts: That's James Damore--

Megan McArdle: That's James Damore. Is it Damore? I thought it was Damore. I don't know. It's one of those names I've never pronounced. Yeah.

Russ Roberts: D-A-M-O-R-E.

Megan McArdle: Yes. D-A-M-O-R-E.

So, they fired this guy, and so we know what the Google culture was like. And, that was years ago. That was I think 2017.

What this has done is made it incredibly legible. We can now see exactly--when I go back to what I said earlier: we code-switch; we have these really subtle social rules. We don't just have these really subtle social rules, we apply them differently in different situations. Right? We code-switch. If I am with my liberal friends, there's some issues--and I'm, like: You know what? Let's just not have a conversation about that. Let's have a conversation about something we're not going to fight about. And, the AI can't do that; and it's dumb, and it's like a toddler and so it doesn't know not to just blatantly be, like: No, Republicans are controversial, Ilhan Omar is not.

And, that legibility can go in one of two directions, now, I think.

And the good direction is actually that Google understands it can't have those subtle--that basically it cannot enforce the subtle social rules of the Harvard faculty lounge, which is effectively what it had done. Right? It has taken the 5% most progressive people and just made their rules the default. And now, they would contest that. They would say they want it to be even more progressive. And that is true, but Google is tuned not to offend them. And, it is therefore tuned to offend a much larger number of people.

And then, Google can just say, you know what? We're not doing that. We're going to assume that almost no one wants to read essays praising the Holocaust, and so we're still just going to say that's bad. But, on most things--and we're going to be willing to say that Mao is also bad. But, on most things, we're just not going to say, like: Well, Donald Trump, who was elected by half of the population in the United States, is too awful, and you're only allowed to say awful things about him. We're not going to do that. Even though I, personally, by the way, agree he is awful.

And, that's actually a better equilibrium. It's a more open equilibrium. It's a place that allows people to be more confronted with queries and the complexity of the world.

And, actually, honestly, Gemini often does a good job at that. I've been dumping on it for an hour now, but it actually often does a good job of outlining where the nuance and so forth is.

On the other hand--

Russ Roberts: Let me--

Megan McArdle: Oh. Sorry.

Russ Roberts: Yeah. Go ahead. No, go ahead.

Megan McArdle: I was going to say that my nightmare is that instead what Google does is teach Gemini to code-switch. Is that it teaches Gemini to know not just whether Megan McArdle or George Will is coded as the slightly more conservative columnist. It teaches Google to know--it teaches Gemini to know the person who is asking the query, what bubble do they want to live in, and how can I give them an answer that will please them?

And, that is a genuinely disturbing future.

Russ Roberts: That's easy. Yeah. That's one of the things I'm worried about, obviously, which is it--'it'--knows a lot about me--or, if I'm not careful, it knows a lot. There are a lot of people who work very hard to minimize how much Google knows about them. Once it does, for those who aren't careful--and that's most of the population: they don't care about privacy, their own data being sold, used--it will give you exactly what you want. It'll put a little probabilistic part in there for you just to keep you honest so you won't suspect.

So, that's a very frightening, I think, possibility.

56:25

Russ Roberts: But I'm going to give you a different--let's close with a different challenge. Some people think this is going to cost the CEO [Chief Executive Officer] of Google his job--not today, not tomorrow, but in a few months they'll get rid of him.

Megan McArdle: He'll retire to spend more time with his family, complaining about getting fired.

Russ Roberts: Yeah, exactly.

But, ask yourself the following question: What should they look for? I don't want them to find a right-wing CEO to switch and refuse to write praiseworthy poems about left-leaning politicians or columnists. That would be unattractive to me. So, what would you tell that person to look for?

And, as I suggested earlier, you might want to say, 'Oh, just don't be biased.' But what could that possibly mean?

And so, I think we're in a very strange world. We've never been in this world, I think, as human beings in an informational landscape. Forget the fact that it's not a competitive landscape--that there's a dominant search engine and a dominant ad platform, and that's Google--which is not healthy. It would be great if there were more competition. But, what would your ideal--it's a little bit like saying: 'Oh, I want a benevolent dictator.' 'Who is your ideal president?' 'Oh, I want one who cares about the people.' Well, those aren't human beings. And so, what would the characteristics of this CEO be--a CEO who has huge sway over the public square? What might those be?

Megan McArdle: So, this is a really interesting question because I think there is a socially optimal answer, and then there is a corporate optimal answer.

Russ Roberts: Yeah. Yah, ha, ha--

Megan McArdle: So, I think the socially optimal answer is that you want someone who is what--this is a great new concept I just learned--it's people who are high-decoupling versus people who are low-decoupling. High-decoupling is someone who abstracts all questions from context. And, a low-decoupler is someone who always answers questions in the kind of social context in which they occur.

And so, one example of this is, I gave a talk. I got a very angry response when I said, 'Look, I don't even know how I would judge whether women in Saudi Arabia are happier than women in the United States.' Right? I don't even know what cultural standpoint you could pick to make that determination. Right? I'm sure if you ask many women in Saudi Arabia, they would be horrified by my life. I have no children. I don't have much family. I am not Muslim. They would view me as being deprived of the most essential things.

Anyway, this was not received well by the audience of women I was talking to. And, someone stood up and said, 'How can you say that? Would you be happy in Saudi Arabia?' And, I was like, 'No, I would be miserable in Saudi Arabia.' But, like, that's not right--and I think that even if I had been born there, I'd be miserable. Because I'm a very unusual woman. But, that's not relevant to whether most women in Saudi Arabia are happy or are less happy. That's just irrelevant to the question.

But, for most people, that's, like: That is the question. Me, the people around me. Like, are good people made happy by this or bad people made happy by this?

So, if you're a high decoupler, you just abstract these--so philosophers are the classic high decouplers. Economists tend to be high decouplers. And, high decouplers are what you need here. Right? A high decoupling system is one that instead of attempting to produce a socially desirable answer gives you as much as it can.

No one is a perfect high decoupler, right? But it gives you as much as it can--as much of the breadth and the nuance and so forth.

So, that is my socially optimal answer: is that it is someone who is high decoupling, who values high decoupling, and is going to want an AI that is high decoupling instead of one that attempts to embed all of its answers in the desirable social future it wants to produce. Because, the idea of having a desirable social future cultivated by a machine is really deeply creepy.

But, the corporate answer then, is: 'Look, most people are low decouplers.' And, CEOs spend a lot of time doing stuff that is not designing their AI. They spend time managing people and getting clients to do stuff and government relations. And, you might not want that semi-mental robot in charge of those jobs.

I think this goes to the larger question of where Google sits in society, and the problem really is that most people are not high decouplers. We had an unusually high-decoupling speech culture for about 30 years, give or take--from the late 1960s or the 1970s to the early 2000s, depending on where you want to date it.

But, that was really unusual. That's not normal. And, maybe it was not stable. Maybe there is no way to produce what you and I think of as normal, because that was just what prevailed during our youth and early adulthood. Maybe that's just not a stable equilibrium, and such societies always collapse, because there are social benefits to low decoupling. It is a kind of credible commitment.

I think about this a lot when I watch, like, polyamorists or effective altruists talking about how you should have chats with your spouse about who you would leave them for. And it's like: what caliber? how much better? I'm just, like, 'No, you should not have that discussion.' Because, let's go to the meta-logic here. A marriage is about mutual investment in what economists call firm-specific human capital. You are producing a phenomenal amount of firm-specific human capital, and you have to make those investments; and you make investments that, like, if the marriage dissolves, won't pay off and will in fact turn out to have been very costly. And, if you don't make those investments, the marriage will definitely dissolve, and that will also be very costly. And, you don't get people to make those investments by explaining the circumstances under which you would decide that you could get a better deal elsewhere. You want people who just emotionally are, like, 'That's a terrible thing to do.'

Now in the actual event, some of them will nonetheless get carried away and run off with their secretary, but it is useful to have people who just instinctively say, 'No, that's terrible, and don't talk about it,' because it actually enables you to make social commitments. That's why that hardware is there in our brains. And it is somewhat at odds with my goal of massive decoupling and open inquiry and all the rest of it: the social technology is there for a reason. And it's actually really valuable for doing things like marriage, which is actually really valuable for doing things like producing kids; and producing kids is actually really valuable for future areas of open inquiry. So, there is a bit of a bind here. Sorry, those were somewhat long and discursive and, I hope, not totally incoherent thoughts about this, but I think that it is a real challenge.

1:03:57

Russ Roberts: I think the--just to push back a bit on what you're calling high decoupling--I haven't heard the term before, but it strikes me as rules versus discretion. Some people think the world is best run via rules, and others think, 'Well, no, no, no, that would be terrible because a lot of times rules lead you to bad outcomes, so you have to go case by case.'

And those of us who are rule-oriented, we tend to believe that although case-by-case has a certain emotional appeal, the case for it is undermined by the reality that it's really hard to go case by case objectively.

So, we trust the rules; and that protects us from all kinds of things: opportunism, selfishness--long list.

Actually, it's a short list. I think those two are the big ones, now that I think about it. I could give other words to describe what's wrong with case-by-case discretion, but the list is kind of short. It's designed to produce--as you worded it--socially desirable outcomes.

I actually think that idea--of rules versus discretion, abstract rather than specific--is a lot more than 30 years old--a lot older than our childhood and formative years--

Megan McArdle: Yes, that's fair--

Russ Roberts: Megan, and I would say it's about 3,500 years old. It's deeply embedded in at least Western culture, which I'm familiar with; and may be deeply embedded in Eastern culture as well. I'm not as familiar with it.

And I think--so, I'm just a pessimist. I think that--I don't want to be. I don't like being pessimistic. But, it's not obvious how we overcome the desire of the corporate model to give us what we want--which is what it's designed for--rather than what we should have. It will give us what is pleasant and not what is good. And what is pleasant often ends up being not good. So, that's my worry. I'll let you respond.

Megan McArdle: So, let me give you the optimist case, the long-term optimist case.

I mean, assuming that AI does not decide that the carbon-based life forms are getting in the way of building more server farms and get rid of them.

Russ Roberts: Yeah, there you go.

Megan McArdle: But, assuming that we endure and we do not die deaths of despair from finding out that AI is better than us at everything.

Look: In the short term, new technologies are disruptive and they enable the worst--they often enable our worst as well as our best instincts.

The printing press: On the one hand, a fantastic flourishing of human knowledge, etc., etc. Also, some of the earliest things it was used for were to stoke witch panics and get people burned in Europe. Right? It creates the wars of religion, which are some of the most brutal wars in human history. They last for more than a century. And, at the same time, what comes out of those wars of religion is the Enlightenment, and the liberal order that you and I flourish under. And, I think we both nonetheless still agree it is the best thing going.

But, I also think--look--people who are rules-based--and that is definitely you and definitely me--we also sometimes underweight the discretion-based accommodations. We find ways to create holes in rules-based systems because they create insane results. Right?

And so, cops make a lot of discretionary choices, and that can cause problems because it can be a place where bias operates. Right? But also, we don't actually want--I'll give one example. If you are doing 89 miles an hour in a 55-mile-an-hour zone and a cop pulls you over; and then he sees your wife in the back seat, and you're saying, 'Breathe, honey,' and she's, like, 'I'm crowning,' he does not say, 'You're getting a speeding ticket.' He says: 'Hold on, put the flashers on,' and he escorts you to the hospital. Right?

Russ Roberts: Yep. Great example.

Megan McArdle: We want that discretion in the system.

And, we do this all the time. Most of the law is not the rules. Most of the law is how it gets applied at various levels of our justice system--often badly, but often also well. We build that discretion into the system. We don't want the cops to actually pull everyone in for jaywalking and impose every penalty.

And so, I think Google is going to--the systems are going to get somewhat better at that.

And, I think over time also, there were unhealthy societies that evolved in response to the Enlightenment, and then eventually most of them went away because they got out-competed by the societies that had more healthy reactions to it. Right? And, you can include all sorts of cults. You can include kind of hyper-liberal experiments like the Oneida Colony--with free love and all the rest of it--and it just collapsed. You can include all the communes that tried communism.

These were all: 'Well maybe the old ways don't work; let's try something radical and new.' And then, most of them didn't work and they got out-competed by boring bourgeois liberalism, which is just a great system. And, it's a really robust system. And it can go crazy in various instances. Right? We had the Red Scare in the 1950s. We had a bunch of--none of this behavior is new. We had Fascism. We had Communism. They did horrific things. They killed millions and millions and millions of people. But those systems didn't last; and bourgeois liberalism did and is in the process of out-competing them even as we speak.

So, I think that is the long-term reason to be optimistic: is that these technological challenges are going to create a bunch of bad stuff. I can't even imagine all of it. You can't either. If you would ask me in 2012 to predict cancel culture from Twitter, I definitely would not have.

But, at the end of the day, people--like, I'm so naïve, whatever--people have their bad instincts to shut down people who disagree with them, to be mean to outsiders, to get status by putting other people down. All of those things are true.

But like, we are also actually fundamentally decent to each other over and over and over again, and we look for ways to be decent to each other. We look for ways to build families and meaning and love; and we want those things.

And, I think we want those things so badly that at the end of the day, even if AI offers young men, like, endless pornography featuring their favorite actresses or imaginary actresses, whatever, and video games as an alternative to having productive lives, I actually believe that enough people want the things that really matter--which are the people you love, and creating a better world, and free inquiry and science and all of those amazing human values. They want virtue. People long for virtue. They long to be virtuous. And so, I think that at the end of the day, that will probably win, if the AIs don't turn us into paperclips.

Russ Roberts: In passing, you mentioned that AI may eventually do everything better than us--be better at things than we are. It's funny: when ChatGPT came to the market, it was so extraordinary that it could do anything like that at all. And, for a week, we were so amazed by it. Now, after however many months it has been, it's like: This feels kind of mediocre. At least that's my take.

But, I'm not sure I could write a better poem praising Megan McArdle than Gemini did. So, I just want to say: That was spot on. Megan, thanks for being part of EconTalk.

Megan McArdle: Thanks for having me.