Jacob Howland on the Hidden Human Costs of AI
Jun 26 2023

In the early 1900s, the historian Henry Adams expressed concern about the rapid rate of social change ushered in by new technologies, from the railways to the telegraph and ultimately airplanes. If we transpose Adams's concerns onto the power of artificial intelligence--a power whose rate of acceleration would have exceeded his wildest dreams--we might feel a bit uneasy. Listen as philosopher Jacob Howland of UATX speaks with EconTalk's Russ Roberts about why too much leisure is at best a mixed blessing, and how technology can lead to intellectual atrophy. They also speak about the role of AI in education and its implications for that most human of traits: curiosity. Finally, they discuss Howland's biggest concern when it comes to outsourcing our tasks, and our thinking, to machines: that we'll ultimately end up surrendering our own liberty.

Ian Leslie on Being Human in the Age of AI
When OpenAI launched its conversational chatbot this past November, author Ian Leslie was struck by the humanness of the computer's dialogue. Then he realized that he had it exactly backward: In an age that favors the formulaic and generic to...
Tyler Cowen on the Risks and Impact of Artificial Intelligence
Economist Tyler Cowen of George Mason University talks with EconTalk's Russ Roberts about the benefits and dangers of artificial intelligence. Cowen argues that the worriers--those who think that artificial intelligence will destroy mankind--need to make a more convincing case for...
Explore audio transcript, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.


Shalom Freedman
Jun 26 2023 at 12:28pm

Instead of looking at human development in terms of ‘atrophy’ of powers, it is possible to think that there are various peaks, special times when human powers develop in the highest way possible in certain areas. Will there ever be another Shakespearean era in terms of dramatic writing, or an artistic flourishing like the Italian Renaissance? Perhaps the most complex literary form, the novel, whose life extends from Cervantes until now, reached its apogee in late-nineteenth-century Russia with Tolstoy and Dostoevsky, or perhaps in the early twentieth century with Joyce, Proust, Kafka, and Mann?


Of course, one can point to other areas in which there seems to be an unending development of increasing power and capability, for good and for ill, such as science and technology.


In all these areas it is creative minorities who do the work, and the great majority of humanity remains outside, affected by the work but not understanding it.


Still, it seems to me that in the whole question of the future of humanity in relation to accelerating development of Artificial Intelligence no one really knows the kind of strange new worlds which are to come. Humanity’s great creative adaptability may lead to all kinds of new hybrid creative combinations with AI. Even now there are no doubt all kinds of new developments which it is impossible for any single person to keep up with. And the wisdom of reserving judgment on whether this is primarily for the good or not seems correct if not very useful or insightful.


The question of too much leisure time and a largely idle humanity feeling its own meaninglessness seems to me a most real one. How long can one engage at last in one’s beloved ideal activity only to discover one is mediocre at it? Not having work, even work one may not love but which still gives one something one must do, is not a good option for most people.


I see a lot of escaping into fantasy worlds among young people and find it disheartening and meaningless. But other people know, enjoy, understand, and engage in activities this old fogey does not and never will understand.

This was a very rich and suggestive conversation, which I greatly enjoyed. But I wish some kind of artificial intelligence would not be writing some of these last words for me.



Keivan MK
Jun 26 2023 at 3:30pm

Russ raised several interesting questions regarding many ways in which AI can potentially disrupt our current way of life. The atrophy question was for me the most interesting one.

As a mathematician, I sometimes imagine a world in which proving new and at the same time interesting mathematical results is something that computers would be able to do better than us, and I wonder what that would mean for mathematics as a field of human inquiry.

For a long time, the human brain was considered superior to any computational algorithm created by humans, making us believe in a qualitative difference between its capacities and those of machines. Already now, maintaining this belief requires adding many qualifiers.

Ben Service
Jun 26 2023 at 4:36pm

It’s interesting to think about rolling back time ~2500 years and replacing the word AI with the word Bible. It is similar in that this mysterious thing came into being and was made by humans, but it seemed to have some divine intelligence in that, if you read it the right way, it gave you some insights. It too was synthesised from a lot of the human knowledge of the time.

I’m not sure where I am going with this thought though.

Jun 27 2023 at 1:30pm

I was surprised that at 15:34 Russ said:

There have been a lot of trends--social trends--that have scared people about whether jobs were going to disappear. Outsourcing was the most dramatic one before: "This outsourcing, the sending manufacturing abroad, is going to destroy X million jobs in America." And that didn't happen.

Maybe this just lacks nuance, but I’d argue it is factually wrong, and I learned it was wrong listening to EconTalk (see David Autor). In 2000, before trade with China opened up, there were 20M manufacturing jobs; by 2010 it had dropped to 11.5M, and it is now 12.9M. Perhaps the argument is that these jobs weren’t “destroyed,” but that the labor market shuffled and displaced manufacturing workers moved on to better jobs. I’d encourage you to judge that with your own eyes.

If AI is as disruptive as outsourcing, we are in deep trouble.

Ben Service
Jun 28 2023 at 3:14am

I took it to mean that yes, they lost their jobs but they soon found other ones; whether they were “better” I am not sure.

I am not sure if that is a) what he meant or b) true; maybe there were a lot of people who lost their jobs, were capable of continuing to work, and never worked again.

Russ Roberts
Jun 29 2023 at 10:42am

I was thinking of this much-discussed piece by Princeton University’s Alan Blinder, where he said 30 million jobs in the US were at risk from outsourcing/offshoring. He did not say how many jobs would actually be lost. This study, and others, made people very nervous.

In 2007, there were 14 million manufacturing employees. Today, there are 13 million.

The total number of employees in the US in March 2007 was 138 million. Today there are 156 million people working.

So while many jobs may have been “lost” to outsourcing, trade, and technology, the total number has grown dramatically.

Jun 29 2023 at 10:24am

I’ve wanted to read the Wealth of Nations for a long time, but I’ve never been able to maintain my focus beyond the first five or six chapters. I’ve had a similar relationship with things like the Federalist Papers.

Now with GPT-4, I’ve been reading the Wealth of Nations and the Federalist Papers in an AI-assisted way. Essentially, I paste the original text in Column A of a Google Sheet, then put an AI-rewritten version of the text in Column B. I mostly read Column B, but I enjoy jumping over to Column A on occasion to engage with the original.
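The commenter's two-column workflow could also be scripted outside a spreadsheet. Here is a minimal Python sketch of the idea; the `rewrite` callable and the function names are hypothetical placeholders of mine (not anything the commenter described), standing in for whatever model call you prefer:

```python
import csv

def split_paragraphs(text):
    """Split a text into non-empty paragraphs on blank lines."""
    return [p.strip() for p in text.split("\n\n") if p.strip()]

def build_parallel_csv(text, rewrite, path):
    """Write a two-column table: original paragraph | rewritten paragraph.

    `rewrite` is any callable that takes a paragraph and returns a
    plainer version -- e.g. a thin wrapper around a chat-model API
    (hypothetical here; plug in whatever service you actually use).
    """
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["original", "rewritten"])
        for para in split_paragraphs(text):
            writer.writerow([para, rewrite(para)])
```

The resulting CSV imports directly into a Google Sheet, reproducing the Column A / Column B reading setup described above.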

This process is FUN. Am I engaging as deeply with the original text as I might have without AI? Maybe not. Although my past experience suggests I never would have made it through the original without a little help from AI.

GPT-4 is allowing me to make a tradeoff. Do I want to get good at parsing Adam Smith’s eloquent but difficult-to-read paragraphs, or do I actually want to understand his ideas? I’ve chosen the latter, and I think most people would do the same.

Most tools work this way. They give people a new and better way of accomplishing a task. Old skills atrophy. New, more productive ones get stronger. It’s not obvious to me that AI is or will be any different.




Podcast Episode Highlights

Intro. [Recording date: May 31, 2023.]

Russ Roberts: Today is May 31st, 2023, and my guest is philosopher Jacob Howland. He is Provost and Director of the Intellectual Foundations Program at UATX [University of Austin, Texas], commonly known as the University of Austin. His latest book is Glaucon's Fate: History, Myth, and Character in Plato's Republic. Jacob, welcome to EconTalk.

Jacob Howland: Thank you, Russ. It's great to be on your show.


Russ Roberts: Our topic for today is the impact of artificial intelligence, AI, on our humanity--on the human experience--based on an essay you wrote for the website UnHerd. We've done a number of episodes recently on whether AI is going to destroy life on earth. An important question. For the record, I am concerned but not panicked. I'm not sure that's the right position. I reserve the right to become panicked in the future.

But, today we're going to talk about a different aspect of AI. We're going to assume it doesn't kill us off in the extinction sense, but we're going to look at the question of whether it's good for us or not. So, let's start with what you're worried about. What's wrong with AI, and with having humans use it extensively? It seems like a great thing.

Jacob Howland: Well, AI certainly has its uses, and I mean, I know many people who consult ChatGPT [Chat Generative Pre-trained Transformer] if they want, for example, to generate a syllabus quickly on, let's say, depletion of nutrients from the soil, environmental impacts of certain human practices--you know, things like this. It will gather information and put it together in a tidy, neat way.

Of course, there is the case now of lawyers sort of cheating on their preparation for cases and asking ChatGPT to produce legal briefs. And of course, one of the problems with ChatGPT is that it fictionalizes--it makes things up.

But, my concerns are really quite broad. Let me start with this social concern. I recently have been studying Henry Adams' book, The Education of Henry Adams, and Adams, in the last brilliant chapters of this book, lays out what he calls a dynamic theory of history in which he explains that human beings--who are a kind of force of nature: we have certain capacities and powers--are shaped by and shape the forces with which they interact.

And, Adams, during his lifetime--he was born in 1838--noticed a very sort of disturbing acceleration of social change. I mean, between 1838 and 1900, right? You had the introduction of railways, telegraphs, telephones, airplanes, for goodness' sake, ultimately: all kinds of technological inventions and so forth.

And he began to reflect on this and he set forth a hypothesis, which is that: The amount of power or force at the disposal of human beings doubles or has doubled every decade since around 1800. And, already by 1900, he felt that if you sort of think of that--the rate of acceleration is the same, but the curve goes up--he began to be concerned about the effects on society, on sort of the destruction of organic communities and the dislocation of human beings and so forth.

So, if we think about artificial intelligence, the rate of acceleration seems to be even greater in terms of the forces at our disposal than Adams understood it to be.

And, one of my concerns is the way that AI is going to put loads of people out of work. Right? There are all sorts of jobs--computer programming, for example; I mentioned lawyers earlier, perhaps lawyers. Education is going to be transformed radically. That's something we can talk about a bit because students are using things like ChatGPT to write their papers, and no doubt professors are using them to write their lectures and so forth.

And that's going to present a huge problem. It's going to present the problem of enforced leisure, if you like. Our lives are structured around meaningful activities, and if you sort of think of it from Aristotle's perspective, happiness is an activity of the soul, as he says, right? You're engaged in some kind of work, some kind of activities that have significance to you. If we take away employment from a large number of individuals, they've lost one of the great sources of meaning in their lives. So, that's just one thing: What are people going to do when they have this free time?

Now, you recall from discussions during the year or the last few years when we were all shut down by COVID, and I remember reading some articles saying, 'Well, this is great,' right? 'Because people now have time to do oil paintings and to listen to music,' and so forth. But, that raises another problem, and that is that we haven't been trained, as John Maynard Keynes points out in a famous essay called "Economic Possibilities for our Grandchildren"--we haven't been trained for leisure.

In fact, that's a very old problem. Aristotle points it out. But it even goes before Aristotle. Adam and Eve couldn't handle light gardening. Right? Aristotle says--and he has a critique of the Spartans, but it extends to the Athenians as well--says in the Politics, 'War is for the sake of peace, and business which you conduct in peace is for the sake of leisure.' But, the Spartans don't know how to be at leisure. Not only they, but not even the other Greeks. Aristotle deplores the fact that when they have leisure time, they sit around and drink lots of wine and tell myths.


Russ Roberts: Well, let's start with that. I mean, I think, I know you have other things to say about AI and other tools; but this is a very old worry--the worry that technology will get rid of, eliminate jobs. It hasn't. If anything, our jobs have gotten more pleasant, say, over the last 100 years. A hundred years ago, the dangers of a lot of the workplace were quite high. There was farming, which was very dangerous, and manufacturing, which was very dangerous. Those jobs have been reduced greatly in the West as a source of income or a source of meaning, and in theory, they've been replaced by more meaningful jobs; in theory, jobs like you and I have--jobs that use a different set of skills than our manual labor or physical strength. In theory, jobs that enhance what is human about us and mean that we're not so much beasts of burden in the workplace the way we were in the past.

I don't know whether that's been good for humanity or not. I would argue there's a lot more leisure in our life in all kinds of ways. Certainly outside of the workplace, there's a lot--there's leisure by definition to the extent we avoid the workplace when we're not physically at work. But, even at work, when we are on the job in the office, we often are free to do things that would normally be called leisure--surf the Internet, do other things like that.

And I'm agnostic about this issue of whether leisure is good or bad for human beings. I agree with you that work is an important source of meaning for many people--not all. Again, I think I'm very lucky and you are as well. But I assume that leisure is good. Now, I concede that not all of us, including myself, are good at using it, but do you want to argue that we should stop technologies that make it easier to take leisure?

Jacob Howland: Well, I actually have a lot to say about leisure in connection with idolatry. I think we'll come to that later, but let me just make a couple of observations here. Yes, work is safer by all kinds of measures. Even, I was just looking the other day at, oh, deaths per hundred thousand teenagers or children in the 1970s, and it's much, much lower now than it was--because we used to ride around without bike helmets and stuff like this.

But, I would point to a couple of things here. Farming is dangerous work, but it's very interesting work, and it's work that engages a human being across sort of a whole spectrum of capacities. So, if you're going to be a farmer, you have to understand how to help a cow give birth. You have to be able to build fences. You have to understand planting fields and different kinds of grains and when to harvest, and you're very much in touch with nature.

It turns out--and I remember reading this book by Harry Braverman, the title of which I can't remember now. But he was a Marxist economist who started out as a kind of metal cutter, and he made the argument that what's happening with the advance of technology is a kind of--to use Marx's terms--alienation of the worker from his product. Right? And, he talks about the kind of managerial regime that began in sort of the late 19th century. So, it used to be that you'd have craftsmen--right?--who would craft an object carefully and put themselves into it. Right? Under a sort of managerial regime, you're following cut-and-paste orders that are sort of given to you by these managers. I remember, and I believe he cites in his book an American government organization that was listing skilled and unskilled jobs, and they listed farming as unskilled, which he thought was outrageous because you require lots of skills to farm. Whereas he points out, or one could point out, that flipping burgers at a fast food joint is semi-skilled labor because in some instances you push a button and the machine times it or flips it over or something like that.

I would also point to the fact that there's not a whole lot of job satisfaction, Russ. Now, I haven't read the statistics recently, but my recollection is that surveys suggest that most people aren't really happy with their jobs. It's not as if people go to work and come back and say, 'I have a vocation, and it's very exciting,' and so forth. Now, you and I--I think you're right: we're lucky, because we're academics and we get to read and write and think and so forth.

Another thing I would just point out here is this: With regard to leisure--and let me just be absolutely clear, I think leisure is absolutely essential. That's separate from the claim that we don't know how to use our leisure. So, I'm going to come back to the essential character of leisure in a bit.


Russ Roberts: Okay. But, let me ask you about this issue about satisfaction. I don't trust most of those studies: I think they're often done with an axe to be ground--that they come with an agenda.

But, I think the more basic idea would be, I don't want to work on a farm, and most farmers don't want to sit in an office all day and read. We're all different. We choose the things that make our heart sing, that put food on the table for our family. And, definitely there are trade-offs, often, between those two things. I worry about the nature of the workplace, but I'm not sure it's plausible to argue that the alienation of people from their work product is the source of the spiritual or personal malaises that afflict us in the West.

I will tell listeners: I have an upcoming episode with a sheep farmer, so we'll get to hear his perspective. He has chosen--he's Oxford-educated but has chosen to stay on the farm for many of the reasons I think you would applaud.

But, most people, it's not appealing. It's not what they want to do; it doesn't speak to them; and they're happy to lose some of the meaningfulness of work to have less of it. The fact that the modern work week is creeping downward in certain measures--not all--in certain measures, or lifetime hours are creeping downward as they have for a century or so, is: most people think that's a good deal. Now, whether they can use that time well, that's a separate question, but I'm going to have trouble with that; but that's where I think we should move on to next, unless you want to talk about this issue of meaningfulness on the job.

Jacob Howland: Well, let me just say this. Our conversation has made me realize that I have a somewhat complicated thesis, and it's this: Work is not particularly meaningful for a lot of people, but it's essential for their lives. And, I don't just mean in terms of putting food on the table, but psychologically. Take the case of the lottery winner. Right? In fact, my son had an eighth grade teacher who was making a film about people who have won the lottery. What happens when you win the lottery? Okay, let's say you are a custodian in a building, doing janitorial work. You win the lottery. What do you do? First thing, quit your job, move somewhere else, right? Buy a new house.

And, all of a sudden, the structure of your life is gone--like, the day-to-day structure. Now you have to regenerate or reproduce that, but the point of the film was that lottery winners are not happy often because they sort of veer off. Right?

So, that's one concern. But, what I really want to get to is kind of the fundamental importance of leisure and the way in which AI very curiously cuts off the opportunities for leisure in a kind of foundational way, while at the same time throwing people into a condition where they've got to fill their time.


Russ Roberts: Yeah, let's talk about that. But I just want to add--and we talked about it in a recent episode with Tyler Cowen--I'm not convinced that ChatGPT is going to eliminate jobs. The driverless car was the rage eight or so years ago, and that was going to change the workplace--which it would have if it were viable. It would've put millions of taxi cab drivers and truck drivers out of work overnight if it had fulfilled its promise and been a viable technology. It may still be--I remain skeptical--but if it did come, it would have a dramatic effect on the lives of millions of people, and that transition might be very unpleasant. I don't know whether we would want public policy to reflect that unpleasantness to try to slow it down, but I just want to say it's not clear to me that AI per se will reduce the number of jobs. It's just kind of interesting.

There have been a lot of trends--social trends--that have scared people about whether jobs were going to disappear. Outsourcing was the most dramatic one before: "This outsourcing, the sending manufacturing abroad, is going to destroy X million jobs in America." And that didn't happen. So, I think one of the lessons, possibly--it may be different this time--but one of the lessons is that new activities come along because these technologies make things less expensive, conserve resources, and so on.

So, let's put that to the side. I think there's a general question about the use of leisure that I can see because this device that I hold in my hand--my smartphone--I see what it's done to my attention span and my ability to be a focused friend at times or family member. And, I am concerned about it. But I also recognize that it's a new technology: norms that might come along that would help us deal with that may still come into place and maintain our humanity. So, do you want to say anything about that on leisure or anything else? You can go to something else if you want.

Jacob Howland: Yeah. Sure. So, I mean, as I said at the outset, I think that there are a whole range of problems that are raised by the incredibly rapid development of AI. And, let me just say for the record, I would put human extinction--like, physical extinction of human beings--sort of lower on the list, probably, than many. But, one thing, and you've already pointed to it is: human capacities tend to atrophy in disuse. So, we all use GPS [global positioning system]. I'm quite convinced that back in the day when we had to actually figure out where we were going and maybe read a map and so forth, we had better navigational skills.

A lot of creative activity is going to be, and already is being, sort of handed over to AI. I was speaking with someone the other day whose mother, I think, works in, like, fashion design; and he said she's going to be put out of business. Because, you don't actually--you can generate images, you can take models, or you can maybe even construct them because now AI can do that and put them anywhere in the world against any backdrop, under any lighting, and so forth.

So, let's just take writing and reading. I was speaking to an academic director of a consortium of high schools recently, and it was kind of an unsettling conversation because he said--I said, 'What do you do about ChatGPT?' He said, 'Well, we told the kids in the schools they can't use ChatGPT. Then it turned out they were using ChatGPT. So, now we have assignments; we say: Go ahead and use ChatGPT, but our writing assignments are edits, right? Like: Edit what's coming up on ChatGPT.'

And, then he said to me--and the guy is maybe 40 years old--'Look, I use ChatGPT all the time. I run my articles through it; it gives me suggestions. I take maybe half of them.' And I said, 'But here's the thing. You learned how to write before ChatGPT. If you reduce writing classes for kids who are in eighth grade or something or 10th grade to looking at generated content and then reflecting on it and trying to figure out how to make it better, they're not actually going to learn how to write.'

So, ceding these intellectual capabilities and creative capabilities to AI, it seems to me, is a very bad idea. And, in my article, I suggest that we might even see moral capabilities. Like, AI can make judgments for us: not just where to drive, but what to do.


Russ Roberts: Now, I think that atrophy thing is a very deep question. Let's talk about that for a bit. One argument would be: Who cares? Right? We lived in a world until recently--well, for most of human history--where being able to write was irrelevant. We entered an era, I don't know, around 1800--I don't know when it would've started--a very short era perhaps when being able to communicate in writing was very useful. And, that era is now--it will still exist. It'll just be ChatGPT that will be doing the writing and communicating for me in a digital form, which is really no different. True, I can't do my own anymore, but why should I care? I mean, I don't really believe that, Jacob. It does--it alarms me greatly. But I wonder if I'm right. Tell me, why should I care?

Jacob Howland: It's not just writing, but it's the whole question of logos--of the word--the spoken word as well as the written word. So, we begin the Hebrew scriptures with God creating the universe by speech. God said, 'Let there be light,' and so forth. And then, one of the first things, if not the first thing, that we see a human being doing is naming the animals: they are brought before Adam, the first human being, and he names them. Or, we can also go to the Gospel of John: 'In the beginning was the word,' the logos.

There's something both human and divine about the power of speech or logos. And, again, I'm using the Greek word because it can mean thought, reason, reflection, speech, etc.

And, education, it seems to me, is--let's sort of break it down to a twofold process: Opening the soul to what is and allowing it to be receptive. Receptive, perhaps uniquely among species, although I don't know, to the whole, right? And, taking those experiences and impressions in, that's one part of it.

And then the other thing is communicating; and that means putting into words or maybe paintings or music and sculpture and so forth--all of which, by the way, are augmented by words because you say, what is this painting about? What does this sculpture depict? And, sharing your individual perceptions with others, I think that's very fundamental to humanity. What's going to happen if we rely on ChatGPT--or not ChatGPT, let's say advanced AI, because it's going to keep going--to do our talking, our writing? That ultimately means to do our thinking for us.

And, it seems to me that from the point of view of an educator, education is about taking young men and women as they are with the peculiar capacities and abilities that they bring--which they acquire through nature and circumstance--and developing them. And, it's focused on the individual, the individual human being who, the Bible tells us, has a kind of divine spark. Is that divine spark going to reside only in the ether in sort of the digital world?

And, one other thing I'd say, Russ, is that if you ask ChatGPT to do your writing for you, what does ChatGPT do? It goes to the information encoded on the Internet, which is not necessarily high quality--some of it is--and kind of scoops it up, regurgitates it, hands it back. Is that going to be a source of new and fresh ideas of the sort that human beings value?

Russ Roberts: I don't know. It does some interesting art. It does some interesting music in its very primitive form.

Now, I think I want to come back to the atrophy question, though, because I think that is the deep one. I've noticed that it's harder for me to express myself in English because I'm working on my Hebrew. And, if I did that more intensely--the Hebrew part--certainly I can't, I can't be who I am in Hebrew. Right? It's not an atrophy: it's just I've never developed it sufficiently. As I try to develop it, I pull back some of my ability to think, quote, "in English." And, it's an essential part of who I am, how I express myself, either in speech or writing. And so, one way to take--to say--what I hear you saying is that if we cede, C-E-D-E, if we cede our capacity to communicate to technology, we lose the ability to express ourselves.

Jacob Howland: So, let's just talk about Twitter for a second, because here we have a little exemplary case, let's say, of what artificial intelligence in a very broad sense might do, or let's say the kind of digital--sort of development of digital devices, etc. It conditions people to generating texts. Now, when you say the word 'text,' or I say the word 'text,' we might be thinking of the Gilgamesh epic or something; but now we're talking about a little short thing, right? Now, the texts come along, and then this is a binary technology. Right? We respond in binary ways--thumbs up, thumbs down. Right? So, we're already being conditioned to sort of behave like machines. Right?

And, if you kind of expand that out--again, if you're not thinking and you're not writing, and you're not developing your skills and language and so forth; and then you're ceding that, right?, to these machines--are you going to lose the capacity to judge what is put before you? Will your skills of judgment kind of erode?

And, not just with regard to judgment, like, 'Wow! this is really insightful,' or 'This is a good book,' or something like that. But, judgment with regard to questions like, 'Is this true? How should I understand these things?' Right?

Now, that's a whole 'nother thing about AI. One of the things I'm very concerned with is the potential for artificial intelligence to not only surveil us and gather all kinds of information about us and so forth, but to manipulate us--very fundamentally. You know Plato's cave image, right? You got people sitting on the bottom of the cave and they're looking at shadows on the wall cast by puppeteers behind them.

Well, what's already happened with digital technology is: We live in a bunch of caves. Right? Sometimes caves tailored to us individually. I mean, we've all had the experience of searching for something on the Internet or purchasing something, and then thereafter, up comes that product, right? Or different versions of it. We already know that information that is gathered about individuals who are listening to certain things or reading certain articles: The algorithm then generates more of the same. Right? Which kind of cuts us down and puts us in our own cave, as I'm saying. Right?

Now, what if--oh, and we also have a problem, as you know, telling deep fake videos and photographs and so forth from the real thing. We also have a problem, coincidentally, on the ideological side of media, basically propagandizing, right? So, making certain kinds of judgments and emphasizing certain facts, diminishing others, and so forth.

So, in our culture, there's an issue that's very, very serious, which is: It's hard to know what the truth is. And not only that--even the truth about facts. I'm sure you've been in conversations where people will simply deny that something is a fact that you are quite convinced is a fact.

And then, you know how the conversation goes. So, they'll say, 'Well, where did you learn that fact?' And, one side might say, 'I learned it in the New York Times,' and the other side might say, 'That's untrustworthy. Where did you learn your fact?' 'In Fox News.' Right? 'Well, I'm not going to listen to that.'

Now, once you get ChatGPT--which has already shown a tendency, by the way, to fictionalize--I mean, people are suing over libel because it just makes up stuff, makes up legal cases that don't exist and so forth. It does that unintentionally, for sure--I mean, it does it unintentionally because of its algorithms.

But, what if you have intentional feeding, which is designed based on your psychological profile for the sake of, let's say, manipulating you to vote for a certain candidate or to take a certain action? Where will the truth lie? How will we know? What if somebody says, 'Here's a video, here's Vladimir Putin conceding,' or something like that?

Russ Roberts: Yeah, well, I'm worried about all that. I think we've already got that problem without ChatGPT, and ChatGPT, I think just accelerates it.

Jacob Howland: Yes.

Russ Roberts: And that has deeply disturbing implications for democracy, that--an institution that is not very healthy anyway, right now, in my view.


Russ Roberts: I want to come back to this, to the educational point you made. So, I'm going to reframe your argument and see if you agree with this reframing. We talked about--I know you're a reader of Homer, and I forget what episode it came up in, but we were talking about, I think, the Odyssey on the program at some point. And, a listener wrote me and said, 'Well, I don't need to read it because I've read the comic book and I know what happens.' And, I think it was a serious comment. I'm not 100% sure. But, we could--at some level, I would call it bad/poor education--we could test students on whether they read Homer by asking them: 'What's the name of the one-eyed monster in the cave that Odysseus and his men encounter? a) Cyclops; b) Shrek; c) King Kong; d) whatever.' Answer: Cyclops. So, one level of reading a great work would be: Did you do it? And, in doing it, did you understand it at the most cursory, narrative level?

So, that's not education. I could tell you what's in a comic book; I could tell you--I could tell you the plot of the Odyssey. That is not the value of reading. You don't read the Odyssey to find out what happened. You might be pulled along, but it's not why we assign it here at Shalem College. It's not why I'm sure students at UATX will read it. You read it to learn something about the human experience and yourself. And, that learning takes place through the arduous task of wrestling with the text.

ChatGPT--you can feed Homer into it and it'll summarize it beautifully, by the way, do a really good job. It's really good at that.

And, I think my worry would be that if education stays on its current course--which is somewhat spit back and parroting--that, ChatGPT will be a very powerful way to look smart. And, the skills of reading that are quite challenging will not be acquired.

That's the atrophy--a different version of the atrophy argument.

And, we will lose the ability to read--to read thoughtfully, to read carefully, to read skeptically.

In theory, that should change how we teach, and that could be good. We should change how we teach both high school and college, in my view.

So, are there any grounds for optimism there that this will force us--along the lines of a recent episode we did with Ian Leslie--that it's true that ChatGPT is pretty good at entertaining humans? That's because we've become somewhat machine-like. Once we are forced to deal with this, maybe we'll become more human.

Jacob Howland: So, when you say change how we teach, Russ, do you mean going back to opening The Odyssey and saying, here's Odysseus crawling out of the sea and running into Nausicaa, the Princess, and reading the text and trying to understand what's going on? Do you mean going back to the old way of doing things?

Russ Roberts: Yeah. Because the regurgitation-style large lecture hall can't grade 150, 300, 500 essays--although ChatGPT will try, of course; professors will use it. But, the whole idea of multiple choice exams will be essentially impossible, or--excuse me, will not be interesting. You won't learn anything from whether people can memorize the basic facts about a text, because they won't have to read it to do that.

So, we want our students to actually read the books. We're going to have to ask them different kinds of questions than what happened in Chapter Seven, and we're going to force them--I think we should--to grapple with what the text means for themselves, their lives, and the people around them. And, that would be a good thing. Maybe it won't happen, but it would be a good thing, I think.

Jacob Howland: Yeah. Well, I don't--I mean, I don't know whether ChatGPT will prove to be so awful, so conspicuously awful to teachers and students, that we will throw up our hands and say, 'Goodness sakes, we can't proceed this way anymore. Let's go back to basics.' Maybe.

But, I mean, there are several problems here. First of all, in order to teach students how to read, you need to have people who know how to read. Now, let's just take it as a hypothesis: You and I know how to read because we're of a certain age. But already, I mean, we've got people coming through, students now, some of them, or maybe many of them--it's hard for me to figure it out. I know 2012 is the date that Jonathan Haidt and others have said, you know, anxiety and mental health and so forth all went downhill because kids were using smartphones, etc.

But, how long is it going to take for this awakening to occur? And, will there still be sufficient teachers who actually know how to read?

But, let me go back to something else you were saying about the importance of reading. It's true that if we're just generating tweets and we're sort of reading stuff that comes across our social media, what's happening is that we're using words in a very different way than, say, Homer, right?

Words have, first of all, lost their specificity, because if you're not used to reading good books, you're not going to be very attuned to the differences--the nuances--in different terms. And, I'm sure you're like me: When I'm writing, I'm painstaking--I'm thinking, 'Here's a word, it does the job, but there's something else that might be better,' and so forth. Right? So, you sort of lose that capability.

In many cases, words become simply personal expression, right? Or, you're going on LinkedIn--I don't even know the names of all these things; Instagram, okay--let's say to communicate something about yourself. 'Here's a news update,' right? Or they're sort of political tokens, right? Because we're now distinguished by what language we're using and so forth. What are the words of The Odyssey? What are the words of the Hebrew scripture?

Well, these are deep things, and they're currents flowing deep in the water. Right? And, when you learn to read, you think about them. And you run into passages that are opaque. And so, you cast your line into the water and you're sort of fishing, and all of a sudden, you feel a tug, right? And it's this strike of meaning, and you haul it up.

That's what real reading is. That's when you begin to understand the depth that is contained in speech and in the word.

That, I think, is not an experience that as many people today have as, let's say, 100 or 200 years ago, when books were much scarcer.

But I dare say they were better on the whole--like the King James Bible or something like that. Lincoln could give addresses in which he's referring to Biblical things, and he knows that pretty much everyone who is going to hear this thing knows what he is referring to. Right? Because this is a part of the fabric of our existence.

That's not being replaced by Twitter and Instagram and LinkedIn. And, now I'm out of words because I don't know any of those other platforms.


Russ Roberts: Yeah, I don't know how to think about this. I think it's easy to be worried about it. I think people have been worrying about it for a long time. There's a remarkable essay by Mark Helprin--I think he wrote it in 2000, although the one internet version I found said it was 1999--but I think it was either December 31st, 1999 or January 1st, 2000--was the pub [publication] date. It's called "The Acceleration of Tranquility." And, it's a fabulous essay. We won't link to it, I suspect, because we try to keep copyright laws here, but listeners can find it if they Google it.

And, he tries to get at the fact that life in, say, 1900 compared to today, was slower, more thoughtful, deeper, richer. The only problem is--and I love Mark Helprin, and I love the essay--is that that wasn't true for most people in 1900. Most people in 1900 had hard lives.

So, we've certainly lost something through the material acceleration and the technological acceleration that has taken place over the last hundred years.

I think the challenge of life is to exist in that world--in that environment, in that atmosphere--and still find ways to maintain your humanity.

Now, we may never write as well as we once did, although there's plenty of bad writing in the past, even by great writers. So, I'm not sure that's a fair point. But, I keep the Jewish Sabbath. That's an obvious way--other people keep a technological Sabbath who aren't Jewish, aren't religious, don't attach it to God. They put restraints on their phone. They do all kinds of things--we do all kinds of things--to make sure that we stay human.

I think that's the challenge. I don't think we can stop ChatGPT and AI. I think we ought to be fighting a different battle.

I think we ought to be encouraging people to be aware of what it might be doing to them. Although, we might be wrong about it. But if indeed it is causing atrophy, sometimes that atrophy is okay. I don't really mind that I can't find my way around without GPS. I love GPS. I love that I can think about other things. But other places I might want to retain those skills; and I have to work at them.

I mean, just to make an obvious one: My handwriting is horrific. It's absolutely horrific. It scares me. It looks horrible. I have trouble reading it sometimes. I don't need it very often. I like the idea of it, but I don't need it very often.

But, I think modern human beings are going to confront the reality. We have to decide where we make our stand. Where do we maintain those skills of, I think, conversation, communication, emotional connection? Those are all being discouraged by the smartphone. They will be discouraged, I think, even further by artificial intelligence. We're going to make a stand, decide: this is where I'm going to be. Some people will choose one day in seven, which is one way to do it. Some will do like the Amish do and try to limit to a great extent their interaction with technology. And, others will say, 'I like it. I'm going to enjoy it,' and I think that's okay, too.

Jacob Howland: Yeah. Well, I love the fact that you mentioned the Sabbath, because that is a day of leisure. And, to sort of come to the punchline here about leisure, I'm very persuaded by Josef Pieper's famous essay, "Leisure: The Basis of Culture." And, he explains what he means by leisure, and it's fundamentally a kind of religious and philosophical openness. I'm using those terms, but what I'm trying to suggest is this kind of openness to the world, and in particular to the gift of the world. That is, it's a receptivity to what is outside of ourselves. And, Pieper thinks that that's the basis of culture, because we are informed by--and in a sense, we are given the necessary beginnings for a human life by--being open to the peace and order of the world, to the ultimate beginning, which is God. I mean, Pieper was a Christian. Right? But we can take this from a Jewish perspective, too.

We can put it in a philosophical way, too, by the way. I mean, Plato's account of the good as the sun. This is a source of light and life.

So, to things that are outside of us that inform us and that we receive--preferably with an attitude of gratitude--one of the things that's happening that I think affects our capacity for leisure is what people regard as a kind of totalization of life. That is, we focus on the useful--on utility--but we don't ask the question: What is it useful for? What is the good that this is serving? Or, to put it in more modern terms, what's the meaning of all this? At the end of the day, what am I doing?

And, I think what you're pointing to is the possibility--I would even say the necessity--of human beings kind of rediscovering leisure in that sense--that is, relating to the natural world, the given world.

One of the things that happens with AI is a kind of closing in, or at least that's my fear: the generation of virtual realities or even the loss of the sense of what real reality is. So, you mentioned the loss of connection. I understand that younger people don't even answer the phone. You have to text them and so on and so forth. So, now, everything is sort of electronically mediated. We don't have face-to-face contact.

This was already the case years ago. I remember a former student and his wife telling me that anybody they have over who is under 40 is generally incapable of sitting around the dinner table and having a real conversation, because they're not used to it--it's all mediated by the smartphone.

So, I do think we have to get back to that. And, I do think that we have to rediscover the skills of contemplation and meditation, not just in relation to the world, but in relation, again, to great books, works of art, things like this that can nourish us.


Russ Roberts: So, I mentioned on the show before, I think, that I saw an extraordinary performance of "Next to Normal" at the Kennedy Center a few years ago with Rachel Bay Jones, who is a musical performer. And, the night I saw the program--the show--she had a cold, I think. I could sense some challenges in her vocalization or singing. But, she was magnificent. She poured her heart into this performance. "Next to Normal" is about a damaged woman, a mentally damaged woman, and it requires an unbelievable vulnerability on stage. And so, I watched her go through this, and it was just an extraordinary experience. I think I've mentioned before: at the intermission, there was total silence in the theater. I mean, I was sitting next to my wife; we didn't talk. She went up and went to the restroom. She said in the ladies' room, there were probably 100 women. It was totally silent. People were so emotionally overwhelmed by this.

And, one of the reasons they were overwhelmed is that they saw another human being inhabiting a part. And, through the suspension of disbelief, we were on a journey with this fictional character. It's a credible part of the human experience.

Now, I can imagine there will be a day--and it's not very far away--where I could say: 'Eh, I don't really want to go all the way out to the Kennedy Center, and it's raining; and, you know, Rachel Bay Jones is good, but I want to see this with a different performer.' So, I'll be able to have any cast I want. They're already talking about this with movies. You'll be able to choose your own cast. You'll be able to put yourself in the movie. There'll be all these fabulous, emotionally rich, fake digital opportunities for us in the next 25 years. Maybe the next three years. It's coming--fairly soon, I think.

Is anything going to be lost from that? It feels like there is, but I wonder if our kids and our grandkids will sense any loss there; and for them, it will be normal.

So, when you think of different levels: I could have watched--I've mentioned this before also--if I watch a great performer on stage filmed, it's not the same as watching them on stage live. But maybe it's almost as good. Maybe they'll find some ways to make it better. We'll live in these virtual worlds. We'll have all these great, rich experiences that we conjure with our imagination. Should I be upset about that?

Jacob Howland: Well, I mean, I think you've answered your own question to some extent, because the experience that you and your wife had at the performance you mentioned is not going to be available if people are sitting in their own cave-like homes, watching this stuff on some screen.

This is a very interesting question because we have to think about what the meaning of those kinds of communal events is.

Now, we have this, for example, in sports. Although--I mean, I've never been to a professional football game, but I've heard plenty of stories about people going to games and how unruly the crowds are. And so, maybe watching large men bang into each other and compete in this way with lots of people drinking beer and stuff is maybe not the best case of, like, our coming together and having a meaningful experience. But, watching an amazingly powerful, dramatic performance is.

We've lost--I mean, as you know, people like Nietzsche have written about Greek tragedy and the significance of it, and this kind of sort of almost sacred space where people come together and grieve for the characters; they mourn. And they share something.

It's not easy to say exactly what that sharing means, but we know it means something. It affects us very, very deeply. And it brings us together as human beings. Or, perhaps in the case of Athenian drama, as Athenians--watching, let's say, Aeschylus' Persians, performed eight years after the Second Persian War, which everybody in the theater had experienced; and they're remembering the loss and so forth, and now they're sympathizing with the Persians who are on stage learning the horrible news of their defeat, etc.

There's something deeply human about those bonds. And, again, I'm concerned about the sort of disaggregation of communities. [More to come, 49:36]
