Tyler Cowen on the Risks and Impact of Artificial Intelligence
May 15 2023

Economist Tyler Cowen of George Mason University talks with EconTalk's Russ Roberts about the benefits and dangers of artificial intelligence. Cowen argues that the worriers--those who think that artificial intelligence will destroy mankind--need to make a more convincing case for their concerns. He also believes that the worriers are too willing to reduce freedom and empower the state in the name of reducing a risk that is far from certain. Along the way, Cowen and Roberts discuss how AI might change various parts of the economy and the job market.

RELATED EPISODE
Eliezer Yudkowsky on the Dangers of AI
Eliezer Yudkowsky insists that once artificial intelligence becomes smarter than people, everyone on earth will die. Listen as Yudkowsky speaks with EconTalk's Russ Roberts on why we should be very, very afraid and why we're not prepared or able to manage the...
RELATED EPISODE
Robin Hanson on the Technological Singularity
Robin Hanson of GMU talks with EconTalk host Russ Roberts about the idea of a technological singularity--a sudden, large increase in the rate of growth due to technological change. Hanson argues that it is plausible that a change in technology...
Explore audio transcript, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.

READER COMMENTS

Jon Lachman
May 15 2023 at 10:38am

Dr. Roberts,

We have at least two well studied stories predicting the path of the development of an intelligence embodied external to its C(c)reator:  Bereshit and Frankenstein (Ms. Shelley’s).

Shavua tov,

Jon

Ethan
May 15 2023 at 7:32pm

Former guest, Vinay Prasad, had a great conversation on the ChatGPT paper Tyler referenced comparing ChatGPT and actual doctors.

Dr Golabki
May 15 2023 at 11:08pm

To be sympathetic to Team Eliezer here, I think there’s a relatively conservative version of his story that is still pretty scary. Here are 2 claims that I actually don’t think are too controversial –

(1) General AI has a significant chance of being an existential threat to the human race.

(2) We are profoundly ignorant of how GAIs work, what makes them more or less of a threat, and how to control them if they are a threat.

Taken together, those two points make it seem like we are inventing the atomic bomb, but without understanding enough physics to know what sets them off, or how big the explosions will be. That strikes me as scary enough.

So what’s the relevant historical event or data you could bring to bear on a model? I think Eliezer would say the most relevant historical event is the evolution of humans, but 1-10 million times faster. I think it’s hard to model something where reasonable people can disagree on whether it’s more like the invention of the printing press, or more like The Great Oxidation Event.

Caillin Langmann
May 16 2023 at 12:04am

Hi Russ,

I’m a physician and I suspect AI is coming for my job in some manner. Or it might make me better. Who knows.

One thing I was thinking about after a recent case is physician gestalt. Sometimes the tests are inconclusive or even wrong, and yet the physician has a feeling that something is going on.


For instance, recently I had a case where a young woman had abdominal pain on the left side. The ultrasound showed an ovarian cyst but no sign of a twisted ovary, and normal blood flow.

However my clinical gestalt said to me that her ovary was twisted and the ultrasound was wrong.

I was right, and surgical exploration revealed a seriously compromised ovary that could have died, leading to fertility issues. How will AI do in these cases where all the tests are saying everything is ok?

Maybe it will learn how to have clinical gestalt…

Shalom Freedman
May 16 2023 at 3:34am

AI can imitate a human creator’s style, create with human creators on common projects, combine the human creator’s work with that of others, and create new content--perhaps destroying the idea of the individual human creation and creator.

John Hays
May 16 2023 at 10:37am

It would be good to have a program, or part of a program, on just what AI and ChatGPT are. The recent discussions on these issues sound a bit like inside baseball. What exactly are you talking about, and how does it differ from “normal” computer programming?

David Gossett
May 16 2023 at 6:48pm

I thought Tyler was absolutely brilliant. There is a ton of clickbait written by people who have zero business commenting on AI. Podcast moderators will have a guest on to talk about ChatGPT and the first question they ask is, “Have you had time to play around with ChatGPT?” This is like having a doctor on and asking if s/he is familiar with a stethoscope. How do you get invited as an expert with no knowledge of the topic?

EconTalk has recently had guests who should not be commenting on the perils of AI, or on AI at all. Funny enough, Cowen is the perfect example of someone who should never be invited to speak about the future of AI > https://en.wikipedia.org/wiki/Tyler_Cowen

And he didn’t, which blew me away. He basically said no one is qualified, including the CEO of OpenAI, to predict the future of AI. Cowen reminds me of Sapolsky and his emergence and complexity work. No one knows what is coming. The pattern breaks by the time you listen to this episode.


Physecon
May 16 2023 at 11:16pm

I’d really like it if Tyler would publish some of his unedited ChatGPT transcripts. Every time I try to use it to do anything of value it is full of errors and omissions. I have found its coding often has glaring holes that will prevent it from working. On top of this, its tendency to make up facts and references makes it dangerous.

Ok, it wrote a form letter for you…is that really that impressive?


Maybe I’m just bad at prompting it.

Chris Madson
May 17 2023 at 10:27am

The fallacies in Cowen’s arguments are so overwhelming that it’s difficult to respond to them all. Using the impact of the printing press as an analog to the potential impact of AI? Really?!!? The potential power and ubiquity of AI in this era are unparalleled— they are qualitatively different from any other breakthrough in human affairs. Not even the splitting of the atom is comparable. The only other potentially similar line of technological development at the moment is gene editing, which has risks of a similar magnitude.

I’m deeply concerned about the impact of AI, but Cowen is ready to dismiss my concerns out of hand, unless I develop a “model” of potential impact. Has he developed a model? If he were present when Gutenberg produced the first printed page, could he have produced a predictive model of any utility at all? The model concerned people are offering is in the rhetoric they produce, both written and oral. Cowen sneers at this model, while offering nothing different. And, how like an economist— the way we should debate this is with competing “models.”

Dr. Roberts and Cowen both speak casually about people having to adapt to a world with ChatBot or even AGI. They seem to believe that their particular vocations as educators and writers will remain untouched by these developments. At the current rate of AI development, I strongly suspect that their services will no longer be required in these fields or in most others. The last vocations to be rendered unnecessary will probably be jobs like auto repair that require a wide range of practical knowledge along with the ability to manipulate tools in physically demanding situations, but even they will yield, all too soon, to the economics, robotics, and superior analytical abilities of advanced AI.

I doubt that the machines will “come for us.” What concerns me is a world in which a human has no function, no vocation. I suspect that, in such a world of pampered welfare and lack of any sort of meaningful work, most people will simply slip into insanity.

Cowen’s lack of imagination is startling, but what really worries me is his feeling of casual superiority. It’s the kind of arrogance that Aeschylus wrote about at the dawn of civilization, the hubris that has caused so much suffering over the millennia. As our tools grow ever more powerful, that hubris grows ever more dangerous.

Earl Rodd
May 17 2023 at 6:12pm

I thought this was the most edifying of the EconTalk episodes on AI – not because it had the answers but because it had the questions. I thought Tyler Cowen (with whom I often don’t agree) did an excellent job of defining what we do know about AI and then what questions to ask.

While listening to the discussion, I had a thought. Russ and Tyler were talking about how AI creators thought they were near an end point in digesting all the information available to use for training. So consider this scenario: as time goes on, more and more of the content available on the Internet is itself AI generated. The tech press reported recently on a study of 40-some new “news” sites that were obviously AI generated – the goal of the sites was to get search engines to index them and offer clickbait to get ad revenue. So if an ever increasing portion of future training data comes from AI – including all its errors – then future AIs will have those errors plus new ones and so forth. This is the opposite of the assumed “increase” in intelligence, as if an AI machine were a living thing.

Dr. Duru
May 18 2023 at 8:13pm

Well THAT was a refreshing take on the promises and the risks of generative AI. I particularly liked that Cowen gave a solid framework for approaching this topic in a way that allows for decision-making and, importantly, a path for maintaining our freedoms while addressing the challenges.

Since Cowen compared this moment to the inventions of the printing press and electricity, I am wondering whether he is now much more optimistic about the prospects for innovation? In prior podcasts, he has deeply lamented this era as being devoid of innovation. (For example, see “Tyler Cowen on the Great Stagnation”, Feb 14 2011)

L. Burke Files
May 21 2023 at 11:52am

Excellent. A few gimlet-eyed observations on AI.

• AI is a tool, much like a hammer. It can be used to build a building or bop someone on the head. The danger comes when AI chooses which it prefers.

• Regulation, in some ways, makes AI more dangerous. The hope that it is regulated lulls us into dropping our guard. Somewhere, someplace – a person or group or nation will push the boundary too far. Further, some proposed regulations already give me chills. Part of China’s draft requirements for AI is that AI – “reflect the Socialist Core Values.” Content that contributes to “subversion of state power” would be banned. The state is the supreme being, not the individual human.

• Any program in AI that engenders the program’s survival or the mission priority could easily slide sideways. “Hello Hal, do you read me?” The error, my guess, would originate in the most innocent of programs, such as programs that monitor financial transactions, wastewater, or anticipate our color selection choice when buying paint. Other AI programs sense that program’s robustness against perturbations and adopt some of the code to become more robust.

• As a defense, civilization should consider some default settings that protect the sanctity of life and ensure that AI is to serve, not to hurt, civilization. Then we get into what “serve” and “hurt” mean.





AUDIO TRANSCRIPT
0:37

Intro. [Recording date: April 19, 2023.]

Russ Roberts: Today is April 19th, 2023, and my guest is Tyler Cowen of George Mason University. With Alex Tabarrok, he blogs at Marginal Revolution. His podcast is Conversations with Tyler. This is his 17th appearance on the program. He was last here in August of 2022, talking about his book with Daniel Gross titled Talent.

Today we're going to talk about artificial intelligence [AI], and this is maybe the ninth episode on EconTalk about the topic. I think the first one was December of 2014 with Nicholas Bostrom. This may be the last one for a while, or not. It is perhaps the most interesting development of our time, and Tyler is a great person to pull a lot of this together, as well as to provide a more optimistic perspective relative to some of our recent guests. Tyler, welcome back to EconTalk.

Tyler Cowen: Happy to be here, Russ.

1:25

Russ Roberts: We're going to get your thoughts in a little while on whether our existence is at risk, a worry that a number of people have raised. Before we do that, let's assume that human beings survive and that we merely have ChatGPT-5 [Generative Pre-Trained Transformer] and whatever comes next to change the world. What do you see as some of the biggest impacts on the economy and elsewhere?

Tyler Cowen: I think the closest historical analogies are probably the printing press and electricity. So, the printing press enabled a much greater circulation of ideas, considerable advances in science. It gave voices to many more people. It really quite changed how we organize, store, and transmit knowledge.

Now, most people would recognize the printing press was very much a good thing, but if you look at the broader history of the printing press, it is at least connected to a lot of developments that are highly disruptive. That could include the Protestant Reformation, possibly wars of religion--just all the bad books that have come out between then and now, right, are in some way connected to the printing press.

So, major technological advances do tend to be disruptive. They bring highly significant benefits. The question is how do you face up to them?

Electricity would be another example. It has allowed people to produce greater destructive power, but again, the positive side of electricity is highly evident and it was very disruptive. It also put a fair number of people out of work. And, nonetheless, we have to make a decision. Are we willing to tolerate major disruptions which have benefits much higher than costs, but the costs can be fairly high?

Russ Roberts: And, this assumes that we survive--

Tyler Cowen: Correct--

Russ Roberts: which would be a big cost if it's not true. But, just starting with that, and what we've seen in the last shockingly few months--we're not talking about the first five or 10 years of this innovation--where do you see its impact being the largest?

Tyler Cowen: These would be my guesses, and I stress that word 'guesses.' So, every young person in the world who can afford the connection will have or has already access to an incredible interactive tutor to teach them almost anything, especially with the math plugins. That's just phenomenal. I think we genuinely don't know how many people will use it. It's a question of human discipline and conscientiousness, but it has to be millions of people, especially in poorer countries, and that is a very major impact.

I think in the short to medium run, a lot of routine back-office work will in essence be done by GPT models one way or another. And then medium term, I think a lot of organizations will find new ways of unsiloing their information, new ways of organizing, storing, and accessing their information. It will be a bit like the world of classic Star Trek, where Spock just goes to the computer, talks to it, and it tells him whatever he wants to know. Imagine if your university could do something like that.

So, that will be significant. Not that it will boost GDP [Gross Domestic Product] to 30% growth a year, but it will be a very nice benefit that will make many institutions much more efficient. So, in the shorter run, those are what I see as the major impacts.

Russ Roberts: I'll give you a few that I've been thinking about, and you can agree or disagree.

Tyler Cowen: Oh, I would add coding also, but this we already know, right? But, sorry, go on.

Russ Roberts: Yeah, coding was my first one, and I base that on the astounded tweets that coders are tweeting where they say, 'I've been using ChatGPT now for two weeks, and I'm two to three times as productive.'

I don't know if that's accurate. Let's presume what they mean is a lot more productive. And, by that I assume they mean 'I can solve problems that used to take me two or three times longer in a shorter period of time.' And, of course, that means, at least in one dimension, fewer coders. Because you don't need as many. But, it might mean more, because it can do some things that are harder to do or were too expensive to do before, and now there'll be auxiliary activities surrounding it. So, do you have any feel for how accurate that transformation is? Is it really true that it's a game changer?

Tyler Cowen: I've heard from many coders analyses very much like what you just cited to me. They also make the point it allows for more creative coding. So, if a GPT model is doing the routine work, you can play around a lot more with new ideas. That leads to at least the possibility the demand for coders will go up, though coders of a very particular kind.

Think of this as entering a world where everyone has a thousand free research assistants. Now, plenty of people are not good at using that, and some number of people are, and some coders will be. Some economists will be. But, it will really change quite a bit who does well and who does not do well.

6:33

Russ Roberts: It's funny: I find this whole topic fascinating, as listeners probably have come to realize. It's probably the case that there are listeners to this conversation who have not tried ChatGPT yet. Just for those of you who haven't, in its current formation, in its current version that I have--I have the unpaid version from OpenAI--there's just a field where I put a query, a question, a comment.

I want to give a couple examples for listeners, to give them a feel for what it's capable of doing outside of coding. I wrote a poem recently about what it was like to take a 14-hour flight with a lot of small infants screaming and try to put a positive spin on it. I was pretty proud of the poem. I liked it. And I posted it on Twitter.

I asked ChatGPT to write a poem in the style of Dr. Seuss--mine was not--but in the style of Dr. Seuss about this issue. It was quite beautiful.

Then I asked it to make it a little more intense. And, it made a few mistakes I didn't like in language, but it got a little bit better in other ways.

And then for fun, I asked it to write a poem about someone who is really annoyed at the baby. I wasn't annoyed: I thought I tried to put a positive spin on the crying. And, it was really good at that.

And, of course, you could argue that it takes away some of my humanity to outsource my poetry writing to this third party. But that's one thing it's really good at, is writing doggerel. Rhyming, pretty entertaining, and sometimes-funny poetry.

The other thing it's really good at is composing emails--requests for a job interview, a condolence note.

I asked it to write a condolence note, just to see what it would come up with. 'A friend of mine has lost a loved one. Write me a condolence note.' It writes me three paragraphs. It's quite good. Not maybe what I would have written exactly, but it took three seconds. So, I really appreciated it.

Then I said, 'Make it more emotional.' And, it did. And, then I said, 'Take it up a notch.' And it did. And it's really extraordinary.

So, one of the aspects of this, I think, that's important--I don't know how transformative it will be--but for people whose native language is not English--and I assume it will eventually, maybe it already does talk in other languages, I use it in English--it's extremely helpful to avoid embarrassment, as long as you're careful to realize it does make stuff up. So, you have to be careful in that area.

I am under the impression it's going to be very powerful in medicine in terms of diagnoses. And, we thought this before when we were talking about, say, radiology. There was this fear that radiologists in the United States would lose work because radiologists in India, say, could read the X-rays. That hasn't, as far as I know, taken off. But, I have a feeling that ChatGPT as a medical diagnostic tool is going to be not unimportant.

The next thing I would mention, and I'll let you comment, the next thing I would mention is all kinds of various kinds of writing, which are the condolence note or the job interview request as just an example.

I met a technical writer recently who said, 'I assume my job's going to be gone in a few months. I'm playing with how ChatGPT might make me a better technical writer, because otherwise I think I'm going to be in trouble.'

And, of course, then there's content creations, something we talked about at some length with Erik Hoel. Content creation in general on the web, especially for businesses, is going to get a lot less expensive. It's not going to be very interesting in the short run. We'll see what it's capable of in the medium term, but the ability to create content has now exploded. And, those of us who try to specialize in creating content may be a little less valuable, or we'll have to try different things. What are your thoughts on those issues?

Tyler Cowen: Just a few points. First, I have heard it can already handle at least 50 languages, presumably with more to come. One of many uses for this is just to preserve languages that may be dying, or histories, or to create simulated economies of ways of life that are vanishing.

There's a recent paper out on medical diagnosis where they ask human doctors and then GPT--they give it a bunch of symptoms reported from a patient, and then there's a GPT answer and a human doctor answer. And, the human doctors do the grading, and GPT does slightly better. And, that's right now. You could imagine a lot more specialized training on additional databases that could make it better yet.

So, we tend to think about America, or in your case, also Israel, but think about all the doctor-poor parts of the world--including China, which is now of course, wealthier but really has a pretty small number of doctors, very weak healthcare infrastructure. Obviously many parts of Africa. It's really a game changer to have a diagnostic instrument that seems to be at least as good as human doctors in the United States. So, the possibilities on the positive side really are phenomenal.

Oh, by the way, you must get the paid version. It's better than the free version. It's only $20 a month.

Russ Roberts: Yeah, I've thought about it.

Tyler Cowen: That's the best [?] that you can make.

Russ Roberts: I thought about it, except I didn't want to advance the destruction of humanity yet. I wanted to think about it a few more episodes. So, maybe at the end of our conversation, Tyler, I'll upgrade.

The other thing to say about diagnostics, of course, is that what happens now when you don't feel well depends where you live and how badly you feel--how poorly you're feeling. So, here in Israel, I can get an appointment anytime. I don't pay. It's already included in everything. I can get a phone appointment, I can get a time to see my doctor. And, it's not a long wait, at least for me so far. In America, there were a lot of times I thought, 'I'd like to see a doctor about this, but I think it's probably nothing. And so, I'm going to just hope it's okay, because I don't want to pay the fees.'

And, I get on the web, I poke around, and of course, most of us have symptoms every day that are correlated with all kinds of horrific conditions. So, people who are hypochondriacs are constantly anxious. And, the main role of their doctor--and this is me sometimes--is to say, 'You're fine.' We pay lots for that. It's a wonderful, not unimportant thing. If ChatGPT or some form of AI diagnostic could reassure us that what we have is indigestion and not a heart attack, because it's not just looking at the symptoms and looking at simple correlations but knows what questions to ask the way a doctor would, and the follow-ups, and can really do a much better job, that is a game changer for personal comfort.

And especially, as you point out, for places where you don't have access to doctors for whatever reason, in easier, inexpensive form. I have a friend in England who says, I was telling them about some issue they're having and I say, 'Have you gone to the doctor?' 'What's the point? They're just going to say: Come back, until it's an open wound or until you pass out.'

But, if you think about it, anxiety about one's health is not an unimportant part of the human condition in 2023. And, I think the potential to have a doc in your poc, a doctor in your pocket, is really extraordinary.

Tyler Cowen: Yes. But, as you know, legal and regulatory issues have arisen already, and we have to be willing to tolerate imperfection, realizing it's certainly better than Google for medical queries. And, it can be better than a human doctor, and especially for people who don't have access. And, how we will handle those imperfections is obviously a major open question.

Russ Roberts: That's an excellent point. But, I would add one more thing that I think's quite important, and this is something you learn as your parents get older. And, they go to the doctor, you tell them what they should ask the doctor, and they forget or they don't know how to follow up. And so, even if some of these apps that we might imagine will not be easily approved, the ability to have access to something that helps you ask good questions, which I think ChatGPT would be quite good at--'I have these symptoms. What should I ask my doctor? What else should I be thinking about?'--just gloriously wonderful.

Tyler Cowen: Absolutely.

14:54

Russ Roberts: What do you think about this issue of content creation? And, do you think there's any chance that we're going to be able to require or culturally find ways to identify whether something is ChatGPT or not?

Tyler Cowen: Oh, I think GPT models will be--already are--sufficiently good: that if you ask them for content the right way, it's very hard to identify where it came from. If you just ask it straight up questions, 'What causes inflation?' you can spot a GPT-derived answer even without software. But, again, with smarter prompts, we're already at the point where--you know, I sometimes say the age of homework is over. We need to get used to that.

And, for a school system, some of the East Asian systems that put even more stress on homework than the U.S. system, they're going to have to reorganize in some fundamental way. And, being good at a certain kind of rote learning will be worth much less in labor markets. So, the idea that that's what you're testing for will no longer be as--I'm not sure how meritocratic it ever was, but it can be fairly meritocratic. But, it won't be any more, and it will be hard for many nations to adjust to that.

Russ Roberts: Yeah, I view that as a lot of people are anxious about the impact on teaching and grading essays, exams. I think it's fabulous.

Tyler Cowen: I agree.

Russ Roberts: I think--the Age of Homework is a bad Age. So, if you're right, I think that's a pretty much unalloyed benefit. Other than math. I think that it may end up that we do our math homework in class, where we can't secretly use ChatGPT to help us answer and we use home for something else. Something like that.

Tyler Cowen: But, our educational institutions are often the slowest to adapt. And, you as president of a university, you must be facing real decisions, right?

Russ Roberts: Oh, yes, Tyler. It's such a burden. No, we're not, because we're a small seminar place. We don't have lectures. There's no way that the papers and essays that our students write could be done by ChatGPT, at least at anything remotely like the current level.

Tyler Cowen: Oh, I wouldn't be so sure about that. It's not that GPT can write the whole paper, but in my law and literature class now, I've required my students to write one paper with GPT. But, then they augment, they edit, they shape. Those have been very good papers. So, you're going to get work like that now and have to--

Russ Roberts: Yeah, that's true. And, I don't have any problem. What?

Tyler Cowen: You're going to have to make a decision. Do you allow it? Do you recognize it? What are the transparency requirements? Do you grade it differently? This is now, this is not next year.

Russ Roberts: Yeah, no, my view on this--and it sounds like it's very similar to yours--let's start with the condolence note. Okay? So, I write a friend a condolence note. And, by the way, people have talked about putting a watermark on ChatGPT. That's not useful. I'll just recopy it. It's silly in this kind of setting. Maybe in a 40-page paper, maybe. So, I write a condolence note to a friend, say, and I go through various iterations that I mentioned earlier. And, I pick the one that I think sounds most like me. Is there anything wrong with that?

Tyler Cowen: I think it's fine, but to the extent that intersects with institutions for certifying people, ranking people, assigning different slots in universities and awards to people, it does mean a lot of other practices are going to have to change. And, they'll have to change from institutions that are typically pretty sticky.

Russ Roberts: But, surely, whether my friends think I'm an actually empathetic person might even be more important than whether I certify someone as skilled in economics. I think there is something lost when I outsource a condolence note. I've mentioned it briefly elsewhere, I don't think on the program, but the play here, Cyrano de Bergerac, by Edmond Rostand, that's what that's about. It's about a person who is gorgeous, a young man who is gorgeous, who falls in love with a beautiful woman; and he's inarticulate. And, he gets a very unattractive person to whisper the sweet nothings into his ear that he can pass on as if they were his own. And, that turns out not to have the best set of outcomes. A beautiful play, by the way. If you haven't seen it, it's been adapted in movie form in various ways. And, they're all pretty good.

Tyler Cowen: How are you [?] in real life? Sorry, go on.

Russ Roberts: Say that again?

Tyler Cowen: How you behave in real life might matter more. So, how you behave in textual life, anyone can now fake. So, your charisma, your looks, how well you express empathy, probably those premia will rise. And, again, that will require a lot of social adjustment.

Russ Roberts: That's very well said. I think, yeah, the fact that you probably get to a point where we can adjust those, too: the way my eyes look and how much I smile, and who knows. But, certainly for a while, there will be a premium put on authentic face-to-face interaction that can't be faked. And, of course, when you write a book or an essay, forget being graded. When you write a book, I don't know about you, Tyler, I ask friends for comments. And, you know what? I take them sometimes, and I thank them, just as people, I think, will for a while maybe thank ChatGPT. But, is it that much different that you run your draft through a ChatGPT version and then augment it, change it?

Tyler Cowen: ChatGPT gives me good comments as well. But, again, I do think there's a genuine issue of transparency. If someone is hiring you to write something, what are they getting? What requirements are they imposing on you? What is it you need to tell them? I don't use GPT to write, say, columns. It just seems wrong to me even though it might work fine. I think I shouldn't do it. That readers are reading for, like, 'The' Tyler Cowen. And, well, there's all these other inputs from books, other people's blog posts. And, the input from GPT is for the moment, somehow different. That's arbitrary, but that's the world we're living in.

Russ Roberts: Well, I'm not going to name the columnist, but one columnist recently wrote a piece I thought could have been written by ChatGPT. It read like a parody of this person's normal writing. And, of course, while I am interested in the real Tyler Cowen, sometimes the real Tyler Cowen is actually doing natural ChatGPT on his old columns. Not you personally, of course, Tyler. But I think a lot of columnists get in a rut. And, it will be interesting to see what happens there.

Tyler Cowen: I have the mental habit now when I read a column, I think to myself, 'What GPT level is that column written at?' Like, 'Oh, that's a 3.5' or 'Oh, that's a 4.0.' Occasionally, so maybe 'That's a six or a seven.' But a lot of it is below a 4, frankly--even if I agree with it and it's entirely correct. It's like, 'Eh, 3.4 for that one.'

Russ Roberts: Yeah, well, that's why there will be a premium, I think for some time, on novelty, creativity to the extent that ChatGPT struggles with that. It's somewhat sterile still right now. So, we'll see. It's going to get better at some point. It may be very soon. We'll talk about that, too, in a little bit.

22:32

Russ Roberts: Let's turn to--is there anything in what we've talked about so far that you would regulate? Try to stop, slow down? Or we just say, 'Full steam ahead'?

Tyler Cowen: I think that's too broad a question. So, I think we need regulatory responses in particular areas, but I don't think we should set up, like, 'The' regulatory body for AI--regulating it as a single thing doesn't work well. Modular regulation that, as the world changes, in turn needs to change.

So if, say, a GPT model is prescribing medicines--which is not the case now, not legally--that needs to be regulated in some manner. We may not know how to do it, but the thing to do is to change the regulations for prescribing medicines, however you might wish to change those. That, to me, makes more sense than some meta-body regulating GPT. So, I think the questions have to be narrowed down to talk about them.

Russ Roberts: Do you think there's any role for norms? Now, you just confessed to a norm that you would feel guilty--and I'm trusting you on this, Tyler. For all I know, you've written your last 18 columns with ChatGPT. But, is there any role for norms to emerge that constrain AI in various imaginable ways?

I can imagine someone saying, 'Well, I could do that with ChatGPT, but it probably isn't right, so I won't do it.' And, that would be one way in which--and not just that--but I could develop a version of ChatGPT that could do X, Y, Z, but I don't think humanity is ready for that. That seems a little bit harder for people to do. Do you think there'll be some norms around this that will constrain it in some way?

Tyler Cowen: Oh, there's so many norms already. And, to be clear, I've told my editor in writing that I don't use GPT to write my columns, just to make that clear.

Here's one example. There are people using dating apps where the texting or the content passed back and forth is generated by GPT. I'm not aware of any law against that. It's hard to believe there could be one since GPT models are so new for this purpose. But it seems, to me, wrong. There are norms against it, that when you meet the partner you've been texting with, they'll figure this out. They ought to hold it against you. I hope that norm stays strong enough that most people don't do this, but of course there's going to be slippage--getting back to Cyrano, right?

Russ Roberts: Yeah, yeah. It's like people being honest about what their age is online. There seems to be a norm that it's okay to not tell the truth, but I don't know: when you uncover that, it's a pretty unpleasant surprise, I think, for some people.

25:15

Russ Roberts: Well, let's turn to the issue of so-called alignment and safety. We recently had Eliezer Yudkowsky on the program. He is very worried, as I'm sure you know, about AI. You seem to be less so. Why do you think that is?

Tyler Cowen: Well, let me first start with the terminological matter. Everyone uses the phrase 'alignment,' and sometimes I use the word as well; but to me it suggests a social welfare function approach to the problem. That, there's one idea of social good. As if you might take that from Benthamite utilitarianism. And that you want the programs--the machines--all aligned with that notion of social good. Now, I know full well that if you read LessWrong, Effective Altruism Alignment forums, plenty of people will recognize that is not the case.

But, I'm worried that we're embodying in our linguistic practices as a norm, this word that points people in the Kenneth Arrow, Jeremy Bentham direction: 'Oh, everything needs to be aligned with some notion of the good.'

Instead, it's about decentralization, checks and balances, mobilizing decentralized knowledge. That, Hayek and Polanyi should be at the center of the discussion. And, they're all about 'What are the incentives?' It's not about information and knowledge controlling everything, but again, it's about how the incentives of decentralized agents are changed. And, too much of the discourse now is not in that framework.

But, I mean, here would be my initial response to Eliezer.

I've been inviting people who share his view simply to join the discourse. So, they have the sense, 'Oh, we've been writing up these concerns for 20 years and no one listens to us.' My view is quite different. I put out a call and asked a lot of people I know, well-informed people, 'Is there any actual mathematical model of this process of how the world is supposed to end?'

So, if you look, say, at COVID [corona virus disease] or climate change fears, in both cases, there are many models you can look at, including--and then models with data. I'm not saying you have to like those models. But the point is: there's something you look at and then you make up your mind whether or not you like those models; and then they're tested against data. So, when it comes to AGI [artificial general intelligence] and existential risk, it turns out as best I can ascertain, in the 20 years or so we've been talking about this seriously, there isn't a single model done. Period. Flat out.

So, I don't think any idea should be dismissed. I've just been inviting those individuals to actually join the discourse of science. 'Show us your models. Let us see their assumptions and let's talk about those.' The practice, instead, is to write these very long pieces online, which just stack arguments vertically and raise the level of anxiety. It's a bad practice in virtually any theory of risk communication.

And then, for some individuals, at the end of it all, you scream, 'The world is going to end.' Other people come away, 'Oh, the chance is 30% that the world will end.' 'The chance is 80% that the world will end.' A lot of people have come out and basically wanted to get rid of the U.S. Constitution: 'I'll get rid of free speech, get rid of provisions against unreasonable search and seizure without a warrant,' based on something that hasn't even been modeled yet.

So, their mental model is so much: 'We're the insiders, we're the experts.' No one is talking them out of their fears.

My mental model is: There's a thing, science. Try to publish this stuff in journals. Try to model it. Put it out there, we'll talk to you. I don't want to dismiss anyone's worries, but when I talk to people, say, who work in governments who are well aware of the very pessimistic arguments, they're just flat out not convinced for the most part. And, I don't think the worriers are taking seriously the fact they haven't really joined the dialogue yet.

Now on top of that, I would add the point: I think they're radically overestimating the value of intelligence. If we go back, as I mentioned before, to Hayek and Polanyi, pure intelligence is not worth as much as many people think. There's this philosophy of scientism that Hayek criticized. And, the people who are most worried, as I see them, they tend to be hyper-rationalistic. They tend to be scientists. They tend not to be very Hayekian or Smithian. They emphasize sheer brain power over prudence. And, I think if you take this more Adam Smith-like/Hayekian worldview, you will be less worried.

But, we still ought to recognize the costs of major technological transitions as we observe them in history. They are, indeed, very high. I do not want to have a Pollyanna-ish attitude about this.

Russ Roberts: Well, that's very well said. I want to start with the point you made about modeling. I don't demand a mathematical model, I don't think--

Tyler Cowen: I do, to be clear. But, go on.

Russ Roberts: You do or you don't?

Tyler Cowen: I do. And, again, I'm not saying the model will be good--I don't know. But, if it's not good, that's one of the things I want to know.

So, the COVID models, I would say they weren't very good. But I'm delighted people produced them, because we all got to see that. Again, opinions may differ, but that's part of the point of modeling. Not that models explain everything--that's the bad defense of modeling. The good defense is: 'Let's see how this is going to go.'

Russ Roberts: Well, let me put on my--I want to channel my inner Eliezer Yudkowsky, which may not be easy for me, but I'll do my best. I think his argument--he has a few arguments--but one of them, the one I find most interesting--I did not find it completely compelling and I would not call it a model. I would call it a story.

And, I find stories interesting. They can help you understand things. They can lead you astray--just like a model can, that's mathematical.

But, his story is that in the course of what you and I would call emergent processes--the example he uses is the creation of, say, a flint axe, a hand axe--natural selection works inexorably on the fact that people who make better ones are going to leave more genes behind.

And so, that relentlessly, and for that technology, pushes it to improve.

And, no one is asking the technology to improve. There's no designer other than perhaps God, but there's no human force or will to push that process. It's the natural incentives of an emergent process.

His claim, then, is that out of that will come other desires--akin to our desire to go to the moon, which has nothing to do directly with leaving more genetic copies behind or improving our fitness. It's just something that comes along for the ride.

And, his claim is that as intelligence grows in these models, these--and this is what I think is called the orthogonality thesis--I like the word orthogonal. I can say it. Orthogonality is much harder--but the orthogonality thesis is that you're going to get these emergent goals for these intelligences. Whether they're conscious or sentient is irrelevant.

Now that's a story. It's not a model. It would be really hard to model it. It'd be really hard to model it in a way that would be, I think, compelling for a skeptic, but I think it's worth taking seriously. Do you disagree?

Tyler Cowen: No, we absolutely should take it seriously. I think we should try to build those models. Think of an ecological population model where species co-evolve. It could end up like me and my dog where we get along great. Humans select for dogs that do their bidding. At the same time, the dog is pretty good at manipulating you, right? You feed it, you take it for walks, you spend time petting it. I'm not convinced that's the outcome, but it seems to me at least as likely an outcome. There are checks and balances in the system.

In the short run at the very least, maybe the medium- and long-term as well, there's a lot of pressures for humans to recommend systems that make them happy. It could be the real danger is a kind of Aldous Huxley world, where you're just entranced by your AI because it's so nice to you.

Again, I'm not sure that's the case, but that seems to me more likely than the scenario of it killing us.

I would just make a general point about risk communications. The way you get better outcomes when you talk about issues is to give people positive, constructive steps they can do in the short run, not panic them, and not turn an issue into something that will be polarized. A lot of the doomers, it seems to me, they do the opposite of that. They should be much more constructive. Teach us how to flex our regulatory muscles in sound ways consistent with the U.S. Constitution. I would be very enthusiastic about that.

But also, I mean, maybe this argument is a bit of an ad hominem, but I do take seriously the fact that when I ask these individuals questions like, 'Have you maxed out on your credit cards?' Or, 'Are you short the market? Are you long volatility?' Very rarely are they.

So, I think when we're framing this discourse, I think some of it can maybe be better understood as a particular kind of religious discourse in a secular age, filling in for religious worries that are no longer seen as fully legitimate to talk about. And, I think that perspective also has to be on the table to deconstruct the discourse and understand it is something that doesn't actually seem that geared to producing constructive action or even credibly consistent action on the part of its proponents.

Another general point I would make--here's another thing we need to consider. If AI is truly very powerful, is the chance that it will save us from other existential risks higher than the chance it will kill us?

Now, I genuinely don't have a good idea how to even go about answering that question, but the doomers are so convinced the chance it will kill us is much higher than the chance it will, say, save us from an asteroid. Again, that just feels to me like the pessimistic doom is something they want to build into the view. I don't think the substantive or even modeled or empirical arguments for seeing one as larger than the other are very strong.

So, I just think we need to step back and say to people, 'Wake up.' Don't be entranced by the emotionality and mood affiliation of this discourse. We need to look for sane, sober, constructive, step-by-step things we can do in the short term, not polarize the issue, and think more constructively.

Russ Roberts: Well, I'm going to come back to that probability it'll save us as well as possibly harm us. We'll return to that in a little bit.

36:37

Russ Roberts: But, couple thoughts. One of my favorite moments really in the history of EconTalk is when I realized on air--I think it was on air; I certainly said it on air--that Nicholas Bostrom's view of artificial intelligence is no different than the medieval vision of God--the Aristotelian vision--all-powerful, omnipotent, omniscient. Will know what you're going to think next, can manipulate you in ways that make you do its bidding.

He didn't like--I asked if anyone ever mentioned that. He said no. He was rather taken aback by it, in my memory. We can go back to listen to that 2014 episode. I'm not accusing anyone of looking for alternatives to religion, but I do think there's a theological aspect to this, akin to religion, in the role it can play in how people think about their lives. It certainly is an end-of-days kind of discussion. I'm not accusing anyone of any kind of irrational thinking at all, but I do think it has things in common with it, as you're alluding to.

Other thing, a couple things I'd mention. One is--this is a G-rated program--but when you mention that people may find it comforting to talk to ChatGPT, I do think that application is going to be quite pronounced and extensive, that people will turn to it for solace of all kinds.

And finally, when you mention the dog analogy, I think it's an opportunity for OpenAI or others instead of naming their things like Bing and Sydney and Bard, they should name it Fido and Rex and Lassie. That will encourage people to think they have a sort of, I don't know, cooperative relationship with their virtual pet.

Tyler Cowen: I think as a general point of view, I found it very interesting to speak with clergy, priests, rabbis and also national security people about AGI existential risk. They're both groups of people that have a lot of practice thinking about existential risk, and they take it seriously. They don't dismiss it. In a sense, you could say it's their profession. But, they've also heard a lot over long periods of time about existential risk.

And, I find they frame it in saner ways. They definitely do not have Pollyanna-ish worldviews, but they are able to step off--down--from the ledge of doom, and recognize these are arguments that have recurred throughout human history on a very regular basis. There are these very serious risks, but there's also a Millenarian tendency in human thought that tends to rise in volatile times. And, we have to think about how to deal with those Millenarian tendencies. They are themselves a form of risk: that our reaction to an event can be worse than the event itself, even if the event involves some very high costs.

So, when I see commentators arguing, 'Well, we need to have the government take control and monitor everyone's computer drive and have the option of shutting it down,'--that was in a recent blog post--based on something that hasn't even been modeled yet. It's not just that I disagree, but I think there's something about how the discourse has proceeded where you need someone to just raise his hand and shout a bit to everyone, 'Something has gone wrong here. You all need to wake up.' And, my own role in this debate, I think, largely has been that.

Russ Roberts: Well, that's also what the other side is yelling: 'Something's going wrong here. You need to wake up.'

And, you're saying, 'Why don't you make the case a little more persuasively for me? I'm not an idiot. Do a better job convincing me. And, do it in ways that we would,' as you say, 'follow the norms of discourse that have been productive in the past for improving our safety, our security, our wellbeing, and so on.' Right?

Tyler Cowen: Right. So, if someone says, 'Well, here's my proposal for how we should let or not let GPT systems prescribe medicine'--one proposal for the United States; maybe a different proposal for a poorer country--I mean, I'm absolutely all ears. And, that will actually help our ability to address the larger issues. If you just start off with, 'Oh, here's this big AGI thing, and in 17 years it's going to kill us all,' that is one of the best ways you can imagine not to make progress on a problem. In fact, it's moving backwards.

Russ Roberts: Yeah, it certainly induces apathy rather than zealousness in trying to cope with it. But, it's going to be a fascinating time.

I would mention, I think it was yesterday, kind of spoiling all the fun, Sam Altman of OpenAI announced that, 'Well, we've kind of exhausted all the benefits from this larger dataset strategy.' In everybody's mind, it's 'Wow, look how much better ChatGPT'--we need a better name for it, by the way--'ChatGPT-4 is better than 3. And then, 5 is so much better than 4. Oh my gosh. Think what 14's going to be like.' And, the answer is, 'Well, we're stuck at 5 for a while, folks. We're going to have to find some new techniques.'

So, there is a tendency to think that the rocket taking off, the pace of improvement is therefore, like Moore's law, to be just extrapolated endlessly into the future, when not only will it write a great condolence note, but fill in the blank. It may take a little longer than people were thinking.

Tyler Cowen: As you know, iPhones, at least for a while, have plateaued. That was some while ago. I don't personally know whether that's the case with GPT models and LLM [Large Language Models] models more generally. There is the advantage now that they're good enough in essence to grade themselves. But, some of the issues are: Well, if right now it gets a hundred or 99th percentile on a math test, with plugins, well, it can't get 110%, right?

So, then you bump into the question, 'Well, how creative can they be?' And, they are somewhat creative already, but that may be a very different problem than 'Can they get a hundred percent on the test?'

So, I don't know. But, I would just say whatever anyone says from a company, I wouldn't take it at face value. They may not know. Their competitors may do something they haven't foreseen. This is the market process. There is competition in place, in a very real way. And, I certainly don't rule out very rapid and very startling advances, even from our current place.

43:18

Russ Roberts: Do you have any ideas about the profit motive in this process, or does that give you comfort?

Tyler Cowen: Well, I'm not sure it always is the profit motive, for one thing. So, I don't want to offer comments on particular companies, but very large companies often behave as bureaucracies, and they're not profit-maximizing. And they face regulatory constraints, which often limit the good and also the bad sides of profit-maximizing.

So, we're not seeing profit-maximizing right now. And on top of that, OpenAI is a nonprofit. Now it's affiliated with a for-profit. It's a very complicated structure. I don't pretend to understand how it works, but I would just say I'm not sure how to model the current AI competition in the United States. And, I think it's a very complex problem people should work on much more.

Now, if you ask what do I think of the regulatory process, who or what exactly in government? And, I do mean exactly: I would like to hear much more on this question from the doomsters. Who is capable of making the problem better rather than worse?

Is it the FTC [Federal Trade Commission]? In my opinion, no. All their doomsaying about concentration in social media clearly has been shown to be wrong. And they're still at it, trying to take down those companies.

One Biden proposal gave some authority to the Commerce Department. They may be good in the sense they might not do that much, but their expertise in this area seems to me highly limited. They tend to be quite mercantilist, looking for national champions. That might be good if you're relatively positive on these developments, but it doesn't actually seem like a good match if something substantive needed to be done.

So, I think we should focus much, much more on the question: Who exactly should be doing whatever. And, there's not such a good answer when you look at the details.

Russ Roberts: It'd be interesting to challenge the industry to create an advisory council of people from within the industry, people outside--I would put you on it, Tyler, in a second; I might even put an Eliezer Yudkowsky on it in a minute because it would be good to have that voice--where some of this conversation could take place maybe in an organized fashion and consider guidelines.

But, of course, even that model leads sometimes to the worst kind of collusion and other things. Restraints on innovation that might actually be harmful, so--

Tyler Cowen: But, I would favor an antitrust exemption to allow that, if one were legally required. And I can see quite possibly that it might be.

In the meantime, I would like the people working on this issue as outside commentators to just state outright that they favor the First Amendment, that they favor Constitutional protections against unreasonable search and seizure of your property. And, just to see who in the debate is willing to make that move, and who is really looking to undercut the U.S. Constitution, because they've so talked themselves into a theory that is not validated that they're willing to do quite extreme things.

And, a group like the Effective Altruist people who I think have been an enormous benefit to discourse in almost every way, but we're now facing an issue where they have to choose. I mean, is their loyalty to the U.S. Constitution, or is it to a Benthamite utilitarian notion, as would come out of their discussion boards? And, I hope they choose the right way.

So, I'm willing to say, I think the U.S. Constitution has proven more robust than the rationalistic arguments you see in LessWrong blog posts. It has held us back from many errors we might have committed out of excess enthusiasms or excess rationalism. And, I would just like to see that reaffirmed as a framing of the debate. So far, I'm seeing the opposite.

Russ Roberts: Well, if I remember correctly, I think it was in our conversation or in a blog post, Eliezer--and I'm sure he is not alone--has suggested that we would have to monitor GPU [Graphics Processing Unit]--that is, computing capacity--and that places that exceeded that capacity, which would suggest that they're training models on the larger and larger amounts of data that this kind of process requires, would be subject to a war. Very concerning.

Tyler Cowen: Don't forget bombing [?] data centers. That's a violation of international law, as I understand it.

But again, I think that's illustrating the danger of people talking themselves into things on the basis of very long, vertically stacked arguments that are actually not resonating with most other people in the world. And again, we just need a wake-up call to have a reframing, not let emotion and mood affiliation get the best of us. Look for positive, constructive steps.

Russ Roberts: Just to make it clear, AGI, which you have mentioned, is Artificial General Intelligence. And, LLM are Large--

Tyler Cowen: Language Models--

Russ Roberts: Large Language Models. Correct. Referring to the large data sets of trillions of words that these projects have been trained on.

Tell our listeners what you mean by 'mood affiliation.'

Tyler Cowen: Mood affiliation occurs when you become attached to a particular mood rather than a substantive conclusion based on models, data, and analysis.

So, there are people who just are pessimists, and they look for pessimistic interpretations of phenomena. There are people who are just optimists. That's also a fallacy. You're not supposed to be choosing across moods. Your mood should be a conclusion. But, people, when they're in moods, find it hard to get out of those moods. So, the point is not to do your thinking based on a mood-first approach--which everyone will say they don't do. But, in this debate, I'm seeing quite a good deal of it.

49:23

Russ Roberts: Let's close and talk about intelligence. It is, in my view, a suitcase word--a word that people cram a lot of different things into. And when they use the word, they don't always mean the same thing by it as someone else does. It can mean 'Knows a lot of things.' It can mean 'Can solve problems.' It can mean 'Can take knowledge I've learned in one context and apply it in a different one.' It can mean 'Solves all problems effortlessly.'

And, it seems to me people often treat intelligence like a scalar--a single number. IQ [Intelligence Quotient] would be one measure that is a scalar. It's just a number: if your IQ is higher than mine, you are, quote, "smarter" than I am. Which is probably true, Tyler. But, I don't think that's the right measure--nor are any of those measures. First of all, they're all different. And, I don't think having more intelligence, in the way that, say, ChatGPT-5 has more than ChatGPT-3, solves all problems.

Most of the problems of the human experience are not solvable. They involve trade-offs. I come back to our classic--the dictum of our profession--'No solutions. Only trade-offs.'

And, trade-offs therefore require judgment. And, ChatGPT will never provide that, unless you believe in a social welfare kind of approach that you alluded to earlier.

And so, I think the belief that, quote, "smarter and smarter" computer tools will help us solve more and more problems is simply incorrect. They will solve many problems, and some of them will be quite important--potentially avoiding an asteroid. But, many of the problems of the human experience are not due to a lack of intelligence. And, I think that's an understanding that you and I have trained into our bones from being economists, as both students and teachers, over the years. And, I think it's very alien to the computer science community.

Tyler Cowen: Absolutely.

Russ Roberts: What's your reaction?

Tyler Cowen: Absolutely. Everything is coded into a program in one worldview. In what you might call the Austrian Economics worldview, decentralized and often inarticulable knowledge, as outlined by Michael Polanyi, is critically important for just about everything that happens. GPT models do not in any direct way access that. They're not trained on it.

There is a sense in which they digest a version of the outputs from the process based on decentralized and inarticulable knowledge. So, they have a good deal of cognitive oomph.

But, the notion of AGI, to me, is not entirely well-defined. Getting back to this point of multi-dimensionality: it's not just prudence or wisdom or judgment, but even cruder notions--how you realize your intelligence through physical actions in the world in the way that, say, LeBron James does. A phenomenally smart human being in both the intellectual sense and the physical sense, and in integrating the two. That's yet another dimension we haven't considered.

So, this idea when people say, 'Oh, AGI is X number of years away'--that doesn't make sense to me. If your notion of AGI is a particular kind of smarts, you could argue we're already there. I mean, GPT-4 has been measured at an IQ of 130. It scores very high on the GRE [Graduate Record Examination], sometimes close to or at a perfect score. But it's a device--and, again, we need to keep the broader perspective. And, I'm hoping that you, in particular, with your training in Hayek and Smith, don't let the doomers sell you the emotion.

The correct attitude here is to take lessons from history. History is one form of knowledge. It is not always generalizable, but there's a lot you can learn from it.

And, one thing we learn from it is that the options and possibilities for progress are true and real, and can make our lives much better.

But, the disruptions are very true and real as well.

Typically, the world doesn't end. The real problems of mankind tend to be foreign conquest, pandemics, war, environmental issues. That's likely to remain the case over the future.

You have to ask, especially in a world with hostile powers, is American artificial intelligence likely to help with those or hurt with those? I think to have the intuition--you know, again, subject to critical scrutiny and discourse--that it's more likely to help with them is a better starting point than a lot of what I'm hearing. So, just a lot more historical reasoning in the debate, based on these broader frameworks that are not just computer science.

54:24

Russ Roberts: Well, I think it's a little bit tricky. I'm not sure history is--I think we should learn from history, and we should certainly invoke history, and we should certainly understand emergent things. But, it is a little more complicated than that.

I don't think that my new-grown sympathy to the concerns here is an emotional response; rather, it's a response to the role that emergence plays in some of the doomsayer stories. It's intriguing to me. It's not persuasive. I find it thought-provoking.

But, I want to say something about your GRE point, and try to avoid making it too personal for any players in this conversation beyond you and me. If I were stuck on a desert island with my survival at stake, put into a hostile environment--say, a place where my kind, whatever kind that is, is not smiled upon--I think one of the last things I'd want is a high-GRE person.

I just--using that as a measure of power--brain power, intellectual power, problem-solving power--just strikes me as absolutely wrong. Absurd. It is an incredibly uninteresting measure. Sorry: there is an interesting aspect to it. It's kind of a dog-on-its-hind-legs kind of thing: that it can be done at all--that a computer program can pass a bar exam or get a good score on Bryan Caplan's econ exam.

But, it really is not so useful in the kinds of contexts that human beings find themselves in--other than applying to college, or doing well in college.

And so, part of that is your point about the physical world. I don't just want something to tell me smart things. I want to be able to act in those settings.

But, we understand that in 2023, much of life takes place outside of those physical settings. It takes place in our virtual world, in our imagination, online, in various different forms. But, even there, I do not think the ability to do well on a standardized test or an IQ test--of any kind--is remotely connected to intelligence. I just find that bizarre. Do you agree?

Tyler Cowen: One of my predictions is that GPT models will raise the relative--and indeed absolute--wages of carpenters. So, if you're a very young person, or if you have kids and they're thinking, 'Well, what should I do for a career now?'--you face some very serious decisions. Now, it might be that you're a wonderful general manager of GPT models, and you do incredibly well managing your thousand research assistants. If so, great. But, if you're just producing fairly routine, word-based content in whatever form, you probably need to give a fairly serious rethink to your career plans.

And, I think there'll be a lot more science, a lot more ideas, a lot more projects. And, like, very good gardeners, very good carpenters, people with synthetic abilities who can make things happen--the kind of person who in a lab actually helps build the fusion reactor, say, rather than just writing about it--are already becoming much more valuable.

And it's going to disrupt many things. So, you and I are broadly of the same generation. It won't change my life that much. Even for me, though, I'm not sure I'll write another book. My plan is to go around and actually give more talks. I think that will be more rewarding. It's not that I think GPT can write as good a book right now, but I think people will be playing with their GPT models, rather than reading my book, at some point. So--

Russ Roberts: And they'd love to look into your eyes and shake your hand after a talk, and chitchat for a minute and a half, and have a human experience, rather than read another paragraph of you. Possibly.

Tyler Cowen: We all need to rethink what it is we're doing, reevaluate our professional lives. It's a very serious matter. It will disrupt many of us, often in good ways, sometimes in bad ways.

But I would just here urge against complacency. Please take this seriously, give it a rethink. Look for things you can do now to make this a better future rather than a worse one.

58:59

Russ Roberts: But, I think it definitely puts a premium on human interaction; and people who are good at that--whatever form that takes--will, I think, remain incredibly important.

Or, we're all going to retreat into our bedrooms and play on our phones all day--which a lot of people are doing. But, there's also a backlash against it, even among young people. So, I think that part is going to be extremely interesting.

But, it is going to force us to think about what it is to be human. Which is not a bad thing. And, that's what I find myself thinking about when I think about that condolence note, or interacting with an avatar of Tyler Cowen rather than the real thing. And, of course, I'm doing this over Zoom with you, Tyler. It's an inferior form of interaction, but it's extraordinary that we can do it at all across 6,000 miles, or 7,000, whatever it is. So, I think that's pretty cool.

Tyler Cowen: It will be a fascinating future. Very weird in many ways. It may not feel weird to the people born into it, but we should all be ready for it. And, I'm very glad you're doing these episodes on AI to help get us ready.

Russ Roberts: Well, I appreciate your wisdom, which I'm very confident--if I read the transcript--is not ChatGPT. Tyler, thanks for being part of EconTalk.

Tyler Cowen: Thank you, Russ.