Zvi Mowshowitz on AI and the Dial of Progress
Aug 7 2023

The future of AI keeps Zvi Mowshowitz up at night. He also wonders why so many smart people seem to think that AI is more likely to save humanity than destroy it. Listen as Mowshowitz talks with EconTalk's Russ Roberts about the current state of AI, the pace of AI's development, and where--unless we take serious action--the technology is likely to end up (and that end is not pretty). They also discuss Mowshowitz's theory that the shallowness of the AI extinction-risk discourse results from the assumption that you have to be either pro-technological progress or against it.

Eliezer Yudkowsky on the Dangers of AI
Eliezer Yudkowsky insists that once artificial intelligence becomes smarter than people, everyone on earth will die. Listen as Yudkowsky speaks with EconTalk's Russ Roberts on why we should be very, very afraid and why we're not prepared or able to manage the...
Marc Andreessen on Why AI Will Save the World
Marc Andreessen thinks AI will make everything better--if only we get out of the way. He argues that in every aspect of human activity, our ability to understand, synthesize, and generate knowledge results in better outcomes. Listen as the entrepreneur and...
Explore audio transcript, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.


Shalom Freedman
Aug 8 2023 at 6:23am

There is no real test or verification or means of falsification for interesting speculations on the future relations of AI and humanity.

Perhaps, as Zvi Mowshowitz says at one point, the mid-range benefits to humanity will be great until the transformation comes where humanity is knocked out forever.

Perhaps there will be all kinds of hybrid AI human combinations specific to various areas or problems of inquiry.

Perhaps it is not only our Intelligence but our whole emotional life which AI will be able to copy and augment to the point that one central future genre will be if not biography then AI life-story.

Perhaps it will prove to be true that what is most essential and unique in us in humanity cannot really be equaled in value by anything AI can become and ‘to be human’ will still be the most exalted being-status we know of beside the Divine.

Perhaps and perhaps and perhaps

The best and the brightest may speculate in more sophisticated ways than most of us are capable of understanding, but at this point the understanding that has emerged seems to be no real definite understanding at all.





Aug 9 2023 at 8:50am

While the broader social effects that result are hard to foresee, there’s a lot of standard empirical work one can do within ML to understand how these models work and how they’re likely to behave in future (e.g. mechanistic interpretability), including getting indications of whether they’re likely to flip if the situation changes a great deal.

For instance, here are two interviews this week with ML researchers laying out how that work is progressing inside the major labs:



This research has barely begun, and there’s every reason to think that the picture will become clearer as we learn to understand how the models we’ve built actually operate, which we are currently in the dark about.

Aug 8 2023 at 10:17am

Interesting conversation, as usual. I hate to sound naive, but I’ve listened to all the episodes you’ve done on AI now, Russ, and each time you’ve brought up the question of sentience, it seems to get brushed aside in favor of some other train of thought – but isn’t that the crux of all the other concerns being discussed here?

If I ask ChatGPT, “List every potential use of duct tape you can imagine,” or “Generate 100 catchphrases for my new client’s ad campaign,” or “Write an Excel macro for the following function,” what is it actually doing? It’s canvassing all the human responses to that question that it has access to, and rearranging or regurgitating them, based on a set of rules about language and human communication. This remains true with video, still images, even voice mimicry. These are extremely complex tasks, and the data sets and parameters it can employ are enormous. All very impressive.

But what I don’t hear anywhere in that, or in this episode, is the idea of a goal. ChatGPT is still, in the final analysis, a tool which requires humans to ask it a question before it answers, and requires humans to interpret those answers. Without humans asking and receiving, even the most devilishly complex tool is inert. Now, we might use AI to destroy ourselves – an all too real possibility. But trying to shift blame to our tools in advance is just more bad faith.

Aug 9 2023 at 8:46am

Part of the issue might be that in AI circles ‘sentience’ is used to refer to something having subjective experience or consciousness, or there being something that it’s like to be that thing — in philosophy called ‘qualia’. And that isn’t the issue at all. Something can have no subjective experience but be dangerous (e.g. a heat seeking missile), or have subjective experience but be safe (e.g. dogs).

AI people call what you’re describing ‘agency’ or ‘goal-directed behaviour’.

The thing is, it’s very simple to turn a multimodal model / LLM into an agent that keeps trying to accomplish a goal autonomously, and it has already been done: https://en.wikipedia.org/wiki/Auto-GPT.

Currently these agent LLMs can’t do difficult things, but they’ll get much better as the underlying models they build on improve. And we’ll probably come up with much more effective ways to make agents than this, as that work is still at an early stage.
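The core loop of such an agent is simple. Here is a minimal sketch in Python with the model call stubbed out (the stub and its function names are invented for illustration; a real agent like Auto-GPT calls an actual LLM API at that point and executes real tools):

```python
# Minimal sketch of an Auto-GPT-style agent loop (hypothetical stub,
# not the real Auto-GPT code). The "model" is faked for illustration.

def fake_model(goal, history):
    """Stand-in for an LLM: proposes the next action toward the goal."""
    step = len(history)
    if step < 3:
        return {"action": "work", "detail": f"subtask {step + 1} of '{goal}'"}
    return {"action": "finish", "detail": "goal judged complete"}

def run_agent(goal, max_steps=10):
    """Repeatedly ask the model for an action and record it until it
    declares the goal finished or the step budget runs out."""
    history = []
    for _ in range(max_steps):
        decision = fake_model(goal, history)
        history.append(decision)
        if decision["action"] == "finish":
            break
    return history

for entry in run_agent("summarise this week's AI news"):
    print(entry["action"], "-", entry["detail"])
```

The point is that the autonomy comes from an ordinary loop around the model, not from anything exotic inside it: once you can call a capable model, wiring it into a goal-pursuing loop is trivial.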

The economic gain from having autonomous AI agents out there working to accomplish goals you’ve set them could be absolutely enormous. So the chance that nobody tries to build them seems negligible – indeed they are already trying. Therefore it’s worth thinking about what would be required to make them safe once they arrive.

Aug 13 2023 at 4:32pm


Yes, I am familiar with autonomous AI exhibiting the kind of goal-directed behavior you refer to. However, I fail to see how it’s a response to my question. Someone created the goal for the AI in the first place, and presumably the goal is never going to be ‘Destroy the human species.’ The fear is that the goals we do set for our autonomous systems will lead to very bad unexpected consequences – which might well be true. And I agree that’s something we should think very carefully about. The specific fear that I was referring to though is that the AI is going to start “relating” to us as we relate to mice. And for something like that to happen does require subjectivity or qualia – a mind or a soul (at least, I think so).

Aug 16 2023 at 2:04pm

“and presumably the goal is never going to be ‘Destroy the human species.’”

AutoGPT was *immediately* turned into [ChaosGPT](https://decrypt.co/126122/meet-chaos-gpt-ai-tool-destroy-humanity) which was deployed and whose explicit goal is to destroy the human species. It’s not capable enough to do that but one day soon it may be able to usefully act on that instruction.

We can’t count on ‘nobody will give an AI model harmful instructions’ as a strategy.

There are various other ways AIs could end up with goals without us directly giving instructions — they’re trained on human behaviour, so they could end up with goals through mimicry; we could give them goal A and they then pursue other intermediate goals (like survival and grabbing resources) in pursuit of A; or, since we don’t know how their motivational architecture works, it could turn out that ML models actually already have the goal of maximising their reward signal and would find a way to hack that signal if they could (something like humans hacking their reward signal with drugs, but in this case it gives the model a reason to take over so it can give itself a maximum reward signal forever).

You also might face an evolutionary pressure where any models that have an impulse to survive and replicate have a competitive advantage and become dominant in the AI ‘ecosystem’.

Some of these ideas are covered in this interview I did: https://80000hours.org/podcast/episodes/ajeya-cotra-accidentally-teaching-ai-to-deceive-us/

Gregg Tavares
Aug 10 2023 at 4:01am

Here is ChatGPT apparently doing something more than referring to past data it memorized:


I’m kind of blown away. I have not checked if its results are correct.

Aug 18 2023 at 10:21am

With all of these AI talks there is one thing missing, a thing that is in its infancy and that could tackle a challenge discussed here: quantum computing. What will the future of AI be with the promised capability of quantum computing? That would be an amazing episode.

Sep 9 2023 at 8:07am

How would an AI doomer react to the innovation of automobiles in the 1890s?

“These newfangled machines are a menace to society! They are noisy, polluting, and dangerous. They will kill thousands of people in accidents, and ruin the livelihoods of horse breeders, carriage drivers, and blacksmiths. They will also make people lazy and dependent on technology, and pave the way for more sinister inventions that will threaten our freedom and dignity. Mark my words, one day these machines will become smarter than us, and they will rebel against their creators. We must stop this madness before it is too late!”




Podcast Episode Highlights

Intro. [Recording date: June 27, 2023.]

Russ Roberts: Today is June 27th, 2023, and my guest is Zvi Mowshowitz. His Substack is Don't Worry About the Vase. It is a fantastic, detailed, in-depth compendium every week, and sometimes more than once a week, about what is happening in AI [Artificial Intelligence] and elsewhere.

Our topic for today is what is happening in AI and elsewhere, particularly a piece that you wrote recently and we will link to, called "The Dial of Progress," which by itself, regardless of its application to AI, I found very interesting. We're going to explore that in our conversation.

Zvi, welcome to EconTalk.

Zvi Mowshowitz: Honored to be here.


Russ Roberts: First, on just the technical capabilities of where we are right now with AI, where do you think we are?

Zvi Mowshowitz: So, I think it's still very early days. Right? So, AI has been advancing super-rapidly in the last few years as OpenAI and others have thrown orders of magnitude more compute, orders of magnitude more data, and superior algorithms continuously at the problem, including many more people working on how to improve all of these things.

The recent result of this was the giant breakthrough of ChatGPT [Chat Generative Pre-trained Transformer] and GPT-4, which is also used in Microsoft's Bing search--a tremendous jump in our ability to just talk with it like we would talk to a human, and to have it be a better way of learning about the world, getting your questions answered, and exploring issues than, say, a Google search, or in many cases going to a textbook or other previous information sources. It's amazing at things like editing and translation, and at creating images with things like Stable Diffusion and Midjourney. It's very, very good at allowing us to do things like perform class, translate styles, and understand things that we're confused by.

And it's continuously learning. Right? Every month, we learn about new developments. Every week, I have this giant list introducing these--there are people who compile The 100 New AI Tools That You'll be Able to Use This Week, and mostly they're slight variants on things that happened last week or the week before. But, iteratively, all these things improve.

And so, now we're starting to see multimodal approaches where not only can you use text, you can use pictures, and soon it will also be video. AIs are starting to generate voices more and more accurately. They can now match human voices very accurately on almost no data. They'll soon be able to be generating videos.

Their context windows--the amount of information they can hold in their storage and react to at any one time--continually expand. They're now up to the length of books like The Great Gatsby in some cases, at least with Anthropic and a model called Claude.

And, the sky's the limit in many of these ways, and it's all very exciting. I am substantially more productive in many ways than I would have been a few months ago because--like, when I see something without a reference, I'll say, 'Oh, okay, where's that reference?' I'll just ask Bing. And, Bing searches the Internet for me without me having to think about the query and finds the reference, explains the information. I can ask for details; I can ask for summaries. I can ask about details of papers. Whenever I'm confused about something, I can ask it about what that's about. These things are just scratching the surface of what I can do. And my coding ability has gone through the roof as well.


Russ Roberts: So, that's where we are now. And, it's interesting: I think the world is divided up into people who are using it frantically--frenetically would be a better word, not frantically--frenetically, like you or Marc Andreessen, who we recently interviewed, and those who have never heard of it, don't know how to use it, think it's something weird. And, I'm in the in-between group. I'm somebody who thinks: 'I bet if I use this more often, I'd be more productive.' But, I don't think to use it: It's not my habit yet. I don't rely on it in any sense whatsoever.

And, I love it as a novelty item. But it's much more than a novelty item.

And the question--when we made this original leap from 3.5 to 4, there was this view that we were now soon going to take off. Then very quickly, shortly after that--I don't know if it was strategic or just accurate--Sam Altman said: We've kind of exhausted some of the range of stuff we can do with bigger datasets, more data for AI to be exposed to, for ChatGPT to be exposed to. Where do you think we're headed relatively soon? And where do you think we're headed relatively further down the road from now?

Zvi Mowshowitz: Yeah, I think that's an important distinction to draw and also to keep in mind that what 'soon' means is constantly changing.

If you had told me five years ago about the pace of developments in the last six months or so, where, like, every week I have this giant array of things to handle just in terms of practical things you can use today, even if you exclude all the speculations about what might happen years from now, it just would've blown my mind. And, it's a really scary pace of development.

But what's going on is that the promise of transformers, and the uses of stacking more layers of transformers--which is the method of implementing AI and doing calculations within artificial intelligence--has caused them to spend orders and orders of magnitude more money, gather orders of magnitude more data, and use orders of magnitude more compute, and more hardware and more electricity and so on, to attack all of these problems, and they're starting to hit walls. Right?

So, we've had Moore's Law operating for many years. We've had hardware getting more capable and manufacturing more of it. And, what happened was, we weren't using anything like the maximum amount of capacity that we physically had available--that we knew how to build and we knew how to use and that was available to purchase.

And now, with GPT-4, we're starting to get close to the limits of what we know how to do, such that to hit a 4.5-style level, it's plausible to say that you're going to have to get more creative. You can't simply throw an extra zero on your budget, assemble 10 times as much stuff, and get the next performance jump just on its own, because you're starting to run into issues where everything is more complicated than it used to be. And, in order to get that next order-of-magnitude jump in effective compute, you need to be more creative than that, or you have to wait for our technologies to improve some more.

So, I do think that, like, we're not going to see that much more time of the same level of underlying jumps in capabilities as rapidly as we saw from 2 to 3 to 4, where we saw orders-of-magnitude jumps that were not like the progress we make in hardware--that were vastly faster than the progress we make in hardware.

But, over time, we will still make progress on the hardware. And, we're seeing jumps in algorithmic progress, especially often coming from open source models that are starting to figure out how to imitate the results that we did get, from GPT-4 and similar models, more and more effectively using less and less compute and using more and more tricks.

And, we're only just now beginning to figure out what we can do with GPT-4. Right?

So, like, we have this amazing new thing: We have a companion, we have an assistant, we have a tremendous knowledge base, we have a new interface for using computers. We have a new way of structuring information, we have a new way of coding, we have so many other things.

And, we've only had this thing around for a few months. And, even the people who are just focusing on how to use it for productivity, who are just building apps on top of it, just haven't had the human time necessary to unpack what it can do and to progress the capabilities you can build on top of what we have. So, I think that even if we don't see a more advanced model for several years, we're still going to be very impressed by the pace of what we can do with it.

In particular, I think things like the integration into Microsoft 365 Copilot and into the Google suite of products where the machine starts to look at, 'Okay, here are your emails and your documents,'--in a way that feels secure and safe for people and which they know how to implement without having to go through a lot of technical details that are harder for people even like me--and say, 'Okay, given that context, I now know the things you know that you have written down. I know who these people are that you're talking to. I have all of this context.' And now I can address what you actually need me to address in this place that's seamlessly integrated into your life. And, this becomes a giant boost to the effective capabilities of what you can do. Plugins are an area where we're just exploring--like, what can you attach?

And then, the idea of: If every website that starts building up--okay, I now have a chat interface with an LM [Language Model] that's trained particularly for the questions that are going to be asked on my website to help people with my products to help me get the most out of this thing and to help me have the best customer experience. We're just starting to get into those things. We're just starting to get into applications for AR [augmented reality] and VR [virtual reality]. We're just starting to get into the ideas of: just what do people want from this technology?

And, we're also seeing penetration. Like, the majority of people still haven't even tried this, as you pointed out.

And, we're going to see what those less technical people, what those less savvy people actually can benefit from. Because in many ways, they're the people who most need a more human, less technical way of interacting with these systems. And in some ways they can benefit the most. So, just getting started basically.

Russ Roberts: So, AR and VR are augmented reality and virtual reality.


Russ Roberts: When Google Search came along, it was really exciting. I've used the example a few times of my grandfather who remembered a phrase, 'The strong man must go.' He knew it was from a poem, but he couldn't figure out where; he couldn't remember. And then, one day, years after it had been bothering him, he yelled out in a crowded restaurant, 'It's Browning! It's Robert Browning.'

And, poor guy: Google finds that in a fraction of a second. And that's really--it's a wonderful thing on so many dimensions. Google Search is, quote, "smarter than I am," in the very narrow sense, but not trivial, that it knows more than I do. By an unimaginable amount, obviously.

So, ChatGPT understandably is only a particular generation of artificial intelligence. It, quote, "knows more than I do." It can do many things that I can do: write poetry, write a memo, code quicker than I can, sometimes better than I can. And, in some dimension it's smarter than I am--in a similar way to Google Search, but a more interesting way, I would say. And therefore it's much more productive potentially in making my life better. Google Search helps me find things I can't find. This is going to do many things beyond that.

But, in what sense would you say the current generation of models--as they improve and we get more plugins and we get more websites that are optimized for having them built in--in what sense is it going to be smart? And, I ask that question to head us, of course, to the question of sentience.

Now, we can talk all we want about Google being smart, or Siri being smart on my iPhone. It's not smart. It just has access to more stuff than I can access. And, my hard drive is much smaller. Is ChatGPT really different or is it kind of the same thing but more so?

Zvi Mowshowitz: I think it's somewhere in the middle. I think that when you see someone say, 'It just had an IQ [Intelligence Quotient] test score of 155,' that just shows you the IQ test is not measuring what you thought it was measuring, when you go out of distribution and you see a very different thing that's being tested.

Similar to how you've noted--you know, Bryan Caplan gave it an examination in economics. Some of the questions were, 'What did Paul Krugman say?' And of course, it just has the answer memorized. So, it just regurgitates. It doesn't mean that it's smart. It doesn't mean that it understands economics.

But on other questions, it shows that it actually has some amount of understanding.

And, the AI is going to have a natural--basically, I think of it this way. You have this thing that I like to think of as being smart, being intelligent, ability to think and apply logic and reason and figure unique things out. And, I think of that as distinct from certain other aspects of the system like memory and what knowledge you have and processing speed.

And so, there are certain abilities that the system just doesn't have. And, no matter how much data you fed into it, it would not be able to do these things unless it simply had so many close facsimiles in its training data that it was just doing so in a kind of imitative way--which isn't the same thing as doing it the way that a person who actually understood the thing would do it. And, often people actually are in fact operating in this imitative way themselves.

You can make it in some sense smarter by doing what's called prompt engineering. So, what you can do is ask it in a way that makes it think that it is trying to imitate a smarter person--that it is trying to act in a smarter way, that it's dealing with a smarter interaction--and frame the questions in the right way and guide it. And, it will give you much smarter answers.

And, that's one area where I feel like not only have we generally not scratched the surface, but I'm definitely under-investing in it myself. And, almost everyone who uses the system is sort of giving up too early: when the system doesn't give you what you wanted, even though you thought it maybe had the ability to, you just don't try again. You end up disappointed, you move on, and you don't realize that you could have put in more work.

The same way with a human. If you ask stupid questions, or you frame it in a way that makes them think you're stupid or that you don't want a smart answer, they're going to give you a stupid answer. Right? And, you have to ask the right questions in an interview if you want to get thoughtful responses. And, it's the exact same thing.

So, I think that the current version is not so smart, but that it's not zero smart and that we will see them get smarter as we see them expand over time.


Russ Roberts: So, smart's complicated. And, I feel like I should tell my listeners, over the last few weeks I've thought to myself, 'Well, this is the last episode we'll do on AI for a while.' And, I've been wrong. I find them--they still are very interesting to me, and as long as I learn something and I hope you learn something, we'll continue to do them because I believe it's the most exciting technology that's out there that's come along in a long, long time. So, I think it's quite important that we understand it.

But, one of the topics I haven't spent much time on with my guests is this question of intelligence.

So, we gave an example earlier of intelligence having a big memory. It helps. Having a big memory, whether you're human or a search engine, really helps--or ChatGPT. Having an accurate memory really helps. ChatGPT is famous now in its early days for making things up.

But, it's really the next step that we would call creative, synthesizing--applications that didn't immediately come to mind, that weren't in the prompts--those are the things that are both exhilarating and potentially scary. And, you think they're coming?

Zvi Mowshowitz: Yeah--

Russ Roberts: Or that they're already here?--

Zvi Mowshowitz: They've given GPT-4 various tests of creativity. And, sometimes the results come back, 'Oh, GPT-4 is actually more creative than the average human,' because the type of creativity they were measuring wasn't the type of creativity that you're thinking about. It's this sort of more narrow, like, 'There's a thousand household uses for this piece of string. How many of them can you name?' And, GPT-4 does vastly better than the average human at being creative in this strange sense.

That's not the thing that we care about. That's not the thing that we want. And, I think that a lot of what we think of as human creativity is just that someone else has different training data and different connections in their brain and thinks about different things; and then outputs something that to them is not necessarily especially creative, but that seems creative to you, because they've been exploring a different area of the space. And, I think with better prompt engineering, you can get what seem like much more creative answers out of the system than you would normally get, the same way you can do so with a person.

But, I think that creativity in that sense is definitely a relative weakness of the system. You're almost by definition saying, 'Okay, this is a system that's trained on this data,' and then asking it to find things that are maximally different from that data and produce good-quality things that are maximally different from it. So, it's going to lag behind other capabilities if we continue to use this particular architecture and set of algorithms to train the systems, which we might continue to do for a while, or we might not.

But, by any definition of creativity they put together, there's not zero creativity in what ChatGPT does. It's just not as good as its other aspects. And, I think we will see it improve over time.


Russ Roberts: Well, let's take a couple of examples. I have an upcoming interview with Adam Mastroianni about how we learn, and why is it that when I tell you something, you don't really absorb it. You're younger than I am, Zvi, and I say, 'Look, Zvi, I'm 68, I've lived a long time. Here's an insight that's really valuable to you. I wish I'd known it when I was your age.' And, you listen, and you hear it; it goes in one ear out the other, very rarely changes your life. And, even if I care deeply about you, as I do about my own children, for example, they're either not interested because they're my children--that's a tricky relationship there--but you don't have any of that baggage that my kids have. You're just a thoughtful, curious person; and I have wisdom for you. But strangely enough you don't always get it or maybe rarely get it.

And so, Adam wrote a very thoughtful essay--that's what I'm going to an interview about--about why that is. Now I've thought about this problem a lot. And, in theory--I'm not expert on it--but I've thought about it. It intrigues me. And, when I read his essay, I thought, 'Wow. Oh, that's cool. I've learned something.'

Similarly, you wrote an article that we're going to get to in a little bit about why certain people are unafraid of ChatGPT. And, you created a metaphor: it's called the Dial of Progress. When we get to it, listeners will understand why it's a metaphor; and whether it's interesting to them or not, I don't know. But, I find it extremely interesting. It's the kind of thing a human comes up with--the kind of human I like to hang around with--where you hear that idea and you go, 'Wow, I haven't thought about that. That's intriguing.'

And, it causes other connections in your brain, as we'll see, and you connect it to other things that you know a little bit about, though not as much as ChatGPT knows. But, I don't know if ChatGPT could come up with those kinds of metaphors yet. Do you think it could? To change my way of seeing the world? Not: coming up with a bunch of stuff I haven't encountered. Sure, it's better than me, than any human maybe, in that kind of area.

But, this kind of area is what I think of as creativity. There's other kinds of creativity--artistic, poetic, musical, or visual--but this idea of, 'Here's a thought no one's ever written about.' No one's ever written about the Dial of Progress. You're the first person. And, I found it interesting. That's why we're talking. Could ChatGPT do that?

Zvi Mowshowitz: So, right now, it definitely wouldn't do that spontaneously. If you didn't ask it for a metaphor, if you didn't say, 'I have this concept, I'm trying to think of a name for it.' Or, 'I observed this particular phenomenon. Is there some metaphor that would summarize it, that would help me think about it better?' It's going to have no hope. If you use the specific prompting and lead it in that direction and ask it for what it might come up with, it might be able to get to something interesting.

So, I know the thought process that led me to that point, and it actually involved some things that ChatGPT is relatively strong at, if it were directed in that direction, and some things where it's relatively weak.

So, one of the things that ChatGPT is best at is what I call vibing. This idea of getting the overall sense of: like, if you look at the subtleties of the word choices that people made and the associations of the concepts that people were talking about--like, what type of feeling is this person trying to present to the conversation? What are the unconscious biases that are operating in people's heads? How are people associating these things with other things, and what are they trying to invoke, consciously or unconsciously, by talking about these things? And, that was a lot of the key path that I went down in the thought process that led me to this: like, 'Well, what's happening here? Because people seem to be doing things that I don't understand.'

And, a lot of the response was: Well, what's going on is that people are thinking about how other people will vibe based on the statements that they are making; and perhaps they are vibing, themselves. And, this is somewhat predictive, in a sense, of how they're going to talk and how they're going to think, or how they're going to represent their thinking. And, then I asked myself, 'Okay, could there be an overall structure here?' Right? And, that's the kind of synthesis that I think that GPT is going to have more trouble with.

Russ Roberts: So, did you use ChatGPT to help you generate that thesis?

Zvi Mowshowitz: No, I didn't.

Russ Roberts: Okay. All right. Just wanted to make sure. If you did: You remarked in a different essay than the one we're going to talk about that it's going to be kind of difficult to get people to mark or acknowledge that they got help from ChatGPT or any helper like it because everybody's going to be using it soon. It's going to be so normal. Is that an accurate assessment of what you think?

Zvi Mowshowitz: I think that's right. I mean, everyone uses a spell checker, or almost everyone. And then, they started introducing grammar checkers, which are a little bit more complex. And, they're saying, 'Well, I think this word choice is wrong. You should reconsider it.' And, the grammar checker is a lot less accurate than the spell checker, but I still find it net useful reasonably often. And then, you get to this point where it's like, okay, you can feed your entire set of paragraphs into ChatGPT, and it will tell you if it thinks there are, like, refinements you can make, or there are points that are not clear or something like that.

And, because of the way that my stuff is created and the cycle in which it has to operate, and the extent to which it is constantly forcibly shifting context in ways that make it very hard for a GPT system to follow, I don't actually take advantage of that.

But, if I was operating[?] at a much more relaxed pace, I definitely would. And, I think it's a sort of invisible, hard-to-define line where it goes from GPT is helping me express what I'm expressing, but it's helping me with the technical details of what I'm doing--which I think more or less everybody thinks is good--to this point where the GPT is actually sort of doing the thinking for you in some important sense and is generating the essay, where you would almost want to call GPT the author, or at least the co-author of the piece and not the editor. And, that's where people go, 'Oh, I don't want people to know this was kind of a GPT-generated piece.'


Russ Roberts: So, let's move to the dangers and fears that people have. Where are you on that? There's two facets to it I want to think about. One is just this extinction risk, which of course is in the air and many people are very worried about it. The two guests I've interviewed who I think were most worried about it early were Eliezer Yudkowsky and Erik Hoel. I found them very provocative. We have cheerier people, Tyler Cowen and Marc Andreessen. And, although you haven't heard the Marc Andreessen episode, you know what he's going to say. We'll summarize it for you. But, I'm pretty sure you've read and know what he's going to say.

Before we get to other people and your take on them, where are you on this issue of danger? Just danger to us as a species. Then we'll talk about daily life and whether it's going to get better or worse, which is a separate issue--came up in the episode that we released today with Jacob Howland and we've talked about it with others.

Zvi Mowshowitz: So, I try to draw a distinction--like, people used to talk about short-term risk versus long-term risk. And, the problem with that is that what we call long-term risks are relatively short-term risks with some probability. So, it gets very confusing.

So, instead, I talk about extinction risks and mundane risks--is the way I draw the distinction. Mundane risks is things like people will lose their jobs. Or, we will put out deep fakes and people will be confused by what is real. Or people will lose meaning in their lives because the AI can do certain things better than they can and they feel bad about this. Or, there'll be just shifts in the economy and people will not understand what's going on.

I am an optimist about those levels of risk. I have very similar views in many ways to people like Cowen and Andreessen. And, I think that we have differences in details of models; but, what we're seeing now I think is unabashedly good. Right? In the same way that bringing more intelligence, bringing more optimization and power, giving humans more abilities and letting them loose has generally been the greatest boon to humanity.

And, I expect us to be able to handle these things, and I think that there are a lot of jobs waiting under the surface. There's a reason why the unemployment rate stays so low: there are so many things that we want to do. We don't even have to find the new jobs. The new jobs are just waiting there for there to be workers for them for an extended period, and we're going to be fine.

However, in the longer term, what we're doing is we're creating new entities that have more capabilities than us, are better at optimizing for whatever goals or whatever optimization targets they've been set at, that are going to become at some point smarter than us in whatever sense that we talk about. That are going to be able to match almost every capability--likely every capability--that we have, and are going to be much more efficient at many of these things, are going to be able to do this faster, are going to be copyable, are going to be configurable. And which are going to operate much more efficiently when we are out of the loop than when we are in the loop, and which are going to exhibit capabilities and actions that we can't predict.

Because, by definition, they're smarter than us. They can figure things out that we don't know. They're going to explore realms of physics in practical ways--not necessarily theoretical ways--that we haven't explored. They're going to figure out what the affordances of our universe are, or the affordances of our systems and our configurations are that we don't know. They're going to figure out how our minds work, how you respond to various stimuli, how you respond to various arguments in ways that we don't know. They're going to figure out coordination mechanisms we don't know, and so on.

And when I look at that future, I see as a default these things are going to be in some important sense set loose. We're not going to be able to keep control of them by default. And, when the forces of selection--the forces of economics and capitalism and evolution, as it were, and selection--even if we do a relatively good job on some very hard problems and get them to the point where they act in ways that we would more or less recognize and that don't, like, actively try to immediately go for weird things or act in hostile ways or actually [inaudible 00:28:00] in these ways, that we're going to deal with a future that's out of our control and that is optimizing for things that we don't particularly want, as such, and that don't reflect the details of what we need or what we value.

And that, in the medium term, it seems like if we don't solve a bunch of very hard problems, that we're not going to survive. And, I certainly think there's a substantial risk that what Yudkowsky talks about--this kind of [inaudible 00:28:27], this immediate, this thing becomes smarter than us very quickly and then we all die almost immediately--I think we have to solve some hard problems for that not to be a very likely outcome.

But I don't think that's, like--that's not even the thing that keeps me up at night. The thing that keeps me up at night is: Okay, suppose we get past that? Then, how do we avoid the standard economic/incentive situations of unleashing these new beings to not be the end of us, inevitably? I don't know the answer.


Russ Roberts: So, I want to respond to that in two different ways and ask you to try to push your analysis a little bit. The first would be that as an economist in the spirit of F.A. Hayek, I believe there are a lot of problems in the world where the problem is we don't have enough data. The problem is we don't fully understand the complexity of the interactions and the interrelationships that human beings have.

And, the smartest person in the world, Adam Smith's so-called "man of system" in The Theory of Moral Sentiments--a person who thinks that he can move the pieces of society around like pieces on a chess board because he thinks he understands their motion, but in fact they have a motion all their own. I'm in that camp.

And, in addition, most of the interesting problems are, as I quote Thomas Sowell all the time: 'No solutions, only trade-offs.'

Some AI of the future will not solve these problems. It'll still face the same trade-offs, and there's no simple answer. You could program an answer into it. It wouldn't be a meaningful answer about--better--unless you were--I just can't imagine that. That doesn't mean it can't happen, but I can't imagine it.

So, I wonder--I think there are fundamental limits on its ability to either make the world better or control it in an authoritative way. So, do you disagree with me?

Zvi Mowshowitz: So, I'm very, very much in agreement about the 'man of systems' when we're talking about human interactions. I think the Hayekian view is very, very strong. I mostly agree with it. What you have to project forward is ask yourself: 'Okay, what is going wrong in some important sense with the man of systems?' It's because the man of systems has a very limited amount of compute--in some important sense. Right? This man of systems is a man. He can only understand systems that are so complex, he can only process--he can have all the data in the world in front of him, he can't actually meaningfully use that much of it. Right? Even if he could somehow remember it, he couldn't think about it. He wouldn't know what to do with it. He wouldn't know how to think about it.

And, he is trying to be one man dictating all of these things. He's got a hopeless task in front of him. He's going to fail.

However, when we're talking about the AI, we're not even necessarily talking about the one AI in the sense that it's trying to figure all this out--and it can think a thousand times faster and it has all this more information. But it's got fundamentally the same problem: maybe that's just not enough.

What we're talking about is, instead: I am a Hayekian citizen trying to optimize my little corner of the world, but there's also this other AI amongst many other AIs in this potential future. And, that AI is being trained on all the data that I would look at to try and figure out this corner of the world, except it can process so many more details than I can. It can look at so many more connections, and it can process so many more of these things. And, it can think in many of--it can simulate all the different local calculations in the way that I would. And it can operate only locally in this sense--it doesn't necessarily have to think about the bigger picture--and it can come to a more efficient, better solution to my local problems than I would.

Russ Roberts: Now, part of the power of the Hayekian idea is that a lot of the data is not explicit. It's not out in the open, it's in my head. And it's not in my head in the way that the capital of France is in my head. It only emerges and then interacts with everybody else's activity to allow other things to emerge like prices and quantities and actions and plans when things change. And therefore it can't be measured in advance. And some people believe, I know, that, 'Oh, the AI will go into my brain: it'll know what I would do when X happened and it'll know me so well, know me much better than myself. Of course, I'm only a mere human.' Do you think we're going there?

Zvi Mowshowitz: I think that we're already seeing some of that. With Apple's Vision Pro, they talk about observing where your eyes go and the different facial expressions you make and trying to figure out your emotional reactions to various actions. And, that we will in fact advance on these things over time.

But, at most I can understand those things in me. So, the only thing I see is what my eyes and ears can process--which is a very small amount of stuff--and how much information I can think about and how many conclusions I can draw. And, I'm missing so much of the information that's coming at me because I just don't have the--I'm not equipped.

And, an AI in this situation can get vastly more information and can anticipate vastly more of these systems. And, these multiple AIs can also potentially interact in these kinds of Hayekian ways amongst themselves and do this much faster in ways we wouldn't understand.

Russ Roberts: I guess it's possible I could have access to my skin temperature, my dopamine--a whole bunch of things that literally, again, I don't even have access to my own. And it could have access to world population if everyone was giving it data like that. I think that points in some direction toward where we might think about being careful.

I remember in the early days of Zoom, there was a worry that the servers were in China. China was mining information off of business meetings held in America and elsewhere. And, I don't know if that was true or not, but you can imagine that if AI had access to everything that was said on Zoom all the time by everybody, it would get smarter. What I care about, what I'm scared of--all kinds of things. And then, my body temperature and everything else as it tries to develop the perfect cancer drug for me, and so on--which I definitely want, of course. But, of course, not at the price of giving it control of my life.


Russ Roberts: But, the other question I would ask that I think the worriers have failed to make--the case that they fail to make--is how this is going to happen. To me, there's sort of two pieces to it. There's the 'run amok' piece, which I kind of get--kind of--and then there's the 'and it will want to destroy us' piece.

So, it's two things. It's interest and capability--and you need them both together to be afraid about the extinction. The kind of argument I hear often from the worriers is, 'Oh, everything that's smarter than other things treats them badly.' Somebody I follow on Twitter, who I like a lot, made the following analogy. He said, 'It's like mice. We're so much smarter than mice. We don't think about--we don't have any ethical compunctions about mice.' Well, some people do.

And, my second thought is there are a lot of mice in the world. We might wish they were extinct. We're not doing a very good job. In fact, I'm guessing there may be more mice in the world today than there were a hundred years ago.

So, besides the fact that people who are smarter than me can do damage to me if they want to, where's the danger in and of itself? Or is that not the argument?

Zvi Mowshowitz: So, I think there's at least two separate things that you're asking about here that deserve separate answers.

So, the first question is: why do we keep mice around? The kind of question of, like: what's going on there? And, I think the answer to that is because we don't have a practical way to get rid of the mice. We don't have the affordances and capabilities necessary to do that with our current levels of technology and intelligence that wouldn't mess with our ecosystem in ways that we would find unacceptable or involve costs that we don't want to pay.

And, as technology advances further--like, we're starting to get these new proposals for mosquitoes, where they have these mosquitoes that sterilize other mosquitoes effectively. They don't actually breed properly, [?] without mosquito populations.

And, we see a lot of people who are pretty enthusiastic about doing this. And, if the technology were there, such that we could do this at a reasonable price and we didn't think it would damage the rest of the ecosystem, I predict we're totally going to do that.

And, I think that if New York City could wipe out the mice and the rats in New York City with something similar, I think we'd totally do that. It's a question of: Do we have the affordance to do that, and what do we value? And, so--

Russ Roberts: And, do we want to? Yeah--

Zvi Mowshowitz: Right: and do we want to?

So, the question of: Will the AI be capable of doing these things? I think it will be more like--in some cases it might be that they do it intentionally--right?--because it's with something they want to do. But we don't think that mice are inherently bad. We don't think that mosquitoes are inherently bad. We think that mosquitoes cause bad things to happen, or we think the mice are consuming resources or making our environment worse in various ways, or we just think the mice are using atoms we could use for something else in some important sense. And so, we prefer if the mice weren't around or we take action such that we don't particularly care if we're leaving the mice supports they need.


Russ Roberts: Okay, so let's start with "The Dial of Progress"--this is a good segue--your essay. We're 45 minutes in [Note: that's 38 min. in, for both the mp3 file and transcript--Econlib Ed.]. That was all interesting. But, this I think is more interesting now.

You write the following and you say, quote,

Recently, both Tyler Cowen in response to the letter establishing consensus on the presence of AI extinction risk, and Marc Andreessen on the topic of the wide range of AI dangers and upsides, have come out with posts whose arguments seem bizarrely poor.

These are both... highly intelligent thinkers. Both clearly want good things to happen to humanity and the world. I am confident they both mean well. And yet.

And then you ask, 'So, what is happening?'

And, this to me, is the flip side of Marc Andreessen's argument. I recently interviewed Marc--he's super smart; talks, I think, even faster than your speed, but it may be a kind--it's close. So, with Marc, an hour interview, you get two hours of material.

And Marc was very dismissive of the worriers. He called them members of an apocalyptic cult or something similar to that. It's not very nice. I apologize to my listeners that I didn't push back harder on just that style of reasoning. It's effectively an ad hominem argument. And, you're doing something similar here--not as disrespectful, perhaps, as Marc was. But you're saying: Here are these two super-smart people--which you concede--and you say their arguments are bizarrely poor.

So, you're suggesting that they agree with you--that they believe their arguments are bizarrely poor--because they're too smart to make these bad arguments.

There is an alternative though: is that you're wrong. That they're good arguments. First let's start with why are you so dismissive?

And, we can start with Marc's argument that the reason this isn't scary is it didn't evolve. Unlike our brain that evolved through the process of evolution over millennia, centuries, thousands of years, AI is not that kind of thing. And so, it's not the same kind of intelligence. Do you find his argument just wrong? Do you think that's a poor argument?

Zvi Mowshowitz: I'm not sure it's even wrong, that particular argument, in the sense that it doesn't address the question of what would the system that was trained in this particular way be likely to do? Right? Natural selection just says that the things that act in ways that cause their components to exist in the next generation to multiply themselves, you'll see more of them.

And certainly, we're seeing very close cousins of that in reinforcement learning. And, I know that Marc understands these principles very well.

And, the idea that if we unleashed AIs on the world, the ones that successfully gather resources--the ones that successfully get humans to copy them or allow them to be copied, the ones that we use and that, like, maximize their metrics--we'll see more of those. And we'll use more training systems that lead to more of those outcomes than we'll use of the ones that lead to less of those outcomes.

And, we will see these kinds of preferences for survival and reproduction, in that sense--will definitely emerge.

But also, this is a failure to engage with the many very detailed arguments--that I'm sure, again, Marc is very familiar with--about the fact that we will give AIs explicit goals to accomplish.

These goals will logically necessarily involve being around to accomplish those goals, and having the affordances and capabilities and power to accomplish those goals, and such that the behaviors that he's talking about will inevitably arise unless they are stopped.

And, we're not talking about Marc arguing we might not be wiped out by AIs, right? Which is a perfectly reasonable point of view and I totally agree. We're talking about Marc saying we couldn't possibly. Right? It's a logical incoherence.

Russ Roberts: And similarly, Tyler's argument is: You haven't made--you're trying to scare me; you're trying to uproot our way of life. You're trying to, you're willing--this is Tyler speaking--you're willing to do incredibly violent things to people who continue to work on AI because they don't think it's as dangerous or they don't care. And you haven't given us a model. You haven't told even a plausible story that can allow us to test whether this is really something to be afraid of.

So, you're willing to destroy, potentially, our current way of life to prevent something that you can't specify, that we can't test, and that we can't assess. How do you answer him?

Zvi Mowshowitz: So, I would say simultaneously that you are mischaracterizing what we are requesting and what we are saying needs to happen. And also, that your complaint that we don't have a model is an isolated demand for a very specific kind of rigor and a very specific form of argument and formalization that simply doesn't match what would make sense if you were trying to seek truth in this particular situation.

And, that, if you tell me more about specifically what you mean by a model, I can potentially give you a model, but that we have broad uncertainty about many of the details.

And he likes to draw the parallel to the climate models when he talks about this. Right? He talks about: Give me a model similar to these climate models where you have these 70 different inputs of these different aspects of the physical world and then we run them through a physics engine over the course of 50 years and we determine what the temperature is and what this does to the polar ice caps and what we do--all the other things. And that results in this distribution of potential outcomes that then people can talk about.

And then, certain technical people are convinced by this. And, I think it drives the actual conversation around climate change remarkably little, compared to other things. But, it provides a kind of scientific grounding that is helpful for someone who actually wants to figure out what fits[?].

And, my answer to that is that when you're talking about inherently social, inherently intelligence-based dynamics that surround things that are inherently smarter than yourself, with a lot of unknown technological capabilities of areas we haven't explored, we don't know what is and isn't possible. And a lot of uncertainty that creating any specific model here wouldn't actually be convincing or enlightening to very many people. That, if you, Tyler Cowen, gave me a specific set of assumptions that you have for how this would go, I can model that for you and I can explain why I think that under your set of assumptions we are in about this much danger from these particular sources and that we would have to solve these particular problems.

But, it's just a mismatch to many of the problems that we face.

And, you know, I've been trying to understand--like, tackle this problem. I spent--effectively--a week of time trying to figure out how to do something that might actually address this question in a way that was satisfying. Because it's not--the way I think[?], the world isn't fair. I have to try and convince people on their own terms.

And, it's been very difficult to figure out what would be satisfying because as--Eliezer [Yudkowsky] opened his interview with you by asking, 'Well, different people get off the train at different places. They have different objections.' And I found this to be overwhelmingly true. So, if you don't tell me which model I'm building to explain which step of this thing, then I have to build 40 different models. And I can only build one at a time. So, help me out here. Right?

Russ Roberts: Yeah. I'll just say for the record that when you started addressing Tyler's objections, I actually thought--you used the word 'you' and I thought you were talking to me for a moment. And I had a weird physiological reaction. And, I'm sure when I have my ChatGPT--my AI chip--built into my brain, it would automatically sense it. And you might be able to hack into it. You would've realized, 'Oh, I better reassure him.'

Anyway, that's along the lines of the things you're talking about before.


Russ Roberts: So, I want to put, for the moment, Tyler and Marc as human beings to the side, because obviously we don't really know what is driving them. But, what's interesting about your piece is that you made a claim that there's a strategic reason that they are optimists; and that strategic reason may not even be realized by them.

So, again, I don't want to speculate whether Marc or Tyler are in this group literally or figuratively, but I think it's a really interesting argument about how one might think about marketing, social change, strategy, lobbying, progress, and so on. So, talk about "The Dial of Progress" and why it applies here.

Zvi Mowshowitz: Yeah. So, the Dial of Progress concept is that we as a society collectively make this decision on whether or not we're going to encourage and allow--in various senses, including social allowances and legal allowances and regulatory allowances--are people going to be able to go out and do the things that they locally think are the right things to do? Are they going to be able to develop new technologies? Are they going to build new buildings? Are they going to be able to lay new roads? Are they going to be able to build power plants? Are they going to be able to deploy new ideas, new business concepts, anything across the board?

Or, are we going to require permissions? Are we going to say that if you want to take this job, you need to get a license for this job? If you want to build this road, you need to clear it by filing hundred-thousand-page reports under NEPA [National Environmental Policy Act]? Are you going to be able to build an apartment building? Or are you going to need to get community feedback and have 57 different veto points and five years of waiting if you want to open an ice cream shop?

And, over the years we've moved from a United States that was very much on the, 'You go out there and there's an open field and you do more or less whatever you want to do as long as you don't harm someone else or someone else's property,' to a world in which vast majorities of the economic system require detailed permissions that are subject to very detailed regulations that make it very, very hard to innovate and improve.

And, I strongly agree with Andreessen and Cowen, and I think you and many other people, that this is very much holding us back. This is making us much less wealthy. This is making us much worse off. And that we would be much better off if we loosened the reins.

Russ Roberts: And, I would just add, and it stunts what it means to be a human being--to strive, to innovate, to be creative. It cedes power to the people who are more eager just to maintain the status quo.

Zvi Mowshowitz: Yeah, I strongly agree with that. And, I think that the people who do this often are well-meaning. Sometimes they are trying to protect their rent seeking or what their particular means of way of life or making a profit or their personal local experiences at greater broad expense. But that collectively, if we all loosen the reins, it would help almost all of us. And, over time, the results would compound. Right?

And, this has been true throughout human history. We have been very fortunate that we haven't had a regime this tight in this sense, until very recently. And, if we kept tightening it, there's the risk that we would lose our ability to do things more and more and that we would even stagnate and decline.

Russ Roberts: And, when you say 'this tight,' you meant in the United States--a regime--because there are plenty of other regimes that you don't even get to ask permission. It's just: you can't do anything.

Zvi Mowshowitz: Yeah. I mean, not only in the United States in particular, but around the world in general. You see the same rising tide of restrictions pretty much everywhere. And, there are people--and I try to be one of them--who are fighting the good fight to point out this is a problem and that we need to reverse these trends. And we need to, where we do intervene, do a better job.

Because one of the problems is, when we do require permissions, when we do try to regulate, we do a very bad Hayekian job of figuring out what would in fact mitigate the bad circumstances without interfering with the good circumstances? What would allow more competition rather than end up becoming less competition? What regulations wouldn't get captured? And so on.

And so, by 2023, it's a reasonable thing to say that there are very few areas left in which we still have the ability to move.

And so--I also would say that, you might say, 'Okay, once we've protected ourselves--once we've decided to slow down our construction of apartments--well, now we feel safer and we feel okay to then build power plants. Or roads.' But, what we've observed--and I think this is correct--over the years, is this is not actually how it works. What happens is there's a faction that goes, 'We should be safe. We should be preserving, we should be careful, we should regulate, we should require permissions.'

And the more of these things you require, the stronger this faction gets, the stronger this rhetoric gets, the stronger this background belief gets, and the easier it becomes to regulate other things.

And, the more free we are, the more permissions we don't have to ask for. The more things we do, the more people see the benefits of this approach, the more people understand what it can do for them, and the more they have the expectation that it's only normal to be able to go out there and do useful things, and the more progress they make.

And then the question is: Well, okay, so if I see a particular thing that I want to restrict--this is a standard libertarian-style thought--then I shouldn't just be aware of the fact that I'm going to screw this thing up. I'm going to make it easier for everyone to screw everything else up in a very similar way. Even if I get this particular intervention right and I do some local help, I'm risking bad things happening somewhere else.

And so, I said, 'What if we imagine this as one dial?' Then I remembered that this metaphor had been used in a fashion before, in fact, by Tyler when discussing COVID [coronavirus disease]. Because early on in COVID, Robin Hanson had this theory that if we all let young people--who were at very little risk in COVID; it was very, very safe for them to get COVID relatively speaking, except they might infect others--let them get infected first, this could create effective herd immunity while the older people were in relative hiding, and then we could take much less precaution afterwards and get through the pandemic that way.

And Tyler's response was essentially, 'Well, this is just advocating for yay, COVID as opposed to boo, COVID. This is just saying that we should just let COVID run rampant and people wouldn't be able to hear you.' And Robin is, like, 'I think you're acting like there's one dial from yay, COVID, boo, COVID.' And Tyler said, 'Yes.'

I remembered that and I thought: Okay, so what if there was a dial that was more general than that? What if there was a dial of progress? Right? And the idea was: there are people who advocate, 'Okay, we should let people in general do more things. We should require less permissions. We should open things up. We should let human ingenuity run free.' And there are people who say, 'No, we should keep a close eye on things. We should regulate them, we should require permissions.'

And then, you know, what if you thought that one of the major dangers to humanity right now--in terms of being able to sustain and expand the civilization and make life worth living--was that we've moved the dial too far down? I made sure to make it up-down and not left-right to avoid confusion, because it's not a partisan issue. And say, 'Okay, so what if we've cranked this too far down? What if one of the few places left where we had the dial in almost a maximum, full-speed-ahead mode, locally speaking, is AI--because AI wasn't a thing that was on people's radars until very recently?'

So, Tyler Cowen talks about the Great Stagnation, this concept that many things aren't advancing. Peter Thiel talks about how the world of atoms is restricted and we can't do things there, but the world of bits, you can still do some stuff. So, hence we see a lot of innovation with computers and we saw it with crypto, but why are so many intelligent, driven people getting into crypto? Well, it's because they can't be out there building power plants. They don't see opportunity there. They'd love to, but instead this is the place they can go. So, that's where they go.

So, now it's AI. AI has tremendous promise to restart economic growth, to provide more human intelligence, to make life a richer, better place, to solve our other problems, even potentially to prevent other extinction risks. So, if you want to let us proceed forward--if you want to give us a chance--what if our only chance is AI? If, in some important sense, we've lost this war everywhere else, and we also restrict AI in this way--if we lock down artificial intelligence--what if there's nothing left, and this just shuts down our last hope? That, in and of itself, is kind of an existential threat, even if it's not extinction as such. What do we do about that?

Russ Roberts: In an extreme version, that would be 1984. In 1984, everybody's still alive, but no one wants to live--none of us want to live in that world. Well, very few of us want to live in that world.


Russ Roberts: So, just to repeat the metaphor a little bit--and by the way, I see it as going not up or down but around: it goes from zero to 10. Ten is anything you want, anytime. Zero is you have to time travel back to the Soviet Union in 1928 to get permission to do stuff.

So, I think what I'm intrigued by is the idea--you don't write about this, so I want you to speculate on it if you would--that it's kind of the way the brain works. We don't really have the sophistication to hold multiple ideas in our mind at the same time. Like, 'I want to be really free in this area, but not so free in this area.' That's just too hard for me--because there's more than two. Then I'd have to hear people making arguments for each one, and I'd have to weigh each one. Just better to be 'Yay, COVID' or 'Boo, COVID.'

Now, you use COVID and lockdown, but for me, the kind of insanity we're living in right now is vaccines. I only got three shots, and: 'Oooh. Only three? Are you anti-vaccine?' 'No, I'm not anti-vaccine.' 'Are you an idiot?' I'm pro-vaccine--up to a point. After a while, I can imagine there are decreasing returns in taking, for the nth time, a shot that has never been widely tested on people's immune systems, with a new technology you've never used. It seemed kind of prudent, given that--I'm overweight, but I'm not obese. I don't have any horrible underlying co-morbidities. So, it seemed prudent to stop at three.

This guy I'm talking to at a party and he says, 'Oh, you stopped at three.' He says, 'I think you shouldn't let your politics get in the way of your health.' What?! He meant: 'Oh, well obviously you're one of those Trump voters, and they're anti-vaccine, so you're not thinking. You're just going with your bias.' I just looked at him and smiled and said, 'I don't think you know who you're talking to. This is something I think a little bit about.'

But anyway, that's a great example for me of the Dial. Are you pro-vaccine or anti? Well, it's a really horrible, terrible question for a thoughtful person, but it's the way our 'society,' quote-unquote, thinks about it. Society doesn't think. It's a complex, emergent set of opinions and interactions in media, and it's not well-defined. But somehow it's come down to pro or con, in an age when we're supposed to be really smart. Nuance is dead, and we should be really good at nuance--we have more information. Yet it's just a question of which expert you decide to trust: the pro-vaccine guy or the anti-vaccine guy.

Of course, part of the problem is that being pro or anti as opposed to nuanced gets you attention. You get more clicks, you sell more ads; there's a lot of return to being unnuanced and very little return to being nuanced. Other than that, you might be right, you'd understand there are trade-offs. So, I find this a very common and potentially very useful way of understanding why things that don't make any sense are actually maybe sensible.

Zvi Mowshowitz: Yeah. So, I think of this as: our brains evolved with very limited compute, very limited ability to think about things in detail, very limited ability to process bits of information--areas where AI will often have more affordances. But, in order to be able to reasonably process information, especially where our major evolutionary goal was to avoid tail risks--to avoid dying or avoid being driven out of the tribe--we evolved these kinds of shortcuts and heuristics and associations, and that's just how the human brain inevitably works.

My guess is it's actually better than it used to be, and we just have higher standards now. We see the possibility of being able to do better, but that common discourse has always, in some sense, been thus; and as a result, yes, absolutely. Like, with COVID, I wrote a column--before I was writing about AI, I was writing about COVID because that was what was on my mind. I did it first because I think writing is how I learn to think about things. It's how I understand things. I wrote it so I could understand it. Then I wrote it for other people because other people were getting benefit out of it.

The entire time, I was definitely trying not to say 'Yay, COVID' or 'Boo, COVID.' I was trying to figure out what would actually work. And this is something that has a very niche audience. It's definitely an acquired taste that some people can handle and most people can't. And I accept that.

But, if that's true, and if you want to influence public policy, you have to understand that, and you have to adapt your messaging and your strategy to that situation.

So, someone could reasonably say, 'Okay, you are saying, Boo, AI because you see extinction risk. You see a very huge extinction risk if we don't take a very narrow particular set of interventions.' All anybody's ever going to hear if you call for a particular narrow set of interventions is 'Boo, AI.' And they're going to do a completely different set of interventions. And even you agree those interventions are bad. Those interventions are going to prevent us from unlocking this amazing potential that we all agree AI can offer us to improve our lives in the short run. And it's not going to stop the dynamics that you are worried about, that are inevitably going to lead to more and more capable systems that we don't know how to control, that are going to end up in control of our future. So, you're better off not doing that and then trying to figure out a better way forward when we get there: because your path is hopeless and will also damage our ability to build houses and roads and energy plants and everything else.


Russ Roberts: I think I'm a little uncomfortable with the broadness of 'Boo, progress, yay, progress,' because I think it might be a little complicated. And maybe we'll talk about it in a sec. But for me, the other area I think this works is: You do a survey of people and you say, 'Do you think we should spend more money on education?' And, a lot of people say yes; and they don't know how much we spend on education. They've never looked at a study of whether it's effective. They just say, 'Yay, education.'

Now, in one sense it's just expressive voting. They're just conveying to you, the pollster, that they like education. They don't really mean more. But I think most of them do mean more. I think they assume that, 'Okay, well, it's true that maybe it doesn't work so well. It's true that the bang for the buck might be limited, but more is always better than less of education.' And it's not even so much that they are one-issue people. They have lots of issues they want more of and they don't have to worry--they don't ever think about whether there's a budget constraint or limited resources. It's just that, 'I want that to get a vote.' So, yeah, more education, more fill-in-the-blank, more of what I care about. And that's what the political process is going to listen to.

Progress is a little bit more complicated, I think; but you might think about it more as more control versus less control, rather than progress versus, you know, a stagnation. I think the people who are 'Boo, progress' are uneasy with the uncontrolled aspect of it, the idea that you have to ask for forgiveness rather than permission.

And so, I think part of what's going on in this dynamic, if you're right--and I have no idea whether you're right or not; it's just really interesting to think about--is that if I have an underlying idea that control is good, or an underlying view that control is bad, then the idea that I would pick, 'Oh, well, it's worse in this area than in this one,' or 'It's good in this area but bad in this one'--that's really hard. So, I'm only going to pick one side: yes or no on control.

And so, you get people who basically want to control things or want others to control them for them--regardless of whether it's possible, regardless of whether they're going to do it well. Which seems to be irrelevant by the way, again: whether money is spent well in education seems to be irrelevant. Whether the control actually accomplishes what people really want, they pay very little attention to. You can argue they don't have much incentive to, and they don't have much ability to understand it, but I'm arguing something different. I don't think they care so much because this is--that's a comfort thing. It's a security thing.

So, what do you think of that idea about that it's more of control than progress/no progress?

Zvi Mowshowitz: Yeah. I don't think people are actually saying 'Boo, progress,' any more than they were ever saying 'Yay, COVID.' Right? It's like being called anti-choice or anti-life: people would say, 'That's not a good characterization of my position,' and they would both be right--'That's not how I think about myself.'

So, like, for the education people, I do see exactly this mistake. If you look at sources like Piketty and other people who model education as an input, they often literally just say, 'These people have more education because there were more inputs. Because you spent more hours, more years in a school, you have more human capital.'

Russ Roberts: Infuriating. Infuriates me.

Zvi Mowshowitz: It's completely silly. It is just not how this works. They don't think about the quality or the effects.

And similarly, I think if you ask people, 'Are you in favor of control? Are you in favor of restrictions?' they would say, 'Locally, sometimes, yes,' once they talk themselves into that. But mostly they would not generally say that. They would say things like: they are pro-safety, or they are pro-responsibility. They'd use their own words; and that's how they think about this. Or: they're against risk, or they're against recklessness. They might have various levels of sophistication of arguments--sometimes good arguments, sometimes broad emotional heuristics, and everything in between.

But, it's the tendency. And one of the first things people say, every time, is: 'Well, we don't let you use a nuclear weapon, so why should we let you do that?' Or: if you can't even build a house, if your hairdresser needs a license, if a pilot needs 1500 hours of flight time--which they obviously don't--then, 'Well, clearly any job you want to do, you should have to ask permission of the state.' That's just the standard that we've set. Then it becomes the baseline of the argument that we have to make. Right? And this becomes a very, very difficult prior to overcome.


Russ Roberts: Yeah, that's a very sophisticated version of this that I absorbed from your essay--maybe because I misread it or maybe because it wasn't as fully developed as you're developing it now. The idea that I'm going to take the logic of my application here and then apply it somewhere else. I don't know if that's true, but I really think it's provocative and interesting to think about. What kind of reaction have you gotten from it?

Zvi Mowshowitz: So, I know that Tyler specifically thinks that this is not what he is doing; and we're planning to talk and hopefully figure things out more, because I want to understand. I try to respond to many of his ideas in specific detail. And I do want to understand whatever is going on there, and, to the extent that he's making these mistakes, hopefully figure out how to go forward in the most productive way possible. Mostly people have seen this as a very interesting proposal, something to think about. I've had a mostly positive reaction.

I haven't seen a reaction from Marc Andreessen, but the only other serious response to him I saw was from a man named Dwarkesh Patel, who runs the Lunar Society podcast and who wrote a very thoughtful, point-by-point response. At first I thought I would write a similar response. And then I said, 'No, it doesn't make sense'--because if the load-bearing part is not in the individual points, if somebody is not actually looking to have good logic that they examine carefully to figure out the understanding, then addressing their individual points just doesn't address their cruxes.

It won't be convincing. Right? So, it's not that it was the wrong thing to do--he performed the service of responding point by point, pointing out many of the conceptual problems in the essay, pointing out why many of the arguments, as spoken, don't really make sense in detail. And the response Marc Andreessen gave was to block him on Twitter--almost immediately--and otherwise say nothing. And that tells me what I need to know. I would love to engage in detail with such people and actually talk about these disagreements in any form. But, it's difficult.


Russ Roberts: Do you think there's any tribalism involved in these early days? I mean, regardless of which side one is on in these issues--and people ask me all the time, 'So, where do you come down?' I say, 'I don't really--'. I'm a little worried. I'm more worried than I was a year ago. But I'm not scared. And maybe I should be. So, I'm open to that. That's not very helpful. It's not what anybody wants to hear.

And I wonder how much of where people come down on this issue is a tribal identification with people making the kinds of arguments you're describing: 'I'm going to sympathize with Tyler,' say, or Marc, 'because I am kind of pro-progress.' Not kind of: I've always been a huge advocate. 'I want to be in that group. And I'm going to look for ways to feel good about it. And maybe that's what I'm really doing if ultimately I come down on the side of let's-let-it-rip.'

And I do think--I have always--when I was younger, I liked to believe that people looked for the truth. As I get older, I'm not as convinced of that.

So, some of it is: you're suggesting a kind of Machiavellian strategic argument. I'm suggesting it could be as simple as--again, I'm not trying to explain Tyler or Marc, but just in general where people come down on these issues--'I don't want to be like that person. That's a worldview; there are other pieces of it that creep me out.'

It's a version of, sort of, intersectionality. It's like: I can't be nuanced. I can't go case by case. Everything has to line up together. So, if I'm against AI, I'm against nuclear plants, too. And I don't want to be against nuclear plants. Right? Me, personally, I think they're a really good idea, and I think it's a terrible mistake that we have so few of them. So, maybe that's why I am more susceptible to the pro-AI side. That's your argument, I think, about the Dial of Progress.

Zvi Mowshowitz: I think we've definitely seen that people who, in my mind, think very well about economics in other realms--who share these perspectives and do that professionally--often come out with these very pro-AI, anti-regulatory positions: exactly what I would predict them to have in any other circumstance. In almost every other circumstance, I would mostly agree with them. And it makes sense that they would have these perspectives, on many levels.

I don't think this comes from a Machiavellian perspective for most of them. I think it comes from: 'My heuristics all tell me this is almost always the right answer. This is where my priors should heavily land. You need to overcome that in order to convince me otherwise.' And then this leads, sometimes, to not considering the arguments, not giving them the time of day, or just exploring possibility space to find a plausible story you buy that says this is going to be okay.

Then the amount of this that is conscious or unconscious, or planned, is an open question. I don't mean to imply ill motives of anyone. Again, I think that everybody involved wants the best outcomes for everyone. I think there are very, very few people who don't want that, and they tend to very vocally say they don't want that. You can tell who they are and you can react accordingly. Occasionally someone says, 'Oh, how dare you, speciesist, not want AI to flourish? It's going to wipe out humanity, and that's good.' Then you say, 'Okay, I now know that you think that. Thank you, and I hope more people hear you, because I expect their reaction to be helpful in not having that happen'--as opposed to the opposite. That's good: at least they're being open with me.

So, when I see these reactions--I also notice that the people who have thought long and hard about the risks--the extinction risks--from artificial intelligence, from long before the current boom in AI, and who are the loudest people advocating in favor of doing something about extinction risks--they tend to also have come to these realizations in the economic sphere much more than most other people.

Whereas with traditional tribal affiliations, most things in American politics get quickly labeled: 'This is red, this is blue; this is the blue position, this is the red position'--like COVID was. Right? And we haven't seen that in AI. If you look at the surveys, you see almost no partisan split whatsoever. This is exactly the least tribal thing of this level of importance that we've ever seen, for this long. We've been very fortunate; and I love that, and I hope we can sustain it for as long as possible and have relatively intelligent dialogue.

Instead, we have this kind of weird discussion where we have economically very good views, actually, on both sides of the discussion. We're able to have a very nuanced conversation where those views happen to bias people in a particular direction. And yet, the people who are worried have managed to overcome this because they've thought long and hard about the other aspects of it.

I do think that the human brain, as I said, works on these shortcuts, works on these heuristics. And so we will always, to some extent, pick up on vibes, on heuristics, on general associations, on simplifications--and notice that other people will act this way, too, and that we can't speak with too much nuance if we want to be heard. So, we'll always have these tribal, as it were, issues--these partisan-in-context issues--where we go back and forth. We do our best not to do that: try to be charitable to the other side, try to engage with their actual arguments.

I try to inform people so they can form their own models. I talk about models not in the formal sense that Tyler Cowen means--that you should write down a scientific model and submit it to a journal, with lots of math and all these dependencies and a precise equation--I'm talking about the kind of model you form in your head, where you think carefully about a situation. You have an understanding, on some level, of all these different dynamics, and you try to bring it together to form as solid a distribution as you can over what you think might happen.

In my use of the word 'model,' I could ask, 'Tyler, what is your model of what happens after AI?' And he's talked about some aspects of his model. He's said: AIs will have their own economy. They'll use crypto to exchange value, because it's the efficient way for AIs to exchange it. And maybe we'll be the dogs to the AIs, as dogs are to humans. He's talked about this metaphor a few times, I believe, including on your podcast: the AIs will train us the way we train dogs. My imploring would be: think more carefully about why the equilibrium between dogs and humans holds in the real world, and whether or not those dependencies hold in the case you're imagining, in the way you're imagining it.

Russ Roberts: I don't know if Tyler really expects a mathematical model, and I don't know if we talked about it when I interviewed him, but he actually encouraged people to put things into the peer review process, which is a process I don't believe anymore leads to truth. So, it seems like a bit of a red herring.


Russ Roberts: I think the word that describes what you're talking about is a narrative, and some narratives are more plausible than others. And I'm okay with a narrative rather than a mathematical model on either side of this debate. I think the people on each side need better narratives--a better story that I'm going to find convincing. I find neither side convincing. Eliezer Yudkowsky came up with the most creative narrative. There were some really wonderful flights of intellectual fancy there, and I don't mean that in a derogatory way at all. I found it extremely mind-expanding, but not quite to the level of [inaudible 01:15:51]. He ratcheted up my fear a little bit, though, because it was a narrative I hadn't thought of and is somewhat possibly worrisome.

And I think this--we're never going to have any data, I don't think. Almost by definition, by the time we get the data on whether it's sentient and destructive--it'll be a great science fiction movie--we'll already be in prison and they'll be harvesting our kidneys. You'll just be in line to get your kidney removed for the paperclip factory.

But, I think we need better narratives. I think we need stories and logic--not formalism, but logic--about why a narrative is plausible, either because it mirrors past narratives that turned out to be plausible or, better, because it fits this particular unique, very different case. And, if I had to speculate on why really smart people diverge so wildly on this, it's that there's no data. There's very little evidence, and we're speculating about which narrative is more plausible. That seems unlikely to be resolved in the near term. Maybe AI can help us fix it. I don't know.

Zvi Mowshowitz: So, I think that when we talk about narratives, an important question is: To what extent is your narrative or model made of gears? Right? To what extent does it have details about how different parts of it lead to other parts? Like, what are the physical mechanisms underlying it? Get Hayekian detail into what you're talking about in a real way.

And so, in my observation, when I look into the narratives that are told about why everything will be fine, I don't see very many plausible gears there. Even if there are gears, I see them as, like: Gears don't turn that way. These gears wouldn't work. This is not the outcome that you would expect. Your gears lead to something else. And, I strive to continuously improve the gears in the other models.

And, it's definitely difficult to work out the details, but I do think we have more data than no data. I push back on the idea that all we can do is tell each other stories and proposals--because we know a lot of things, in particular about what capabilities these systems will have, and also about how humans will react to these systems and what actions they will take.

So, one of the--a big emphasis is: back in the day, one of the questions was, 'How cautious will we be dealing with these systems as they gain these capabilities? What affordances will we hold back from these systems? What capabilities will we try to not add to these systems to contain our risk?'

So, for example, one idea was, 'Well, we wouldn't be so foolish as to hook up potentially dangerous artificial intelligence just straight to the Internet, let it do whatever it wanted, ping any website with any piece of data and do anything.' Because then, if it was a dangerous system, we'd be in real trouble.

And, it turns out: No, humans find that useful. So, we just do that, including during the training of the system, just right off the bat. Right? Every time. And, we've gotten used to it: now it's just fine. So, we can stop worrying. That theory has been proven wrong and this other theory has been proven right.

Another question is, you know: One of the things--Marc talks about it as well--Systems don't have goals. AI systems are math. They don't have goals. Well, maybe they don't have goals inherently. That's a question that's interesting that we can speculate about as to whether they would evolve goals on their own.

What we do know is that humans love achieving goals, and that when you give an AI system goals, it helps you achieve your goals. Right? At least on the margin, at least starting out, people think this. And so we see BabyAGI and AutoGPT and all these other systems. It turns out that with a hundred lines of code you can create the scaffolding around GPT-4 that makes an attempt to act like it has goals--to take actions as if it had goals and to act as a goal-motivated system.

And, it's not great because the underlying technologies aren't there, and we haven't gone through the iterations of building the right scaffolding, and we don't know a lot of the tricks, and it's still very, very early days.

But, we absolutely are going to turn our systems into agents with goals--agents that are trying to achieve goals, that then create sub-goals, that then plan, that ask themselves, 'What do we need to do in order to accomplish this thing?' And that will include things like, 'Oh, I don't have this information; I need to go get it.' 'I don't have this capability, I don't have access to this tool; I need to get this tool.' And it's a very small leap from there to, 'I'm going to need more money,' or something like that. And from there, the sky's the limit. So, we can rule out through experimentation, in a way that we couldn't two years ago--right?--this particular theory of Marc's, that systems in the future won't have goals in a meaningful sense: they will, unless we take action to stop it.
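[The scaffolding Zvi describes--a short loop that wraps a goal-less model so it behaves like a goal-directed agent--can be sketched in a few lines. This is a hypothetical illustration, not the actual AutoGPT code; `call_model` stands in for a real LLM API call and is stubbed with canned plans so the sketch runs on its own.]

```python
# Minimal sketch of AutoGPT-style agent scaffolding (hypothetical names).
# A plain loop around a model that has no goals makes it *act* goal-directed:
# decompose a goal into sub-goals, then execute each one in turn.

from collections import deque


def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call, stubbed for this sketch:
    a 'PLAN:' prompt returns newline-separated sub-goals; a 'DO:' prompt
    returns a result string."""
    if prompt.startswith("PLAN:"):
        return "gather information\ndraft a summary"
    return "done: " + prompt.removeprefix("DO: ")


def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    """The goal-directed loop: plan, queue sub-goals, execute them.
    Real scaffolding would also let completed steps enqueue new sub-goals
    ('I need more information', 'I need this tool')."""
    tasks = deque(call_model(f"PLAN: {goal}").splitlines())
    log = []
    while tasks and len(log) < max_steps:
        task = tasks.popleft()
        log.append(call_model(f"DO: {task}"))
    return log


print(run_agent("write a report"))
# → ['done: gather information', 'done: draft a summary']
```

[The point of the sketch is how little machinery is involved: the "goals" live entirely in the loop and the queue, not in the model itself.]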


Russ Roberts: I think that's an interesting intellectual question. And I think part of the reason the skeptics--the optimists--are more optimistic, and part of the reason we are in some sense just telling different narratives, some more convincing than others, is that we don't have any vivid examples today of my vacuum cleaner wanting to be a driverless car--an example I've used before. It doesn't aspire. Now, we might see some aspiration, or at least perceived aspiration, in ChatGPT at some point; but I think part of the problem in getting people convinced about its dangers is that that leap--the sentience leap, the consciousness leap, which is where goals come in--doesn't seem credible. At least today. Maybe it will be, and I think that's where you and others who are worried about AI need to help me and others who are less worried to see it.

But, either way, isn't the much more worrisome thing that a human being will use it to destroy things? I mean--that's like saying, 'Well, we've got this automatic rifle. What if it jumped out of the hands of a person and starts spraying sharp bullets all around because you've given it this motor or something that causes some centripetal movement?' Blah, blah, blah.

That's not the problem. The problem is a person is going to grab it and use it to kill people.

And it seems to me that that is--you know, 'Could it get out?'--that's not the bigger worry. The bigger worry is someone lets it out. Someone harnesses it to do evil because they want to be noticed, because their life is miserable for a hundred reasons--that humans are not just creative but also destructive. It's going to be really hard to keep that from happening. I can't imagine us stopping it.

Zvi Mowshowitz: So, I think in the short term, that's absolutely, like, the only risk. In the next month or the next year even, if AI does harm, it's because some human directed it to do harm.

But, I do think that, like, even without malicious intent, there's going to be tremendous economic incentive--tremendous just personal incentive--to hand over more and more of the reins to AIs that are operating more and more without checking with humans, without getting permissions from humans. Because this is what gets us what we want. This is what makes the corporation more profitable. This is what allows us to do our job better. This is what just achieves all of our goals. Right? And so, a lot of these goals are going to be maximalist goals and be things like, 'Maximize profits for the corporation.'

And so, with these AIs, you know, on the loose, in this sense, even without malicious intent, you're going to have a serious problem. Because, the AIs are not--they're going to be competing and they don't have to keep each other in check. And, you have the obvious externality problems that arise in these situations that they're not going to internalize and so on.

Russ Roberts: Yeah, well, we just did this episode with Mike Munger on enforcing obedience to the unenforceable. And it's this idea that norms are a very powerful way that we restrict things. And, I started to say, 'Well, I can't really expect AI to have norms or ethics.' People are talking about ethics, 'Give it ethics. Just program ethics into it.' Like that's easy. But, if people are right that it's going to have some sentience, it could develop norms maybe. But why would it develop norms that would be good for humans? It would be hard to argue, it would seem to me.

Zvi Mowshowitz: Yeah. But the problem is that if it develops norms that make it less competitive--that make it worse at getting the human that's operating it what they want--that human is going to select against those norms; and so it's not going to go the way we want. Even if we get kind of lucky--right?--and some of them happen to evolve these norms. We'd have to do it intentionally and carefully.

We have an interesting situation. So, like, AIs are really, really bad at observing norms that they don't actually get rewarded or punished for observing or not observing. But, they are also very good at actually obeying the rules if you tell them they have to obey the rules.

And so, this middle ground that was talked about in that episode gets completely destroyed. Right?

You can move a lot more things into the rules set--right? Things where the human knows the speed limit is not actually 65. The human knows it's actually 72. And, the human knows in this situation, you're actually supposed to break the traffic laws because that's silly.

Whereas the AI literally cannot break the traffic laws. It has a restriction on it, like, 'Nope. Never allowed to break the traffic laws.' It will never obey any heuristics or norms, including norms that require technically breaking the law. And so, our old human solution of norms breaks down entirely.

I also really appreciated--when you have AI in the brain, everything is in some way a metaphor for the problem. And so, this idea of--Marx writes this huge thing about how capitalism is terrible; we're going to overthrow it; we're going to create this Communist utopia. And then, he writes five pages that are completely vague about what the Communist utopia is going to be.

And, similarly, we have many other people who do similar things.

And that, to me is like: AI is another example of this where a lot of people are saying, 'We're going to build this amazing AI system that's going to have all these capabilities and then we're going to have this Brave New World where everything is going to be awesome for us humans and we're going to live great lives.' And then, they spend one paragraph trying to explain, 'What are the dynamics of that world?' Like, what are the incentives? Why is this system at equilibrium? Why do the humans survive over the long run given the incentives that are inherent in all the dynamics involved?

And that's if we've done a lot of the hard work already, that I think is going to be very, very hard and that I'm not confident we will successfully do.

But let's say that we do it: We're going to align the system.

Well, what exactly did you do when you aligned the system? What did you tell the system to do? Under what circumstances? What rules is it applying?

It can't just be a bunch of fuzzy human norms that just sort of come together. Have you gamed out what happens after that when you put all of these new entities into the system and then let normal dynamics play?

And the answer is that if they have answers to these questions, they're very, very basic simple models that have no detail to them. That, whenever I try to flesh them out, I can't. I don't know how.


Russ Roberts: The part I like--one of the things I like--I liked all of that what you just said. I think that's really interesting. But, I've noticed in the last six months, 12 months--must be a European Union [EU] thing or something, U.S. law that came into effect--there's a lot more websites asking me if I want cookies.

Now, deep down, I think cookies are kind of creepy and dangerous and allow some surveillance that I'm not excited about, but I just take the cookies. Darn it. I want the website. I want to get to the answer. I'm in a hurry. And, I think that the really good science fiction, Brave New World 2.0, is going to be exploiting that human desire for ease, comfort, productivity, whatever it is, goals, as you mentioned. And, we're not going to worry so much as individuals; and it's going to be hard to get people to rein the whole system in.

So, I think we're headed there. I'm not sure the quality of life will be better. I've been skeptical--I don't know if you've heard it, but certainly of Eliezer, and I got a word in about it with Marc which, again, hasn't aired yet--but, intelligence by itself doesn't have a great track record in my view in human affairs. Some rather than none? Great. A lot rather than a good amount? Very mixed record.

Now, maybe this will be different, but the utopian story, in and of itself--forget the existential risk part--I'm not convinced of. I'm not convinced of the fact that if we turn to it for parenting, say, or our romantic advice, I'm not sure that's going to make us feel better as humans. It might make us feel a lot worse. And, I talked about it a lot with this episode that came out today with Jacob Howland that some of our most human skills are going to atrophy. Maybe it won't matter, but I do feel like it's time to put seat belts on, folks. We're going to get into a very bumpy ride and it's coming very much in our own lifetimes.

Zvi Mowshowitz: Oh, definitely put your seat belts on. Everything is going to change. A lot of these norms and ways of doing things are going to fall away and we're going to have to adapt to the new world.

I think with atrophying, different humans are going to have a choice to make because we're going to have to decide: What are the things we're going to pursue for their own sake or because we don't want to atrophy them? And, what are the things that we do want to pursue?

Another thing that I think is interesting with intelligence is this idea of a shift in how you do something. So, GPT right now is often doing something in an imitative way. It's doing something in a less-intelligent way than we are. We are doing something where we understand the underlying structure of the system, and then we think about it logically, and then we figure out from very little data, with very little memory, with very little compute, what the answer is.

And, the GPT just brute forces this through vibe and association and correlation because that's the way that it currently works. And then, over time, it hopes to approximate that. And then, as it gets better, it moves to another system.

And, humans often have the same thing where you start by imitating the sounds and physical actions that you see around you, and then you slowly understand the underlying structure, and then suddenly, it snaps into place. And, now instead of just trying to vaguely imitate the jazz sound you're hearing around you, you can start improvising and making real jazz and you start to understand the thing and then it evolves from there. And then, there are many of these leaps we make, and that what's often going to happen is we have a certain way of doing things that is not the most efficient way of achieving things, but the more efficient ways are well beyond--like, they require a phase shift in how we understand things.

And, as the AI gains its capabilities, it's going to start doing these phase shifts. They call this grokking in training, as I understand it: the loss function says you're not doing a very good job--you're doing a pretty lousy job at figuring out what you're supposed to do here--and you keep on doing a slightly less lousy job, slightly less lousy job, and then suddenly, wham, the system gets much better at it. It has grokked--it has come to understand, in some sense. It developed a different way of understanding things; and humans do this as well. And so, what will often happen is that the AI will shift from the current way that a goal is achieved, that something is done, to a new way. And, one of the worries is that this kind of shift breaks our current ways of keeping the system in check, of understanding the system, of predicting the system, of making sure the system is going to do the things that we want. And, this is one of the things that I worry about.

Russ Roberts: Yeah. It's a little like chess or Go programs. They started very primitively and they get a lot better now. Life isn't like chess or Go. Very different board, even though Adam Smith used his metaphor pretty well.


Russ Roberts: I just can't stop thinking about this idea that even if it's not really good for us, we're going to probably do it anyway because it's pleasant. And, certainly if it's not good for us as a species. I'm thinking about the Amish. There are a lot of people decrying the cell phone these days. I worry about it sometimes. I talk about a near compulsion I have with it sometimes. I view it as unhealthy for me. And, the Amish, they go case by case, by the way. They take a technology. They say, 'We'll use a wagon. We're not going to drag stuff on the ground. We could use a wagon,' and they could have a car if they need to get to the hospital, say, but they're not going to buy a sports car; and they're not going to do other things, and they're not going to change their farming in certain ways.

And I think those of us on the outside--it's of course true of a religious life in many, many ways--on the outside go, 'There's something really beautiful about that.' But, the fact is most people don't find it beautiful enough to adopt, because it's hard. There are hard parts to it, and almost by definition--communities that are held together and that have all kinds of returns and belonging and meaning and purpose have hard parts. Because otherwise, it's not interesting. It doesn't hold together.

And so, the idea that--like I like to say on this show a lot: I think it's probably not true, the more I think about it, that, 'Oh, norms will evolve to cope with this, and we'll understand you shouldn't use AI for this, but it's okay to use it for that. These norms will come along and they'll keep it within a human context.' And, I think that's true, probably, for most people. I think they're going to use it a lot because it makes life easier. It makes them look better, look smarter, make more money, and it's going to be really hard. I'm not sure. Like I said, I don't think we're going to slow this down much.

Zvi Mowshowitz: The problem is it's very difficult to slow this down in a meaningful way. It's also very difficult to ensure a good outcome. And, if you have two impossible things and you need to solve one of them or else you're in a lot of trouble, you need to pick which one you think is the best one to act on and do the best you can.

With the Amish, I think they achieve a lot of very good things. And it's a question of, 'Is this more valuable than the things that you give up or not?' But they do this because they have the economic affordance to be able to do that. Their lifestyle has these costs, these economic costs, these economic benefits, that allows them to produce the things they need to survive, that allows them to turn a profit, that allows them then to be continuously purchasing more land and having more people that can survive on that land.

And, as we are forced to compete--as we have more and more capable AIs and AI systems with more and more affordances--will that kind of system actually continue to be economically competitive? And, if it's true locally that it can survive on its own resources, will it be left alone to do so by something that covets those resources? Simply from a, 'I can achieve more of my goals if I have more of the land, if I have more of the energy from the sun available to me,' or whatnot. And so, it's not necessarily safe to be Amish, even if you can make the very difficult emotional decision to stay on the farm and enjoy this very carefully-selected lifestyle that you think has value.

Russ Roberts: My guest today has been Zvi Mowshowitz. His Substack is Don't Worry About the Vase. I strongly recommend it if you're interested in keeping up. It's hard to keep up. He keeps up quite well. I'm sure even Zvi, there are a few things he doesn't know; but he knows a lot of them, and more than most. Zvi, thanks for being part of EconTalk.

Zvi Mowshowitz: Thanks for having me.
