The Past and Future of AI (with Dwarkesh Patel)
Apr 28 2025

Dwarkesh Patel interviewed the most influential thinkers and leaders in the world of AI and chronicled the history of AI up to now in his book, The Scaling Era. Listen as he talks to EconTalk's Russ Roberts about the book, the dangers and potential of AI, and the role scale plays in AI progress. The conversation concludes with a discussion of the art of podcasting.


READER COMMENTS

Adam
Apr 28 2025 at 3:52pm

Regarding the “lost pleasure” of Russ’s granddaughter, I think we need to make a distinction. I would not want to live in the year 1000, but it is still possible to read the thoughts of people from that time with great pleasure. Pleasures come and go as social conditions change, but what unifies us with Plato and Sophocles, such that we can still read them and feel like they have something to say to us? It is the moral, political, and psychological questions that have remained constant, or constant enough anyway. To me the question is, will future people still orient themselves by what they take to be the true, the beautiful, and the good, and then struggle with the resulting uncertainties and paradoxes? If yes, then they and we will remain part of some kind of continuum. If not, then the future is on the other side of a wall those of us living now can’t cross. Good luck to whatever kinds of creatures are over there.

Ajit Kirpekar
Apr 28 2025 at 5:43pm

I wanted to make a few comments regarding AI that weren’t discussed in the episode:

I think the transformer architecture wasn’t hailed quite enough for what it did for the industry. Before it was written, and since, thousands and thousands of papers have been published in ML, but no one paper upended the industry like it did. It fundamentally changed how Natural Language Processing is done, effectively replacing all the algorithms that came before it. It’s as if, before it arrived, people were trying 4-100 different types of models based on the type of problem. In one fell swoop, it became the de facto architecture.
The genius of the transformer partially lies on the compute side of things (which the guest touched on), but not completely. The other side of its brilliance is the fact that it can model very long sequences in a way where things from the distant past that still have relevance today can be incorporated. Think of a sequence from T0…Tn. In past models, by the time you got to step Tn+1, T0 was long forgotten even if it still had relevant influence. People were aware of this problem and tried to find variants of the existing ML models to address it. They mostly failed. The transformer largely succeeded through something called an attention head (I won’t go further into the math, but a rough sketch follows below).
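As a rough illustration of what an attention head computes, here is a minimal sketch of single-head scaled dot-product self-attention. The sizes, random weights, and names are invented for the example; real transformers add multiple heads, causal masking, and positional information.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X holds one vector per token, T0..Tn, as rows of shape (seq_len, d_model).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # query/key/value projections
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # pairwise relevance, (seq_len, seq_len)
    weights = softmax(scores, axis=-1)          # each position attends to every position
    return weights @ V                          # so T0 can still influence Tn directly

# toy example: 5 tokens, width 8 (all values invented for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)      # (5, 8)
```

The point relevant to the comment above is the `weights @ V` step: every position mixes information from every other position in a single operation, rather than passing context along a chain where distant tokens fade away.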
Russ starts to talk about the philosophical nature of AI and how it might replace human cognition and creativity. Those are fair concerns, but I don’t think that will happen within this architecture and the reasons why are actually baked into the technical aspects of how it works. I will try to summarize it here:

1. Models begin as large pretrained creations. Pretraining essentially scrapes all the text from the web and builds out the basic semantic relationships between words.
2. The next phase is called fine-tuning. It builds on the pretrained model by feeding in high-quality data: peer-reviewed journal articles, well-written academic tutorials, high-quality medical information, and so on. This data is quite expensive to obtain.
3. The final phase is Reinforcement Learning from Human Feedback (RLHF). Humans rank-order the quality of results returned by the fine-tuned model, and a reinforcement learning algorithm then uses that feedback to steer the model toward good results.

The critical limit of this architecture when it comes to cognition really comes out of step 3. In principle, the reinforcement learning part allows the machine to explore new ideas and gives a semblance of creativity, where the prior two phases were semi-brute-force learning of human text. But in reality, this algorithm is still very much anchored to the quality of the fine-tuned model. By that I mean: because RL by itself tends to wander around and give nonsensical results, its output is tethered directly to the base language model, forcing it to explore only a certain region of the space. That essentially limits its creativity even as it is allowed to explore. The other part is that reinforcement learning learns from human measurement and does well for things that have a proper ranking of quality, like mathematical proofs or computer code. It does less well for things with much more amorphous definitions of quality or context, like poetry, a speech for the right audience, or deeper questions of philosophy. A rough sketch of that tethering appears below.
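To make the "tethered to the base model" point concrete, here is a toy sketch of the KL-penalized reward commonly described for the RLHF stage. It is not any particular lab's implementation; the numbers, names, and the beta coefficient are invented for illustration. The policy is rewarded for pleasing the learned preference (reward) model but penalized whenever its token probabilities drift away from the frozen fine-tuned reference model, which is what keeps its exploration near the base language model.

```python
import numpy as np

def kl_penalized_reward(policy_logprobs, reference_logprobs, reward_model_score, beta=0.1):
    # policy_logprobs:    log-probs the RL-tuned model assigned to the sampled tokens
    # reference_logprobs: log-probs the frozen fine-tuned model assigned to the same tokens
    # reward_model_score: scalar score from the learned human-preference model
    # beta:               strength of the tether; larger beta keeps the policy closer
    #                     to the reference model
    kl_estimate = np.sum(policy_logprobs - reference_logprobs)  # simple per-sequence KL estimate
    return reward_model_score - beta * kl_estimate

# invented numbers for a 4-token completion
policy_lp = np.array([-1.2, -0.8, -2.0, -0.5])
reference_lp = np.array([-1.5, -1.0, -1.9, -0.9])
print(kl_penalized_reward(policy_lp, reference_lp, reward_model_score=2.3))  # 2.22
```

If the reward model can only score verifiable things such as proofs or code, that is all this objective can reliably push the policy toward, which is the asymmetry the comment describes.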

Dan K
Apr 28 2025 at 10:29pm

Today and everything that will follow has its beginnings in Prometheus’s theft of fire from the gods. Fire is technology and we are a different species — bigger brains, smaller gut, and smaller jaws just to name a few — because of the ability to cook our food. Every human after fire is a transhuman.

Socrates bemoaned the use of writing and books because he thought it promoted lazy thinking and worse memory, which echoes our current conversation around phones. Maybe he was right, and maybe phones are rotting our brains, and maybe AI will take a part of our humanity away. But perhaps it's just one more step in a long continuum of reinvention in the face of uncertainty.

That being said, as an English teacher I am still at a loss for how to grapple with ChatGPT in the classroom. I want to find the juste milieu between recognizing the reality of LLMs and impressing upon my students the importance of doing the work on their own. But truth be told, I am using ChatGPT all the time, and the irony of me scolding my students for using ChatGPT on assignments that I wrote with ChatGPT is not lost on me.

Luckily I am young(ish) and new to the field of teaching, so while it feels like a big problem, it is also an exciting one to navigate.

I really enjoyed the latter half of the podcast where you both talked about the craft of podcasting. Don’t change a thing, Russ. You are one of a kind and one of my favorite parts of Monday.

 

Gregg Tavares
Apr 29 2025 at 3:42am

It’s always frustrating to me when someone says LLMs/Transformers/whatever have passed the Turing Test. Nothing has passed the Turing Test! Stop the exaggeration! The Turing Test requires a computer to come across as indistinguishable from a human via text communication, with no limits on what you’re allowed to ask it. LLMs still fail all kinds of simple questions that easily make it clear they aren’t human.

Here is ChatGPT 4o

Q: Alice has 9 brothers and she also has 7 sisters. How many sisters does Alice’s brother have?

A: Alice has 7 sisters. Since Alice is one of the sisters, her brothers also have the same 7 sisters (including Alice). So, each brother has 7 sisters. Answer: 7.

Wrong: If Alice has 7 sisters, then her brothers have those 7 plus Alice: 8 sisters.

It’s easy to trip up an LLM. They’re great for many things and I use them daily, but no, they have not passed the Turing Test.

Sylvain Ribes
Apr 29 2025 at 2:45pm

From March this year:

https://arxiv.org/abs/2503.23674

 

In a controlled setup, in a head-to-head match vs. a human, GPT-4.5 was judged to be human 73% of the time, versus only 27% for the actual human.

I’m not sure it makes sense to keep arguing that they would _not_ pass the Turing test when by most reasonable standards they would.

And I feel like your way to trip the LLM actually argues against your point, since I believe most humans would themselves fail this question at first glance 🙂

Gregg Tavares
Apr 29 2025 at 8:13pm

Like I said before, “with no limits.” They limited the interaction to 5 minutes, and during those 5 minutes the tester talked to both a human and a bot. So no, they did not pass the Turing Test. I can make the same test here. I’ll limit it to 5 seconds:

Tester: Hello

Partner 1: Hi

Partner 2: Hello

Time’s up! OK, which was the human and which the bot? 1 or 2? Obviously you can’t tell, because you added an artificial limit. Five minutes, during which the tester has to type, means they likely only got a few responses from either, hardly enough time to test. Further, there’s the limit of response time: a bot will spew out a few paragraphs of an answer, while a human takes much longer to type. So either you have to slow down the bot, in which case you get even fewer interactions in those 5 minutes, or you need a speech-to-text system for the human and hope they’re a fast speaker and that the system gets the hard stuff, like names, correct, which is unlikely.

So no, no bot has passed the Turing Test.

Shalom Freedman
Apr 29 2025 at 7:59am

‘Where are they?’ Why haven’t all the AI super-minds in the universe already combined in such a way as to be controlling everything that happens, not only on our own little earth but everywhere in all the universe or universes? Why haven’t they already made these short-lived biological beings a small stage of the distant past? Why is it that human beings are just now coming around to developing what should have already been developed a non-finite number of times elsewhere, in a non-finite number of places, in all that has already been?

Catherine M Jones
Apr 29 2025 at 9:45am

A fascinating episode Russ- thank you! K

John K Dawson
Apr 29 2025 at 2:35pm

President Roberts,

Another great episode. I really enjoyed your discussion of podcasting. You mentioned finding interesting guests. I suggest Brian Potter, author of the Construction Physics Substack, would be an excellent fit for an episode. His Substack focuses on the technical difficulty of building things, in particular the lack of productivity gains in the construction industry, but many other issues as well (how likely Boom is to be able to produce a supersonic passenger jet, for one).

On the AI topic, I would be more convinced the AI future is here if self driving cars were actually self driving.

Sylvain Ribes
Apr 29 2025 at 2:53pm

I am a fan of both EconTalk and Dwarkesh’s Podcast, so I was thrilled to listen to your conversation.
It’s quite amusing to contrast the two of you: Dwarkesh with his youth/SF-coded lexicon and rapid-fire manner of speech versus your more fatherly (dare I say grand-fatherly!), introspective style.
The way each of you approaches your respective podcast appears similarly contrasted. In essence, I’d argue he seeks knowledge where you seek wisdom—two sides of the same coin perhaps. Either way, I was delighted by your conversation.

Ashis Roy
Apr 30 2025 at 10:29am

What about the 7th sense? What about consciousness? What about God? Will AI hives scale up to acquire that 7th sense? Will they be able to replicate Gurus and Saints?

How will AI scale up to that?





AUDIO TRANSCRIPT
0:37

Intro. [Recording date: March 25, 2025.]

Russ Roberts: Today is March 25th, 2025, and my guest is podcaster and author, Dwarkesh Patel. You can find him on YouTube, at Substack at Dwarkesh.com. He is the author with Gavin Leech of The Scaling Era: An Oral History of AI, 2019-2025, which is our topic for today, along with many other things, I suspect. Dwarkesh, welcome to EconTalk.

Dwarkesh Patel: Thanks for having me on, Russ. I've been a fan, I was just telling you, for ever since--I think probably before I started my podcast, I've been a big fan, so it's actually really cool to get to talk to you.

Russ Roberts: Well, I really appreciate it. I admire your work as well. We're going to talk about it some.

1:17

Russ Roberts: You start off saying, early in the book--and I should say, this book is from Stripe Press, which produces beautiful books. Unfortunately, I saw it in PDF [Portable Document Format] form; but it was pretty beautiful in PDF form, but it's I'm sure even nicer in its physical form. You say, 'We need to see the last six years afresh--2019 to the present.' Why? What are we missing?

Dwarkesh Patel: I think there's this perspective in the popular conception of AI [artificial intelligence], maybe even when researchers talk about it, that the big thing that's happened is we've made these breakthroughs and algorithms. We've come up with these big new ideas. And that has happened, but the backdrop is just these big-picture trends, these trends most importantly in the buildup of compute, in the buildup of data--even these new algorithms come about as a result of this sort of evolutionary process where if you have more compute to experiment on, you can try out different ideas. You wouldn't have known beforehand why the transformer works better than the previous architectures if you didn't have more compute to play around with.

And then when you look at: then why did we go from GPT-2 to GPT-3 to GPT-4 [Generative Pre-trained Transformer] to the models we're working with now? Again, it's a story of dumping in more and more compute. Then that raises just a bunch of questions about: Well, what is the nature of intelligence such that you just throw a big blob of compute at wide distribution of data and you get this agentic thing that can solve problems on the other end? It raises a bunch of other questions about what will happen in the future.

But, I think that trend of this 4X-ing [four times] of compute every single year, increasing in investment to the level we're at hundreds of dollars now at something which was an academic hobby a decade ago, is the missed trend.

Russ Roberts: I didn't mention that you're a computer science major, so you know some things that I really don't know at all. What is the transformer? Explain what that is. It's a key part of the technology here.

Dwarkesh Patel: So, the transformer is this architecture that was invented by some Google researchers in 2018, and it's the fundamental architectural breakthrough behind ChatGPT and the kinds of models that you play around with when you think about an LLM [large language model].

And, what separates it from the kinds of architectures before is that it's much easier to train in parallel. So, if you have these huge clusters of GPUs [Graphics Processing Units], a transformer is just much more practicable to scale than other architectures. And that allowed us to just keep throwing more compute at this problem of trying to get these things to be intelligent.

And then the other big breakthrough was to combine this architecture with just this really naive training process of: Predict the next word. And you wouldn't have--now, we just know that this is how it works, and so we're, like, 'Okay? Of course, that's how you get intelligence.' But it's actually really interesting that you predict the next word in Wikitext, and as you make it bigger and bigger, it picks up these longer and longer patterns, to the point where now it can just totally pass a Turing Test, can even be helpful in certain kinds of tasks.

Russ Roberts: Yeah, I think you said it gets "intelligent." Obviously that was a--you had quotes around it. But maybe not. We'll talk about that.

At the end of the first chapter, you say, "This book's knowledge cut-off is November, 2024. This means that any information or events occurring after that time will not be reflected." That's, like, two eons ago.

Dwarkesh Patel: That's right.

Russ Roberts: So, how does that affect the book in the way you think about it and talk about it?

Dwarkesh Patel: Obviously, the big breakthrough since has been inference scaling, models like o1 and o3, even DeepSeek's reasoning model. In an important way, it is a big break from the past. Previously, we had this idea that pre-training, which is just making the models bigger--so if you think like GPT-3.5 to GPT-4--that's where progress is going to come from. It does seem that that alone is slightly disappointing. GPT-4.5 was released and it's better but not significantly better than GPT-4.

So, the next frontier now is this: How much juice can you get out of trying to make these smaller models--train them towards a specific objective? So, not just predicting internet text, but: Solve this coding problem for me, solve this math problem for me. And how much does that get you--because those are the kinds of verifiable problems where you know the solution, you just get to see if the model can get that solution. Can we get some purchase on slightly harder tasks, which are more ambiguous, probably the kind of research you do, or also the kinds of tasks which are--just require a lot of consecutive steps? The model still can't use a computer reliably, and that's where a lot of economic value lies. To automate remote work, you actually got to do remote work. So, that's the big change.

Russ Roberts: I really appreciate you saying, 'That's the kind of research you do.' The kind of research I do at my age is what is wrong with my sense of self and ego that I still need to do X, Y, Z to feel good about myself? That's the kind of research I'm looking into. But I appreciate--I'm flattered by your presumption that I was doing something else.

6:48

Russ Roberts: Now, I have become enamored of Claude. There was a rumor that Claude is better with Hebrew than other LLMs. I don't know if that's true--obviously because my Hebrew is not good enough to verify that. But I think if you ask me, 'Why do you like Claude?' it's an embarrassing answer. The typeface is really--the font is fantastic. The way it looks on my phone is beautifully arrayed. It's a lovely visual interface.

There are some of these tools that are much better than others for certain tasks. Do we know that? Do the people in the business know that and do they have even a vague idea as to why that is?

So, I assume, for example, some might be better at coding, some might be better at deep research, some might be better at thinking and meaning--taking time before answering--and it makes a difference. But, for many things that normal people would want to do, are there any differences between them--do we know of any? And do we know why?

Dwarkesh Patel: I feel like normal people are in a better position to answer that question than the AI researchers. I mean, one question I have is: in the long run, what will be the trend here? So, it seems to me that the models are kind of similar. And not only are they similar, but they're getting more similar over time, where, now everybody's releasing a reasoning model, and they're not only that, they're copying the--when they make a new product, not only do they copy the product, they copy the name of the product. Gemini has Deep Research and OpenAI has Deep Research.

You could think in the long run maybe they'd get distinguished. And it does seem like the labs are pursuing sort of different objectives. It seems like a company like Anthropic may be much more optimizing for this fully autonomous software engineer, because that's where they think a lot of the value is first unlocked. And then other labs maybe are optimizing more for consumer adoption or for just, like, enterprise use or something like that. But, at least so far--tell me about your impression, but my sense is they feel kind of similar.

Russ Roberts: Yeah, they do. In fact, I think in something like translation, a truly bilingual person might have a preference or a taste. Actually, I'll ask you what you use it for in your personal life, not your intellectual pursuits of understanding the field. For me, what I use it for now is brainstorming--help me come up with a way to think about a particular problem--and tutoring. I wasn't sure what a transformer was, so I asked Claude what it was. And I've got another example I'll give in a little bit. I use it for translation a lot because I think Claude's much better--it feels better than Google Translate. I don't know if it's better than ChatGPT.

Finally, I love asking it for advice on travel. Which is bizarre, that I do that. There's a zillion sites that say, 'The 12 best things to see in Rome,' but for some reason I want Claude's opinion. And, 'Give me three hotels near this place.' I have a trust in it that is totally irrational.

So, that's what I'm using it for. We'll come back to what else is important, because those things are nice but they're not important. Particularly. What do you use it for in your personal life?

Dwarkesh Patel: Research, because my job as a podcaster is I spend a week or two prepping for each guest and having something to interact with as I am--because you know that you read stuff and it's like you don't get a sense of why is this important? How does this connect to other ideas? Getting a constant engagement with your confusions is super helpful.

The other thing is, I've tried to experiment with putting these LLMs into my podcasting workflow to help me find clips and automating certain things like that. They've been, like, moderately useful. Honestly, not that useful. But, yeah, they are huge for research. The big question I'm curious about is when they can actually use the computer, then is that a huge unlock in the value they can provide to me or anybody else?

Russ Roberts: Explain what you mean by that.

Dwarkesh Patel: So, right now there are just--some labs have rolled out this feature called computer use; but they're just not that good. They can't reliably do a thing like book you a flight or organize the logistics for a happy hour or countless other things like that, right? Which sometimes people use this frame of: These models are at high school level; now they're at college level; now they're a Ph.D. level. Obviously, a Ph.D.--I mean, a high schooler could help you book a flight. Maybe a high schooler especially, maybe not the Ph.D..

Russ Roberts: Yeah, exactly.

Dwarkesh Patel: So, there's this question of: What's going wrong? Why can they be so smart in this--I mean, they can answer frontier math problems with these new reasoning models, but they can't help me organize--they can't, like, play a brand new video game. So, what's going on there?

I think that's probably the fundamental question that we'll learn over the next year or two, is whether these common-sense foibles that they have, is that sort of intrinsic problem where we're under--I mean, one analogy is, I'm sure you've heard this before--but, like, remember--the sense I get is that when Deep Blue beat Kasparov, there was a sense that, like, a fundamental aspect of intelligence had been cracked. And in retrospect, we realized that actually the chess engine is quite narrow and is missing a lot of the fundamental components that are necessary to, say, automate a worker or something.

I wonder if, in retrospect, we'll look back at these models: If in the version where I'm totally wrong and these models aren't that useful, we'll just think to ourselves, there was something to this long-term agency and this coherence and this common sense that we were underestimating.

12:56

Russ Roberts: Well, I think until we understand them a little bit better, I don't know if we're going to solve that problem. You asked the head of Anthropic something about whether they work or not. You said, "Fundamentally, what is the explanation for why scaling works? Why is the universe organized such that if you throw big blobs of compute at a wide enough distribution of data the thing becomes intelligent?" Dario Amodei of Anthropic, the CEO [Chief Executive Officer] said, "The truth is we still don't know. It's almost entirely just a [contingent] empirical fact. It's a fact that you could sense from the data, but we still don't have a satisfying explanation for it."

It seems like a large barrier, that unknowing. It seems like a large barrier to making them better at either actually being a virtual assistant--not just giving me advice on Rome but booking the trip, booking the restaurant, and so on. Without that, how are we going to improve the quirky part, the hallucinating part of these models?

Dwarkesh Patel: Yeah. Yeah. This is a question I feel like we will get a lot of good evidence in the next year or two. I mean, another question I asked Dario in that interview, which I feel like I still don't have a good answer for, is: Look, if you had a human who had as much stuff memorized as these LLMs have, they know basically everything that any human has ever written down, even a moderately intelligent person would be able to draw some pretty interesting connections, make some new discoveries. And we have examples of humans doing this. There's one guy who figured out that, look, if you look at what happens to the brain when there's a magnesium deficiency, it actually looks quite similar to what a migraine looks like; and so you could solve a bunch of migraines by giving people magnesium supplements or something, right?

So, why don't we have evidence of LLMs using this unique asymmetric advantage they have to do some intelligent ends in this creative way? There are answers to all these things. People have given me interesting answers, but a lot of questions still remain.

15:05

Russ Roberts: Yeah. Why did you call your book The Scaling Era? That suggests there's another era coming sooner-ish, if not soon. Do you know what that's going to be? It'll be called something different. Do you know what it'll be called?

Dwarkesh Patel: The RL [reinforcement learning] era? No, I think it'll still be the--so scaling refers to the fact that we're just making these systems, like, hundreds, thousands of times bigger. If you look at a jump from something like GPT-3 to GPT-4 or GPT-2 to GPT-3, it means that you have 100X'd the amount of compute you're using on the system. It's not exactly like that because there's some--over time you find out ways to make the model more efficient as well, but basically, if you use the same architecture to get the same amount of performance, you would have to 100X the compute to go from one generation to the next. So, that's what that's referring to, that there is this exponential buildup in compute to go from one level to the next.

The big question going forward is whether we'll see this--I mean, we will see this pattern because people will still want to spend a bunch of compute on training the systems, and we're on schedule to get big ramp-ups in compute as the clusters that companies ordered in the aftermath of ChatGPT blowing up are now coming online. Then there's questions about: Well, how much compute will it take to make these big breakthroughs in reasoning or agency or so forth?

But, stepping back and just seeing a little forward to AGI--

Russ Roberts: Artificial General Intelligence--

Dwarkesh Patel: That's right. There will become a time when an AGI can run as efficiently as a human brain--at least as efficiently, right? So, a human brain runs on 20 watts. An H100, for example, it takes on the order of 1,000 watts and that can store maybe the weights for one model or something like that.

We know it's physically possible for the amount of energy the human brain uses to power a human level intelligence, and maybe it's going to get even more efficient than that. But, before we get to that level, we will build an AGI which costs a Montana's-worth of infrastructure and $100 billion of CapEx, and is clunky in all kinds of weird ways. Maybe you have to use some sort of inference scaling hack. By that, what I mean to refer to is this idea that often you can crack puzzles by having the model think for longer. In fact, it weirdly keeps scaling as you add not just one page of thinking, but 100 pages of thinking, 1,000 pages of thinking.

I often wonder--so, there was this challenge that OpenAI solved with these visual processing puzzles called ARC-AGI [Abstraction and Reasoning Corpus for Artificial General Intelligence], and it kept improving up to 5,000 pages of thinking about these very simple visual challenges. And I kind of want to see: what was on page 300? What big breakthrough did it have that made that?

But, anyways, so there is this hack where you keep spending more compute thinking and that gives you better output. So, that'll be the first AGI. And we'll build it because it's so valuable to have an AGI that we'll build it the most inefficient way. The first one we will build won't be the most physically efficient one possible. But, yeah.

18:25

Russ Roberts: Can you think of another technology where trial and error turned out to be so triumphant? Now, I did a wonderful interview with Matt Ridley awhile back on innovation and technology. One of his insights--and I don't know if it's his--but one of the things he writes about--I think it's his--is that a lot of times the experts are behind the people who are just fiddling around. He talks about the Wright brothers are just bicycle guys. They didn't know anything about aerodynamics particularly. They just tried a bunch of stuff and until finally they lifted off the ground, is the application of--I don't know if--I think that's close to actually true.

Here we have this world where these unbelievably intellectually sophisticated computer scientists are building these extraordinarily complex transformer architectures, and they don't know how they work. That's really weird. If you don't know how they work, the easiest thing to make them better is just do more of what works so far and expect it to eventually cross some line that you might be hoping it will. But, can you think of another technology where the trial and error is such an important part of it alongside the intense intellectual depth of it? It's really quite unusual, I would guess.

Dwarkesh Patel: I think most technologies--I mean, I would actually be curious to get your takes on economic history and so forth, but I feel like most technologies probably have this element of individual genius is overrated and building up continuously on the slight improvements. And often, it's not, like, one big breakthrough in the transformer or something. It's, like, you figured out a better optimizer. You figured out better hardware. Right? So, a lot of these breakthroughs are contingent on the fact that we couldn't have been doing the same thing in the 1990s. In fact, people had similar ideas, they just weren't scaled to a level which helped you see the potential of AI back then.

But, I do think this is actually a really important question, Russ, because--I mean, the sort of big question here is, not, like, what model do we want to use this year or something? The big question is: Will intelligence feed back on itself; and to what extent will it feed back on itself? And if it does, do we get some sort of superhuman intelligence on the other end? Because, the things are making better models--or something like that?

And, there the question is: Okay, can you just have a million super-intelligent AI researchers, a million automated Ilya Sutskevers or Alec Radfords; and they think about, like, what is the architecture of the human brain and how do we replicate that in machines. Or, do you need this sort of evolutionary process, which requires a ton of compute for experiments, which maybe even requires hardware breakthroughs?

And, that will still be transformative. Hopefully, at some point, we can talk about this. I am keen to get more economists' takes on the potential of explosive growth and so forth. That's still compatible with a world where it takes more than a year to get an intelligence explosion. But, that is a fundamental question of does intelligence feed back on itself or not?

Russ Roberts: Yeah. I think intelligence is a little bit overrated. I'm kind of a skeptic on this. And I also believe that most of the really tough human problems aren't insoluble because we are not smart enough. It's because the world is complicated, and it has nothing to do with intelligence--because there's trade-offs; and the definition of good is not well-defined, or best, or better, even. But I know that puts me in a small, pessimistic camp. I really don't think it matters actually, because we're going to see a lot of these changes. We'll see in real time; we'll see if they work or not.

Thinking back to the trial and error, I realized while you were answering the questions like: Isn't this common? I was thinking, well, the pharmaceutical industry has a lot of hit-or-miss wild guesses, and then something works. So, we imagine a day where because of our genetic or biotechnology, we could custom-design pharmaceuticals more effectively, but I think most of--we're not at that day yet. And so there is a lot of that in that industry, in that world. So, maybe it is more common than I think. I don't know.

Dwarkesh Patel: Yeah. I mean, there was--who is that economic historian? Alan Bloom or Robert? Robert Allen, who had the theory of the Industrial Revolution where it happened in Britain first because coal was cheap enough there that you could make these initial first machines that were actually super-inefficient. Where--I think the first steam engine used the pressure from the steam condensing to move the piston back. Whereas, future more efficient machines would push it directly with the steam. So, anyways, the other countries just, at least beforehand, didn't get on this evolutionary stepladder where you make the inefficient machines. The coal is cheap enough. You can just throw coal at it, see if you can get it to work, and then later on you get up the cost curve and you find these improvements and so forth.

Similar thing has happened with hardware, with AI, where in the 1990s and the 1980s, people had these ideas about--well, you could do deep learning and you'd have these different architectures. And now we have the compute to actually try out these ideas. And we'll get more and more compute over the next few years. Maybe we should expect an acceleration, but similar trends.

Russ Roberts: I'm thinking about Fred Smith. I've told this story before, but the first night of Federal Express, the story goes that there were two packages that were carried by Federal Express and one was Fred Smith sending a birthday present to his mom. So, it didn't start off going gangbusters. After quite some time, they couldn't make payroll and they had to close the doors, except that Fred on his way back from the bankers in Chicago saying no, saw a flight to Reno, went to the roulette wheel and made enough money with his sister's trust fund that he kept that company afloat. And it would be nice to say that Fred understood that this was an inevitable idea, but we know the real reason that he kept going was that he's a human being. He had put his heart and soul into this, and he was not going to give up until every avenue had been pursued. It wasn't a theoretical understanding of the world that allowed him to do that, but something more human, I think.

Dwarkesh Patel: Can I try out an idea on you?

Russ Roberts: Yeah, sure.

Dwarkesh Patel: Part of me thinks--look, this idea of a single super-intelligence because of its ability to think in an armchair and divine how the world works--you know, some people have this idea that, 'Oh, it's going to figure out how to get rid of its political opponents and build a nanobot.' And whether it's persuasion, or some sort of other manipulation, coming up with future technologies all by itself, it'll have this omnipotent effect on the world.

Russ Roberts: Yeah.

Dwarkesh Patel: Now, I don't buy that, but here's an idea I do buy. If you look at what changed between primates and humans, it was our ability to--Joseph Henrich has this wonderful book about this--it wasn't just intelligence. It wasn't even mainly intelligence. It was our ability to coordinate with each other, to share knowledge, to accumulate knowledge over generations.

Now, when we think about AI, instead of thinking of a single, super-intelligent AI, what if we thought about billions of beings who are thinking at superhuman speeds and who are able to communicate with each other in a way that a human simply cannot. They can literally share their mind-states with each other. They can distill their insights. They can literally merge; they can copy themselves for arbitrary amounts of time. So if you have a super-skilled worker who has some tacit knowledge, you can just make arbitrary copies of them for pennies per copy. Fundamentally, you have this much larger population size. And if you believe--a lot of theories of economic growth say that more people means more ideas; and that feeds back on itself.

So then, for the same old school economic growth, cultural evolution reasons, this hive mind of AIs actually will be--not as an individual AI, but as a sort of, like, whole economy-wide phenomenon--will just be a total shock and change from the current system.

Russ Roberts: Could be.

26:51

Russ Roberts: Let me ask you a question, since I'm going to duck yours. So, I really like my smartphone--an embarrassingly large amount. I recognize that my affection for it is somewhat akin to an addiction, as best as we can define that.

I sometimes wonder--and I probably asked this question before on the program--if Steve Jobs were alive, and let's say he thought Jonathan Haidt was right. I'm agnostic on the Jonathan Haidt work, but I do think he's onto something. I do think phones are not so great for kids. And I'm pretty sure they're not so great for adults. I'd ask Steve Jobs, 'Would you like to turn the clock back and not think of this product? It's just a phone. It's not a computer in your pocket. Would you do that?' I think human beings have a lot of trouble with saying, 'Oh, yeah, I guess it's been maybe not a net benefit. I'm uneasy with it.' I think most humans would have trouble doing that. I think they would say, 'Onward and upward.'

Similarly, if we had the billion AI agents, or whatever we call them, one of the first questions you might ask them is: 'Do you think it's good for humanity for you to be unleashed?' Forget the incentive--the problem there with self-interest, if that's definable in that world. Could they answer that question? Is that an example--to me of something that intelligence is not really good at? And the fact that they could all be linked together and share their thoughts? You'd have to believe--and this is a very Hayekian point; we'll bring in a little bit of economics here--you'd have to believe that they not only could coordinate their thinking, and not only pull together all the knowledge--which they all share by the way so it's not so different, unlike humans--that they will probably have a very similar database. Are they actually going to be able to forecast the N futures that would exist in different worlds under different policy outcomes? And of course then they'd have to then aggregate the wellbeing, which is literally to me a meaningless question, but you could teach them to do it and they'd be really enthusiastic about it--the people who are in charge of it would be enthusiastic about it.

I think about, if you go back to Hayek's Nobel Prize address, he basically says, 'Macroeconomics is unknowable.' Just like you can't know which sports team is going to win tomorrow. On average, one of them might be more likely to win; but if the quarterback of the other team, or the striker of the other team, or the pitcher of the other team got into a fight with his wife the night before. But, in theory, if you really believe the world is deterministic--which I'm agnostic about but skeptical--you'd just know all that, too. Because you'd have the enzymes of the chemical imbalances, and all the data on all these people. And they could actually forecast all these outcomes.

That's what I'm skeptical about. Do you think we're going to go in that direction?

Dwarkesh Patel: Ultimately, I think not, but let me just lay out the case for somebody who might think that something like this is plausible.

Right now, central planning--or this kind of central coordination and forecasting--there's many reasons it doesn't work. But, one of them is: Xi Jinping has the same amount of compute in his brain as every other person in China, right? 10^15 FLOPS [Floating-Point Operations Per Second].

In the future, there could be a scenario where the central power or the coordinator, or whatever, has much more raw compute than the periphery, and also has much greater bandwidth with the periphery. Where, right now, Xi Jinping can only monitor so much as one human. The future versions of him could be, like, you can run a copy of Xi Jinping for pennies per hour. But, you can run mega-Xi Jinping, which is 1000-times bigger and is running at 1000-times faster speed, and has copies of it monitoring every single communication coming out of the CCP [Chinese Communist Party], and every single time that you interact with a CCP website or something, that person--and then they just distill all that back into the central blob, or something. So, that's the case for why it's plausible.

The reason I still think that this kind of central planning will still not work is--I think this perspective underrates the extent to which the whole world is getting more complicated as a result of AI. So if you took Apple as it exists today, or just, like, a computer, and you're, like, 'Could Apple as it exists today coordinate the--' I don't know, maybe Roman economy was pretty complicated, but the economy of [inaudible 00:31:45, Urech? Orech?]. Maybe you could. Like, maybe the economy of [inaudible 00:31:49, Urech? Orech?] could be coordinated in this way. But, could Apple coordinate the world economy as it exists today? I doubt it. Right?

So maybe ASI [artificial super-intelligence] might be able to coordinate the economies that exist today, but not the one where there's also other AGIs who are deployed through the entire economy, who are doing their different things.

I think the big worry I have is: Before we get to this decentralized world where every sector has drastically grown and become more complicated, and because higher population size, there's much more specialization--all these new institutions for AIs to coordinate with each other that we figured out for humans, like joint stock corporations, and state capacity, and whatever, whatever that looks like for AIs. There will be a sort of, like, decentralized state where--this has been sort of like, there's an equilibrium.

But before we get to that, should we worry about somebody having a big head start? They get the first hive mind. And this hive mind, because the rest of the world hasn't adopted AI, is more powerful than the rest of the world combined.

That question really comes down to the speed of deployment versus the speed of AI feeding back on itself. And, it's a big debate, and we sort of touched on it before. It's probably the most important debate in the world today. And, it's funny that we don't pay enough attention to it.

Russ Roberts: Well, I think one of the reasons we don't is we don't really understand. ASI, by the way, is Artificial Super-Intelligence. I think the reason we don't pay enough attention to it is we don't--it's far away in emotion.

Dwarkesh Patel: Yeah.

Russ Roberts: It's not far away in time, perhaps.

33:21

Russ Roberts: But, I want to go back to something you said earlier, just to give you a hard time. We're talking about a leader who has got really high access to a lot of intelligence. I doubt there's a very good correlation between the IQ [intelligence quotient] of, say an American president, and the success of their administration.

Now, I think you'd have to argue, 'Well, I know, but that's just human intelligence. This is going to be so far beyond 180 IQ, it'll be like 10,000 IQ. It's not even the same thing!' Again, I like to tease people in this field. It's very related to religious thought.

You know, it's like, 'We can't imagine the mind of God, who can see every person, know every thought. Not just here, but in Alpha Centauri as well--if they're in that galaxy far, far away.' And I just--I wonder if that's just wishful thinking. But, you know, it's hard to know. Hard to know.

I want to give you a quote from a recent interview I saw. This is from Zvi Mowshowitz's page today. He quotes an interview between Daniel Kokotajlo and Dean Ball. I couldn't see a source where he got this, but we'll link to the Zvi Mowshowitz page if we can. He asked Dean Ball, who is a thinker on AI. He says, 'Can you give some examples of things that AIs will remain worse than the best humans at 20 years from now?'--that they'll be worse at? Dean Ball's answer is fabulous. 'Giving massages,'--I'm not sure he's right there, but okay--'Giving massages, running for president, knowing information about the world that isn't on the internet, performing Shakespeare, tasting food, saying sorry.' Now, that's a nice list. It covers a very wide range of the more glorious--not all of it--but some of the most glorious parts of the human experience.

What do you think? Are there going to be things that, 20 years from now, human beings are still--besides those, unless you want to disagree. Can you think of some things that AI is going to struggle to dominate, or are they going to pretty much dominate us all over the board?

Dwarkesh Patel: I think 20 years is a long time.

Russ Roberts: Yeah, it is.

Dwarkesh Patel: I like the sentiment behind that list, but I'd be on board if it was more two years instead of 20.

Here's something that's super paradoxical and surprising about the way in which these AIs are getting better. In the 1990s, Hans Moravec--which is actually the first guy who came up with this idea of scaling, that you can predict when human intelligence comes about as a result of how much compute you're using. In fact, in the 1980s, he predicted 2028 as the year in which we get the compute to build a human-level AI. It's in fact a very simple projection, not that far off. We'll see. We'll see, I guess.

But, he had this really interesting paradox, which is that when we think about what is hard or what will take a long time to develop as a skill in AIs, we think about what is hard for humans. But, we don't think about the amount of time that evolution has had to spend optimizing a skill. The kinds of skills that evolution has to spend a bunch of time optimizing are just so commonplace that, in the human spectrum, they just don't vary that much. For a billion years, you've been learning how to move around, and have control over your limbs, and understand your environment, and have this longterm coherence. Only for the last couple hundred thousand years have you maybe learned anything relevant to doing math, or developed the relevant cognitive skills. And so these things have cracked--are about to crack--frontier mathematics, coding, and so forth.

But, there is this old-school perspective on human intelligence, like the Aristotelian perspective: That, what makes us human is our ability to do these kinds of reasoning tasks, and to have these higher level abstract thoughts. In fact, that's probably the least human thing about us, from the perspective of what will AI get first. It will get the Aristotelian understanding of humans--that will be the first skill it gets. And the very reptile brain stuff will be actually the last skills that it automates.

Russ Roberts: Yeah. I think it's the amygdala--I can't remember--but the part that's working behind the scenes to figure out stuff that you can't even know you're figuring out.

Dwarkesh Patel: Yeah. Yeah.

38:04

Russ Roberts: So, I haven't described your book, but what your book is--you and I have something in common. You've interviewed a lot of people about artificial intelligence, and I have, too. You've interviewed, I asked you before we started, you said maybe about 20. I think I've done 15 or so. In one sense, I'm way ahead of you, Dwarkesh. I interviewed Nick Bostrom, I think in 2014--

Dwarkesh Patel: Oh, wow--

Russ Roberts: when he was worrying about it. But, you're ahead of me. And, more than that, what this book is, is it's a compendium--it gives me ideas--it's a compendium of excerpts from the interviews you've done, grouped around these important issues in AI. As a result, it's very interesting.

And as I'm reading the book, even though I know some of the insights from my pitiful 14 interviews, but you actually not only did more--and you probably remember them better because you wrote the book, pulled the book together, so that's an interesting cognitive thing. One thing that struck me is, despite having interviewed all these people and I've interviewed them across the spectrum as you have--worriers, optimists, practitioners, philosopher-types--your book, in a way, was the first time I got a little bit scared. And so I want to talk about why.

Dwarkesh Patel: Yeah.

Russ Roberts: I want you to react to it. It's going to be a long question--it's already too long; I apologize.

You wrote an essay called "What Fully Automated Firms Will Look Like," and you've been alluding to it in some of the conversation we've had already. Here's what you write:

Even people who expect human-level AI sooner are still seriously underestimating how different the world will look when we have it. Most people are anchoring on how smart they expect individual models to be. I.e., they're asking themselves, "What would the world be like if everyone had a very smart assistant and it could work 24/7?" Everyone is sleeping on the collective advantages AIs will have, which has nothing to do with raw IQ, but rather with the fact that they're digital. They can be copied, distilled, merged, scaled, and evolved in ways humans simply can't. What would a fully automated company look like with all the workers, all the managers as AIs? I claim that such AI firms will grow, coordinate, improve, and be selected for unprecedented speed.

Now, let's not debate whether that's really feasible, and whether it's soon, or tomorrow, or 40 years from now, or 100 from now. Let's take it as true. And I'm reading that, I realized, along with some of the really bullish optimism of some of these practitioners, which you talk about at the beginning of the book. Some of it, they're self-interested, but a lot of it's just they drunk the Kool-Aid. It's just the way they see the world. There's something kind of beautiful about it. But, I realized when I finished some of those excerpts and some of the remarks from those people in the field, it's not that I got scared. I just got sad. And I want to lay out the sad thing.

And, you can talk about scared and sad. I worry--and maybe it's a silly worry--that so many of the joys of my life will not be available to my granddaughter, who is two-and-a-half.

I remember the time I was taking linear algebra and I realized you could prove the central limit theorem in multiple ways. Sorry, not linear algebra. This was an advanced statistics class. Those are the two hardest classes I took in college, by the way, so that's why I got confused. So, it turns out, you could prove it with something called characteristic functions, and then you could prove it with something called Eigen functions--I think. I don't even remember. I was 21 years old; it was almost 50 years ago. I just remember the feeling I got when I saw that these two mathematical techniques that were nothing alike could get to the same result and that there was some underlying coherence there. And, it was so exhilarating. I asked Claude--just before we started talking--to do it, give me those proofs; and I can't follow them anymore. But, I could ask Claude to help me understand them.

And, so my granddaughter will probably never have that thrill.

At some point pretty late in my life, my father told me I was a good writer. I think I was in my 50s. It was thrilling. He spent his life criticizing my writing. I think he'd thought that was good for me. And maybe it was, but that made it all the sweeter when he praised something I'd written.

I don't know if my granddaughter will ever find her writing thrilling to someone, given the way she's going to grow up.

Buying my first car, being able to afford it, having a sense of agency and success. The thrill of discovering a used book that you've been looking for--that's been gone for a while. The thrill of discovering a novel thinker or a new podcast with Dwarkesh Patel. It's, like: 'Who is this guy? Hey, he reached out to me.' I apologize. I thought I knew your name, and I saw what you'd been doing and who you'd been talking to. It was like, 'Wow, this guy's doing great.' Then I find out you're--how old?

Dwarkesh Patel: Twenty-four (24).

Russ Roberts: Yeah. 24! Good grief. So that's really fun. I'm going to start reading your stuff. And, this is riffing off the top of my head.

Now, I understand my granddaughter will have different thrills. Maybe better ones. But, reading your essay and your book made me realize for the first time that AI isn't just a better tool that will save me time, or--say, the way Google Maps does or Google Translate. It's going to remake the world in ways that will change the fabric of daily life, not just making it faster or cheaper, but really, really different. And, the world might be a better place. It will be more materially prosperous for the reasons you write about in that essay. It's not a small thing. But, will there be more human flourishing? I'm not so sure. For some people, absolutely. Many of them, you interviewed for your book. They all have the extraordinary experience of knowing they haven't just put a dent in the universe, but a big dent.

Dwarkesh Patel: Yeah.

Russ Roberts: It's this thing--this thing--that I can interact with that is mysterious, is really exciting. But it's going to change everything. And, it's scary. Not in the way that it's going to turn me into a paperclip, or run roughshod over me, or ignore me as a mere mortal. It's just the world is going to be really different. I wonder if even--what it's going to be like to read a novel written in 1950 for my granddaughter? I don't know.

Your thoughts? Sorry about that long-winded thing.

Dwarkesh Patel: No, no. I'm actually glad, because it is such an important question [?inside? insight?]. It's something we gloss over when we're having this big-picture discussion about what are the trends in AI or something.

I wouldn't even dismiss the paper-clipper concerns, because I have so much uncertainty. I've changed my mind so many times. I changed my mind in the last week on some of the most important questions about AI. And, so, I'm, like, who knows? Like, this has never happened before. We'll see.

But, okay, putting that aside for a second. Yeah. So, it is the case--and, I think that's worth being honest about this, instead of having cope--that, sooner or later, there will be a point in which a human is just unlikely to add anything sort of like counter-factually valuable, at least in a sort of, like, count the GDP [gross domestic product] number sense. Where, horses right now, you can ride them for fun, but you're not--they're not there because they are needed to move you around. And it is, obviously, a big change.

I have this intuition that humans have adapted to such--think about the world today and how many different transitions there have been, not just from the forager past. And even then, people underestimate how big a deal fire was, or how big a deal tools were. The agricultural revolution, the industrial revolution, secularization of society. The build-up of states instead of having just your local kin, or whatever. We've dealt with all of that. I feel like we'll do okay with material superabundance, but you're not that, in the grand scheme of things, economically productive.

And the other thing I think people gloss over, which I think is important to the picture of what happens after, but which people are maybe uncomfortable about, is transhumanism. I do think it's important that we give humans the freedom to have their minds, their bodies, their spirits amplified, changed, whatever they want. The intuition there is: people think of this in a very dystopian, cyberpunk way, but there is a Schumpeterian or Hayekian thing of we just don't know what we don't know about the future. Like, instead of just thinking about new technology as dystopian, just try to imagine how the future might view the fact that they have minds which can experience greater beauty, greater creativity, greater connection with other minds; which aren't debilitated by the kinds of things that we find cumbersome.

Another intuition here is--I'm sure you've heard this thought experiment; I think it comes from Phil Trammell--where, if I said, 'I'll send you back to the year 1000: how much money would I have to pay you--how much wealth would you need in the year 1000--to agree to go back?' I think it's quite plausible the answer is there's no amount of wealth that I would rather have in the year 1000 than just be alive right now. Especially in the future, when we're talking about the kinds of things AI hive minds will invent--whether it's health, whether it's wellbeing, and flourishing, and connection. I think it's hard to estimate. Very likely, your granddaughter or your granddaughter's granddaughter will have a similar perspective toward us that we have toward the people in the year 1000.

Russ Roberts: Yeah. I don't know who Phil Trammell is, but that thought experiment is the kind of thing I spent way too much time writing about in my work.

Dwarkesh Patel: Oh. Oh, sorry. Maybe you're the source.

Russ Roberts: No, no, no. No.

Dwarkesh Patel: I got it secondhand.

Russ Roberts: No, it's not me, because I never go back to 1000. I go back to 1950. There's almost nobody alive today who, if they could experience 1950 in day-to-day moments, having experienced 2025, would literally go back. Part of the story, of course, is that people in 1950 were plenty happy--some of them. Some of them weren't, obviously.

Dwarkesh Patel: No.

Russ Roberts: We can debate how many and what their characteristics were: they were treated badly, and so on. I agree with that thought experiment. I think it's powerful and profound. I think it's not just that I'm used to this and going back to 1950 would be painful. I live longer.

Dwarkesh Patel: Yeah.

Russ Roberts: I live a higher-quality life. My day-to-day life is more interesting because I'm not working as a farmer in 1900, say.

Dwarkesh Patel: Yeah.

49:06

Russ Roberts: I used to be you, Dwarkesh. I used to have that boundless optimism about the power of human creativity to make the world better. But, I'm older than you. I'm not wiser. But when you're older, I think you lose some of your optimism. I don't think it's necessarily rational to lose it, but it comes from a variety of things outside of analytical thinking, so it's nice to be reminded of it. I hope you're right.

Having an artificial knee does not ruin the human experience. So, it's nice to think that artificial intelligence will enhance our experiences together and individually in ways that will be meaningful to us. So, I like the idea of it.

Dwarkesh Patel: Yeah, and you and I, Russ, will be in an especially good position because what we're--I mean, we've got our post-AGI career already.

Russ Roberts: Correct. Are you sure?

Dwarkesh Patel: Yeah. I think maybe--

Russ Roberts: Isn't there that new technology--some ChatGPT thing, I think--that creates podcasts between two interesting, thoughtful people? We're, like, obsolete almost, already.

Dwarkesh Patel: And you know, it'll get better. But, I think the sort of vicarious experience of knowing--I mean, it's already the case that on any given subject I might do a podcast on, there might be a better resource than that particular podcast. But, people still want to tune in because there's some consistency. There's some feeling of continuity and intellectual camaraderie. Maybe once you have personal AI friends and your actual friend is the one interviewing your virtual Russ Roberts about his new book or something, that changes. But, yeah, at least I prefer this over the career I was probably going to end up in as a computer science major, as I'm thinking about what will survive AGI.

Russ Roberts: I feel that too, and we'll talk about that in a minute, I hope--why that is the case.

51:16

Russ Roberts: I referenced recently in the conversation with Ian Leslie--it hasn't aired yet, but will shortly--that in my conversation with Chuck Klosterman, he wonders about the question of who will be remembered as the quintessential rock-and-roll band from this short era in the 1960s and 1970s when rock and roll was a thing. And, he points out--Klosterman does--that usually it's just one.

And, a hundred, 200 years from now, will anyone be remembered as the creator of AI? Will there be a name--where someone will say, 'Wow, Steve Jobs really changed the world of computing'? Steve Jobs will, I think, beat out Bill Gates; but, we'll see. But, on this, it's interesting to me--you said, 'Oh yeah, that was figured out--the transformer--by a couple of researchers from Google.' You didn't mention their names. Part of it is because there's more than one. So, that makes it challenging. Is anybody, you think, going to be immortalized as 'the creator,' even if it's not true?

Dwarkesh Patel: Only in retrospect: maybe Geoffrey Hinton, maybe Ilya Sutskever. Though it is interesting that--obviously I'm not a practitioner or anything; I just happen to know about it from reading about it. But, even just having some familiarity with what kinds of breakthroughs have happened in AI, in this near mode you're, like: Oh, there isn't one big breakthrough that is the locus of progress. It is just the industry, and the fact that at all layers of the stack, from hardware to software to everything in between, people have been making these small improvements.

And this is probably related to the idea of Gell-Mann amnesia--it's not the same concept, but in this case, for the industry you understand best, you understand that it's a result of all this combinatorial human creativity, where people are making these small discoveries and it's the hive mind of humans talking to each other and so forth. Maybe the industries you don't understand also work this way. And, whenever you think, like: 'Oh, this is the genius behind flight'--or cars, or breakthrough steam engines, or whatever--it's also a similar story.

Russ Roberts: Yeah, I think that's a great insight. I love that. I think that might be true. When you're close to it, you understand how complicated it is; but it gets distilled into this actually quite misleading idea of the great innovator, when it's actually a hive mind anyway. It's just not obvious unless you're really close to it.

Do you have any generalizations after spending a lot of time with these folks? So, not only have you interviewed more people, but your podcasts are longer than mine, so you're way ahead of me on the contact. One thing that strikes me is they're awfully optimistic on average. There are some worriers, but a lot of them are just cheerfully optimistic about technology and the future. Any thoughts you have about them as people, or their perspective?

Dwarkesh Patel: I think--maybe I am a biased reporter here because I am friends with many of them and so forth, and I've interviewed them. But, I think we live in one of the good universes. I think there's a lot of adjacent universes where the kinds of problems that we should be thinking about--I think we should be thinking about the paper clippers, because, again, it's so hard to think about. Maybe we'll look back--I guess we won't be looking back--but maybe somebody will look back and say Eliezer had it right all along.

But, the fact that the kinds of people who are building these labs think about this--I mean, you may say it's lip service, but lip service is better than nothing. And, often these solutions are--maybe a lot of the key solutions will just end up being, like: you do the obvious thing, it's not going to cost you that much, but you have to make it somewhat of a priority.

And, there's a lot of adjacent universes where the idea of a misaligned AI or the idea of this hive mind getting out of your control is just something that doesn't occur to people; and you just think about them as tools until you get to this point.

And so, I do appreciate the fact that these people do seem to be--and I take them at their word, or at least give them the benefit of the doubt unless shown otherwise--that as they got into this, and as they thought more seriously about the prospect of AGI, they do take this 'what happens in the world after?' question seriously; and it's better than what could have been.

Russ Roberts: The 'Eliezer' you mentioned is Eliezer Yudkowsky, who is one of the worriers. I'll link to my interview with him. And, do you have one also? We'll link to--

Dwarkesh Patel: Yep.

Russ Roberts: Okay. We'll link to that as well. [More to come, 56:19]