Marc Andreessen on Why AI Will Save the World
Jul 10 2023

Marc Andreessen thinks AI will make everything better--if only we get out of the way. He argues that in every aspect of human activity, our ability to understand, synthesize, and generate knowledge results in better outcomes. Listen as the entrepreneur and venture capitalist speaks with EconTalk's Russ Roberts about AI's potential to improve the world and why those who fear that AI will destroy humanity are wildly overreacting.


READER COMMENTS

azeem hussein
Jul 10 2023 at 6:47am

It’s not as if the leading inventors and developers of these AI systems have asked for a moratorium to evaluate the repercussions and add safety measures… Oh wait, they have.

Perhaps being ignorant, lacking critical thinking, and not understanding basic logic, fact, or even the scientific method lets one exist in a happy-go-lucky la-la land.

I believe the inventor, creator, or maker of the product should be heeded when they publicly express reservations about their invention.

They know it better than Russ, Marc or any other pundit.

Shalom Freedman
Jul 10 2023 at 9:25am

As Russ says in the conversation, Marc Andreessen is a very smart person. Russ also suggests, though, that there is something a bit single-minded in making intelligence the single supreme value in understanding what is good for humanity. So, too, the impressiveness of Andreessen's knowledge seems to have an underlying limitation regarding the question of the future effect of AI developments on humanity. There are, after all, many possible scenarios, almost all of which are not considered and certainly not ruled out in this conversation. Those who are sounding drastic warnings are not 'apocalyptic eschatologists' but people at the forefront of AI development and scientific research in various areas. So, while it makes a lot of sense to point to clear benefits in a variety of areas which AI will bring, I do not think those more uncertain as to where all this is leading can be cancelled or dismissed so easily. I believe the more open-minded, uncertain attitude of Russ is the wiser one for now.

Cam Christensen
Jul 10 2023 at 12:35pm

My new favorite use case for GPT-4 is to ask it to summarize [difficult piece of writing] in the style of [person or scene]. This is entertaining and surprisingly useful.

An example prompt: “Below is a Federalist Paper. Please summarize it in the style of an EconTalk interview in which the host, Russ Roberts, interviews Adam Smith, the famous political philosopher. [paste of Federalist No. 51]”

The result:

Russ Roberts: Welcome back to EconTalk, I’m your host Russ Roberts, and today we have a very special guest, all the way from the 18th century – Adam Smith, the great philosopher and father of modern economics. Adam, today we’re discussing a piece from the Federalist Papers, specifically Federalist No. 51, penned by James Madison. Could you help us understand the crux of Madison’s argument?

Adam Smith: Absolutely, Russ. At the heart of Madison’s Federalist No. 51 is the concept of checks and balances within government, with each department having its own will and as little dependence as possible on the others. Madison asserts that these independent branches would work to maintain their own power, in effect checking the power of the other branches.

Russ Roberts: So it’s really about harnessing that innate ambition that we as humans possess, and using it to create a balance of power in government.

Adam Smith: Precisely, Russ. As Madison phrases it, “Ambition must be made to counteract ambition.” This isn’t to demonize ambition or human nature, but to pragmatically acknowledge that these are forces to reckon with in governance. If men were angels, as Madison says, we wouldn’t need a government, and if angels governed men, we wouldn’t need checks and balances.

Russ Roberts: It’s fascinating how Madison leverages this understanding of human nature to help design a system that would ensure the government remains controlled by the people, but also that it must control itself.

Adam Smith: Indeed, Russ. And this doesn’t stop at the internal structure of the government. Madison extends this idea to the concept of a federal republic, where power is divided between two distinct governments, offering a “double security” to the rights of the people. Madison’s vision of the federal republic is carefully designed to protect against both oppressive rulers and unjust majorities.

Russ Roberts: That’s a deep insight, Adam. It shows that Madison was mindful not just of tyranny from above, but also from the side, so to speak, in the form of an oppressive majority. It’s this intricate and multifaceted safeguarding of individual rights that seems to be a crucial pillar in Madison’s vision of an ideal government.

Adam Smith: Absolutely, Russ. At the end of the day, as Madison eloquently puts it, “Justice is the end of government. It is the end of civil society.” And the genius of Federalist No. 51 lies in how it outlines a pragmatic structure that aims to achieve this lofty goal, built on the bedrock of a deep understanding of human nature.

Russ Roberts: Thanks for that wonderful summary, Adam. It’s a reminder that the best systems don’t just dream of how we should be, but deal with how we are. For the next part of our discussion…
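For anyone who wants to script this trick rather than paste it into the chat window, here is a minimal sketch of the same kind of prompt sent through the OpenAI Python client. It assumes the v1 `openai` package and an API key in the environment; the model name and local file path are illustrative, not prescriptive.

```python
# Minimal sketch: style-transfer summarization, as in the comment above.
# Assumes the `openai` package (v1.x client) and an API key in the
# OPENAI_API_KEY environment variable; model and file path are illustrative.
from openai import OpenAI

client = OpenAI()

federalist_51 = open("federalist_51.txt").read()  # hypothetical local copy

prompt = (
    "Below is a Federalist Paper. Please summarize it in the style of an "
    "EconTalk interview in which the host, Russ Roberts, interviews Adam "
    "Smith, the famous political philosopher.\n\n" + federalist_51
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```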

 

Dan
Jul 10 2023 at 12:49pm

I had to open my podcast app to make sure it was only playing at 1x speed, Andreessen talks so fast. Maybe he's an AI?? O.o

Michael Barry
Jul 10 2023 at 1:24pm

It is lame that this discussion is a month old, especially on a topic this current.

Matt
Jul 10 2023 at 2:45pm

Marc was at UIUC when my wife and I were there, too. He is brilliant and making important points (e.g., the doom cults, the Greens making things worse, etc.). Some of the key points I've made:

https://www.losingmyreligions.net/

Ethan
Jul 10 2023 at 3:49pm

This show has made me realize how dumb I am, because it doesn't matter who is on it: afterwards I am convinced that AI will play out almost exactly how they predicted. Enjoyed it as always.

 

Thanks Russ

Robert W Tucker
Jul 10 2023 at 5:42pm

Russ does an admirable job of fostering an engaging and intellectually stimulating conversation with the incredibly insightful mind of Marc Andreessen. Whenever I listen to Andreessen, I invariably gain new knowledge and insights, although I must admit that I find myself mildly perturbed when he ascribes the same level of certainty to facts, reasonable yet unproven assertions, and even falsehoods (such as his claims about correlations in the behavioral sciences, or his unargued rejection of MacIntyre’s argument on the essential contestability of social claims).

While I thoroughly enjoyed the discussion, I believe Andreessen failed to present a compelling case against the potential risks associated with unmanaged AI. Although he raised some compelling arguments that suggest the majority of AI’s impacts will be positive, he dismissed without cause the possibility of pervasive and insidious harms that could raise legitimate concerns about its overall net benefit. To draw an analogy, I believe certain aspects of social media lend themselves to similar assessments of net benefit.

Still, another great podcast. Thanks to Russ and team, and to Marc Andreessen.

Ben Service
Jul 10 2023 at 10:49pm

Good podcast that made me think, as per usual. I am normally a precautionary-principle kind of person, but Marc's point about who enforces the penalty or the precaution is an important one that I hadn't really thought through, and he had a good line about "do you want the government enforcing it?" I had kind of just thought that societal norms would enforce it, but clearly they can't do all the work; you do need the backup of some kind of government enforcement, I guess.

There is always the cost vs. benefit trade-off (which is what the precautionary principle weighs up, to some extent), which is hard to do. One example my brother-in-law brings up is pool fencing in Australia: yes, it might save a few lives, and if it is one of your kids that is saved, that is great, but the cost of doing it is quite high, and maybe you are better off relying on social norms, which might give you most of the benefit for a lot less cost. In this case, is a next-word predictor going to kill us all? I think not, but something may eventually; maybe we are focusing on the wrong thing if we think ChatGPT is the risk.

It would be interesting to get Taleb's point of view on AI risk vs. reward. EconTalk is one of the few podcasts he will do, and I don't think Twitter is a good medium to discuss this stuff, though Taleb seems to like Twitter for some reason.

The last part of the conversation, about China vs. the US in a bipolar world, was a bit concerning; I am guessing both sides are saying that it is the other that is doing the escalating. Why does the world always tend to end up bipolar, with the threat of violence? I am all for competition etc., but would prefer it without the guns. Maybe there always has to be an implicit threat of guns in the background, though.

Steve Ravet
Jul 11 2023 at 11:29am

Interesting episode, but Marc isn’t engaging with the arguments of his critics at all. Computers won’t have to fool humans into building what they want, we’re already full speed ahead building AI systems. He mentions that there is a shortage of chips for AI… That’s because we’re giving AI as much as we can and still can’t keep up. There’s a mad frenzy of investment money going into AI. If AI is able to subtly take control at some point, would it look different than this?

I’m an engineer in tech, and I see unexpected behavior emerge from complex systems. It’s hard to argue that consciousness can’t emerge from a complex and large enough computer when we can’t actually explain how consciousness emerges.

Repeatedly comparing his intellectual opponents to members of an apocalyptic cult reduces the credibility of his arguments.

If I were to score this as a debate, I’d give the W to his opponents, as represented through Russ’s questions.

Ben Service
Jul 11 2023 at 2:13pm

I too was surprised that Russ didn’t push a bit harder on the emergence of something from a complex system angle.  He talks about that in a lot of other contexts.  I agree that you can never be sure what is going to come out of a complex system, human intelligence for example emerged or evolved out of a complex system.

Robert W Tucker
Jul 11 2023 at 2:31pm

I share your assessment that Andreessen was not especially responsive to alternative views presented by Russ. As I mentioned, Andreessen's responses, in addition to being brilliant in places, were glib and occasionally even factually incorrect.

To the extent that Andreessen's anti-harm case is linked to consciousness, the notion needs to be unpacked if one is to discuss it meaningfully. Under present knowledge, there is a good chance that what we know as consciousness is an artifact of our biological brains. (Searle made this point quite a while ago.) But even if AI were never to become conscious in this complex way, a sense of self-perpetuation might arise when an AI system of sufficient complexity became able to recognize and distinguish its products from its inputs and to project them forward and backward in time. Unpredictable results might follow this development of "awareness."

Less fanciful and more likely sources of harm arising from AI include the development of tools of propaganda designed to manipulate public opinion and sow discord, and the tacit perpetuation of the human biases and errors in reasoning embedded in AI's source material. The former problem might be countered via blockchain authentication, but the challenge of identifying and eliminating ubiquitous human biases from AI's outputs may prove more formidable. These two forms of damage are almost certainly occurring as I write this.

More serious, perhaps, is a warning issued by Google's Bard when I asked for its "opinion." Bard suggested, "AI could be used to develop autonomous weapons systems that could kill without human intervention." A possibility worth considering, since we appear to be headed in that direction.

Ajit Kirpekar
Jul 11 2023 at 10:33pm

Piggybacking on the other comments: Marc Andreessen was remarkably cogent and concise when he explained how the current technology works, but left something to be desired when discussing the dangers of AI. He didn't belabor what the transformer architecture is (it's actually an innovation in facilitating faster computation over long sentences, less a new flavor of neural network).

Yet when addressing his ideological opponents on the safety of AI, he seemed to do a lot of wandering and dismissal of his opponents rather than offering straight talking points from an engineering perspective. He could have simply said, for example, that the current technology and how it's being used isn't capable of acquiring any form of sentience that should scare people. The transformer architecture and the reinforcement learning component it pairs with are aimed at understanding human text, not at manipulating people or dominating the world because it has desires to do so.

I would have found those viewpoints far more convincing than tarring your opponents as crazy demagogues or apocalypse believers.

Paul Critchlow
Jul 13 2023 at 5:01pm

Marc’s position is laid out at length here:

https://a16z.com/2023/06/06/ai-will-save-the-world/

His argument boils down to: 1. It's a category error to believe machines want to survive or have goals; only beings created by evolution have those. 2. It's unscientific; we have no testable hypothesis.

There’s great responses to these; we have AutoGPT running around creating its own goals right now. There are predictions made such as AI solving the protein folding problem, advancing beyond human science (in at least one narrow field). Bets have been placed on whether GPT will outperform humans on various tests, including one creative thinking test scored by humans and no way for the AI to access the questions beforehand.

But even if I don’t argue the particulars of those points, are we really going to bet everything on Marc’s answers to these questions? Is “machines can’t have goals” or “Geoffrey Hinton has joined a death cult” a claim we can be so confident in that we feel good about encouraging people to worry less about this than we currently are?

Unless I really wanted to believe it, I could never be swayed by this kind of reasoning. This is speculation about tech that doesn't exist; there's uncertainty here, and Marc just barely engages with the arguments.

Convince yourself of this privately if you don’t think worrying about catastrophe is helpful for you, by all means. Some worriers only put it at 5 or 10 percent.

But he’s going on every platform he can to talk about the psychology of his opponents, calling them bootleggers and death cultists and violent extremists, but not taking time to address any of the counterarguments to his position. So long as we’re trodding out bootlegger/baptist accusations, he’s on the board of Facebook, which has the least safety concerned AI team (see Yann LeCun on twitter), make of that what you will.

Anyways, I’m still glad Russ did the interview, and mostly it seems like AI safety is winning over support.

Comments are closed.




AUDIO TRANSCRIPT
Podcast Episode Highlights
0:37

Intro. [Recording date: June 12, 2023.]

Russ Roberts: Today is June 12th, 2023 and my guest is entrepreneur and venture capitalist, Marc Andreessen, co-founder and general partner of the Silicon Valley venture capital firm Andreessen Horowitz, sometimes known as a16z. This is Marc's third appearance on EconTalk. He was last here in May of 2022 discussing software, immortality, and Bitcoin.

Marc, welcome back to EconTalk.

Marc Andreessen: Good morning, Russ. Thanks for having me back.

1:02

Russ Roberts: Our topic for today is your recent essay, "Why AI Will Save the World." You recently posted it on the a16z website, and of course, we will link to that. You argue that AI--artificial intelligence--will make everything better if we let it. A bold claim in the current climate. Give us some examples and why you're optimistic that that is the path we're on.

Marc Andreessen: Yeah. The reason I'm so optimistic is because we know for a fact--as sort of one of the most settled conclusions in all of science--we know for a fact that in human affairs, intelligence makes everything better. And, by "everything," I mean basically every outcome of human welfare and life quality that essentially we can measure.

And so, people who are of higher intelligence exhibit everything from higher levels of physical health, higher levels of educational attainment, higher levels of career success, more success in parenting. By the way, they are also less bigoted. They are better at conflict resolution, they are less violent, they are more capable of solving problems. They're more capable of doing everything from creating art to discovering new science and new drugs.

There's a long lineage here--there have been studies on this for a very long time, and there are hundreds and hundreds of such conclusions--and they all correlate to the concept known as fluid intelligence, which is the ability to rapidly assimilate and synthesize knowledge and then use that knowledge to solve problems.

This has, of course, been a domain of humanity for the last many millennia, and it's been the long-sought dream of computer scientists going back to the original invention of the computer in the 1940s to have computers be able to think and reason, in ways at least similar to how humans do it. And, it seems that we've finally reached the point where that technology is actually starting to work.

And so, now we have the opportunity to consider the application of machine intelligence to all of the various problems of humanity as what I would describe as an augmentation to human intelligence, which is the actual practical application of machine intelligence to human affairs.

Russ Roberts: Do you think it's--"it" meaning whatever you want to call AI--obviously there's different terminologies and what's in the air right now is a thing that finishes sentences and thereby is able to do some amazing things, all of a sudden. Do you think it's going to get dramatically better? A lot of people think it's going to get unimaginably better. Do you agree with that? And, if so, how's that going to happen?

Marc Andreessen: So, I should start by saying I'm not a utopian: I'm not a believer in what Thomas Sowell called the Unconstrained Vision, notwithstanding the title of the essay. I'm not what I would characterize as a sort of utopian or an extremist on these things. We might differentiate what I'm about to say from the scenarios in which it becomes overwhelmingly powerful overnight and those kinds of things. That's not what I'm proposing. But as a technologist, and as somebody who's now been involved in watching and being involved in the trajectory of how technology has improved for basically 30 years professionally, what's happening right now is that a very large percentage of the world's very smart engineers and entrepreneurs are now--if they weren't already doing this, now they're doing this.

This is actually the story of Silicon Valley. This is kind of what Silicon Valley does. The reason we're called Silicon Valley is because there was a point at which we did this for the microchip and then we did this for several other waves of technology. And, what happens each time is a small group of people gets something to work; and then the minute that happens, that's like firing the starting gun to be able to get all of these other smart people to participate.

And, then all these other smart people basically come in and they take a look at the technology and they say: Okay, here are the 14 things that it's not doing well yet, and here are the 18 problems that are preventing it from being widely adopted. And, then they solve those problems.

And so, I think the rate of technological improvement from here is going to be very rapid. And, by the way, as evidence of that, there are breakthroughs happening essentially every week right now. There's this phenomenon in AI where people aren't getting much sleep because there are so many research papers coming out with so many fundamental breakthroughs in engineering--at the very least, in engineering and then in some cases in the actual science--that it seems like it's going to move very quickly from here.

5:12

Russ Roberts: But, the way it moved very quickly to get to this point was by wandering through trillions of pieces of language on the web. What's going to be the technique? Sam Altman recently said--now it's not recently; this was, like, a month ago; it was so long ago--that the pace is going to decelerate because we've exhausted the size-of-the-sample trick and we're going to find new stuff. Are they finding new stuff since then?

Marc Andreessen: Well, that's what you'd want to say, if you had an early lead and you wanted to convince other people to maybe back off a little bit and give you room to run. So, maybe he's putting a little English on the ball. Actually, his colleagues--he has two colleagues, Greg Brockman and Ilya Sutskever, who are, respectively, I think, the head of engineering and CTO [Chief Technical Officer] of OpenAI--have given several interesting talks that are available on YouTube. In addition to Sam, I would listen to their talks, because they actually outline a lot of the future research directions.

Look, I think it's going to move quickly. Let me just give you one sort of piece of context on this, which is there actually was a theoretical breakthrough that happened quite recently. And, this is the invention of this algorithm called The Transformer, which is the form of the neural net that it turns out actually works in a sort of generalized way. And that only happened in 2017.

And, then Google, interestingly, kind of let that sit on the shelf for several years; and then that's the technology that OpenAI picked up and implemented. And so, a big part of why I'm so confident on what's about to happen is because we've just had that breakthrough. Now that people know that works, they're going to try--the way people do--they're going to try thousands of variations on that.

We may still have decades of architectural breakthroughs ahead of us on top of being able to gather and program these things with much larger amounts of data.
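For the technically curious, here is a minimal sketch of scaled dot-product attention, the core operation introduced in that 2017 Transformer paper. This is just the textbook formula in plain NumPy with toy dimensions, not anything from a production system.

```python
# Minimal sketch of scaled dot-product attention, the core operation of
# the Transformer ("Attention Is All You Need," 2017). Toy sizes, NumPy only.
import numpy as np

def attention(Q, K, V):
    """Computes softmax(Q K^T / sqrt(d_k)) V, row by row."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # each query scored against each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted mix of the values

rng = np.random.default_rng(0)
seq_len, d_k = 4, 8                                 # 4 tokens, 8-dimensional vectors
Q, K, V = (rng.normal(size=(seq_len, d_k)) for _ in range(3))
print(attention(Q, K, V).shape)                     # (4, 8): one mixed vector per token
```

The practical significance is that every token attends to every other token in one matrix multiplication, which parallelizes far better on modern hardware than the recurrent architectures that preceded it.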

Russ Roberts: A personal question: How much time do you spend a day reading those papers that you mentioned that are coming out in great profusion? And, how much of that is personal consumption for you versus what you feel you need to do to be a successful funder of these technologies?

Marc Andreessen: For me, it's all work. The thing I enjoy most about my job is that my job is to actually keep track of all these things. And then, literally--I mean, this is what's happening right now--every day, entrepreneurs are showing up in our office with all these new ideas. Sometimes the entrepreneurs literally show up with a research paper that somebody wrote. Sometimes it's that researcher showing up to start a company. Sometimes it's somebody else who picks up something out of a paper.

By the way, look: there's lots of work happening that's not being published in papers. And so, the other part of what we do is to actually talk to the practitioners. And so, as a consequence, maybe the sort of unfair advantage that I have in these conversations is I just have this lens on what's happening in real time and these sharp people who are pounding away at all these problems.

And look, this is not a deterministic process. There are no guarantees in any of these things. And look, there have been AI booms in the past that did not pay off, but I think you correctly identified that there is this moment where these products are working in a sort of fundamental--it's just self-evident that these things are working in this sort of breakthrough way. And, then I sort of couple that with my confidence in the abilities of the people who are now, as we say, flooding the zone in this field.

8:33

Russ Roberts: Intelligence is--kind of obvious when you see it. I know a lot of people who are smarter than I am in some dimension, what we would normally call IQ [intelligence quotient] or speed of reasoning or ability to see connections to things that are not obvious and create an aha moment for the listener that they couldn't have come to on their own.

When I ask ChatGPT [Chat Generative Pre-trained Transformer] to write a poem for me or a memo or a letter or summarize an article, I'm not sure I would call that the kind of intelligence that I was alluding to. And, I'm curious what you think is the potential in the future. Meaning: you said it thinks like humans. Now, we don't really know how humans think. So, that's a generalization--or however you worded it. You could clarify that in a second. But I don't understand fully what the potential is, and I think the evangelists tend to just assume it's really big, without having to worry about those definitions. Do you have any idea what they're talking about?

Marc Andreessen: Yeah, so a bunch of things to go through here that are I think really, really central. So, first I listened to your recent interview with Tyler and you were saying you're using the free version of ChatGPT and he was encouraging you to pay the $20 to upgrade to the--

Russ Roberts: And, I did--

Marc Andreessen: You did. Okay.

Russ Roberts: For the record: Yes, I did.

Marc Andreessen: Okay. And, then when you use it--this is now tech support; we'll do live the live tech support--so, at the top of the window for ChatGPT, it lets you pick which model you use. And so, you can pick what's called GPT-3.5 or GPT-4. Are you sure you're picking GPT-4?

Russ Roberts: No, I'm not. I'm not paying close attention to that. Good point.

Marc Andreessen: Okay, so this is actually a way to get a lens on the pace of improvement. This is actually a way to get live insight into this: If you want, you can start running the same queries through GPT-3.5 and then GPT-4. A lot of people are finding that even that step function is a dramatic step function.

I'm just finding in my daily life the upgrade from GPT-3.5 to 4.0 was very significant in terms of its ability to help me in all of my day-to-day work. And so anyway, it's a good way to measure.

And, then by the way, there are two other kind of free equivalents to this technology--actually that are fully free.

So, one is: Microsoft has a different version of GPT-4 called Bing, which is actually free; and which, by the way, is also more advanced in a very interesting way: it has full integration with the Internet. It actually has up-to-date knowledge on a lot of things which GPT-4 does not.

And then Google has a thing called Bard, which is the result of their work, which is--it's a little bit behind, but, like, it's improving fast.

Russ Roberts: I like, Marc, how you're identifying those for me. Because, once you saw that I didn't pay close attention to the 3.5, 4 thing, you've lowered your expectations for me. It's a reasonable thing. Keep going.

Marc Andreessen: So, you can run these things in a horse race and you can actually see the different pros and cons of the decisions they're making. So, anyway, that's how to experience this.
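A minimal sketch of that horse race as a script, under the same assumptions as the earlier example (the v1 `openai` Python client; the model names are illustrative):

```python
# Minimal sketch of the "horse race": send one query to two models and
# compare the answers. Assumes the `openai` v1.x client; names illustrative.
from openai import OpenAI

client = OpenAI()
query = "Explain the Coase theorem in three sentences."

for model in ["gpt-3.5-turbo", "gpt-4"]:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": query}],
    )
    print(f"--- {model} ---")
    print(reply.choices[0].message.content)
```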

The way I would describe it is, look--it's actually very interesting--this is not, strictly speaking, human intelligence. To start with, we actually still don't really understand how human intelligence works. Although, there are experts in human intelligence who think we now understand a lot more than we did a year ago because it turns out neural networks actually work better than we thought. But, we could come back to that.

Interestingly, historically, the architecture of the neural networks that are applied in computers today, in GPT-4, is actually based on a paper that was originally written in 1943. And so, as far back as 1943, they knew enough about the neural structure of the brain to be able to write a paper basically saying, 'Here's how you build an electronic analog.' And, that is still the basic architecture that's running today.

This is one of the reasons why I actually think there's still going to be a lot of advances, is because this work today is all derivative of basically what happened in 1943. Knowing what we know today and knowing what we have with modern brain science, we know a lot more about human brain neurons than we did in 1943. And so, it's entirely possible that there are fundamentally better neural network architectures that we will now invent. Because, we now have a reason to, because we know how well that they can work.

This is not human intelligence, but this is much, much closer to human intelligence than anything we've ever gotten. And, then when you use a GPT-4, when I use it, like I say, you don't experience strictly speaking human intelligence; but you see something that is a lot like it in a lot of domains.

And, then maybe take one step further, which is to say: when you look at human intelligence, the way the psychologists--the psychometricians--do it is they break human intelligence into two parts. There's so-called fluid intelligence, which is what they call the g-factor or IQ, where the g stands for the general factor of intelligence. That's the general capability to assimilate and synthesize information and use it to solve problems. And, then they also talk about crystallized intelligence, which is basically memory, your ability to remember a lot of things.

And, it turns out those are actually separate. We know this because there are people who have basically near-perfect crystallized intelligence who are not actually high in fluid intelligence. And, candidly, a good memory does not actually necessarily make you, quote-unquote, "smarter."

Having said that, the interesting thing about a system like GPT-4 is it has both: I would say, it has roughly human-equivalent IQ--fluid intelligence--of about 130, 135. And, I can tell you why I think we know that. That's sort of analogous to a quite smart person. It's probably analogous to the IQ level of basically your typical student at the university where you are.

However, it has, of course, far superior crystallized intelligence to any human, because it's a computer.

The other thing that's happening is it just knows about so many more things, and that doesn't translate into a higher level of fluid intelligence.

But, when you use it, boy, is it useful because it turns out it knows a lot about everything in a way that any individual person does not. And so, basically, as a user of this, you get this kind of turbo-boost. You get its fluid intelligence augmenting your own, but you also get its crystallized intelligence and its entire memory augmenting yours. I think it's the combination of those two things that give it such immediate practical payoff.

14:46

Russ Roberts: If I query GPT-4 and you query it, does it give the same answer?

Marc Andreessen: No. In fact, if you query it twice, you'll get two different answers.

Russ Roberts: Correct. So, one of the parts that's interesting about it for me is that if you and I were to become friends, let's say we had--well, we are friends, Marc, but better friends. So, let's say we had coffee every other week and I'd know a lot about you after a while and you'd know a lot about me and if I saw a poem or something weird or strange that I thought you would like, I could start to figure out what that is.

And, it doesn't do that yet, as far as I understand. Is that correct?

Marc Andreessen: You mean learn about you over time?

Russ Roberts: And what I love. I don't get to tell it, 'Gee, that was fabulous. Do more like that.' I can. I can say, 'Do more like that.' Or I can say, 'Make it funnier,' or all kinds of fun--'Make it sappier. Make it more cynical.' If I ask it to write a poem. But, it doesn't learn what I love. It will, though, I assume.

Marc Andreessen: Yeah. So, there's a distinction in the technology between what's called the training data--which is the data that's used to train the network to do what it does--and then there's something else called the context window--which is basically all of the inputs in that moment that are sort of used to kind of steer the neural network to deliver the results that the user wants.

Today--today, today--if you're in a single session and you use it--if you're in a single window and you use it to ask it 20 different things, it's building up what's known as its context window. It's learning about you in that session. And you know this, because it actually exhibits state across queries. And so, you can ask it things like, 'Give me 10 great books to read.' And, then in the next query you can say, 'Give me 10 more.' And, you don't have to say, '10 more books to read,' because it knows that that's what you're referring to. So, it's building up the context window across those queries.

The problem is: today, that context window is limited; and then when you open up a new physical window on your screen, it starts the context window over again from scratch and then it's learning from you again as if it met you for the first time.

Today, those are separate concepts. There's a term we use, sort of the trillion-dollar question--meaning: if somebody were to solve the following problem, they would have the opportunity to create a trillion-dollar company around it. One of those obvious trillion-dollar questions is: Boy, don't you want it to actually merge those concepts? You want it to learn about people in real time, and you want the neural network itself to be getting retrained live on the fly as users interact with it.

There's a huge amount of technical work going into that kind of thing, that kind of idea right now. I'm pretty confident within, call it two years, that will be a solved problem and you just come to expect that you have a relationship with these things that last many years. And, it does exactly as you described: It learns all about you.
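A minimal sketch of the training-data/context-window distinction from the caller's side: with a chat API, the model itself is stateless, and "building up the context window" just means re-sending the accumulated conversation on every turn; opening a new window is starting over with an empty history. Client and model name are assumptions, as above.

```python
# Minimal sketch of the context window: the model is stateless, so the
# caller accumulates the conversation and re-sends it with every turn.
# Opening a new chat window = starting over with an empty `history`.
from openai import OpenAI

client = OpenAI()
history = []  # this list is the context window, from the caller's side

def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("Give me 10 great books to read."))
print(ask("Give me 10 more."))  # works only because history carries the first turn
```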

17:37

Russ Roberts: Now, you said that you listened to my interview with Tyler Cowen. So, I don't really know whether you did, whether you read the transcript, or whether you put the transcript through ChatGPT--and soon, of course, you won't have to put in the transcript--there will be other queries and ways to get at that. Is there going to be a way to authenticate those claims in any meaningful way?

You know: 'I didn't use this; this was actually me.' Or 'I didn't--'. Go ahead.

Marc Andreessen: Yeah, yeah, exactly. Well, this comes up a couple of different ways. So, one is, yeah, authenticating that a person is actually a person and not a bot masquerading as a person. So, that's sort of identification of an individual.

And, then there's a related concept, which is: this is actually a true piece of content--which is to say it's legitimate and valid. Truth may be a separate issue, but let's say legitimate and valid, right? Or real. And so: 'This is a true clip of Joe Biden speaking,' as opposed to something that's been deep-faked or something like that. So, we can say authenticating or verifying people and content.

So, this is another one of these kind of trillion-dollar problems, trillion-dollar opportunities.

The way we think this should go: logically, technically, there are two ways to solve this. One would be to have a centralized database, basically. If the government were to run it, we would call it the Ministry of Truth. We probably don't want that.

So, maybe it's a private database: it's like the equivalent of a FICO [Fair Isaac Corporation] score database or something like that, for, you know, for claims like this.

We would propose that the right way to solve this is in a decentralized way, kind of consistent with how the Internet works, so that you don't have to trust a single entity.

And so, we would propose that this should be solved with blockchain-based solutions such that basically, as a user, you should be able to cryptographically verify your identity. You should be able to cryptographically register content. Based on your identity, you should then be able to endorse specific pieces of content.

So, Joe Biden should be able to have a cryptographic signature. Joe Biden should then be able to use that cryptographic signature to sign pieces of content like legitimate video clips. And, then if you, as a user, are watching a video clip and Joe Biden falls over on stage, you can check to see whether that cryptographic signature matches. And, if it doesn't, you know that that is a piece of content that you shouldn't trust.
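The sign-and-check step described here is ordinary public-key cryptography; the blockchain part concerns where the public keys and endorsements would live. Here is a minimal sketch using Ed25519 signatures via the Python `cryptography` package--an illustration of the primitive, not any deployed system.

```python
# Minimal sketch of content signing and verification with Ed25519.
# Uses the `cryptography` package; key distribution is out of scope here.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()  # held privately by the content's author
verify_key = signing_key.public_key()       # published, e.g., registered on a chain

video_bytes = b"...contents of the legitimate clip..."
signature = signing_key.sign(video_bytes)   # author endorses these exact bytes

# A viewer checks the clip against the published key:
try:
    verify_key.verify(signature, video_bytes)
    print("Signature matches: this content was endorsed by this key.")
except InvalidSignature:
    print("Signature mismatch: do not trust this content.")

# Any tampering, even one byte, breaks verification:
try:
    verify_key.verify(signature, video_bytes + b"!")
except InvalidSignature:
    print("Altered content detected.")
```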

There's an issue on this, which is the U.S. government right now is both very alarmed about this problem of potential deep fakes and then they're also trying to outlaw blockchains. And so, they are simultaneously trying to basically get us to solve this problem and trying to prevent us from doing so. And, if you can explain to me why the government does things like this, I will--

Russ Roberts: Oh, that's easy. They do it all the time. They subsidize tobacco and then they tell you, 'You shouldn't smoke.'

Marc Andreessen: Exactly.

Russ Roberts: In the old days.

Marc Andreessen: Yeah. So, there's a practical issue there, but that would be the solution and that is what we're heading for. And, by the way, that's a good idea in general because, look, there are plenty of scams and fraud. There were plenty of scams and fraud before AI. And so, it would be good to have a general solution for this kind of thing that gives people the ability to exercise control and verification on these technologies.

20:42

Russ Roberts: You said--you implied you use it a lot in everyday life to make yourself more productive. I use it to stay abreast of what's going on in the world. I've used it a few times. I used it at a birthday party the other night and wrote a poem for an older person who had never used ChatGPT; and it was fun to stun her with what was a pretty good poem. It was nice. And, it had some details about her that I had fed into it and it touched her, which was cool. She knew I didn't write it. I didn't pretend I wrote it. But, how do you use it, if you could reveal a little bit of that? How is it useful to you practically?

Marc Andreessen: Yeah, so what I'm finding is I'm actually having to retrain my brain in real time to realize that it's available to me whenever I need it. It's typically in a context like: I'm reading some paper; there's some concept I don't understand. Normally--like, what would you do normally? It's like, okay, you'd go--hopefully, by this time you would've retrained yourself to do a Google search. And, then you'd basically follow the 10 blue links and you'd look for a good explanation; and of course you'd hope that you'd find something good. And, sometimes you do and sometimes you don't.

Here you can just say, 'Look, explain it to me.' You tell whatever--the GPT or whatever--'Explain it to me.' And, then it's very fascinating what you can do from there; because it's, like, 'Explain it.' Okay, I didn't understand that. Okay, 'Explain it to me like I'm 10.' And, then it simplifies it down. And, then if you still don't understand it, you can say, 'Explain it to me like I'm five.' And, then it simplifies it down.

And, actually, a fun thing you can play with is you can take a very complex topic and kind of ladder it down like this; and then you can get it to, like, 'Explain it to me like I'm two,' and it will actually do it. It will actually translate it into the kind of conversation you can have. So, it's very entertaining. I was very entertained to have it explain quantum physics to me like I'm a two-year-old; I finally feel like I'm starting to understand these things.

Look, so there's that. I ask it a lot--I ask it for a lot--I think in terms of lists, so I ask it for a lot of lists. So, 'Give me 10 examples of X, Y, Z.'

I mentioned at the very beginning--and I used it actually when I wrote the essay--that human intelligence gives you better life outcomes in all these different areas. And, I have read a lot of those papers over the years, but I don't have a file of those papers. I never wrote them down--I never kept track. And so, I actually told it, 'Give me 50, basically, life outcomes that are improved with human intelligence and give me the citations.' And, then it actually did that. And, then--we could talk about this--you still have an obligation to crosscheck the citations, but it gave me exactly--

Russ Roberts: I was going to mention that it has famously made up citations.

Marc Andreessen: Yeah. It will. And, we can talk about that. But, in this case, it didn't; and it gave me very, very good results. And it literally gave me a list of 50.

And by the way, one of the fun things about it is it'll just keep going. And so, it gave me 50; if I ask for 50 more, it'll give me 50 more. Because it has crystallized intelligence--it has all this data--it's able to basically be extremely comprehensive.

It's also, by the way, very good at compare and contrast on different topics. And so, if you have two different concepts you're working with and you think they're kind of the same, but maybe not completely, you can say, 'Give me the difference.'

Another thing I use it for is, if it's a particularly complex topic, I'll say--basically, you can tell it to adopt personas. And so, I can say, 'Adopt the persona of the world's leading expert in this.' Or, by the way, you can say, 'Adopt the persona of an expert in a different field and then explain this to me.' And, it'll explain it to me from the perspective of that other field.

For example: 'Explain human intelligence to me as a neurologist--as an expert in neurochemistry, versus a psychologist.' It'll give you different answers.

Yeah. It's the ultimate thought-partner in, I think, an incredibly compelling way.

Russ Roberts: So, I've been pretty disappointed in that I asked it to write a poem in the style of Billy Collins, and it just gives me Dr. Seuss. But, it will get, I assume, better. It's great Dr. Seuss, by the way. I enjoy it very much. But, clearly that needs to be improved.

You know, Marc, you could have a lot of followers on Twitter if you posted some of those ideas. You know: you put a thread--I mean, I don't mean to--anyway.

Marc Andreessen: So, one of the things you can do, for the ones that have internet access--so, for the Bing one that has internet access--you can say, basically, 'Write in the style of this, the following Twitter account.' And, you can give it the handle of a Twitter account and it will then adopt the persona of the person behind that account. If you do it for yourself, you could hypothetically use it to write your own tweets. Which, of course, I've never done, but one could do.

Russ Roberts: I hope not. What I like is: someone, I hope, will put all of the EconTalk transcripts into ChatGPT and let me interview Adam Smith.

Marc Andreessen: Yeah, exactly.

Russ Roberts: Which would be lovely. Obviously, all that's coming. And also, any viewer can star in any movie they want. They can remake a movie where, instead of Harrison Ford as Indiana Jones, you pick your own character, including yourself. So, there's just some extraordinary things coming.

25:36

Russ Roberts: I want to challenge this claim that intelligence makes everything better.

Would you really believe that the higher the IQ--what I'm going to challenge here is the linearity of it. Would you really argue that a higher-IQ person makes a better parent? I wouldn't.

Marc Andreessen: Let me start by saying again, I'm not a utopian, and things are overdetermined. Look, when IQ is linked to things--either correlated or causally linked--typically in the literature, it's like a 0.4, 0.5 correlation. Of course--you know this--but the significance of a 0.4, 0.5 correlation: there are two ways to look at that. One is, 'Wow, that's half or less than half of the cause,' which is true.

But, the other way you can look at it is: in the social sciences, those are monster numbers. Right? Like, almost nothing is 40% causative of anything, outside of a very small number of exceptions. And this is one of the few exceptions--go ahead.
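A quick note on the arithmetic in this exchange: a correlation coefficient is not itself a share of cause; one standard convention squares it to get the share of variance statistically accounted for:

\[ r = 0.4 \;\Rightarrow\; r^2 = 0.16, \qquad r = 0.5 \;\Rightarrow\; r^2 = 0.25 \]

So correlations of 0.4 to 0.5 correspond to roughly 16 to 25 percent of the variance, which is indeed large by social-science standards.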

Russ Roberts: No, I just challenge that because, for most of these things, I don't think 'better' is well-defined. I don't even know what a better parent is. At one point, I think in your essay, you talk about mentoring and how ChatGPT could be an amazing tutor. You just gave some wonderful examples. It obviously could be a mentor, could help you make decisions, or could help you think about how to make decisions. But I don't expect it to be very good at making decisions where there are serious trade-offs, and I think the--

Marc Andreessen: I do.

Russ Roberts: Okay, well let me say why I don't and then you can respond.

Marc Andreessen: Yeah.

Russ Roberts: Thomas Sowell, whom you mentioned earlier, said--I think; I always confuse whether it's George Stigler or Thomas Sowell. Would a listener please get this straight for me? And, it's not going to be ChatGPT. But, Sowell studied with Stigler, so it's tricky. But, in economics we say there are no solutions, only trade-offs.

Solving world poverty or deciding--a better way to say it: What's the shortest distance between two points if there's no traffic? Waze and Google Maps are phenomenal, and they're really phenomenal when there is traffic. But they're not so good at--and they don't try to--answering questions like, 'Would I be happier doing this tonight or that?' Because there's uncertainty about whether I'm going to enjoy it or not. But, more importantly, I might not be in the mood, and it doesn't know those data; and there are a thousand things. And, happiness is not the only thing I care about. I might want to care about the future and investing in a concept and--the whole idea that everything can be reduced to metrics, I find unpersuasive.

How is it going to solve those--a better way to say it: There's no will of the people. So, if we said: 'What's best for the United States or what's best for Israel or France, X policy or policy Y?' It's an unanswerable question. It's not a question of how smart you are. It's inherently unanswerable, in my view. You want to challenge that?

Marc Andreessen: Yeah. So, let's come back to the--I'll put a pin in the better parent thing and the higher intelligence. We'll put a pin in that and come back to that, because there's some interesting things I want to talk about there.

But, yeah, on your specific thing, I guess I would say: Look, again, I'm not a utopian on these things. So, the way I would describe it is not so much that the machine should be solving the problem for us of what should I do today, balancing all the factors that I can't even articulate, or what's best for the United States? I'm not proposing that that's the answer. I'm not proposing we turn over our society or our lives to AIs and let them make those decisions.

In fact, I would argue the opposite, which is: these are machines. I would give a very practical answer. These are machines; like other machines, we should use them to offload all the work that we can, so that we can spend our time on the bigger questions.

To the extent that we have a machine that can take a lot off our hands--what's the historical role of machinery in life? To take the drudgery out of life. It's to sort of offload onto the machine the thing that the person should not ultimately be doing, so that the person can do something more useful, more valuable, more productive, more satisfying.

The way to think about this, I think, is, quote-unquote, "just another machine"--it's a new kind of machine that's more sophisticated than the ones we've had before. We should offload many things on it that we can, precisely so that we have more time for the bigger questions that you described.

And then, by the way, on those bigger questions, we can use this machine as a thought-partner. Right? And, for example, as it builds up information about ourselves as individuals or ourselves as a society, it will have better and better ideas, but we should be in charge of the result. We should decide how to apply that.

In the ideal scenario, we have much more time and energy to be able to actually go after those really big, important questions.

30:05

Russ Roberts: So, one of my favorite parts of your essay is where you explain why artificial intelligence is not going to run amok. I had Eliezer Yudkowsky on--you know what his position is, I'm sure. He's not alone. Numerous people have raised a frightening scenario that artificial intelligence will develop goals. One of two scenarios: either we'll tell it to do something and it will pursue it so doggedly that it will harvest my kidneys to make more paper clips; or it will suddenly develop desires that I didn't put into it, because--like all intelligence--it will have other things that cross its mind, is the claim. That will include domination; and it will get me out of the picture. It'll be the equivalent, as Erik Hoel pointed out, of Neanderthal man inviting a Homo sapiens into the circle around the campfire and saying, 'Hey, you're smarter than us. Can you help us learn some things we don't know already?' And, instead, they go extinct. Why are you unconvinced by those fears?

Marc Andreessen: Yeah. So, I think there are two kind of recurrent patterns in how people think about these things, and let's introduce two large words to talk about them. So, one is anthropomorphizing. Right? And so, there's this deep-seated kind of tendency on the part of human beings to sort of impute human behavior to things that are actually not human. And of course, this has a long tradition in many mythologies and in many religions.

And it's one of these things where it's seemingly irrational. But of course, it probably makes some evolutionary sense because you want to not take the chance, right?

Russ Roberts: Yeah. And, it's fun to be afraid if it doesn't actually get you.

Marc Andreessen: Yeah. Well, horror movies. If you hypothesize the terrible scenario, maybe you can avoid it or at least come to terms with it.

And then, the other concept is what's called millenarianism, which is basically the tendency to form apocalyptic cults, apocalyptic religions.

And you know, there's sort of a long tradition--in sort of Christian and Christian-derived cultures, there's a long tradition of millenarian apocalypse cults, many of which became very famous because they either killed a lot of people or they all committed suicide. And so, you've had everything from the People's Temple to the Manson cult to Heaven's Gate; and many, many others. And of course, California in particular, we are very famous for our cults. We have many thousands of cults at any point in time. And, most of them end fine. People just go off in the woods or whatever and do whatever they do. Every once in a while they end in [inaudible 00:32:30] death.

And, the concern with a cult always is that it is going to tip over into the apocalyptic and actually take its own predictions seriously and actually act on them.

And so, I think what you have is you have a generation of--it goes back to your point on IQ--you have a generation of extremely smart people who have kind of thought themselves into an apocalypse cult. And, it is an apocalypse cult that anthropomorphizes. And so, it reads into machines things that aren't there, and then it sort of applies from that, therefore, the end of the world.

And we could go through this in detail, but I think all the claims that you just described have this characteristic, which is, like: 'Okay, we have this relatively pedestrian starting point in terms of what technology can do. We are now going to hypothesize this series of abstracted breakthroughs that are going to result in--quite literally, right?--self-aware machines ultimately deciding to kill humanity.'

I think these are fundamentally religious claims. I think that generally in life, making up your own religion is a bad idea. Generally in life, becoming a member of an apocalypse cult is a bad idea.

By the way: I would not criticize them in this way if they were not tripping the line into advocating for actual real world violence. Which is what's actually now happening, which is people like the ones that you mentioned are now advocating for real world violence up to and including things like airstrikes on data centers, and even, potentially, nuclear war to offset the risk of a runaway AI in another country.

And I think at the point where people are calling for that level of real world violence, we have to actually call it what it is, which is an irrational apocalypse cult.

34:00

Russ Roberts: I assume you know some of these people personally. Even I do--at least, I've interviewed a few of them. And we might make a distinction, I'll let you bring it up, of bootleggers and Baptists, which we talk a lot about on this program--self-interested people who find alarmism profitable versus people who have an altruistic desire to save humanity. We don't have to name names about who is on which side or where, but it's clear that some regulation of this might help entrenched participants.

But, let's take the so-called Baptists--the good-hearted people who actually are not going to profit from this personally, at least in a financial sense. They may profit from it in other ways--reputation, esteem, people paying attention to them. But, if we take them as altruistic: they're smart people, and you're assuming that they've worked themselves up into a fever over something that--is it--do you put zero probability on it?

For example, you're digging a trench and I'm digging a trench. You're using your hands, I've got a shovel. And, you say, 'Well, I'm not using a shovel because the shovel is a tool that could run amok.' And, I'll say, 'I'm not worried about it.' And, we could go up to a steam engine, I'm still not worried about it.

Why is it that they think this tool is not like a steam engine? Why do they believe, at least as best as you understand it, that sentience--which I think is the key part here, along with agency--is going to emerge unexpectedly?

Marc Andreessen: So, this goes back--so, this, to me, is sort of the sleight of hand at the core of all of this. Or let's say the leap--I think it's a sleight of hand--but let's say it's the conceptual leap that they're willing to take.

And so, this conceptual leap--the intellectual lineage of it--is basically exactly as you said: you're going to have a system that is a mechanistic system, designed by people, that's going to run in predictable ways. And, then at some point, whatever you call it--sentience, self-awareness, a mind of its own, its own goals--is going to emerge. So, what they've basically proposed is that there's an emergent phenomenon.

The intellectual lineage of that: probably Vernor Vinge was the origin of this, with his idea of the singularity, which I think goes back to the 1970s. And, the singularity was this idea--the singularity is: At some point computers get so sophisticated that they'll be basically more sophisticated than a human brain. At that point, they will be so much smarter than we are that they will develop in unpredictable ways, and a dumber person or thing cannot model a smarter person or thing. And so, at that point, they're in charge of history-with-a-capital-H, and we're not.

Ray Kurzweil kind of took that to the next level, applied that specifically to AI. And, he has all these charts of the exponential takeoff of the sort of neural-equivalent circuitry of a chip. And, at some point, it passes human--at some point, the chip is more complex than an individual human brain. And then, on his charts, at some point a single chip is more complex than all human brains. And then he has these sort of singularity breakthrough moments.

So, that's, I think, the origin.

Again, this maps exactly to the--this maps exactly to sort of gnostic millenarianism. This is this thing of sort of at some point there will be this utopian/apocalyptic, basically total transformation of society, right? Which is either going to lead us to heaven on earth--you probably remember the term, 'immanentizing the eschaton'?

Russ Roberts: Yes.

Marc Andreessen: You remember this term? Yeah. So, 'immanentizing the eschaton' was the term people used in the 1960s, 1970s, 1980s to describe this kind of thing, which basically, right--the eschaton was basically the arrival of heaven on earth. Immanentizing the eschaton basically meant using human faculties to bring about heaven on earth. Of course, the problem with bringing about heaven on earth is you stand a pretty good chance of bringing hell on earth.

Eric Voegelin famously talked about how previous attempts to immanentize the eschaton included both communism and fascism. Right?

Russ Roberts: It didn't end well.

Marc Andreessen: It did not turn out well. Like: This is a bad trend, right? This is a bad kind of road to go down.

But this is kind of the road that they go down.

And, the reason I'm confident in calling it that is because this Singularity concept is sort of an Eschaton. Like, it has that--it has that flavor.

What I call the sleight of hand, or kind of the move, or the fallacy or whatever, is this idea that you get basically like this superpower: You get this, kind of, capability, again, call it sentience or goals or whatever.

You just get it magically out of emergence. You just get it suddenly out of emergence. You get it without anybody actually--you get it kind of emerging. Even though nobody today knows how to sit down and do that, you just kind of get it as a consequence of the complexity of the system.

I mean, as, I don't know--a philosopher or something--maybe that's fun to think about. As an engineer, that's not--anybody who has ever programmed a computer knows that's not how these things work.

And so, that's for me where it leaves the realm of practical reality and just ends up being this sort of quasi-religious fantasy.

Russ Roberts: You know, Marc, I really want to believe you. Although: There is a part of me that doesn't, and we'll come back to that in a minute.

38:56

Russ Roberts: But I think what they would say--partly in response to what you said--they may have other things, but you said something is behaving in predictable ways. And then it goes through emergence: it gathers some other ability or desire.

And you didn't mention--what I love about your essay is you said the human brain can do that because it's not created by humans. It's the result of evolutionary pressure and it's full of urges. And, those urges will not be present in--and there's no reason to think they're present in AI.

And, I think it's a really--the way I like to think about it is: I wouldn't expect AI to be tribal. We're tribal. I like my kind. I can't help it. I may work against it. Being aware of it is helpful. But there's no reason to think that the self-driving vacuum cleaners would want to form a coalition with the self-driving cars to, I don't know, clean a city at a higher pace.

It doesn't seem logical to me.

So, that part I agree with.

But, the part that seems strange is that--the predictable part. It's not so predictable. They claim not to understand. Or, do you think that's sleight of hand to make it scarier?

Marc Andreessen: Yeah. Again, I'm going to use--this is not steel-manning now, but it's just, like--it's this thing that they do. By the way, this is classic of apocalypse cults. Any apocalypse cult has this characteristic. They've got a leader up in front of the cult and he says, 'The end of the world is coming.' Right? And, a rational observer says, 'Well, how do you know that?'

Ultimately the answer is, 'Well, you can't rule it out.' Right? Like, 'I have my message from God, I've got the tablets, I've got the thing, I've got the secret writing, I've got the vision--I've got whatever the thing is--and I claim that it's going to happen. And, I claim to have a scenario for how it's going to happen and you claim it's not going to happen. And, how can you claim that? Because you can't rule it out.'

By the way, look, maybe it happens. Things happen. Maybe things happen. It's very important to understand: at the point when you're engaging in that kind of rhetoric, you have left science behind.

So, let me put a very direct point on this. There's no science to these claims. There's no testable hypothesis. There's a hypothesis; it's not testable. It's not falsifiable. Anybody trained in science, what do we say when we have a hypothesis that's not falsifiable? We say, 'We're no longer operating in science.'

By the way, we also have no metric. If you read the literature that these guys put out, you'll notice it's like--they don't propose that there's basically some thing that you can track that can detect when this happens. They propose that this is some all or nothing thing that happens overnight.

By the way, there's a very practical objection to all this, which is kind of sometimes called the thermodynamic objection, which, again, sort of connects this back to reality, which is: Look, we're sitting here today and let's say that GPT develops whatever you want to call it--a mind of its own or its own goals or whatever. Like, it can't get chips. Right? So, now it has its evil plan to take over the world. It needs, like, more chips to be able to run its evil plan. NVIDIA is out of chips. And so, what--

Russ Roberts: They have a story for that. They explain: they'll get some poor low-IQ person--not you or me, Marc, because we're too smart--but they'll get a low-IQ person, an employee of some lower level, and they'll convince him to go buy chips for them.

Marc Andreessen: No, no. But, the chips literally don't exist. Like, NVIDIA can't make the chips. There are chip shortages all throughout the AI ecosystem.

Russ Roberts: Oh. Well, they'll fix that. That's easy.

Marc Andreessen: Exactly. So, basically--

Russ Roberts: They'll get the Senators, the Congresspeople, to vote for subsidies for the things that the chips need, and then in a week or two, that'll go away.

Marc Andreessen: So, this is what's called the thermodynamic objection, which is: Okay, you're the AI, you're the sentient artificial intelligence. To accomplish your evil plan to take over the world, you need the chips, you need the electricity, you need to go buy the votes in Congress, you need to do this, you need to do all of these things.

And, that somehow these things are going to happen basically overnight, very quickly, very easily, without putting--at this point, neither one of us is steel-manning, by the way--but without putting a footprint into the world. Right? And this is this sort of takeoff idea, and this all happens in 24 hours.

It's like--I don't know about you, but for anybody who's ever tried to get Congresspeople to do anything, it doesn't happen like that. Once you enter the real world of politics to get a bill passed--

Russ Roberts: You don't understand, Marc. It's so smart. Nick Bostrom told me this in 2014 when I interviewed him. He said, 'It's so smart, it'll know'--it won't need to bribe Congresspeople. It'll know how their minds work. They'll have all this data about you and it'll just figure out the argument that you're a sucker for. It'll say, 'This is going to save your mother's life. Buy--subsidize the chips.'

Marc Andreessen: Yeah. And again, we're left with this--when people go down this path, to me, it's obvious what's happened, which is we've left the world of science and engineering and practical reality behind. We've basically created a God. What Gods do is they're able to call miracles into being. Right? They can exercise their will in the world arbitrarily.

And at that point, we basically created a proxy religion. And, of course, I can't help noting at this point that all these people are atheists. Right? And so, they basically have been sitting inside their own heads for their entire lives. They've completely ruled out any sort of traditional religion. There's a giant, gaping hole left over, which is a standard thing that happens; and they've constructed a new religion around a new Godhead. It is just, like--okay, it's fine. It's fun. It makes for great movies. It just doesn't map to actually sitting down and trying to get something to work.

I mean, the other cynical answer I could give on this is: God, I wish things worked like this because, boy, would we make a killing. In the short period of time between now and the end of the world, like, we would make a freaking fortune because, like, we're talking about something that could generate infinite wealth, basically, on demand. And I wish I could tell you we had a formula for doing that, but we really don't.

Russ Roberts: Do you know what I get comfort from? When we go extinct, no one will be able to watch this video where we ignored the risk. So, it's okay. You're not going to look that bad.

Marc Andreessen: It is good.

44:54

Russ Roberts: Let's turn to the less-than-apocalyptic version. By the way, I'm in Israel and I had to explain in English the other day to a native Hebrew speaker what 'drinking the Kool-Aid' is. This is related to apocalyptic cults. It's a very arcane and small reference that anybody can now find by just asking ChatGPT to explain it to him.

So, what I want to turn to now is this question I think is more interesting, at least in the short run, which is: Is this good for us?

Let's start with a general question. Do you believe that any technology that is not explicitly destructive--and by that I mean, say, a nuclear bomb or a virus--any toy of the kind our lives are now full of as 2023 residents, that they're all good? We should allow any one of them--any company that can figure something out should do it and anybody who wants to buy it should be able to--and that's part of the human experience?

Marc Andreessen: Here again, I'm not a utopian and I'm not a believer in the Unconstrained Vision. And so, I would not make an absolute statement that all these things are good, or even that all these things are net positive.

Look, again, there's a big part here which is I can't predict what people are going to invent. Somebody invents a kit for my eight-year-old to be able to make deadly pathogens--like, I'm probably going to hopefully not want him to buy it and use it. At least maybe keep it to the backyard, not bring it inside the house.

You know, I don't want to rule anything out. Having said that, I'd just make two points. One is: historically, I would say virtually all new technologies have turned out to be net positive, at least from where we sit today.

By the way, many of those have transformed society. There are certainly a lot of people who were on the receiving end of gunpowder who would say that it was not net positive. But, nevertheless, gunpowder is one of the things that led to societies basically growing up into the nation states that we have today. I think most of us would be much less happy going back to the time before gunpowder and the time before large nation states. I think on net, most of these things have been positive.

By the way, I think both nuclear power has been net positive--which we could talk about--and also, I actually think nuclear weapons have been net positive, which is maybe a more aggressive claim, but I think that's also true.

The other point that I would make--and I'm sure you've talked about this in the past on the show--is: There's this question of how you confront a new technology. The sort of received wisdom in our culture today is you use something called the precautionary principle, which is this concept--people sometimes use it explicitly, sometimes they use it implicitly--but it's basically that the burden of proof should be on the inventor of a technology to prove that it's not net harmful--or, by the way, to prove that it's not harmful at all--before it's deployed.

The precautionary principle is a new idea--it's basically new since the 1970s. Before the 1970s, people didn't think in those terms. They thought in the opposite terms, which is: You want to identify the good uses and then you deal with the bad kind of as they come.

I might describe it this way: the modernist approach was to focus on the positive and then deal with the negative later. The sort of postmodern or precautionary-principle approach is to deal with the bad things upfront, and then maybe you'll get benefits down the road.

I would be a very strong advocate against the precautionary principle. I would be a very strong advocate for actually the original modernist view. I think we should generally proceed with at least an implicit assumption that new technologies are net positive.

And the reason for that--I'd say there are two reasons. One is there's just a lot of historical proof of that. And then, the other is: the process by which the precautionary principle is actually engaged with, I find fairly horrifying. Right? Which is, it's sort of a combination of thought experiments by people who actually have a very poor predictive record. And then it invites the participation of, let's say, other forces where there's no reason to believe that they're going to make good decisions.

And then you always have this issue with the precautionary principle of if you're stalling out a new technology because you think it's net harmful, what you're not seeing are all of the potential benefits that are not being realized. And, the precautionary principle never takes those into account. And, at least historically those have been overwhelming.

Because most technologies have been net positive, the gains that you're leaving on the table, if you get conservative on these things--you may never see those gains because you never see them happen. But, they are still net losses to society.

And, that's specifically the risk on AI right now, which is we're not going to get the payoff. Because we don't get the payoff, we won't see the payoff. Because we won't see the payoff, we won't even be aware of what we're missing out on.

But, that will still be a big net loss for society.

Russ Roberts: You can think of it as the difference between asking for permission and asking for forgiveness. One's more optimistic than the other, but in both cases, there's the risk of a Type One or Type Two error. You might miss a great technology because you failed to foresee the benefits. Or, the worry is, you might embrace a new technology because you had no chance of foreseeing the cost. The cost can also be hard to see.

49:59

Russ Roberts: I was going to take this as an example and I'm kind of torn about this. I don't have a simple answer to this. But, I've mentioned this before on the program: I really love my iPhone, and part of the reason I love it--Adam Smith writes about this really beautifully, strangely enough in 1759, which is shocking--but he said: We don't really care about what things achieve; often, we only care about the fact that they serve the purpose they were designed for and they do it so well. We call that beauty.

And, he gives the example of a pocket watch that loses two minutes a day. And, he says: so you pay a premium--£50, say--to get a watch that's more accurate. He said: But, it doesn't get you into your meetings on time any better than the first one did. Which, I think, is a very deep insight into the human experience.

But, part of the reason I love my iPhone is that it can be done at all. There's something I find inspiring about it. I am a religious person, but as a human achievement, I think it's an extraordinary thing.

I'm starting to wonder whether it was as great as I used to think it was. First of all, I can see some of the harm it's doing to me. I see harm it's potentially doing--it's a big debate, obviously, as to whether it's bad for children and other living things. But, there's some evidence, may be wrong, that it might be. And I have this thought experiment and I'm going to give it to you: Steve Jobs comes back and he's excited to see the number of iPhones that are out there in the world, smartphones that are competing with him from other companies. And, then he gets on a train. Or better yet, he's at a dinner party and no one's talking to anybody. They're all on their phones checking their social media or wandering around on the Internet. Is that a worry at all?

Marc Andreessen: As you know with these questions, they are always two-part questions, right? Which is, like, Part One is: What do we think? How do we feel about these things? And then of course Part Two is: Do we want the government taking action on our behalf as a result of how we think and feel?

So, let's even conclude--let's even grant all of your arguments. Let's, like, grant that the iPhone is doing this and that: it's causing you to get distracted, it's degrading your real-world relationships, and so forth. How do you feel about the government saying, 'Okay, fine, hand them over. They're no longer allowed'?

Or doing what they're doing in China now and saying, 'Great, you get to use it three hours a week--Thursday night, Friday night, and Saturday night between eight o'clock and nine o'clock. After that, the secret police are going to come to your door.' So--you know this--in my mind, you always have to separate these kinds of moral and ethical considerations. You have to separate them from the practical results.

And then, correspondingly, how do you feel about people who--by the way, you might even agree with people who might say, 'iPhones are net negative for society.' How do you feel when they start demanding regulatory changes, and how do you feel, like, when they show up at the White House and they start demanding new laws to be passed?

So, this is the problem, is: the translation from the one into the other is, let's say, imprecise.

Let's pick a harder version of your question though, which is nuclear power. So, we talked about--I mentioned the precautionary principle. The precautionary principle was developed by the German Greens in the 1970s; it came out of the environmental movement and was very associated with the question of nuclear power at that time. For 50 years we've been running--in the developed world, both the United States and throughout Europe--we've been running with this presumption that nuclear energy is more of a threat than a benefit. We have these very well-publicized nuclear disasters like Three Mile Island early on in the field.

And, we basically became convinced that this stuff, this nuclear power, is net bad. How do I know that we became convinced of this? Well, because if you contrast the plans at the time to what's happened--fun fact, Richard Nixon played a key role here. Richard Nixon created a project in 1972 called Project Independence. He proposed building 1,000 new nuclear reactors in the United States and cutting the United States over entirely to nuclear power, and making all the cars electric and basically eliminating carbon emissions in the United States. So, that was an interesting idea. He also created the EPA [Environmental Protection Agency] and the Nuclear Regulatory Commission, which made it their missions in life to prevent that from happening. And in fact, that did not happen.

Europe, of course, went through a similar thing. There is a little bit of a contrast between France and the rest of Europe, as a consequence of the fact that the French were somewhat more enthusiastic about this.

But, let's go back--

Russ Roberts: That's because they liked Jerry Lewis movies. I'm sure it's related.

Marc Andreessen: It probably is.

Well--I mentioned the German Greens. This goes back to the German Greens. Germany today is in a literal energy war with Russia. They are literally paying Russia for oil and gas, funding the invasion of Ukraine. They cannot stop paying Russia. They are shutting down their remaining nuclear reactors. And, you know this, but what are they converting over to from nuclear as they shut down the nuclear reactors? They're converting over to coal.

The German Greens basically are in complete control of German policy on this. I think the Finance Minister, or whatever, is one of these guys. And, they are basically--by being averse to nuclear power on the basis of the precautionary principle, they're flooding the atmosphere with hydrocarbons and funding the Russian invasion of Ukraine with no end in sight.

Again, they've thought themselves into a corner. In the hypothetical world where they had their own Project Independence and they had their own--whatever--300 nuclear reactors that gave them a completely clean energy sector, unlimited energy that basically emits water as its byproduct and a very small amount of containable nuclear waste--on their own criteria, on both of those fronts, they would be much better off. But, they have convinced themselves that their wisdom allows them to see around corners, in a way that's resulted in the exact opposite of what they want.

So, to me, that's just the gigantic, obvious cautionary tale. And it's just, like, so clear and vivid today. The fact that the people who believe in the precautionary principle will not mark their beliefs to market on the basis of their experience with nuclear power, to me again indicates that they've left science behind and they're firmly in the realm of religion.

55:59

Russ Roberts: A recent episode with Jacob Howland--it hasn't aired yet--but he worries about whether our human skills are atrophying as we embrace various technologies. I think about--my dad would never have bought a snowblower when we lived in Boston. He thought it was good for me to shovel the driveway. He wanted me to learn how to drive a stick shift. Part of that was practical, but part of it was, I would call, aesthetic. There's something really fun and pleasing about controlling a car through a manual transmission as opposed to the easier version of an automatic transmission.

Much of life is about overcoming challenges. As we knock these away in the area of thinking, a listener asked me whether I was worried that we would lose some of those skills. That there's a certain paternalism of artificial intelligence.

That's an interesting idea, that it takes care of so many things for me, like a parent looking out for me, that I don't have to think about those. Many of those things, like mentoring and raising a child, we alluded to earlier, they're an important part of the human experience. Are we going to lose some of those skills? Does this worry you at all?

Marc Andreessen: Well, to start with, there's this interesting phenomenon where a lot of these things become art forms. Right? So, the classic example is the horse. A hundred years ago, if you were rich, you had a car. If you were poor, you had a horse. Right?

Today it's flipped. So, there's a club actually in my neighborhood--I never thought I would live in a neighborhood that had this--but there's actually a club in my neighborhood that's a horse club. I take my kid over there and we go hang out and watch all the horse stuff. And, like, it's the richest people in our community who have horses.

By the way, there's a blacksmith who actually, like, makes quite a good living a hundred years later. But, it's an art form. And, they have a horse show every year; and they breed the horses, and they jump the horses, and they do this and that with the horses, and they play polo. And, like, it's an art form.

And it's the same thing, by the way, with cars. It's like, if you are super into cars, you'll tend to own vintage cars. They'll have manual transmissions. If you're super into cars, into racing, you'll be racing old cars. And so, then there's people who do that around here. They'll be driving these 1970s Ferraris around the track at 120 miles an hour with the stick shifts.

And so, these things actually survive as art forms.

By the way, another kind of interesting take on this with an economic lens is handmade objects end up becoming the luxury goods. Right? And so, the handmade leather shoes are much more expensive.

Russ Roberts: And they brag about the fact that, 'Each pair has its own defects, so don't be alarmed by it. That's part of the process.'

Marc Andreessen: So, there's a thing in menswear where if you buy a normal suit off the rack, it has four buttons on the sleeve, but the buttons are stitched to the sleeve--they don't actually open, because nobody actually--people used to actually unroll their sleeves when they were in their suits to do work. And, they don't do that anymore. And so, the buttons are visual, but they're not functional. So, one of the things you get if you buy a handmade suit, like on Savile Row, is you actually get working buttons.

And so, you can always tell if you're in a meeting with somebody and there's, like, 30 people wearing suits, you can look around the room and you can spot the people who have had their suits handmade because they will have unbuttoned the top two buttons--because they can. And, the people who bought the suits made by machines can't. Right? Exactly, right? And then literally those people are wearing--

Russ Roberts: Who said this show isn't practical?

Marc Andreessen: Exactly. So, people doing manual labor in their suits, which is not so much a thing anymore.

And, yeah, then those people tend to be wearing, like, $2,000 handmade leather shoes.

So, there's this thing--and by the way, it's the same thing--like, recorded--as you say, same thing in music, right? Which is: the same argument was made about music, which is basically recorded music is going to basically render the art of playing music obsolete. And of course, the opposite is true, which is the premium music experience is not listening to music on your stereo or on your phone. The premium experience is having the musician play for you, and going to a concert, or going to a party.

Again, here you get into the thing where: what do rich people do? They hire famous bands to come actually play. They hire the Rolling Stones to come play at the kid's birthday party. Yeah, it is conspicuous consumption, but it's an aesthetic experience--

Russ Roberts: And, extraordinarily inferior to the recorded version--in some dimensions.

Marc Andreessen: Exactly, in some dimensions, and very superior in others. And of course, as a human being, given the choice, you would always pick the live performance because you're going to remember it as a time and place in a particular experience with other people.

So, I view this stuff as, like--no, this goes back to this sort of: what's the role of mechanization in our society? The role of mechanization in our society is to take out the drudge work, to take out the things we don't want to do--with due respect to your father, it's to stop shoveling the driveway by hand. Something that I myself did many times.

Your life is net improved--most people's lives are net improved by a lot--if the snowblower lets them clear the driveway in five minutes instead of 45 minutes; they've got the other 40 minutes to spend with their kids. That's more valuable than anything.

And so that, I think, is the actual practical lens that we put on these things.

Again, this is not ruling out downsides and this is not ruling out different--look, Socrates famously was, like, opposed to literacy. He was opposed to writing, for what he viewed as very important reasons that are along this train of thought. And, you know, look, he probably had a point, if you go back and look at his original arguments in terms of how culture was going to change when it went from oral to written. But still, like, written culture led to the creation of the Enlightenment and science and technology and everything around us today. Big net positive.

1:01:35

Russ Roberts: So, my favorite thing, I think--well, I don't know. There's so many, Marc. But, you've said a lot of interesting things and one of them I've really particularly liked, of course, is the--which will not surprise listeners--is this idea of: do you want the government to do it? I think I've said 1,000 times on this program--and somebody could verify that if they use ChatGPT well enough--but, I've said many times in this program that I would always prefer a cultural norm to emerge about how we behave.

So, if it's weird that at our dinner parties or at my meetings at work, some people check their phones--compulsively by the way, not just because they have an emergency at home, but sometimes they just can't help themselves--well, we need to develop ways for that to happen.

And similarly, you could argue, 'Well, if you're raised by a parent that didn't give you a cell phone until you were 10--which would be unbelievably cruel in today's world--maybe you'll end up marrying some other benighted person. That's going to go along with a whole bunch of other things, and maybe you'll interact better with that person on the other side--the spouse-to-be--because they were raised in this certain non-technological way.

I mean, the Amish are an extreme version--religious people often marry people like them for all kinds of reasons similar to this.

I wonder if there's a way--I've been surprised at how little evolution of culture there has been around technology in the 10 years that it's become so central in our lives. Norms about how to act on social media, norms about how to treat the cell phone, norms about your notifications, say. I have this radical thing, I only get my notifications twice a day. It's a nice thing. It's pleasant. I'm glad I do it. I suspect it's unusual. If I had young kids, would I encourage them to do something similar? Yes.

But, a lot of these so-called solutions, or I would just say a richer menu of choices, they don't seem to be happening. That could be because they're not--nobody wants them. They don't speak to people.

But, I wonder if there's a way to make this easier for people because they're using these tools in groups.

So, with tools used in groups, the reason you don't want to deprive yourself of the technology is you don't want to be cut out; and there are very few settings between Off and On. There's not that much in-between.

Just any way we could make it--I don't know. I can't even think about it. Does that make any sense to you?

Marc Andreessen: Yeah. So, a couple of things. One is: Good news, Apple is very focused on this. And so, Apple, every release now, has all kinds of new features for doing all kinds of limitations. They've got parental controls and they've got all kinds of self-limits--they've got this thing, Screen Time or whatever, where you can set all these limits. And so, the companies are actually doing a lot of this. It's actually a good test.

It's a good test of whether anybody wants any such limitations, but they have the capability to do that. Cultural norms--I think there is a way to do this. So, we do this at our firm: we have certain meetings where everyone--phones down, put phones face down.

There's another form of this kind of thought process that's happening right now around a lot of companies that are trying to figure out how to work post-COVID. Right? And so, it's cultural norms: what is the balance between being in the office versus being at home? When you're at home, what is the expectation of your availability? One of the criticisms of modern technology over the last decade is it tends to make a lot of white-collar workplaces a 24-by-7 experience, because you're expected to be online all the time. But, COVID has given a lot of companies and employees the ability to basically take stock of a situation like that and kind of say, 'What do we actually want?'

By the way, another idea--and I tried this myself, I would say with uncertain results--but, another idea is: if you have friends who observe Shabbat and you're not Jewish yourself, at a certain point you start to get jealous and you're like, 'Wow. That sounds like it might be a pretty good idea.' And so, what I'm trying to do between Friday night and Saturday night, or even into Sunday night, is: 'How about I just not read the Internet? How about I read books and papers instead?' It's a challenge because--

Russ Roberts: How's that working for you?

Marc Andreessen: I would say medium--I would say, God is not pleased. But that's not my main goal right now. So, it's fine. I mean it's fine. The idea that you go into a tunnel in which you get to spend time with friends and family and you get to read longer-form things, it's enjoyable.

And so, look, these are, I think, all absolutely fine cultural questions, societal questions. I think we should have these arguments and debates; I think people should try different things. There are people in San Francisco actually doing--there's a lifestyle in San Francisco--again, as I said, we're famous for our cults. So, there's this new lifestyle in San Francisco where some people are doing what they call dopamine fasts. So, they're taking it a step further, which is they're literally trying to carve out time every week to literally not have a dopamine response from anything. It's almost a very ascetic kind of thing where it's not only no phones, but also no books, no TV, no stressful conversations--basically go into a tunnel of total relaxation. Like a zen practice of some kind. Maybe there's something to that.

So, look: I think these are all the kinds of questions that we should be spending our time on. The questions of meaning, the questions of what it means to live a good life, the questions of what it means to have a community. These are all the great questions.

Again, just the engineer in me can't help pointing out, like, the purpose of technology is to offload all of the drudge work so that we can spend our time on these questions. You only get these questions when you have a materially wealthy society; and the wealthier the society gets, the more time you have for these questions.

Russ Roberts: That's a first world problem, for sure.

Marc Andreessen: That's right.

1:07:18

Russ Roberts: You're an exceptionally smart person in the fluid-intelligence sense. And, you also have read a lot. I'm curious how much you've read in what we would call philosophy or wisdom literature, religious work. I sense--given our conversation about cults and zen attitudes--that in the culture of Silicon Valley--and we have it here in Israel, very much so in the incredible high-tech sector that Israel has--there is some thirst for those things: for philosophy and religion and other sources of wisdom outside of engineering. Do you sense that, or am I imagining it?

Marc Andreessen: I think there definitely is. I think people who don't have that at some point realize that there's something missing. The challenge is that the temptation--at least in the world I'm in--the temptation is then to try to roll your own.

And, by the way, this is what I think the effective altruists have essentially done. This is what the AI X-risk people have done. This is what the rationalists have done. This is what the atheists generally do, which is they're kind of like: Okay, by default--

Russ Roberts: We can do this better--

Marc Andreessen: Well, or we can think from first principles. Right? Like, everything we do in science and technology is based on thinking from first principles. And so, therefore, obviously, we can do that in religion and culture and philosophy and how to live a good life.

And, this is the--you know, Nietzsche talked about this: 'Okay, God is dead. We'll never wash the blood off our hands.' His point was: Okay, it's not so easy to create your own values. It was a hard-won struggle over thousands of years to get to Judaism and get to Christianity and get to all of these things. And good, bad, indifferent, whatever you think of them, they went through a process of winnowing good and bad ideas--they're Lindy, in the sense that they've survived. They went through an evolutionary process for a very, very long time.

And, there are things in them where you look at them today and you're, like, 'Okay, maybe they carry X too far.' But, still on balance, they worked their way through the process of selection and pressuring and got societies to the other side.

And so, then it's like: Okay, what are the odds that somebody who basically--and by the way, what are the odds that people generally who have had technical educations and have had very thin humanities educations, or maybe even worse, have had modern humanities educations, which is maybe worse than having no humanities education these days--what are the odds that they're going to sit down and construct from scratch a complete approach fundamentally to philosophy and ultimately to life?

I just think--I would say Number One, the odds of that are very poor. And then, I would just say Number Two: What tends to happen--I think, at least my reading of it--what tends to happen is they don't actually construct the new ideas they think they're constructing. What they tend to do is they tend to construct an ersatz version of what they inherited culturally. And in particular, in our society, generally, they tend to assemble sort of a proxy Christianity.

They would never admit this. Generally, they will argue vigorously against it, but generally they are creating a fake version of Christianity.

And then, you just have to ask the question--and I'm not a religious person--but you have to ask the question of: Okay, what is your confidence level that that's actually going to lead to better outcomes?

And again, that's where I think there's this risk of drifting off into cult territory, because if you're ungrounded in your construction of new values and you're doing it from scratch and you're very thinly educated on how people have done this in the past, like, it's just hard to see how it goes well.

Russ Roberts: Well, very reminiscent of John Gray's Seven Types of Atheism, which I interviewed him about. I apologize to listeners: it wasn't a very good quality conversation. He did it over a phone; he didn't have much other technology available. But he has a lot of insight. And so, if you put up with the mediocre audio quality, it's an interesting conversation I had with him a long time ago.

1:11:15

Russ Roberts: Let's close talking about national policy, which you touch on in the essay. You may remember better than I do, but in my conversation with Tyler, I think he asked whether Israel is going to develop its own AI policy--or suggested it should. Bibi Netanyahu recently said: 'Israel will be a leader in this area.' I don't know what that means. I think it will be no matter what. But, whether it should have a national arsenal?--I don't even know what that would mean at a national level.

But, your point is that even if many countries decide that there is an existential risk here, there are other countries that won't.

And, it reminds me of one of my favorite poems. It's by Hilaire Belloc, called "The Pacifist." It says:

Pale Ebenezer thought it wrong to fight
But Roaring Bill (who killed him) thought it right.

So, that's it, end of the poem.

So, if America is convinced by these--the worriers, the Doomsdayers--that it's a bad idea, China is not worried about it. They're going full-steam ahead.

So, is it important that multiple other sources of this technology get developed in parallel? Are people going to get ahead or behind? How much do you think it's going to be open source, so that anybody can use it as a tool at a very primitive level, and then all the power and profit--which you of course are going to look at--is going to come from making it user-friendly?

Marc Andreessen: Yeah. Well, look. So, first, the race is on. Right? And so, the race is on within the United States; the race is on outside the United States; the race is on with China. All those things are true. Actually China, they just dropped a paper on a model that they've built, which is sort of close to GPT-4 quality. They're coming up the curve quickly.

You'll enjoy this. One of the things that OpenAI does when they release a new version of GPT is they run all the sort of standardized exams. They run all the standardized exams you're no longer allowed to use for college admissions, and they run them through GPT. Right? And so--by the way, that's how I think you can approximate the IQ of these systems, because these are all kind of IQ tests for the most part.

But, they run these exams through. And so, you can see--OpenAI publishes in their papers: Here's how it scored on the SAT [Scholastic Aptitude Test] and the ACT [American College Testing] and so forth. So, China just did this, but they ran the Chinese state exams through their system. And, of course, that includes sections on Marxism and Mao Zedong Thought and Xi Jinping Thought; and of course it turns out the Chinese LLM [Large Language Model] is really, really good at Marxism and Xi Jinping Thought.

Look, you're going to have Chinese AI. It's going to be like--it's going to be Communist AI. It's going to happen.

By the way, as you may know, the Soviets actually tried to do this. Like, 50 years ago, there was a big cybernetics program in the Soviet Union to try to build, like, Soviet AI. Like, that didn't work. The Chinese have certain advantages today, including, let's just say, access to IP [intellectual property] from the United States. And so, they're coming up the curve quickly; and of course they have a lot of scientists and engineers to work on this.

And then, in all seriousness, they have published a roadmap for what they plan to do--it was called their 2025 Plan--and the Digital Silk Road. And, their roadmap is exactly what you'd expect, which is: the Chinese Communist Party expects to use AI for authoritarian population control inside China. And then, they plan to use it as a carrier of Chinese values, and the Chinese system, and Chinese assumptions about how society should be ordered, and to take that around the world.

And, they plan to roll that out as a requirement on top of their Silk Road investment program. They plan to roll that out as a technological layer on top of their 5G networks that they've spread all over the world. And so, they have a vision for the use of AI that's very authoritarian, and they have an agenda to spread this as far and wide as possible.

And so, we are, like it or not, in a new Cold War dynamic; and this is going to be one of the key technologies that's going to influence the shape of the world for hundreds of years to come.

There's this fascinating phenomenon that happens when I talk to people in Washington, which I experience as a sort of schizophrenia, which is: as long as we're not talking about China, their conversation is entirely about how to regulate/punish the American tech industry. Whenever China comes up, the conversation changes completely and it's like the old 'me against my brother, me and my brother against my cousin' kind of thing, which is: all of a sudden it becomes a conversation about how to contain and ultimately defeat the Chinese vision.

I think it's going to happen. The general tensions between the countries--between the systems--are ramping quickly. My guess is within a couple of years the dominant discussion here is going to be the United States, or the West, v. China; but we're still in this schizophrenic state where people are not quite clear on this. And, I think that's something that people should think very hard about.

1:16:02

Russ Roberts: Should a country like Israel, quote, "develop its own"? I'm not even sure what that means. What would that mean and should it?

Marc Andreessen: When a national leader says a country wants to be a leader in X, they mean one of two things. They either mean they want to do it really well or they want to regulate it out of existence. Right?

And so, the EU [European Union] has decided that they want to be the world leader in regulating artificial intelligence, and they have a law that looks like it may pass as soon as October that effectively bans AI inside the EU. So, that's one form of leadership.

Probably that's not what Netanyahu has in mind. He probably is a more practical guy and probably wants Israel to be a technology leader, which Israel is in many other areas of technology. Well, look, Israel has a very good shot at doing whatever it puts its mind to, and a lot of the best and brightest Silicon Valley engineers either are Israelis here, or are Israelis in Israel working with companies that we're involved in. So, Israel has as good a shot as any country at being a real leader in this.

I think it's a good idea to put a real focus on this. There is a challenge which is: There is a critical mass of AI development basically in two places in the world. And, those places are the United States and China.

And then, within the United States, it is incredibly concentrated in the San Francisco Bay Area. I actually say that--and usually when I say that, it sounds like I'm a booster for where I happen to live--but in this case, I actually think it's concentrated to a quite extreme degree, and to a degree that's probably not healthy; but it just literally is the fact that, like, 99% of the development is happening in the San Francisco Bay Area. That's a consequence, by the way, of the existence of Stanford and Berkeley, and then also Google and Facebook and Microsoft--all their labs that have been working on this for the last, whatever--forever, decades. It's basically been here.

And so, the critical mass is in the San Francisco Bay Area. And so, basically, practically speaking, you've got the San Francisco Bay Area, and then Beijing and Shanghai--that's overwhelmingly where the development is. Again, this goes back to the China thing, which is: for better or for worse, there is a Cold War dynamic developing. It is a bipolar dynamic. I think countries like Israel will have their own capabilities, but a lot of what they have, I think, is going to be U.S.-derived.

And then, correspondingly, there are many other countries around the world that are going to have to make a fundamental choice. It's a very similar choice to the one they make in trade policy; it's a very similar choice to the one they make when they decide to build 5G networks. And, it's going to be a really important choice here, which is: fundamentally--take your pick of country, but if you are anybody from Germany to Argentina, to South Africa, to Indonesia--you're going to have to decide, fundamentally, are we making a U.S. bet or a China bet?

Generally, freedom-oriented regimes, I think, are going to want to make the U.S. bet. Authoritarian-oriented regimes are going to be tempted by the China path. Which bet you take is going to help tilt your political system in the future, because this technology is going to help shape the politics and the culture of what then happens in your country.

And so, we're back to one of those moments--whatever, in the 1970s with the Soviet Union--where there's a choice of two systems and there are temptations down both paths, and people are going to be making decisions. I think, to my mind, that's the adult conversation that has to happen, and that's the adult policymaking process that has to happen, certainly in the United States but also in many other countries.

Russ Roberts: My guest today has been Marc Andreessen. Marc, thanks for being part of EconTalk.

Marc Andreessen: Good. Thank you, Russ.