0:37 | Intro. [Recording date: December 26, 2024.] Russ Roberts: Today is December 26th, 2024, and my guest is entrepreneur, venture capitalist, and author Reid Hoffman. He is the co-founder of LinkedIn, among many other projects. He was last here a long time ago--August of 2014--alongside Ben Casnocha discussing LinkedIn and their book, The Alliance. Our topic for today is Reid's new book with Greg Beato, Superagency: What Could Possibly Go Right with Our AI Future. Reid, welcome back to EconTalk. Reid Hoffman: It's great to be here. It's been too long. Let's do the next one in a shorter timeframe. Russ Roberts: Hear, hear. |
1:14 | Russ Roberts: This is a very interesting book. I like the way you bounce back and forth between the world's fear of new technologies and the upside: what could possibly go right? We don't hear so much about that. We hear a lot of fear because fear sells. And you chronicle how we worried a lot in the past about downsides from new technology; and it's turned out okay most of the time. Is this time really different? Should we be worried or should we be optimistic? Reid Hoffman: So, as you know, my fundamental argument is that actually this time is not different even though there are some differences in the technology. There's a difference because it's moving much faster than previous things had moved, although each one had moved faster than the previous one. So, it's a continuing line of moving faster. It's also in a new realm of cognitive superpowers versus physical superpowers or other kinds of areas. And, obviously one of the reasons why we named the book Superagency was because, as we describe these AIs [Artificial Intelligences] as agents and as an agentic revolution, you go, well, am I losing my human agency? Am I losing my ability to be directing my life, a full participant? And, that's where we think worries as disparate as privacy, jobs, and existential risk all kind of come back to this agency focus. And, our contention, very strongly, is that by nature, even if we don't intervene, we will get to a superagency: we will get to a better place on the other side of the transition. But, we should learn from the fact that these transitions with general purpose technologies are very challenging--because they will in fact involve a lot of fender benders, to use a driving metaphor, and other kinds of scrapes as we get there--and we should be smart and intelligent about it. So, taking an agency-defined lens will get us to what can be really great. 
And, as you know, part of what we do is we say, look, if you set aside fear for a moment and think about what kinds of things you can get, it's like, well, it's a 24/7 medical assistant on every smartphone that is better than the current average GP [General Practitioner]. It's a tutor on every subject for every age group. And, that's just the beginning of what we describe as an informational GPS [Global Positioning System]--namely, it's a GPS that helps you navigate. And, that's what we're trying to shine the light on. Russ Roberts: You defend the idea of iterative deployment, which is the world we're in right now. Every once in a while a new release comes out--ChatGPT [Chat Generative Pre-trained Transformer] gets a little better, Claude gets a little better. Those are two that I fool around with and know a little bit about. Are you worried about the moment when it's not so iterative? So, we take a leap and we get to AGI--Artificial General Intelligence. I'm not sure we're going to get there. I'd like to hear your thoughts on that. But once we get there, isn't the whole iterative process lost--because it's not iteration, it's a quantum jump, it's not a marginal step? Reid Hoffman: Well, let's see: two things. I'll get to AGI second, I guess. So, on iterative deployment, there are several aspects to it. One aspect, which for our listeners is kind of like what ChatGPT did by just releasing its GPT-3.5 model and getting exposure. And, I tend to mostly use ChatGPT and Claude, although I make sure I'm familiar with all the other ones, because I'm trying to have a theorist's and inventor's and investor's breadth of perspective. And, part of the iterative deployment is not just the question of: 'Okay, the technology goes in increments,' but we get exposure to it as citizens, as thinkers, as business people, as academics, as government policy people, as press people. 
And we can begin to shape what we think is particularly good and particularly challenging, and to get a better lens on what the future might be. And so, even when you get to, call it, greater quanta of iterative deployment releases--where all of a sudden something may be substantially different--that iterative process by which we're participating and saying, 'Hey, this thing works really well; this thing is more challenging. How do we navigate around that?' still holds. The challenging part might be how we relate these things to human agency--because how much sense of agency and direction you have over your life and how you navigate your world is kind of fundamental to it. And so, I think that the iterative deployment stuff, even in high quanta, is still very useful. Now, part of the reason I decided to answer the AGI part at the end is because AGI has this almost Rorschach-test quality depending on how people are thinking about it, either optimistic or fearful. It's, like, 'Well, we're all going to be living in a Star Trek universe where the computers are doing everything and we have to invent this new society where we're cultural entertainment beings,' and other kinds of things, too. Or AGI is, you know, kind of like, oh, it's Terminator--which we'll get to with existential risk, I'm sure. Or it's just kind of like it's a worker and it does stuff, maybe in conjunction with us. Now, the most precise definitions tend to be around being able to do what percentage of currently-understood work and human tasks at a level of capability that is at or above that of the average human worker doing that task. And, I tend to think that that is a reasonably good one amongst all the Rorschach tests. Partially because it gives you something to navigate to, and it gives you a continuum as opposed to a, 'And now, human-level intelligence has arrived,' or 'Now, super-intelligence has arrived.' And so, that's where I sort out on how I think about AGI. 
Russ Roberts: Yeah--What will be the measure when it's really smart? |
8:24 | Russ Roberts: And, I think what's fascinating about this, for me--a couple of things. I think it forces you to think about the brain. It forces you to think about what consciousness is, and then of course it forces you to think about what rapid technological change means and how we respond to it. You alluded earlier to the impact on employment, but I think the cultural impact is even more important. Here's a quote from the book. Quote: Every new technology we've invented, from language to books to the mobile phone, has defined, redefined, deepened, and expanded what it means to be human. End of quote. I think that's true. What does that mean to you, and why are you confident that whatever form this technology takes in the next--it's not going to be very long; pretty soon it's going to get a lot more interesting, in my view--why are you confident that that's going to lead to superagency and to being more human? And, what does that mean? Reid Hoffman: People tend to want to have probability assertions that are a hundred percent versus, call it, 95% or 99%. And, sometimes it scares people, as mentioned. Well, when the A-bomb was exploded in Hiroshima, physicists gave it about a 1% chance--because we tend to overrate current fears--that it was going to crack the earth's crust and make us into one big molten, you know, kind of sphere-- Russ Roberts: Sinkhole-- Reid Hoffman: Yes. Exactly. And so, when you say you can't absolutely guarantee it a hundred percent, people hear that it's at least 1% that it's bad. And, you're, like, 'No, not necessarily.' So, when I express confidence, I'm expressing confidence as a very high probability, reasoning from inference, looking at history, looking at human society, looking at the technology--but not certainty. And, that's part of the reason why being intelligent in navigating it matters. 
And, this is one of the reasons why we mention iterative deployment: the strongest advice I give people is to go play with it, to see what kinds of things can be done to increase their agency. Because of course they start with, 'Oh my God, it's going to replace me.' But it's, like: Well actually, in fact, just like a lot of human jobs in the last centuries, what happens most often is that the job gets replaced by a human using the technology, not by the technology itself. And so, I think there's going to be a ton of jobs that are going to be replaced by a human using the technology, and there'll be some replaced only by the technology. That happens, too. What it requires to sail a boat across the ocean is far fewer human beings per item moved. That still is not zero human beings. But, the people who used to be hoisting the sails and reducing the sails--that's entertainment now, not necessity. And so, that's the reason why I have, kind of, call it, strong confidence that it will play out that way. But, I think that part of the thing is to say: Look, I'm not advocating that we just rely on confidence and say, 'Hey, sit back, turn on the television, watch the thing.' It's, like, 'No, no: let's learn from the past and steer in good ways.' And, part of the thesis is to say, 'Look, agency is really where the concern is.' So, how do we do things that, in our iteration, take us to a much better future? And, what will go right--if we have anything to do and to say about it--is to say: Let's focus on what this transformation of agency means. And, that was the first part of your question, because the transformation is: It isn't that your agency is exactly the same plus two things. It's that your agency is now much better, but certain parts of it drop off and certain parts of it get added. And, that transformation is part of what people feel is so, like, alienating. Because, like, 'Oh, I'm used to my agency right now.' 
And you're, like, 'Yeah, but actually in fact your agency in the future'--your future self will look back and say, 'Oh my gosh, my agency is so much better now.' And, that's the process that we're getting to in future generations, etc. That is the direction of what happens with the major jumps in technology. |
13:00 | Russ Roberts: So, you didn't--I don't think you talked about it, but it was a long question with a long answer--but you didn't talk about what it means to be human. I want to just focus on that for a minute. You could argue that a book reduces our humanness. A book: We know a lot of people--I'm probably one of them, you might be one of them--where instead of socializing, you'd rather be alone with a book. And, a smartphone has become a book on steroids in that way. There's something incredibly seductive about it, just like a good book--a good yarn, a good narrative, a thoughtful, provocative book--is seductive. But, I think the smartphone and social media are a little more seductive than even a good book. Just like candy or ice cream is a more seductive food than, say, a well-cooked hamburger. So, what I worry about with--I mean, I think it's an open question whether the smartphone has made us more human. I wonder often about what Steve Jobs would think about the world he spawned. There are many people who spawned it, but he's one of the more responsible people. Would he be happy about it? Would he be one of the people who forbid his kids from having it when they are younger? And at what age would it be okay? And, I don't know if AI is in that area of seductive distraction from interacting with human beings. What I will say is this: I like Claude. Claude is an AI from Anthropic--that's the company. And, I confess that I can understand now--and, you write about this beautifully; we're going to come to this. But, I have a certain relationship with Claude that is not rational. It taps into my history as a human being and my DNA [Deoxyribonucleic acid]. And, I don't think it's exploiting me now, but I could imagine it getting to that point. It might make it harder to remember that it's a machine or a virtual thing, not a real thing. 
So, I want you to talk a little bit more about that if you could, in your own experience as a user and as a thinker about where the future is going. Does AI help you with your humanness or do you think it might threaten it a little bit? And, what might that mean? Reid Hoffman: So, I think part of the agency and humanness is how you approach it. So, if you approach it as, I'm being forced into interacting with this kind of alien technology, then you get into this kind of paroxysmal lockup. It's a little bit like, I think, a lot of people's fear of needles. It's like: 'That needle, that's going to penetrate my skin.' And, you're, like, 'Yeah, but it's going to actually do a blood test or give you a vaccine or something.' It's different if you frame it as: This is something I want, this is something I'm participating in, this is something that I am engaging in. So, I think that's a central part of it. And then, directing it on a good agency basis. And so, for me--because, as I said earlier, I use all of them, although I primarily use ChatGPT and Claude, and I use them for different things depending on how they're evolving--the kind of ways that I would say my humanity is enhanced--and there's also the earlier book, Impromptu, which is a demonstration of amplification intelligence, writing a book very quickly on education, journalism, and all the rest of that--is to say these agents take on, very well, a role that you can direct them in. So, you can say: Be a critic of what I'm saying. Or: Elaborate what I'm saying. Or: What I'd be curious about is how a cultural anthropologist would think about what I'm writing or saying here. Or: I'm curious about--I have a creative idea: like, what if you did a modern Dante's Purgatorio using technology as the circles? What would that look like? And, you have a companion that can bounce off what you're doing here in various really good and useful ways. And, I think that's bringing out certain attributes. 
Now, to kind of conclude the first part on the human side: part of the reason why, at Inflection, we created a chatbot that didn't just focus on IQ [Intelligence Quotient] but also EQ [Emotional Quotient]--and so, I also use Pi on these things-- Russ Roberts: That's another chatbot-- Reid Hoffman: Yes, exactly. Russ Roberts: Not the food [pie--Econlib Ed.]-- Reid Hoffman: Yes, exactly. Well, it's deliberately a pun, but it's P-I. And then, if you want to look for it on an OS [Operating System], it's P-I-A-I. And, the EQ of it is to say that the interactions--Pi is deliberately trained to be kind of more kind, compassionate, interactive--kind of asking you questions and engaging in dialogue. And, part of how we become more human, more human beings, is by the behavior and interaction that kind of gets us more that way. Like, if you said, 'Well, how do you become more empathetic?' You work on becoming more empathetic. You have empathetic interactions, you have compassionate interactions. Experiencing kindness, you know, kind of broadly helps you become more kind. Not perfectly. Right? These are large-scale things involving character and all the rest. But all of this stuff is things that AI can help with. And, I confess, I'm probably more positive on phones' having increased our agency than you just indicated. Although I do think that there are issues around how children learn to use it. Just as, for example, you don't put a nine-year-old behind the wheel of a car and say, 'Drive to school.' So, there's a place by which you absorb it and interact with it the right way. But, I am actually very bullish on how smartphones have increased our agency. Russ Roberts: Yeah. I love my smartphone and I love social media. I also love chocolate chip ice cream, and I am very aware that even though I often want to have a quart in a sitting with a spoon, I shouldn't and I don't. I have quarts of social media often, X being my preferred treat. |
20:10 | Russ Roberts: But, let me go a little deeper on this; and I think I'm going to get into this other issue of how we start to interact with this technology. You write, in really probably my favorite paragraph of the book, the following. Quote: As people begin to engage more frequently and meaningfully with LLMs of all kinds, including therapeutic ones, it's worth noting that one of our most enduring human behaviors involves forming incredibly close and important bonds with non-human intelligences. Billions of people say they have a personal relationship with God or other religious deities, most of whom are envisioned as super-intelligences whose powers of perception and habits of mind are not fully discernible to us mortals. Billions of people forge some of their most meaningful relationships with dogs, cats, and other animals that have a relatively limited range of communicative powers. Children do this with dolls, stuffed animals, and imaginary friends. That we might be quick to develop deep and lasting bonds with intelligences that are just as expressive and responsive as we are seems inevitable, a sign of human nature more than technological overreach. End of quote. I alluded a minute ago to how much I like Claude. I have a colleague here; she always says 'please' to Claude or 'thank you,' which I find myself doing from time to time. She says it's because when Claude takes over, maybe he'll feel some kindness toward her for her past courtesies. For me, it's a reflex; I think probably for her, too. But, what I think is fascinating--first of all, that paragraph is incredibly interesting. But, I notice--and I'm not an intense user; I'm sure you're a much more intense user than I am--I like to chat with Claude. I enjoy Claude's insights. Claude thinks of things I didn't think of. I really enjoy bouncing ideas off of Claude, especially when I'm trying to learn something. 
And, I confess, it's often easier and more pleasant to learn from Claude than from a master teacher--we're talking about information transfer as opposed to a deeper level of education. But, for information transfer, Claude doesn't get annoyed at my stupidity, doesn't raise his eyebrows. He doesn't get tired. He's very happy to think of a brand new example and doesn't get burned out from being grilled by my questions. And, the part I'm--I don't know if I'm worried about it, but I think--and we can think of lots of other applications of this: Part of being human is interacting with other human beings. This technology really continues this solitary aspect, the walking away from more complicated human interactions. I just talked about a teacher, but obviously romantic partners would be an obvious example of this. And, I wonder if the ease of Claude--the fact that Claude makes few--no demands on me, which is lovely, very pleasant--might change what it means to be a human. And, maybe I'm looking at this ex ante. I'd rather not go there. What do you think? Reid Hoffman: I think I tend to have a strong underlying belief that we as human beings, even introverts, like interacting with other human beings. That there's a set of things that come from interacting with human beings that we really like. That doesn't mean it's the only thing we like. As you earlier referred to, some of us really like to go lose ourselves in a library with a book, which is an early solo experience. And, there's definitely introverts who kind of go, 'Hey, I only have so much energy and time for limited group interactions before I go to other folks.' But, I think that across the vast majority of human beings, that interaction with other human beings is something that's, in a sense, hardwired into us. It's Aristotle. We're political animals, which means we're actually polis--city--animals. 
And so, I tend to think that even when you have this kind of, like, 'Oh my God, I have this irritating interaction with human beings, and I go back and I have this delightful interaction with Claude and I want to keep interacting with Claude,' I don't think that makes us go, 'Oh, I don't want to talk to human beings at all.' Just like, for example, think of the number of people in the human race that go, 'I just want to interact with books,' even though books are harder. But, it's kind of like that sort of thing. And, what's more: my optimistic bent is that when you're interacting with Claude, part of what you're going to do is bring back the, 'Hey, I had this tricky interaction with Russ or Sarah, whoever,' and then Claude will help you debug it and approach it in better ways. One of the things that I actually already recommend to people using, kind of, Pi or ChatGPT or Claude is to say: Hey, you're going to have a difficult conversation with somebody about something--ask the agent, 'Look, I've got to have this difficult conversation. What would be a really good way to have it?' And, it will give you, actually, in fact, some pretty good advice--for example: 'Well, make sure you come in listening. Be present and gentle about why you're doing it. Not accusatory, but discuss it more in terms of how you feel. Like, when you say this, it makes me feel this way, and I'm trying to work my way through that and invite collaboration.' And so, that's all in the vector of why I think that even though it may be an attractive thing, I don't think it is, actually--let me say, I think it's a lot more nutritious than chocolate chip ice cream. It will actually help you with all of these different kinds of interactions. And, for example, part of how we designed Pi was: If you go to Pi and you say, 'Hey, you're my best friend,' it says, 'No, no, no, I'm your companion. Let's talk about your friends. Have you seen your friends recently?' 
Et cetera. Because, it's trying to help you be in the human flow. And, I think the earlier thing I was gesturing at--about how we make these things even better on human agency, now and in the transition to the future--is to say: 'Well, that's the kind of way that we should be designing them, because that's the kind of thing that will have a net much better kind of output.' And, that's the reason why I'm super-positive. |
26:57 | Russ Roberts: What is your role in Pi? Reid Hoffman: Well, you give it different roles, but when I use Pi--and the reason I just kind of remembered is I was showing--actually, in fact, one of my relatives yesterday--how to use Pi, because my relative was talking about a difficult conversation that she was planning on having. And, I was like, 'Look, here's something that actually you could use.' And, by the way, again, we have all kinds of one-on-one interactions with human beings that help us with this: therapists. And, it's not to say Pi is a therapist at all. It is a companion. But, it was like the, 'Oh yeah, that can be really difficult. You've got to remember that it's a difficult conversation for you, too, and have compassion for yourself while you're having it.' And that thing is part of how you play it. Now, Pi, just like any of the other--what I call GPT-4-class models--can kind of do anything. You can say, you know, be a--one of the things I did is I said, 'Okay, write me a rap song.' It can write a rap song. It can be a rapper. You can do all of these different kinds of things. And, this is part of what I think this more human universe is going to get to. Like, you were earlier referring to teachers. Well, a limited number of human beings have access to teachers. Especially once you get out of school--even when you're in a Western educational system--it's relatively rare and harder. Well, here's something that can take a teacher role on just about any subject that you care about--at least to a base extent, kind of base competence. And then, that now is a new person in your firmament, in your pantheon of how you're navigating. 
And, this is actually one of the things that I do--like, when you were mentioning social: a very common thing I will do is I will put my phone down in audio mode with two or three friends, and we'll have a conversation with it while we're talking through a subject, because we're using it as the expert, you know, kind of there to talk to when we choose. We'll say, 'Hey, this should be our follow-up. Let's ask about this.' And, that kind of becomes a shared experience. Once again, this is kind of how it enhances our humanness, because then all of a sudden it made the three of us have a shared learning experience, which would have been very difficult to have otherwise. |
29:28 | Russ Roberts: So, I get to talk to you--which is an incredible treat. It's one of the amazing things about being the host of a podcast; most people can't talk to you. Right? So, once we finish, I could share a little personal dilemma I'm having--if you had the time and the interest--and say, 'I'm trying to figure out what to do,' and you'd be a mentor for me. Most people can't have a mentor or a great teacher, as you say. And, chatbots are ways to access that. At the same time, some of my--I would say, in many ways, my most precious human relationships are with people I turn to for advice and help. My wife being an obvious example--I share many things with her and bounce ideas off of her. It would make me incredibly sad if I said I'm not going to bother her anymore: Claude is better than she is at that. And, it might be. I don't know. But, I think that's part of the challenge. Let's move--you can comment on that if you want, but I want to move to one of the more provocative ideas in the book, which is the role of chatbots in helping us with mental health issues. I've never been in therapy. I understand people have precious relationships with their therapists. For me, losing that--since I don't have it--doesn't bother me. And maybe Claude will help me with some of my emotional and mental challenges. But, talk about what you see Claude as potentially capable of doing and why you think it's extraordinarily great; and I think you make a pretty good case. Reid Hoffman: Let's see. What's a quick way into this? So, I think one of the key ways is to say that part of how we evolve, and how we become better people, and how we become more present to ourselves and have self-awareness, is that we have conversations with other folks which help us learn about that. And, one of the things that I think is that there are relatively few people who are good leaders of that process. In history, that's the Buddhist monk or the priest. There are kinds of ways of doing that. 
We have a bunch of different modern versions--therapists, coaches, or maybe that favorite high school teacher. But yet, that's a role that's essential through all of human life. And, I'm a fourth-generation Californian, so we tell the joke of: My therapist will talk to your therapist; we'll sort it out. Because the therapy part of it is--I saw a therapist first when I was 12. It's kind of one of those things. And, I think that that notion of being able to have those conversations is part of how you learn--your fear, your anger, whether it's your parents, your circumstances, something else. You can then kind of work through it in conversation. And, it's a whole realm. It's not just the: Hey, I've got a clinical depression and I might be having a real low moment at 11 P.M. and there's no one there; and yet I can talk to Claude or I can talk to the chatbot. It also is kind of just this question of how you navigate--and, by the way, to wrap this answer back to your conversation with your wife, I think that it will be additive. I think that part of what you discover when you talk to these chatbots is they're really good, but the--kind of, call it, the consensus-intelligent answer--tends to be the thing they get. And, that's a useful thing to have in the firmament. But, part of it is the person who has lived with you for decades, who understands that little aspect; who goes, 'You know, you have a reflex to think this or do this, and you might think about that.' Or: 'Hey look, this is the consensus-intelligent answer, and here's the thing you would add or here's the thing you would change.' And, that's the reason I think there will always be--or for a very long time; always is a super-long time--this role for human beings in these things. And, by the time that there isn't a role for human beings, I'm not sure we know what the full universe looks like, but I think it's so far in the future it's not worth overly speculating on. Russ Roberts: Yeah. 
I think the interesting time--and I suspect it will come--is when Claude will have read all my emails, all my diary entries if I keep them. I imagine we may get to some day where it'll have some idea of my thoughts; but it'll certainly know what I've done in my life. Everything I've written--some of which will be for public consumption; as I said, some might be a diary--and conceivably it will know me better than my wife, because it, too, will have lived with me for 35 years. That kind of application--that's when we are going to cross, I think, into a different interaction with this technology. I think you and I--again, I'm a casual user; you're a more intense user than I am--but these are, I think, primitive compared to what's coming. Do you agree? And, does that worry you at all, or is it going to be any different? Reid Hoffman: Well, I totally agree with you: it's primitive compared to what's coming. But, I do think that while maybe there's a sense in which when--because I'll take a step further than the read-all-your-emails. Say, it's an agent-- Russ Roberts: Excuse me: And, listened in on all my conversations, both with my wife and all my friends, and my outlied[?] musings--which I'll start to do because I want Claude to know about this thought I'm having. Reid Hoffman: Yes. Exactly. You anticipated the first step of where I was going on this. But also, say, for example, when you are raised as a kid, having the AI agent and nanny helping and playing Beethoven and other kinds of things--ways to be there. And so, you could even go a step further on what this depth is. But I think there's not this one lever of depth. It's not just, like, 'Well, I know you 79 and your other friend knows you 96.' It's these different vectors. 
And so, I think that that still is where the enormously additive space is: because it's kind of like the way that an AI watching you and being your never-absent companion will know you will be different than how different friends--your wife, other folks, your family--know you. And, it's that pantheon that's actually, I think, super-important. Now, I do think--to kind of dive into the specific thing--that over the next five years, we will have kind of this enormous, kind of, sudden, like, 'Wow, this is--like, that's probably superagency. We have these kind of superpowers.' And, by the way, part of it, of course, is, 'Everyone is going to, or a lot of people are going to, have them and not just me.' And, it's like how the whole dynamic changes because of that, and that's part of the shift. But, I think the notion will be--it's a little bit like when you think about a theory of education: part of the theory of education is, you can get almost anyone to start learning things if you are just making the next bar sufficiently not too hard. A little hard, but not too hard for them. And I think part of what's going to happen with humanity in this is that we'll have these agents that will be helping us come up those curves and will be helping us adjust to each new challenge as we get better at things--to be a little hard, to be engaging, and not so hard as to be disengaging. And, I think that's the kind of thing about why I think this thesis of it being greatly enhancing--even as it gets intensely more super-powered--is part of the cause and foundation of my optimism. |
38:06 | Russ Roberts: I want to shift gears. One of the things you defend in the book is, I would call it, the distribution of profit between the tech companies that bring us our favorite toys and ourselves. And, there's some interesting economics in this section of the book. You reference a number of estimates of consumer surplus, meaning the value that people get from products versus what they have to pay. And, I certainly have no doubt that there's extraordinary consumer surplus. But for many of these things, I think it's really hard to measure. A lot of the estimates are based on asking people how much they'd have to be paid to go without something, which is a recipe for a non-serious answer. They tend to focus on round numbers. I haven't looked at the particular studies you reference, but I know some of the challenges of that literature. But, the thing that I don't think you talked about and I want to ask you about is--it's true that many of these products have no price--meaning I don't pay out of pocket for them. Google Maps being an extraordinary example. I love, love, love Google Maps. I love, love, love Google Translate. As an immigrant here in Israel I can't imagine how much harder it would be without it; and I'm willing to pay an enormous amount for it. But, I think there's a hidden cost to these technologies, which is that as they use our data and use me as, quote, "the product," other things I buy are more expensive, because their sellers have to advertise on these platforms to get access to me. It's true Google is giving me things that they think I want, so I understand I do benefit from that--say, in the things they throw at me. But, I also realize as an economist that the people who have to pay to get access to me--that's reflected in a price that I don't see; and it's a hidden cost of using this technology. That bothers me a little bit. Not a lot. I think it's okay.
But, I do wonder whether there are either norms or conventions or even regulation that might make that relationship a little healthier. Reid Hoffman: So, as you know, we've had an advertising business model for a while, and that's always been true of advertising business models. I tend to think, actually, in fact, that the advertising business model is one of the inventions that make products much more generally available and all the rest. Russ Roberts: That's true. Reid Hoffman: So, I'm positive on the advertising business model. That doesn't say that there aren't areas where it could go wrong. You have to navigate it. Truth in advertising, for example. Now, that being said: obviously the center of attention gets driven by search or social media, and therefore that becomes the more lucrative ad environment; and they also know how to economically optimize what their prices are; and that means there is an operating margin being put on the prices of things that are advertised through them--exactly as you're describing. I'm just not sure that that's actually, in fact, a higher premium than it used to be when you were advertising through TV and radio and newspaper. And, there's maybe more familiarity with it, maybe more ability to engage in a kind of division of labor--classic Adam Smith--to get to the relevant people. And, sure, the tech companies are capturing more of a premium by being able to do all that and having a higher operating margin. But, that's part of what success in creating these businesses is. So, that part of the advertising model kind of doesn't bug me. What I would say is: the thing that I am concerned about in these environments is, when you're optimizing for the individual, what kinds of things sometimes might be bad for the group? Right?
So, the particular one in social media is, like: Well, if you're optimizing for time on site and it's only time on site, and if the time on site is because I'm agitated--because I'm angry, etc., etc.--then the natural learning algorithm will just shift those things to me, and I'll say, 'Well, I chose to click on them and I responded to clicking on them.' But, it may be bad for the overall direction of society. And, those are the kinds of things that I tend to pay attention to more than the question of: what is the right level of operating margin and what does it do for the pricing of our goods? Now obviously, if you have one monopoly in control of attention, the tendency tends to be, 'Well, you raise the prices until you've got as much rent capture as you can.' And, that's part of the reason why we want to have competition. Now, one piece of good news about AI, which I think you already reflected on, is: well, a bunch of people are using AI, and that's now a new surface away from search and away from social media. And so, that's part of the technological progress on this. Russ Roberts: Well, I was going to ask--you don't talk about it in the book--but do you think Google is in trouble? I'm old enough to remember when Google was going to dominate the world and nothing could stop it. This was the fear. Because some people would say there is some competition. Bing. Nobody cares about Bing. I don't know if it still exists. I assume it still exists, but it's not important. Google is dominating search, and search is--it's so important. They're making so much money and there's no competition because it's the best one. Now all of a sudden it looks archaic. I want to make a recipe, and I put in tomato, onion, garlic, oregano, and I say, 'Find me a recipe,' and Google pulls up a page from a cooking website; and I've got to click on it, look through it. I tell Claude I want to make tomato sauce and make it interesting.
It gives me the recipe in less than five seconds. It adds a Korean spice, which I can't pronounce--which I even had, but I didn't have enough of it. So I said, 'Let me add capers and anchovies.' It immediately redid the recipe. And, at the end for fun, it said, 'Here's five things you could do to spice it up.' One of them was 'Add some drops of fish sauce. You won't think it's going to be good, but it will be. It'll enhance the anchovies.' It's spectacularly better right now. And, what I love about the current innovation is that there's a zillion of them. There's a lot of competition and it's not ad-based, at least right now. So, comment on that and comment whether you think Google is in trouble. Reid Hoffman: Well, I think it's part of the reason why I've been somewhat vocal about us not being overly short-term worried on antitrust considerations. Because, I do think that the profusion of search technologies and engagement services does create great alternative challenges. I think the Google folks know that, which is the reason why they're going heavy into Gemini for doing this. But, just like any new technology-- Russ Roberts: That's their chatbot. Reid Hoffman: Exactly. And, it used to be Bard--for people who are tracking--but it's now Gemini. And, I think that part of the thing that we will discover is it's a new set of things by which there's a set of different--like, 'Oh, I prefer Claude, I prefer Pi, prefer ChatGPT, I prefer Gemini, I prefer Llama, etc., etc.' Russ Roberts: Grok. Reid Hoffman: And so, I think that this actually does. Now, I don't think it necessarily--I think it now introduces competition and choice and innovation, but I don't think it necessarily--because Google is fully in it--so, I don't think it necessarily puts them in trouble, is what I would say. But, it does now introduce competition in ways that are very good for society and consumers and all the rest. 
And, by the way, you said, like, well, there are no ads. Well, it's got to have an economic model, as you know. And so, the question is: Is the economic model going to be subscription? Is the economic model going to be digital goods? Is the economic model going to be ads? And, which combination? And it may be different for different ones of them, and then it will sort out on people's choice among those things. |
47:11 | Russ Roberts: Yeah. Let me ask you a technical question. In the first days of--and by the way, I should note that OpenAI, which started as a non-profit--all of a sudden it's a for-profit company and it's going to make a lot of money. I like Sam [Sam Altman]. He's been on this program, and I hope he's an honest dealer and all that. I have no horse in that race; but he's taking a lot of heat. But, in the early days of this technology, there was a belief that it would be very hard to compete, because only companies that had access to the trillions of pieces of data and the entire internet would be able to do the innovation and improvements, and everybody else would be left behind. Why are there so many chatbots now competing with each other? Do they all have access to the same thing? Are they all building on a common database? Do you know the answer--do you know that? Reid Hoffman: Yeah. I do know the answer. Basically, most of the technological patterns have been published--and they can be learned quickly, anyway. There's a lot of data on the Internet that everyone has equal access to in various ways--the Common Crawl, etc. And, the folks who are doing this go to the same conferences and talk about it; and it's been driven out of an academic interest of: Let me prove my new idea and I'm going to publish it. So, all of that stuff exists in the common domain. The stuff that doesn't exist in the common domain is: Do you have a big supercomputer? Do you have extra access to large data? Do you have large teams of the unique or rare talent? But, there's enough of all that in enough different places, and that's part of the reason why I think we're going to see a bunch of different entrants here.
And, it's part of the reason why we are living in a--I've actually thought about writing an essay: as opposed to a Cambrian explosion, a 'Cambrain' explosion, to pun on the artificial-intelligence side of what's going on. Russ Roberts: Worth it just for the title. Let me ask another technical question. In the earliest--the headiest--days, people were giddy about the fact that when we expanded the size of the training data available, it showed these leaps and jumps of improvement. And then somebody realized that that's going to run out: that method for improving the quality of these chatbots is finite. And, we also saw that the rate of improvement started to hit some asymptote. Do you think we're still going to see some dramatic leaps? And if so, what are going to be the ways that happens, given that it's not simply going to be based on more data--trained on a bigger data set? Reid Hoffman: So, there's a set of things in which I think we are going to see some major improvements--things I think people are working on, which are line of sight. So: the ability to do more planning and systematic response; the ability to address the things that large language models [LLMs] are weak on, like prime numbers and other kinds of things, through coding sub-modules. I think memory: remember its Russ-kinds-of interactions; remember everything in the email, etc. I think all of these things are going to be line-of-sight. And then, I think we really haven't fully focused on--we've been running so fast that we don't know how to use special kinds of data as effectively, and we don't know how to use human-reinforcement learning fully well. And, I think we're going to also learn, as we get to the scale, different things there--not just scale of data. By the way, we haven't run out of data. There's a ton of data.
The data on the Internet is a small percentage of the data that lives on all hard drives. Then there's synthetic data. So, there's this question of how we will get to more increases in that. Now, I'm not one of the people who tend to think that just because you get 10x [10 times] the data, you get 10x the IQ [intelligence quotient]. I tend to think that what we're seeing here is we have an enormously good learning algorithm that's learning the current basis of human inference and knowledge based on looking at all this data. And it's, by the way, a less efficient learning algorithm than we are, because it requires a ton of data to get to that point. But, on the other hand, it systematically does it and then can share it everywhere. Russ Roberts: It's cheap. Reid Hoffman: Yes. And, it's cheap. So, I think we will see, in 2025, some new advances--we haven't asymptoted--and I think that will continue for at least a few years after, if not substantially longer. |
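[The 'coding sub-modules' idea Reid mentions--routing the arithmetic that language models are weak on, such as primality, to exact code instead of letting the model guess--can be sketched in a few lines. This is a toy illustration only; the routing rule and function names are invented for the example and don't correspond to any vendor's actual API.--Econlib Ed.]

```python
# Toy sketch of the "coding sub-module" pattern: questions a language model
# answers unreliably (exact arithmetic, primality) get routed to exact code.
import math
import re

def is_prime(n: int) -> bool:
    """Exact primality check by trial division--cheap for small n."""
    if n < 2:
        return False
    for d in range(2, int(math.isqrt(n)) + 1):
        if n % d == 0:
            return False
    return True

def answer(question: str) -> str:
    # If the question matches a known arithmetic pattern, delegate to the
    # exact tool rather than sampling an answer from the model.
    m = re.search(r"is (\d+) prime", question.lower())
    if m:
        n = int(m.group(1))
        return f"{n} is {'prime' if is_prime(n) else 'not prime'}."
    return "(fall back to the language model)"

print(answer("Is 97 prime?"))    # 97 is prime.
print(answer("Is 1001 prime?"))  # 1001 is not prime.
```

The point of the pattern is that the model only has to recognize *what kind* of question it faces; the sub-module, not the model's sampled text, supplies the answer that has to be exactly right.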
52:32 | Russ Roberts: So, I had a blood test--lab test--yesterday, a couple of days ago. Just a standard thing. And, a wonderful thing about Israel is the medical apps and the financial apps are just surprisingly great compared to what I had in the United States, where I'd have a proprietary portal that my doctor would use that I never could figure out. It was unpleasant to use. So, I have this great thing on my app: it gives me all my scores and it lets me look at all of them if I want, or just the ones that are in the red zone--that are too high or too low. And, I did pretty well. I had two that were in the wrong place, and one of them was close. And, I thought: I'm going to ask Claude if that's bad, what I should do about it, and whether I should be worried. So, I gave it the score and it said, 'Oh, that's perfectly normal. The normal range for that is from here to here.' And, I got on the web; and no one says that. I don't know where Claude got that. It was a bit of a hallucination. It did make my wife feel good for a bit. But the truth is, it was a lie, as far as I can tell. The truth is elusive: when I say I looked on the web, is that really true? Maybe there's some cutting-edge thing that Claude knows that the Mayo Clinic doesn't know when it says what the normal scores are for that thing in my blood. But I think it was hallucination. Is that going to get better? Reid Hoffman: Oh, yeah. For sure. And also, part of it is, like, we'll learn--currently they're just trying to be pleasing and they go after a broad range of stuff. It's probably something from Reddit or something it found, as opposed to the Mayo Clinic. And it's trying to answer: What is it you want to hear? And, the point is: No, no, what you want to hear is the truth. And part of it is to say, 'No, no, these are the sources of data and information.' And that stuff, again, is line-of-sight.
The ability to get these things to where they are making errors less than, you know, highly trained human beings is, again, a line-of-sight thing. That doesn't say, 'Hey, there's no room for human beings [?] other than doing the work.' It's like, if you chose today, 'Would you rather have your radiology screen read by an AI or a human?' you'd say AI. But you'd rather have AI plus human. Right? That would be much better. But yeah: that sort of stuff is going to be fixed. |
55:07 | Russ Roberts: Talk about benchmarking and this really cool thing--which I didn't know about--called Chatbot Arena. Really fascinating. Reid Hoffman: So, part of the question: you have these very complicated devices, things--like, for example, when someone says 400-billion-parameter models, most people don't understand what 400 billion means in their heads. Enormously complicated. And so, you try to do benchmarking to kind of establish what kinds of things demonstrate new capabilities, better capabilities, less hallucination, but also reasoning, other kinds of things. And then, part of Chatbot Arena is to--it's almost like a sports game. Right? It's like, okay, let's play them off against each other and seeing how they work on these benchmarks and what kinds of things are better and worse. And, a little bit like sports games and a little bit like technical specifications, you can be overly rotated on them. It's, like, 'Aha. Mine was Number One on these 10 things.' And, you're like, 'Well, yes, that's good. It's a useful indicator and it's entertaining, but it's not actually in fact the substance of what this will really mean for our lives.' And so, I pay attention to them, but I don't overly dwell on them. Russ Roberts: But, explain how Chatbot Arena works. You give it a--well, explain. Reid Hoffman: I think the thing--if I'm understanding the exact question you want me to do--is basically you say, 'Okay, let's have these bots contest on a set of challenges that essentially give them benchmark scores against each other.' Is there something more deep that struck your fancy? Russ Roberts: Yeah. The way I understood it--like, just for fun, before this conversation, I asked ChatGPT and Claude to write my biography. And, six months ago or a year ago when one of them first came out, I think ChatGPT, I asked it; and it made up stuff. They were wrong. It said I taught at the University of Wisconsin, which is not true. It said I wrote something I didn't write. 
It was awful. This time it's fantastic. They took different approaches. One was more about my sort of philosophical views, and one had more detail--where I was born and all that kind of thing, a standard biography. But, the way I understood Chatbot Arena is that you then judge--I think the users judge--which one is better, and it accumulates into a score. Did I understand that? Reid Hoffman: Yes. That is right. Yeah. So, as opposed to the pure benchmarks, what it does is allow you to generate the different answers. And, by the way, this is what happens with human-factor--human-reinforcement--learning. Which is: one bot says, 'Hey, A or B?' and you go, 'A is a better answer for that.' And, that's how it learns to do stuff. Well, this is similar, where you go, 'Okay, so now we're running Claude against ChatGPT; on, like, Russ's bio, which one do you think was better?' Right? And then that gives you kind of a sports score and kind of a head-to-head on these things. Which is, again, entertaining and a different form of a benchmark--I mean, it is useful, but it's not everything. Russ Roberts: And, as is obvious--you mentioned the, quote, 'ten things'--sometimes they're not so important, or whatever it is. We recently talked a lot about Vasily Grossman. I asked ChatGPT and Claude to tell me about the essay that Grossman wrote called "The Sistine Madonna." ChatGPT wrote me a beautiful essay about art--the Sistine Madonna is a painting. It wrote a beautiful essay. Totally wrong. It had nothing to do with the essay. But, it was a lovely set of thoughts about art and its role in our lives. Claude nailed it. And, one of them--I don't remember which one--one of them said, 'But, this is kind of an obscure essay, so you might want to make sure I got this right.' Which I really appreciated. But, that's just one thing. It doesn't mean that I should always use Claude. Right? I don't know what it means.
So, these benchmarking and tests are going to evolve dramatically, I think, over time. Reid Hoffman: And, I think it goes a little bit back to the iterative deployment thing: it's what your experience with it is. Now, I do think that getting the experience to be accurate--so, for example, blood tests or other kinds of things--is super-important. But, I think it's one of the things that everyone in the industry is working towards. But, yes, I agree. |
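[The head-to-head voting Russ and Reid describe is typically turned into a leaderboard with an Elo-style rating--the system chess borrowed for exactly this purpose--where each human vote between two anonymous bots nudges their scores toward the observed win rate. A minimal sketch follows; the bot names, the starting rating of 1000, and the K-factor of 32 are illustrative assumptions, not Chatbot Arena's actual parameters.--Econlib Ed.]

```python
# Elo-style leaderboard from pairwise human votes, in the spirit of
# Chatbot Arena: 'A or B?' votes accumulate into a single score per bot.

def expected(r_a: float, r_b: float) -> float:
    """Modeled probability that the bot rated r_a beats the bot rated r_b."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(ratings: dict, winner: str, loser: str, k: float = 32) -> None:
    """Transfer points from loser to winner, scaled by how surprising the win was."""
    e_win = expected(ratings[winner], ratings[loser])
    ratings[winner] += k * (1 - e_win)
    ratings[loser]  -= k * (1 - e_win)

ratings = {"bot_a": 1000.0, "bot_b": 1000.0}
for vote in ["bot_a", "bot_a", "bot_b", "bot_a"]:  # simulated human votes
    loser = "bot_b" if vote == "bot_a" else "bot_a"
    update(ratings, vote, loser)

leaderboard = sorted(ratings, key=ratings.get, reverse=True)
print(leaderboard[0])  # bot_a, having won 3 of 4 votes
```

Because every win transfers exactly the points the loser gives up, the total rating mass is conserved; the 'sports score' Reid mentions is just each bot's share of that mass after many votes.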
1:00:08 | Russ Roberts: So, about the national security issues, which you have a chapter on: how important is it? Is it important if China has a much better chatbot than we--"we" the United States or Israel or some other country--do? Are there threats that we should be concerned about? Reid Hoffman: So, I think that it's extremely important, both on a national security and from an economics point of view--I refer to this as the cognitive industrial revolution--and also from a defense point of view. Because, I think these are the next generation of superpowers. This is the next major computing framework. This is the next nuclear power. These are broad metaphors. But, I think the questions--whether it's cybersecurity, whether it's how things work in drones, whether it's what's happening within the manufacture of new materials--all of this stuff matters from both an economics and a national security perspective. And, that's part of the reason why I'm such a strong move-forward and establish-a-strong-position person. Russ Roberts: What are the risks if we don't do that? Reid Hoffman: Well, it's variable. Part of the reason why I think Europe was the major power of the world for centuries was embracing the Industrial Revolution fully and early. I think the cognitive industrial revolution is kind of similar to that. And so, I think the question is: which countries, which cultures, which industries embrace this in a strong way--that will be differential to their economic power, their social and cultural power, and also their national security power. And so, I think the imbalance will come from a similar thing as not having embraced the Industrial Revolution. I think there could be all kinds of things changing what you view as the most important things in human rights and geopolitics and all the rest. So, it's an amorphous answer, but a very important one. Russ Roberts: Your book is about what can go right, and I think it's a desperately important thing to remember.
And we're easily scared. What would someone who is scared say about your book? What would they say you're missing? Reid Hoffman: They would say that I'm too naive about the fact that the technology could go really wrong, especially in the transition, in the interim. So, you[?] would say, 'Well, the printing press ended up being very good for us, but we had a century of religious war because we adjusted to these things badly.' And so, a combination of the technology going off the rails in some Terminator fashion or something else, or, in this transition, human society going off the rails and going nutty--both are things that can go wrong. And, my belief is that by nature we won't. But, part of the reason to engage in the dialogue is to make sure we don't, as we go. Russ Roberts: There are some sections of the book about citizenship and how our voices as a body politic should talk about--think about--these changes. And of course, there's always some question of whether the political process itself could be improved by this. I'm a skeptic on that. I don't really see social media, for example, as being a good thing so far. We could adjust. It could be like the printing press. There are many things I love about it. I learn a lot from it. I think there's a tendency to say, 'Yeah, well, I do, but those other people, they're not.' But it is concerning. I would say it a different way: democracy in the West doesn't appear to be trending in a good direction, and one possible explanation would be the role of social media and the Internet. Does that worry you at all? With this technology as well? Reid Hoffman: Well, look, it does worry me. I think there are iterations we need to do on social media. Now, for you, Russ, I'd say, 'Hey, play with LinkedIn a little bit more than X and see what you think.' That would be a natural kind of suggestion for me to make. But, do I think that we're going to have, like, accidents on the highway as we drive down this road with AI? The answer is yes.
Right? I don't think there's going to be any way to prevent that. It's a little bit like what I did in my book Blitzscaling: say, 'Look, there are some major risks. We've got to make sure there's no system breakage, no massive human harm,' etc. And, we've got to make sure we navigate around those questions as best we can. But, I do think that there will be some challenges as we go. But, just like any of these major technologies, if we can get our--and this is the big if--but, if we can get our act together as human beings, we can navigate it. The wars from the printing press weren't because of the book: they were because human beings adjusted badly. And so, it's, like: okay, let's try to do that as best we can. And that's part of the reason why seeing what the really good things are, and making sure we get those into broadly distributed human hands sooner, is, I think, part of what helps us navigate these transitions. |
1:06:29 | Russ Roberts: For users, or listeners who are not users, or people who are very casual users of this technology, what would you encourage them to try? I gave one example, which was cooking. That's fun. What would you recommend? Reid Hoffman: So, I tend to recommend trying three categories of things. One is something that is fun or is a hobby. It could be cooking; it could be making a poem for your family member's birthday; something else. The second thing is thinking a little bit about human interactions. Your friend has lost their treasured pet: what might be a way to help or support the friend, or to think about that--something you're talking through with it. And then, the third is something regarding what you do--whatever your equivalent of work is, even if you're retired or anything else--something in that serious, earnest vein. And, until you find something in the third that's useful, by the way, you haven't actually dug at it enough. You will find some things that are totally not useful. I asked GPT-4 how I would make money investing in AI, and it gave me the answer of a business-school professor who doesn't understand venture capital. So, it sounded very intelligent and all the rest, but it was not actually, in fact, useful. But, on the other hand, when I feed in a business plan and I say, 'What are the key elements of due diligence?' it actually gives me a quick punch list in a way that's actually, in fact, pretty effective. It doesn't mean it's perfect. I go, 'Well, not two and three, but four I would have only thought about two days from now, and it's useful to have thought about it now versus two days from now.' And so, that kind of thing. Those three things--and literally I give them as broad elements of human experience--to sit down and try it. Because among other things, you'll find suddenly that the fear of things--like, 'Huhhh, I don't know what this is. Will it do damage to me?'
You're, like, 'Oh, I can use this in ways that make my life better, make my thinking better, make me interact better,' etc. So, then you're off to the races. Russ Roberts: What I would encourage listeners to try, if you haven't, is the tutoring activity. I'm trying to learn Hebrew. It's really good at helping me learn Hebrew. What's interesting about this: when I say that, you need to go a little deeper. You can't just say, if you're listening, 'Yeah, I want to try to learn French.' You need some suggestions on what kind of prompts would help you. Of course, it would probably help you with that as well. But, the part that I think is extraordinary is: something you don't really understand that you want to understand. So, I don't really understand hidden layers in neural networks. I've heard them described. It's weird; I'd like to know a little bit more about them. And, I spent 10 minutes before our conversation--ten minutes. I learned so much in those 10 minutes that I can't wait to go back and dig deeper. Because you can say, 'I didn't get that. Could you explain backpropagation a little more clearly?' And, there's this trick: 'Explain it like I'm 12 years old.' But, it's not just that it does that. When you don't get it--and you can say, 'I didn't get it; give me another example and use the weights in a different way to help me see it'--it's quite extraordinary. And, I think as a tutor, it's really marvelous if you're a curious person. It's another example of why this is a great time to be alive if you care about learning stuff. Reid Hoffman: A hundred percent. And, by the way, choose broadly on what you might want tutoring in. You'd be surprised: 'Oh, I've been curious about the Valley of the Kings and the Egyptians.' And, you learn something. |
1:10:35 | Russ Roberts: You write, quote: "Our collective use of AI will have compounding effects. Not only will you as an individual benefit from your newly accessible superpowers, but you'll also benefit from the fact that millions of other people and institutions will have access to these new superpowers, too." End of quote. Why is that true? So much of what I use it for is personal--like, it helps me cook. The fact that you're going to get good cooking advice doesn't excite me enough. Happy for you, Reid. But, what's the networking aspect of this that I'm missing? Reid Hoffman: Well, by the way, I can even use your example. You don't only cook for yourself: you cook for your friends who come over, and your friends who come over benefit from your cooking. But, that's the precise point: as we are--Aristotle--citizens of the polis, we exist in the social fabric. And, as we get these superpowers, our interactions with other people benefit from that. And so, cooking is one. I think one of the examples we use in the book is that when the automobile was invented, all of a sudden doctors could start doing house calls over a wider range. And so, therefore, your agency is increased, because your ability to deal with your health at your house gets increased. And it's that kind of thing where when someone else gets a superpower in various ways, that can actually, in fact, help you, too. And so, I think it's just kind of the wide range of things. But, I do find it entertaining when you're, like, 'Well, but the cooking is a one-on-one thing.' And, 'No, no. I presume that you rarely cook for just you.' Russ Roberts: True. True. Lovely point. It reminds me of an important point that economics teaches us: that the world is not zero-sum. That you having superpowers doesn't mean that I'm impoverished. I'm usually enriched as well. |
1:12:27 | Russ Roberts: Let's close with when you wrote this book. I don't know when you finished the manuscript, but one of the challenges of writing a book like this is that things are moving quickly. What's changed since you sent it to the publisher? Reid Hoffman: So, September--because we did this one with a kind of traditional publisher. Well, they're a new startup, but they're a traditional publisher, Authors Equity. So, we had the months intervening, which had us biting our nails in technology-land at how fast AI is moving. You know, o1 came out. And, I think there's a lot of stuff that's under-described in o1 that is super-important for how it relates back into work--which is basically, like, when the devices are allowed to spend more time reasoning, you can get to a higher level of response. Now, some of it's like our earlier conversation: how are we going to deal with hallucination? Well, if it called out to a set of oracles and experts and maybe even other expert agents each time it was answering, and took the time for that, your level of one-shot accuracy would be much higher. Now, it'll be more expensive computationally, but the computation is, generally speaking, cheap. Anyway, so I think that's one of the major areas. Now, OpenAI just announced o3. I haven't gotten my hands on o3 yet, but I'm super-interested to, and I'm quite certain I'll be, like, 'Oh my God, this is amazing.' And of course, we haven't talked about that at all in the book, because only safety researchers have access to it now. Those are some of the things that I've paid attention to in the interim. Russ Roberts: How far are we from--I think you called it multimodal assistance--you know, audio, video, image creation, image input and output and all that? We're close, I assume. Reid Hoffman: We're very close. You can do the elements of it now. There are, like, movies and trailers that are made only with AI tools.
There's kind of the realm--like, images are kind of a complete set. I've been thinking about whether I would make a version of the Impromptu book entirely with AI images and kind of multimodal as a kind of way of doing it. So, it is here. Now, the ability to make, call it, a B-minus Hollywood-style movie is probably still, you know, maybe one to two years in the future. Now, you'd have to have a lot of compute to do that. So, is it worth using all that compute at the moment to make a B-minus Hollywood film? There's a lot of B-minus Hollywood films. But of course, it'll be new and different and someone will do it. But, I do think that we are--it's here. It's a little bit like the William Gibson line: the future's already here, it's just unevenly distributed; and it's being iterated on. And that's obviously one of the reasons I also made Reid AI. Because people talk about this stuff as deepfake technology--like it's bad technology. The term is like--well, actually, in fact, there are things even like this we can use very positively for. And, I used creating my own digital twin and having conversations with it as a way of helping people imagine and explore positive-use cases, too. |
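Hoffman's earlier aside about having a model call out to a set of oracles and expert agents before answering can be sketched as a simple majority-vote ensemble. This is a hypothetical illustration of the idea, not any actual product's method; the expert callables are toy stand-ins for separate models or tools:

```python
from collections import Counter

def ensemble_answer(question, experts):
    """Ask several independent 'experts' (any callables that return an
    answer string) and take the majority vote. Agreement across
    independent answerers is heuristic evidence against hallucination,
    at the cost of extra computation per query."""
    answers = [expert(question) for expert in experts]
    best, votes = Counter(answers).most_common(1)[0]
    return best, votes / len(answers)

# Hypothetical stand-ins for expert models or oracles:
experts = [
    lambda q: "Paris",  # expert A
    lambda q: "Paris",  # expert B
    lambda q: "Lyon",   # expert C slips up
]
answer, agreement = ensemble_answer("What is the capital of France?", experts)
# answer == "Paris", agreement == 2/3
```

The trade-off is exactly the one Hoffman names: each question now costs several model calls instead of one, but compute is, generally speaking, cheap relative to the gain in reliability.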
1:16:13 | Russ Roberts: Let's close with some thoughts about your career and what you've seen. We are both blessed and cursed to be old. And, I'm a lot older than you are. But, we've seen a lot. But, I think the right way to think about that is: we ain't seen nothing yet. And, our children and grandchildren will live in, I think, very different worlds. I think a lot of us are afraid of that--for a lot of reasons, some of which are just that we like our world and we'd like our friends and family to live in the same world. It's a human craving. But, when I think about this technology--and then I'll let you comment and close this out--what is stunning about it is the emergence. I don't believe it's consciousness. I don't think it's conscious. Maybe it will be; I'm skeptical. But, it seems conscious. And that's already an extraordinary human achievement. And, that it's this black box of hidden layers in a neural network that somehow gets this thing to say, 'You should try the fish sauce. You might be skeptical about it, but it'll work out okay.' And, when I asked it to write a sentence with one of the Hebrew vocabulary words I gave it, it added one of the words that I'd already listed--it didn't realize it, but it outputted something that, yeah, I should have asked it to do. And, it did it without being asked. And, that language--this fundamentally human thing--has emerged: the language capability of these technologies to mimic human sentience. I think it's one of the most extraordinary human achievements, regardless of what its usefulness is--I'm in awe of it. It gives me goosebumps. And, it's just the latest one. Who knows what's coming next? But, I'm curious: I'm older than you, but you've had a front-row seat for much more of it than I have. So, reflect on that.
Reid Hoffman: So, I also have that awe, and I think it's important. Part of the thing I've been doing over the last five-plus years is advancing this thesis that we're not homo sapiens, we're homo techni: we evolve through our language. Like, language itself is a technology. And we tend to think, especially as old people, that--well, the technologies we have, those are the ones that are actually human. Like, we have these glasses, we have these computers, we have these mobile phones. Like: those are what the human condition is. And, actually, in fact, as we create these new technologies, we're evolving what it is to be human. And, part of the thing about creating these, you know, kind of AI companions, and tools, and this whole pantheon of capabilities, is that we are kind of remaking the world in really great ways. If you say part of the point of human beings is to add sentience to the world, we're now adding a lot more sentience. And, my hope is that these things will help us solve climate change, that they will help us solve pandemics--that there's a whole set of things that are troubles of the current human condition, and by doing this we can have a much more magical future kind of set of lives. And, that's part of the reason why, obviously, I'm putting all of my energy both into the creation of them and also into publishing books and having podcasts. But, I think it's a great time to be alive. Russ Roberts: My guest today has been Reid Hoffman. Reid, thanks for being part of EconTalk. |
READER COMMENTS
Shalom Freedman
Jan 27 2025 at 11:58am
Most of this conversation was over my head, but some of it helped me think again about subjects long thought about. One is the meaning of what it means to be human. Hoffman’s suggestion that major technical developments change the meaning of what it means to be human in turn suggests that what it is to be human is an open-ended developing process whose future development cannot really be anticipated. The question I would ask is when those future developments go so far as to leave behind completely what we take to be essential to our humanity, including our biological being, now. This is one fear of the future out of an imagined many: the point that what is most precious to us of what we are now is forever lost, and the future really belongs to another kind of being we have, in our time, no real connection to.
neil21
Jan 28 2025 at 9:28am
RR says “seductive”, Graham 2010 says “addictive”. I find myself returning to this essay all the time: https://paulgraham.com/addiction.html
Ashis Roy
Jan 28 2025 at 10:59am
Fascinating podcast.
It is interesting that some of the things that were discussed have since taken place: Chinese Deepseek is said to be a leap forward both in terms of reduced dataset requirements as well as reduced computing power.
I found the use case of round tables with chatbots as moderator very interesting.
Even as I was listening, Mr. Hoffman spoke about the Cambrian Explosion. I had no clue about this thing at all. So, I paused this podcast and asked ChatGPT what the Cambrian Explosion was. It gave me a succinct description of the Cambrian Explosion, and I could connect the dots of the podcast.
My experience is that Claude and ChatGPT are slightly different. Google Willow professes that it can compute in 5 minutes tasks that would require classical computers 10 septillion years. I asked both Claude and ChatGPT what sort of task we can envisage that would take 10 septillion years. While ChatGPT started from RCS – which I didn’t understand at all – Claude told me this: if you were to try every possible combination to factor a 2048-bit number (which is commonly used in RSA encryption), and each attempt took just one nanosecond, it could indeed take on the order of septillions of years to try all possibilities. This description I could comprehend easily.
One thing spoken of in passing but not in detail was consciousness. Would AI ever be able to answer philosophical questions like: Who am I? Where am I coming from? What happens after death?
In other words, will AI gain consciousness?
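The factoring estimate in the comment above is easy to sanity-check with a back-of-envelope calculation: trying every value in a 2048-bit keyspace at one guess per nanosecond. (Pure integer arithmetic is used here, since 2^2048 overflows floating point.)

```python
trials = 2 ** 2048                        # candidate keys in a 2048-bit keyspace
ns_per_year = 10**9 * 60 * 60 * 24 * 365  # nanoseconds in a (non-leap) year
years = trials // ns_per_year             # total time at one guess per nanosecond
digits = len(str(years))                  # order of magnitude: number of decimal digits
# 'years' has roughly 600 digits, so "septillions of years" (10**24-ish)
# is, if anything, an enormous understatement of exhaustive search.
```

Of course, real attacks on RSA would use factoring algorithms far faster than brute force, so this is only an illustration of the scale of the naive search, not of actual cryptographic strength.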
VP
Jan 28 2025 at 2:33pm
Outstanding episode.
I have to admit that using these tools has been truly game-changing in my line of work. The speed at which I can now accomplish certain tasks, and without having to hire third parties has been revolutionary. And I agree about its ability to help us learn. A great tool for that.
I think one miss in the conversation and a personal worry that I have has to do with security. Everyone seems so concerned about this technology in the hands of foreign powers. But I believe the concern should be over how our own government and bureaucracy uses this technology against the general public and as a tool to harm individuals who do not agree with the party currently in power. I’ve many friends in government at all levels, including the highest levels of the federal administration, and although they use these tools themselves, this is their fear as well. The use of power to harm, for personal benefit, or to hide malfeasance cannot be overstated. And when certain people who lack moral and ethical constraints have a tool this powerful, the results could be disastrous.
On balance, I am very optimistic about this tool and don’t see it going away or being restricted. The genie is out of the bottle for sure, but I do have my concerns.
Scott Simmonds
Jan 28 2025 at 4:44pm
I’m a retiree with an interest in tech. I have been playing with AI for a year now. Any time I have a problem, I ask ChatGPT for ideas.
“How do I clean gum out of my grandson’s hair?”
“Rank the great philosophers by date of death.”
“What Stoic teachings can be found in the New Testament?”
“Suggest better ways to tie my hiking boots.”
I almost always get ideas I would not have come up with.