When Prediction Is Not Enough (with Teppo Felin)
Apr 15 2024

[Image created via DALL-E3 with prompt "create a drawing of the wright brothers using AI to help them with their first flight"]

If the Wright Brothers could have used AI to guide their decision making, it's almost certain they would never have gotten off the ground. That's because, points out Teppo Felin of Utah State University and Oxford, all the evidence said human flight was impossible. So how and why did the Wrights persevere? Felin explains that the human ability to ignore existing data and evidence is not only our Achilles heel, but also one of our superpowers. Topics include the problems inherent in modeling our brains after computers, and the value of not only data-driven prediction, but also belief-driven experimentation.


READER COMMENTS

Wais
Apr 15 2024 at 12:00pm

Fascinating episode – I humbly disagree with both Russ and the guest.

Russ claims humans like to analogize the brain with the peak invention of any given moment. The argument is inductive (in the past humans have been wrong, so they must be wrong again) and doesn't lay out a theory for why humans are wrong now. He and the guest actually spend quite a bit of time on why inductive prediction is wrong, so it's ironic that his first argument starts with this.
The idea that the brain and computers are one and the same isn't an analogy; it's reality. Both are Turing machines, universal computers that can run any type of software. These machines have gone from doing calculations (not to be confused with non-universal machines) to playing chess to chatting with humans in a human-like manner.
Humans are also universal computers: we take in sense data/information that's imperfect, and the brain renders our reality for us (this is one form of software running on the brain). There's software for language, music, etc.
All of this is information processing, and while there are obvious hardware and software differences between humans and computers, the base layer of universal computation exists in both.
Today's AI algorithms do not learn or function in the same way the human brain does, but it would be a flawed logical leap (similar to saying that heavy birds can't fly) to conclude that future systems, with better hardware (more powerful and efficient) and better software, won't be able to create new knowledge.
Lastly, the guest conflates bias with new knowledge creation. Bias is error in judgement based on a lack of conscious, effortful thought. New theories are the exact opposite: conscious, effortful creation of knowledge. Do they both arise from computation/algorithms in the brain? Yes, but so does everything else in the brain.

Eric
Apr 15 2024 at 8:26pm

It is quite appropriate for you to point out that all computers and their algorithms are operating according to the general model of Turing machines.  No matter how versatile or useful these become, they are all various versions of symbol manipulation machines — machines that operate at the level of syntax without needing to have any conscious understanding of the semantics of what the symbols represent.  (See my comment below about how John Searle and Erik Larson discuss this aspect of algorithmic computation.)

That is the inherent fundamental limitation and difference in kind that makes them all different from what normal humans do.

3:49
Russ Roberts: … What’s clear to you–and what I learned from your paper and I think is utterly fascinating–is that what we call thinking as human beings is not the same as what we have programmed computers to do with at least large language models. And that forces us–which I think is beautiful–to think about what it is that we actually do when we do what we call thinking. …

Teppo Felin then illustrates the profound difference in the way that even children operate.

LLMs depend on mechanically manipulating Large Models of Language.  Even as children, we model the world that the language represents.

Another resource: Non-Computable You: What You Do That Artificial Intelligence Never Will by Robert J. Marks II.

Bob
Apr 15 2024 at 9:52pm

Non-Computable You is rigorous in the same way that Aquinas managed to prove the existence of God, or Aristotle was able to describe physics… which is to say, not at all.

The line of what a computer will never be able to do has been moving at a rapid pace, and it’s only speeding up. The creativity and aggression of romantic chess? Yep, computers today do that. Image generation? Vision? Understanding of moods? The line keeps moving. It’s crazy enough, for instance, that a computer is better than most doctors at writing empathetic text. Ten years ago, you’d laugh at the idea. Today it’s a fact.

Now, claiming that computers will be able to do everything a human does in a more thermodynamically efficient way is too strong a claim, but it takes the epitome of hubris to claim that humanity’s capabilities are beyond what is possible with compute. We’ll see those claims the way we now see the claims of people who said that African people were genetically inferior, or that women just aren’t stable enough to hold high office. It’s all prejudice.

Eric
Apr 19 2024 at 5:07pm

When Russ and guest Teppo Felin observe (e.g. 3:49 to 9:10) that “as humans, we’re doing something completely different” from what LLMs are doing, that is clearly, observably true.  One way to see this is to learn why an LLM AI will routinely “hallucinate”, i.e. generate responses that might sound plausible and sound like a human, but are factually incorrect or nonsensical.  Over and over, they string words together in ways that just make stuff up.  Examples.
This is because the LLM doesn’t model or understand the world itself.  Instead, like any Turing machine, it is a symbol processor.  The LLM algorithm mindlessly models the common patterns of short sequences of characters (tokens) in its training data.  It uses those patterns to simulate or mimic people writing about the world, but without being able to reliably assess whether the combination of words in its response matches the real world.  It is trapped inside philosopher John Searle’s Chinese Room of manipulating the syntax of symbols without having access to the semantics of the symbols — something that even an average child is learning.
Because of the inherent limitations on what an LLM is, those who seriously study algorithmic computation as a discipline can show logically why this cannot be fixed.  For a deeper dive, see “Proven: Hallucination is INEVITABLE in LLMs (Research Paper Breakdown)”.
Gary Smith, author of “The AI Delusion” and the article “An AI that can “write” is feeding delusions about how smart artificial intelligence really is”, explains the inherent limitation in simpler language.

“They are trained to identify likely sequences of words—nothing more. It is mind-boggling that statistical text prediction models can generate coherent and convincing text. However, not knowing what words mean, LLMs have no way of assessing whether its utterances are true or false.

“Scaling up LLMs by training on larger and larger databases may make the BS more convincing. But it will still be BS as long as the programs do not understand what words mean, and, consequently cannot use common sense, wisdom, or logical reasoning to distinguish truth from falsehood.”

Mindless algorithmic symbol manipulation simply isn’t the same thing as having an actual conscious understanding of the world and the semantic meaning of symbols.  Modeling language is not equivalent to knowing the world.
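
To make the “likely sequences of words” point concrete, here is a toy sketch (a deliberately tiny stand-in, not anything from the episode or from Smith): a word-level bigram model that continues a sentence purely from co-occurrence counts in whatever text it was fed. It has no representation of the world behind the words, so it will produce fluent falsehoods whenever the counts point that way.

    from collections import Counter, defaultdict
    import random

    # Tiny "training corpus". Real LLMs are trained on trillions of tokens,
    # but the principle of modeling which word tends to follow which is the same.
    corpus = ("the earth does not move and the sun goes around the earth "
              "and the sun rises and the sun sets").split()

    # Count, for each word, which words follow it and how often.
    following = defaultdict(Counter)
    for w1, w2 in zip(corpus, corpus[1:]):
        following[w1][w2] += 1

    def next_word(word):
        """Sample the next word in proportion to how often it followed `word`."""
        counts = following[word]
        if not counts:
            return None
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    # Generate a continuation of "the". The output mirrors the statistics of the
    # corpus, true or false; the model has no way to check it against the world.
    word, output = "the", ["the"]
    for _ in range(10):
        word = next_word(word)
        if word is None:
            break
        output.append(word)
    print(" ".join(output))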

Nohemi Kibe
Apr 15 2024 at 12:13pm

The claims here about the history of flight *or* heliocentrism will depress, but not astonish, anyone who knows much about the history of those topics. Which doesn’t bode well for the rest of Felin’s comments, whenever those are transcribed.

Bob
Apr 15 2024 at 10:14pm

Yep, all philosophical drivel built upon premises too scary to want to doubt. If anything, it’s hilarious, as it shows two humans being unable to go past their preconceived, rational notions, projecting their own behavior onto the machine, because the alternative is unacceptable.

When calling something impossible, we have to decide why, and then take a really long, hard look at what invariants cannot be broken, and at what would be the result of our being wrong about said invariants. Does it break mathematics? Probably impossible, yes. Does it break our knowledge of physics? A little less impossible, but still tough: see flight. And we all have to remember, in that example, that humans have seen things heavier than air moving through the sky from day one. If the things that we believe are impossible are not in the realm of physics and mathematics… impossibility is just lack of imagination.

But we have thousands of years of tradition telling us of the uniqueness of humanity, just like we had hundreds of years of people writing about the superiority of the white race. And as it happens, humans are just really bad at looking outside of their rationalizations, especially when their emotions are actually hindering, not helping, their creativity. Because if humans are not all that special, and in fact, most of what we call special to humanity can be copied into something that uses less energy than we do, and doesn’t suffer from many of the faults that make humans horrible, then worldviews collapse.

Using the bigotry of western tradition to prove that we’ll always be unique, better and more creative than those poor, simple machines is just really funny, and has a good chance of being seen as just a really dumb perspective in less than a hundred years. Instead of reassuring ourselves of how great we are, we should be looking at what happens when we are wrong. What happens to human society when, for an extremely large percentage of interactions, we are better off talking to a computer, which dominates us economically, and where extra productivity is so large that the Ricardian views of trade break down: The transaction costs are higher than the production capacity of the weaker nation (humans), so no trade ever happens. We could find ourselves there, and by the time we think it’s actually plausible, we’ll already be doomed.

Eric
Apr 15 2024 at 2:36pm

There is a known, inherent reason why team AI cannot do all that “team human” routinely does.  However, that can be harder to notice when we are misled by subtle, faulty assumptions such as the one implied in this quote.

Russ Roberts: Well, I think that’s a clever example. And, an AI proponent–or to be more disparaging, a hypester–would say, ‘Okay, of course; obviously new knowledge has to be produced and AI hasn’t done that yet; but actually, it will because since it has all the facts, increasingly’–and we didn’t have very many in Galileo’s day, so now we have more–‘and, eventually, it will develop its own hypotheses of how the world works.’

The misleading subtle assumption is that a Large Language Model (LLM) “has all the facts”.  An LLM doesn’t itself have any facts at all.  None.  It isn’t modeling “the facts”.  Instead, it Models a very Large body of Language expressed in symbols that humans interpret as referring to facts.  The LLM doesn’t have the facts themselves at all.  What the LLM has are the statistically analyzable patterns of our symbols that represent the facts.  It models the sequences of symbols, without knowing the facts.
This is a powerful example of what philosopher John Searle was describing all the way back in 1980 with his thought experiment called The Chinese Room.   Imagine working with a language you do not know, given only a large library of rules about how to manipulate the symbols of that language.  For a good presentation, see The famous Chinese Room thought experiment – John Searle (1980).
What Searle described in 1980 is still true of the powerful LLMs of today.  They operate stochastically on the symbols, but without any ability to consciously understand the meaning of the symbols.  The algorithms show how much can be done purely on the basis of syntax (how the symbols are arranged) without having the benefit of semantics (an understanding of what the symbols actually mean).
I would suggest this is related to the observations of computer scientist Erik Larson, the author of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do.  Review: Abductive inference: The blind spot of artificial intelligence.  It is because algorithms manipulate symbol syntax without having semantic understanding that they can do well at deductive and some inductive reasoning, but have failed when it comes to abductive reasoning.
The mind of every normal young child is actively building a model of the world, which is something fundamentally different from what the mindless algorithms of LLMs do by modeling the syntax of large bodies of language.

David Gossett
Apr 15 2024 at 3:56pm

The Wright Brothers, Beethoven, Michelangelo, and others were able to tamp down their prefrontal cortex and think irrationally. We all have tremendous creativity but rationalize that all of these ideas are bad before we can act on them (voice in the head). It’s not a leap to dig a pit. It’s that hunter’s ability to stop their prefrontal cortex from predicting or even being rational.

Current AI is a regression to the mean. It’s pure prefrontal and no creativity (deviations). But what if two AIs are competing? AI One has an answer, and AI Two has the same idea. That is rational and safe. But then we tell AI One that it has failed. This reinforcement will force the AI to move away from the mean. There is no reward for rationality. In other words, pit Gemini against GPT-4 and look for the differences.

Now, we get irrational answers for how to cure cancer, which are quite creative. In fact, all answers will go directly against the historical data (think Wright Brothers). And why stop at two AI models? How about 50 or 1000 models, all trained with slight differences? Humans have bad ideas. So will AIs. But in those irrational AI answers, there will be breakthroughs that will crush human capability.
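
A rough, hypothetical sketch of the “pit Gemini against GPT-4 and look for the differences” idea: ask_model_a and ask_model_b below are placeholders for any two differently trained models (they are not real APIs), and the disagreement test is deliberately crude.

    # ask_model_a and ask_model_b are hypothetical placeholders, not real APIs.
    def ask_model_a(question: str) -> str:
        return "answer from model A"  # imagine a call to one model here

    def ask_model_b(question: str) -> str:
        return "answer from model B"  # imagine a call to a second, differently trained model

    def divergent_answers(questions):
        """Collect the questions where the two models disagree.
        Under the proposal above, the disagreements are where the off-the-mean,
        'irrational' candidate ideas would live."""
        disagreements = []
        for q in questions:
            a, b = ask_model_a(q), ask_model_b(q)
            if a.strip().lower() != b.strip().lower():
                disagreements.append((q, a, b))
        return disagreements

    print(divergent_answers(["Propose an unconventional treatment strategy."]))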

PS: Some say that we are losing our ability to be creative. I think it’s social media reinforcing our rational brain. Say something silly, and you will be bullied and made fun of. Social media has made us way too rational…

Jonathan
Apr 16 2024 at 4:01pm

I was not able to read through all of the (long, likely excellent) comments above, and mine is rather short: my understanding of the argument by Felin is that AI does not engage in the (theory-guided) trial-and-error learning process that brings forward human knowledge.

And so, the key point is that that’s something that computers can’t do. They take existing data as a given, whereas we as human beings find and create, through experiments, new data.

They’re problem-solving and doing, which is different, right? And, prediction, like you mentioned, is inherently based on past data. And so, we need some forward-looking mechanism to kind of bootstrap novelty and to see things in a different way

But many AI systems (or rather, machine learning methods) do exactly that. They run X million small experiments to come up with new solutions. This is how AI became so strong in chess, etc.: not just by looking at data from past games. It is also how AI can generate new vehicle designs that excel on complicated terrain and have vastly different mechanics than any engineer would ever have imagined.
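
A minimal sketch of that “millions of small experiments” idea, assuming a toy objective function in place of a real simulator (a self-play chess match, a terrain test, and so on): a random-mutation search that finds a good design by generating and scoring candidates rather than by fitting past data.

    import random

    def score(design):
        # Toy stand-in for a simulator (a self-play chess match, a terrain test, ...).
        # The best possible design here is (3.0, -2.0).
        x, y = design
        return -((x - 3.0) ** 2 + (y + 2.0) ** 2)

    def mutate(design, step=0.5):
        # Propose a small random variation of the current design.
        return tuple(v + random.uniform(-step, step) for v in design)

    # Start from an arbitrary design and run many small "experiments",
    # keeping a variation only when it scores better than the current best.
    best = (0.0, 0.0)
    best_score = score(best)
    for _ in range(100_000):
        candidate = mutate(best)
        candidate_score = score(candidate)
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score

    # The search ends up near (3.0, -2.0): a solution found by trial, not read off past data.
    print(best, best_score)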

I believe I am missing something here, because surely Russ and Teppo have considered this… So I’d be very interested to have this clarified by someone…

Jordan Henderson
Apr 16 2024 at 4:36pm

I’m interested in hearing more about Schlesinger’s Cat.

Russ Roberts
Apr 17 2024 at 6:39am

Oops! Sorry about that.

Shalom Freedman
Apr 18 2024 at 12:06am

1. Humanity should not commit suicide simply because it cannot achieve certain goals for itself which AI can.
2. Who will the readers be for the AI Emily Dickinson and Wallace Stevens when they emerge? The greatest AI poets have yet to appear, and perhaps human beings will not be able to understand them at all since they will probably be in a language or languages only AI whatevers will be able to read.
3. Speculations on AI and the human future are a good breakfast but I suspect they will be a very poor dinner.

Maureen
Apr 22 2024 at 4:39am

I was really pleased to listen to this episode.  So much of the discussion on AI taking over human capabilities has seemed very exaggerated to me.  For a start, it sounds like a version of the arguments against many other scientific developments that we now take for granted, for example, the fears about human cloning and genetic selection that were brought forward around 40-50 years ago when cloning emerged as a real possibility.

However, my main distrust of these arguments is much deeper, based on my 25+ years of working on the prevention of technological accidents.  This experience has made me skeptical from the beginning of concepts such as self-driving cars, and by extension, AI’s potential.  I believe this comes from our tendency to look at our gains in medicine and engineering in such a positive way.  They are indeed amazing, but our wonder at what we have accomplished sometimes blinds us to how much we still do not know.  In the world of technological accidents, we are constantly confounded by the little changes that go unnoticed, because they are so small or even routine, yet are a factor in producing a serious incident or even a disaster.  In this regard, I am referring only to the human side of the interactions.

When one speaks of natural forces, such as physics, chemicals, ecosystems, and the biology of the human body, as a few examples, I have come to believe that we are centuries away from understanding the behavior of nature, and I cannot believe, with the little that we feed our AI partners, that they will do a much better job without significant hints from the human side.  If we don’t have the data, or the proper theories, the AI won’t invent them.  If a computer hasn’t seen it, if the human doesn’t know it (or have any basis for knowing it), the AI can’t predict it.

As a sort of analogy, one can look at music.  We have thousands of composers and thousands of years of people composing, and yet to a large extent new music is invented every day, and the vast majority of it has something unique about it.  To me, like music, every instant of nature arises from such a complex array of factors that it holds infinite possible futures, and AI cannot see all of them because we cannot tell it what all those complexities are and all the random and non-random ways they will interact with nature to produce an outcome (which in turn will produce another outcome, interacting with other factors, etc.).

An interesting prospect is the hybrid: if AI’s processing capacity could somehow merge with the unique human brain, it could move things along, but I wouldn’t bet on that.

John Notgrass
Apr 22 2024 at 1:10pm

At 9:10 in this episode, Russ said: “I alluded to this I think briefly, recently.” When I heard that, something triggered this thought in my brain: “He’s going to tell a story about his granddaughter.” I thought that was funny since this episode is about prediction.





AUDIO TRANSCRIPT
0:37

Intro. [Recording date: March 26, 2024.]

Russ Roberts: Today is March 26th, 2024, and my guest is Teppo Felin, the Douglas D. Anderson Professor of Strategy and Entrepreneurship at the Huntsman School of Business at Utah State University. He's also an Associate Scholar at Oxford University.

This is Teppo's second appearance on EconTalk. He was last here in July of 2018 to discuss rationality. We have an even broader topic today, thinking: What is it? What does it mean for human beings to think? Are we about to be surpassed by artificial thinking? Many people think so, but not Teppo, as far as I understand him.

Teppo, welcome back to EconTalk.

Teppo Felin: Thanks for having me, Russ.

1:19

Russ Roberts: We're going to base our conversation today loosely on a recent article you wrote, "Theory Is All You Need: AI, Human Cognition, and Decision-Making," co-written with Matthias Holweg of Oxford.

Now, you write in the beginning--the Abstract of the paper--that many people believe, quote,

due to human bias and bounded rationality--humans should (or will soon) be replaced by AI in situations involving high-level cognition and strategic decision making.

Endquote.

You disagree with that, pretty clearly.

And I want to start to get at that. I want to start with a seemingly strange question. Is the brain a computer? If it is, we're in trouble. So, I know your answer, the answer is--the answer is: It's not quite. Or not at all. So, how do you understand the brain?

Teppo Felin: Well, that's a great question. I mean, I think the computer has been a pervasive metaphor since the 1950s, from kind of the onset of artificial intelligence [AI].

So, in the 1950s, there's this famous kind of inaugural meeting of the pioneers of artificial intelligence [AI]: Herbert Simon and Minsky and Newell, and many others were involved. But, basically, in their proposal for that meeting--and I think it was 1956--they said, 'We want to understand how computers think or how the human mind thinks.' And, they argued that this could be replicated by computers, essentially. And now, 50, 60 years subsequently, we essentially have all kinds of models that build on this computational model: evolutionary psychology by Cosmides and Tooby, predictive processing by people like Friston. And, certainly, the neural networks and connectionist models are all essentially trying to do that. They're trying to model the brain as a computer.

And, I'm not so sure that it is. And I think we'll get at those issues. I think there's aspects of this that are absolutely brilliant and insightful; and what large language models and other forms of AI are doing are remarkable. I use all these tools. But, I'm not sure that we're actually modeling the human brain necessarily. I think something else is going on, and that's what kind of the papers with Matthias is getting at.

3:49

Russ Roberts: I always find it interesting that human beings, in our pitiful command of the world around us, often through human history, take the most advanced device that we can create and assume that the brain is like that. Until we create a better device.

Now, it's possible--I don't know anything about quantum computing--but it's possible that we will create different computing devices that will become the new metaphor for what the human brain is. And, fundamentally, I think that attraction of this analogy is that: Well, the brain has electricity in it and it has neurons that switch on and off, and therefore it's something like a giant computing machine.

What's clear to you--and what I learned from your paper and I think is utterly fascinating--is that what we call thinking as human beings is not the same as what we have programmed computers to do with at least large language models. And that forces us--which I think is beautiful--to think about what it is that we actually do when we do what we call thinking. There are things we do that are a lot like large language models, in which case it is a somewhat useful analogy. But it's also clear to you, I think, and now to me, that that is not the same thing. Do I have that right?

Teppo Felin: Yeah. I mean the whole what's happening in AI has had me and us kind of wrestling with what it is that the mind does. I mean, this is an area that I've focused on my whole career--cognition and rationality and things like that.

But, Matthias and I were teaching an AI class and wrestling with this in terms of differences between humans and computers. And, if you take something like a large language model [LLM], I mean, how it's trained is--it's remarkable. And so, you have a large language model: my understanding is that the most recent one, they're pre-trained with something like 13 trillion words--or, they're called tokens--which is a tremendous amount of text. Right? So, this is scraped from the Internet: it's the works of Shakespeare and it's Wikipedia and it's Reddit. It's all kinds of things.

And, if you think about what the inputs of human pre-training are, it's not 13 trillion words. Right? I mean, these large language models get this training within weeks or months. And a human--and we have sort of a back-of-the-envelope calculation, looking at some of the literature with infants and children--but they encounter maybe, I don't know, 15-, 17,000 words a day through parents speaking to them or maybe reading or watching TV or media and things like that. And, for a human to actually replicate that 13 trillion words, it would be hundreds of years. Right? And so, we're clearly doing something different. We're not being input: we're not this empty-vessel bucket that things get poured into, which is what the large language models are.

And then, in terms of outputs, it's remarkably different as well.

And so, you have the model is trained with all of these inputs, 13 trillion, and then it's a stochastic process of kind of drawing or sampling from that to give us fluent text. And that text--I mean, when I saw those first models, it's remarkable. It's fluent. It's good. It's remarkable. It surprised me.

But, as we wrestle with what it is, it's very good at predicting the next word. Right? And so, it's good at that.

And, in terms of kind of the level of knowledge that it's giving us, the way that we try to summarize it is: it's kind of Wikipedia-level knowledge, in some sense. So, it could give you indefinite Wikipedia articles, beautifully written about Russ Roberts or about EconTalk or about the Civil War or about Hitler or whatever it is. And so, it could give you indefinite articles in sort of combinatorially pulling together texts that isn't plagiarized from some existing source, but rather is stochastically drawn from its ability to give you really coherent sentences.

But, as humans, we're doing something completely different. And, of course, our inputs aren't just language--they're multimodal. It's not just that our parents speak to us and we listen to radio or TV or what have you. We're also visually seeing things. We're taking things in through different modalities, through people pointing at things, and so forth.

And, in some sense, the data that we get--our pre-training as humans--is degenerate in some sense. It's not--you know, if you look at verbal language versus written language, which is carefully crafted and thought out, they're just very different beasts, different entities.

And so, I think that there's fundamentally something different going on. And, I think that analogy holds for a little bit, and it's an analogy that's been around forever. Alan Turing started out with talking about infants and, 'Oh, we could train the computer just like we do an infant,' but I think it's an analogy that quickly breaks down because there's something else going on. And, again, issues that we'll get to.

9:10

Russ Roberts: Yeah, so I alluded to this I think briefly, recently. My 20-month-old granddaughter has begun to learn the lyrics to the song "How About You?" which is a song written by Burton Lane with lyrics by Ralph Freed. It came out in 1941. So, the first line of that song is, [singing]:

I like New York in June.
How about you?

So, when you first--I've sung it to my granddaughter, probably, I don't know, 100 times. So, eventually, I leave off the last word. I say, [singing]:

I like New York in June.
How about ____?

and she, correctly, fills in 'you.' It probably isn't exactly 'you,' but it's close enough that I recognize it and I give it a check mark. She will sometimes be able to finish the last three words. I'll say, [singing],

I like New York in June.
______?

She'll go 'How about yyy?'--something that sounds vaguely like 'How about you?'

Now, I've had kids--I have four of them--and I think I sang it to all of them when they were little, including the father of this granddaughter. And, they would sometimes say it very charmingly: when I would say, 'I like New York in June,' and I'd say, 'How about ____?', they'd fill in, instead of saying 'you'--I'd say, [singing]:

I like New York in June.
How about ____?

'Me.' Because, I'm singing it to them and they recognize that you is me when I'm pointing at them. And that's a very deep, advanced step.

Teppo Felin: Absolutely.

Russ Roberts: But, that's about it. They are, as you say, those infants--all infants--are absorbing an immense amount of aural--A-U-R-A-L--material from speaking or radio or TV or screens. They are looking at the world around them and somehow they're putting it together where eventually they come up with their own requests--frequent--for things that float their boat.

And, we don't fully understand that process, obviously. But, at the beginning, she is very much like a stochastic process. Actually, it's not stochastic. She's primitive. She can't really imagine a different word than 'you' at the end of that sentence, other than 'me.' She would never say, 'How about chicken?' She would say, 'How about you or me?' And, that's it. There's no creativity there.

So, on the surface, we are doing, as humans, a much more primitive version of what a large language model is able to do.

But I think that misses the point--is what I've learned from your paper. It misses the point because that is--it's hard to believe; I mean, it's kind of obvious but it hasn't seemed to have caught on--it's not the only aspect of what we mean by thinking--is like putting together sentences, which is what a large language model by definition does.

And I think, as you point out, there's an incredible push to use AI and eventually other presumably models of artificial intelligence than large language models [LLMs] to help us make, quote, "rational decisions."

So, talk about why that's kind of a fool's game. Because, it seems like a good idea. We've talked recently on the program--it hasn't aired yet; Teppo, you haven't heard it, but listeners will have when this airs--about biases in large language models. And, by that we're usually talking about political biases, ideological biases, things that have been programmed into the algorithms. But, when we talk about biases generally with human beings, we're talking about all kinds of struggles that we have as human beings to make, quote, "rational decisions." And, the idea would be that an algorithm would do a better job. But, you disagree. Why?

Teppo Felin: Yeah. I think we've spent sort of inordinate amounts of journal pages and experiments and time kind of highlighting--in fact, I teach these things to my students--highlighting the ways in which human decision-making goes wrong. And so, there's confirmation bias and escalation of commitment. I don't know. If you go onto Wikipedia, there's a list of cognitive biases listed there, and I think it's 185-plus. And so, it's a long list. But it's still surprising to me--so, we've got this long list--and as a result, now there's a number of books that say: Because we're so biased, eventually we should just--or not even eventually, like, now--we should just move to letting algorithms make decisions for us, basically.

And, I'm not opposed to that in some situations. I'm guessing the algorithms in some, kind-of-routine settings can be fantastic. They can solve all kinds of problems, and I think those things will happen.

But, I'm leery of it in the sense that I actually think that biases are not a bug, but to use this trope, they're a feature. And so, there's many situations in our lives where we do things that look irrational, but turn out to be rational. And so, in the paper we try to highlight, just really make this salient and clear, we try to highlight extreme situations of this.

So, one example I'll give you quickly is: So, if we did this thought-experiment of, we had a large language model in 1633, and that large language model was input with all the text, scientific text, that had been written to that point. So, it included all the works of Plato and Socrates. Anyway, it had all that work. And, those people who were kind of judging the scientific community, Galileo, they said, 'Okay, we've got this great tool that can help us search knowledge. We've got all of knowledge encapsulated in this large language model. So we're going to ask it: We've got this fellow, Galileo, who's got this crazy idea that the sun is at the center of the universe and the Earth actually goes around the sun,' right?

Russ Roberts: The solar system.

Teppo Felin: Yeah, yeah, exactly. Yeah. And, if you asked it that, it would only parrot back the frequency with which it had--in terms of words--the frequency with which it had seen instances of actually statements about the Earth being stationary--right?--and the Sun going around the Earth. And, those statements are far more frequent than anybody making statements about a heliocentric view. Right? And so, it can only parrot back what it has most frequently seen in terms of the word structures that it has encountered in the past. And so, it has no forward-looking mechanism of anticipating new data and new ways of seeing things.

And, again, everything that Galileo did looked to be almost an instance of confirmation bias because you go outside and our just common conception says, 'Well, Earth, it's clearly not moving. I mean it turns its--toe down[?], it's moving 67,000 miles per hour or whatever it is, roughly in that ballpark. But, you would sort of verify that, and you could verify that with big data by lots of people going outside and saying, 'Nope, not moving over here; not moving over here.' And, we could all watch the sun go around. And so, common intuition and data would tell us something that actually isn't true.

And so, I think that there's something unique and important about having beliefs and having theories. And, I think--Galileo for me is kind of a microcosm of even our individual lives in terms of how we encounter the world, how things that are in our head structure what becomes salient and visible to us, and what becomes important.

And so, I think that we've oversimplified things by saying, 'Okay, we should just get rid of these biases,' because we have instances where, yes, biases lead to bad outcomes, but also where things that look to be biased actually were right in retrospect.

Russ Roberts: Well, I think that's a clever example. And, an AI proponent--or to be more disparaging, a hypester--would say, 'Okay, of course; obviously new knowledge has to be produced and AI hasn't done that yet; but actually, it will because since it has all the facts, increasingly'--and we didn't have very many in Galileo's day, so now we have more--'and, eventually, it will develop its own hypotheses of how the world works.'

18:37

Russ Roberts: But, I think what's clever about your paper and that example is that it gets to something profound and quite deep about how we think and what thinking is. And, I think to help us draw that out, let's talk about another example you give, which is the Wright Brothers. So, two seemingly intelligent bicycle repair people. In what year? What are we in 1900, 1918?

Teppo Felin: Yeah. They started out in 1896 or so. So, yeah.

Russ Roberts: So, they say, 'I think there's never been human flight, but we think it's possible.' And, obviously, the largest language model of its day, now in 1896, 'There's much more information than 1633. We know much more about the universe,' but it, too, would reject the claims of the Wright Brothers. And, that's not what's interesting. I mean, it's kind of interesting. I like that. But, it's more interesting as to why it's going to reject it and why the Wright Brothers got it right. Pardon the bad pun. So, talk about that and why the Wright kids[?] took flight.

Teppo Felin: Yeah, so I kind of like the thought experiment of, say I was--so, I actually worked in venture capital in the 1990s before I got a Ph.D. and moved into academia. But, say the Wright Brothers came to me and said they needed some funding for their venture. Right? And so, I, as a data-driven and evidence-based decision maker would say, 'Okay, well, let's look at the evidence.' So, okay, so far nobody's flown. And, there are actually pretty careful records kept about attempts. And so, there was a fellow named Otto Lilienthal who was an aviation pioneer in Germany. And, what did the data say about him? I think it was in 1896--no, 1898. He died attempting flight. Right?

So, that's a data point, and a pretty severe one that would tell you that you should probably update your beliefs and say flight isn't possible.

And so, then you might go to the science and say, 'Okay, we've got great scientists like Lord Kelvin, and he's the President of the Royal Society; and we ask him, and he says, 'It's impossible. I've done the analysis. It's impossible.' We talked to mathematicians like Simon Newcomb--he's at Johns Hopkins. And, he would say--and he actually wrote pretty strong articles saying that this is not possible. This is now an astronomer and a mathematician, one of the top people at the time.

And so, people might casually point to data that supports the plausibility of this and say, 'Well, look, birds fly.' But, there's a professor at the time--and UC Berkeley [University of California, Berkeley] at the time was relatively new, but he was one of the first, actually--but his name was Joseph LeConte. And, he wrote this article; and it's actually fascinating. He said, 'Okay, I know that people are pointing to birds as the data for why we might fly.' And, he did this analysis. He said, 'Okay, let's look at birds in flight.' And, he said, 'Okay, we have little birds that fly and big birds that don't fly.' Okay? And then there's somewhere in the middle and he says, 'Look at turkeys and condors. They barely can get off the ground.' And so, he said that there's a 50-pound weight limit, basically.

And that's the data, right? And so, here we have a serious person who became the President of the American Association for the Advancement of Science, making this claim that this isn't possible.

And then, on the other hand, you have two people who haven't finished high school, bicycle mechanics, who say, 'Well, we're going to ignore this data because we think that it's possible.'

And, it's actually remarkable. I did look at the archive. The Smithsonian has a fantastic resource of just all of their correspondence, the Wright Brothers' correspondence with various people across the globe and trying to get data and information and so forth. But they said, 'Okay, we're going to ignore this. And, we still have this belief that this is a plausible thing, that human heavier-than-air--powered flight,' as it was called, 'is possible.'

But, it's not a belief that's just sort of pie in the sky. Their thinking--getting back to that theme of thinking--involved problem solving. They said, 'Well, what are the problems that we need to solve in order for flight to become a reality?' And, they winnowed in on three that they felt were critical. And so: Lift, Propulsion, and Steering being the central things, problems that they need to solve in order to enable flight to happen. Right?

And, again, this is going against really high-level arguments by folks in science. And they feel like solving those problems will enable them to create flight.

And, I think this is--again, it's an extreme case and it's a story we can tell in retrospect, but I still think that it's a microcosm of what humans do: one of our kind of superpowers, but also one of our faults, is that we can ignore the data and we can say, 'No, we think that we can actually create solutions and solve problems in a way that will enable us to create this value.'

I'm at a business school, and so I'm extremely interested in this; and how is it that I assess something that's new and novel, that's forward-looking rather than retrospective? And, I think that's an area that we need to study and understand rather than just saying, 'Well, beliefs.'

I don't know. Pinker in his recent book, Rationality, has this great quote, 'I don't believe in anything you have to believe in.' And so, there's this kind of rational mindset that says, we don't really need beliefs. What we need is just knowledge. Like, you believe in--

Russ Roberts: Just facts.

Teppo Felin: Just the facts. Like, we just believe things because we have the evidence.

But, if you use this mechanism to try to understand the Wright Brothers, you don't get very far. Right? Because they believed in things that were sort of unbelievable at the time, in a sense.

But, like I said, it wasn't, again, pie in the sky. It was: 'Okay, there's a certain set of problems that we need to solve.' And, I think that's what humans and life in general, we engage in this problem-solving where we figure out what the right data experiments and variables are. And, I think that happens even in our daily lives rather than this kind of very rational: 'Okay, here's the evidence, let's array it and here's what I should believe,' accordingly. So.

Russ Roberts: No, I love that because as you point out, they needed a theory. They believed in a theory. The theory was not anti-science. It was just not consistent with any data that were available at the time that had been generated that is within the range of weight, propulsion, lift, and so on.

But, they had a theory. The theory happened to be correct.

The data that they had available to them could not be brought to bear on the theory. To the extent it could, it was discouraging, but it was not decisive. And, it encouraged them to find other data. It didn't exist yet. And, that is the deepest part of this, I think.

26:14

Russ Roberts: Let me phrase it a different way, and see if you agree with this. Because, I think it is easy to get lost in the logic of this.

At one point in your paper, you talk about a quote from Yann LeCun, the AI person: "Prediction is the essence of intelligence," quote/unquote. And that seems reasonable. And, usually, to predict something, we have to go back to the past and build a model of causal interaction, of variables that we have data on from the past. And then we forecast--that's the prediction part--we forecast the future.

And, in economics, for example, a lot of people believed--and I think Milton Friedman would be one of them--believed that that's the essence of economics. It's not trying to understand how human beings, say, make decisions as consumers, or how those decisions aggregate into a market. It's: We don't need to know that. That would be nice, but that's not our goal. Our goal is to predict what people are going to do.

So, what they actually are doing inside the thing called their head, that's a black box. As a social scientist, I don't know what that is. All that really matters is that I can forecast what they're going to do. And, if I can forecast, then that's either useful or even illuminating.

And, what you're suggesting is, is that--first of all, that is not the only thing we do, because a lot of times we don't have data on the past to allow us to assess the causal connections that we need to forecast.

And, what that really means, as I understand what you're writing, is that forecasting is only one aspect of human cognition. Thinking about how to forecast--that is the underlying models that we use for how the world works--is what allows us to generate the data that allows us to test our theories and models and confirm they're more--or at least, say, that the data are consistent with them.

But almost by definition, if you have an enormous amount of data--which now is available to human beings through the Internet--you have to decide which data to use, because there's a lot of it, and it's not all consistent.

And, even more importantly--and this is what is so cool about the Wright Brothers--sometimes you don't have data yet, even though you have an immense amount of it that would allow you to assess the correctness of how you think the world works.

Teppo Felin: No, absolutely. A couple of reactions. I mean, there was this article that was written in--it was in Wired Magazine, by, I think it was the Editor, Anderson--and it was called "The End of Theory," and it was subtitled something like 'Why the Data Deluge Makes Theory Obsolete,' basically. And, the idea is that we can run regressions and we can run correlations and associations and things like that, and that's going to tell us the truth. Right?

But it's precisely the theory that tells us which data to look at. And, not only which data to look at, but how to interpret that data. Right?

And so, Joseph LeConte, for example, was looking at birds. So were, actually, the Wright Brothers. But they were looking at birds differently. Right? And so, the key variable that Joseph LeConte focused on was the weight of birds. What the Wright Brothers--and they did very careful studies of this--they actually looked at wing shape and looked at how they flew. And, it was a completely different focus for them because they were interested in, again, solving those three problems that I mentioned earlier: lift and steering and control.

And so, the key point is that that's something that computers can't do. They take existing data as a given, whereas we as human beings find and create, through experiments, new data.

And, I guess the important point I want to highlight here is we're talking about thinking in general, but we've set ourselves up as scientists. As: Okay, well, we do this theorizing and thinking, and then we watch people and they're biased and so forth. And, I'm trying to create an equivalence in some sense to say that: Actually, here we have two bicycle mechanics who are engaging in the highest level of theory in some sense, right? They're engineers; and they're proving scientists wrong.

And, they're finding--and again, to get back to your point about prediction, they're not predicting, per se. They're problem-solving and doing, which is different, right? And, prediction, like you mentioned, is inherently based on past data. And so, we need some forward-looking mechanism to kind of bootstrap novelty and to see things in a different way.

Einstein has this quote that I just absolutely love, and we can think about this in the context of Galileo or the Wright Brothers, but he says, 'Whether you can observe a thing or not depends on the theory which you use. It is the theory which decides what can be observed.' Right? And so, it's not that the data lights up somehow and says, 'Hey, I'm relevant, you should look at me.' It's the theory that tells us what to look for and what's important.

And I think, even, like I said, in our daily lives, what we have rattling in our head--what we're thinking about--starts to structure what becomes salient and important. And so, I think that we are engaging in a quasi-scientific exercise in any interaction that we have with the world: but it's not one where it's world-to-mind. It's mind-to-world. And that, I think, is something that--I don't know how you would program a point of view into a computer.

32:32

Russ Roberts: You talked about the idea that we don't need theory anymore. I've heard economists tell me this. They say, 'I don't like theories. I just listen to the data.' And we've talked about this many times on the program: and you have to decide what data to listen to. There's a lot of it. And, theory helps you do that, of course, as Einstein was pointing out.

And, I think the other challenge--the other way to think about this--is that forecasting is--as you said it a little better than I did--forecasting essentially is built on backcasting. You go back and you look at the past data and you build a model that fits that data. That's the backcast.

You then go forward and you say, 'Well, the forcing variables--the so-called independent variables--are going to be different in the future. And that allows me to predict, once I know what those are, I could set them in theory as levers of policy or behavioral manipulation. I will change those and then I know what I'm going to get: Because, based on the backcast, I can now forecast what is going to happen.

And of course, that often does not work. And, the reason it doesn't work is fascinating. It's embedded, I think, in Hayek's Nobel Prize address, "The Pretence of Knowledge," which we'll link to. If you have not read it, listeners, it's a phenomenal article you can benefit from even as a non-economist. It's not a technical article.

But, the point is that, whatever I generated--whatever model I created out of the past data--it's based on a limited number of variables. And, the future is, excuse me, it's based--let me re-say that.

The backcast is generated by looking at a limited number of variables and trying to tease out, using statistics, their relationship to one another. There are other things happening in the background we[?] don't have data on--the other variables. And, some of them are relevant. But I don't see them. I don't know which ones they are.

And so, when I forecast, I often go wrong because those unnoticed variables have changed. And, as a result, my forecast does not match the quality of my backcast. My backcast, I'm extremely accurate. It's really good at fitting the data, using--fitting the model to the data I already have. But the model does a very poor job of forecasting what's going to happen in the future because there are other variables that I did not control for that are still oscillating and changing.

And, I think that underlying complexity of the world is what makes the world interesting and why algorithm-based pattern identification is often going to fail, whereas a human forecast of, quote, so-called "irrational intuition" can do fairly well.
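
To make the backcast/forecast asymmetry concrete, here is a minimal sketch (an illustration under simple assumptions, not anything from the episode): a line is fit to past data in which an unmeasured background variable happened to sit still; when that variable starts moving, the same model's forecasts degrade even though the backcast fit was excellent.

    import numpy as np

    rng = np.random.default_rng(0)

    # Past data: y depends on an observed variable x and a hidden variable z,
    # but z barely varied, so a fit on x alone looks excellent (the "backcast").
    x_past = rng.normal(size=200)
    z_past = rng.normal(scale=0.1, size=200)    # quiet in the past
    y_past = 2.0 * x_past + 3.0 * z_past + rng.normal(scale=0.1, size=200)

    slope, intercept = np.polyfit(x_past, y_past, 1)
    backcast_mse = np.mean((y_past - (slope * x_past + intercept)) ** 2)

    # The future: z starts oscillating, and the same model's forecasts degrade.
    x_future = rng.normal(size=200)
    z_future = rng.normal(scale=2.0, size=200)  # now it moves
    y_future = 2.0 * x_future + 3.0 * z_future + rng.normal(scale=0.1, size=200)
    forecast_mse = np.mean((y_future - (slope * x_future + intercept)) ** 2)

    print(f"backcast MSE:  {backcast_mse:.2f}")   # small
    print(f"forecast MSE:  {forecast_mse:.2f}")   # much larger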

The puzzle--and the challenge, I think, to your view--is that--and I think this would be the common response--is that, 'Oh, that's now. Oh yeah, we don't have all the data, but give us enough time. Yes, maybe what I ate for breakfast in the morning will affect my purchasing behavior or whether I got into an argument with my wife or how much traffic I got into on the way to the grocery. But so, we'll update on all that. The whole world will be mapped out and categorized and cataloged, and the models of the future will take all of that into account.'

And, I come back to Nassim Nicholas Taleb's point: Big data, bigger mistakes. That increase in data, without a theory to help you decide what is irrelevant, is not necessarily going to get you to better answers.

Teppo Felin: Right. I mean, you know, in some ways you're talking about backcasting and forecasting based on past data. I mean, in some ways things do stay the same. And sometimes you can make predictions when things roughly do stay the same--over evolutionary time, for example. I was just thinking of the--I've been working with the evolutionary biologist Stuart Kauffman, and we've been wrestling with: How is it that large evolutionary changes happen? And, I'll just share a really quick example of this.

It turns out that over evolutionary time, early on, humans hunted by persistence hunting, which was simply: we would run after animals, and eventually, because--I'll take a deer for example--deer can't sweat, they can't regulate, we could catch a deer. It turns out that humans can catch a deer by persistence hunting over a four-hour period. And, it turns out that for multiple different groups across the globe, this is how they hunted.

And, you can think about that as a simple evolutionary game where we know the variables. The variables are: It's my endurance versus the deer's, how fast they can run in the short term, and whether they can hide. And, it turns out it was not a super stable thing. There were some situations--you do this in the heat of the day--where humans would just pass out and die as they're trying to catch the deer. But, at some point, somebody--and there are these evolutionary transitions that disrupt how we get calories, for example--but at some point somebody said, 'We're chasing these deer. Our uncle just died last week because he was sweating.' And, somebody says, 'Well, why don't we just dig a large pit? Rather than run behind the deer, why don't we just use the ground here?' And so, that's a variable that nobody thought about. For whatever reason, we thought the ground is great for running on and it's great for building things on, but what if we dig a large pit? And, what if it's large enough where we don't just catch deer, we actually catch a mammoth or an elephant. And, now, all of a sudden you're getting way more calories and you're passively hunting.

And, I think that there's something in humans where we have these simple models where we think we know the variables, but then somebody says, 'Okay, well no, let's think about this problem differently. We might actually solve some of our issues in terms of calorie expenditure by thinking about the problem in a different way.'

And, in a paper that we recently published, we highlight evolutionary leaks[?] or transitions like this. And, a lot of people like Dennett and others say, 'Well, things largely just stay the same, and then we somehow get lucky and something new emerges.' But, I think that there's actual problem-solving going on on the part of humans in terms of how to solve something very fundamental like getting calories.

And, I think that's a theoretical, quasi-scientific, proto-scientific activity that we need to understand. And, I don't know how that would get programmed. I mean, I think you can try to reverse engineer some of that and get it into computers and AI, but I think that that's still a unique capability that us as humans have that current systems don't.

39:29

Russ Roberts: Yeah, the fancy name for that is creativity. That's the jargon, right? Or innovation.

I think about this wonderful example--I don't know if it's true. It doesn't matter. If you put a dog on a leash, you tie the leash to a pole and then you wrap the leash with the dog on it around another pole, and after the length of the leash is taken into account, the dog is now a foot away from the steak that it wants to eat. And, the dog will bark and try to stretch the leash and try to get closer and closer to the meat, but the dog doesn't think of backing up and turning this right angle of the leash wrapped around the pole into a diagonal where then the dog could easily reach the meat.

Whereas a squirrel supposedly can do this kind of thinking. And, the idea--the clever part of this example--is that squirrels live in trees. They have to live in three dimensions. Dog lives on the ground. They have to think in two dimensions. And, this is a sort of two-and-a-half dimension problem. I don't know what you would call it.

But, what's fun about it is that most of us, we're like the dog. When we're told, how do you get the meat? We think, well, try to get closer to it, and we stretch the leash, and we might think, 'Well, maybe the leash will break.' We have certain forms of thinking that we would call inside the box. And, the ability to think outside the box is what distinguishes the Wright Brothers.

Or my favorite example of this I've talked about before is Andrew Wiles's solving of Fermat's Last Theorem. After his first proof led him to great accolades on the front page of the New York Times, shortly afterwards, it was discovered his proof had a mistake in it. He hadn't solved the problem of Fermat's Last Theorem. And his life spiraled downward into despair. And then, you can watch this online--if you pull up Wikipedia we can find it; it sometimes is hard to find. But, at some point he recounts that he just thought of a different way of thinking about it. He doesn't understand it. And, when he tells the story, he's near tears, because it is one of the most beautiful and poignant examples of being a human being, that we're capable of these creative leaps.

Even--very appropriate for this conversation--I was watching a video of Ray Bolger, who played the Scarecrow in The Wizard of Oz, singing on The Judy Garland Show. And he rhymes 'deserve you' with 'worthy of you.' Now, they don't rhyme, but he sings it. [singing]: 'Someday I'd deserve you/I'd even be worthy erv you'--meaning 'of you'. And, that's why it's a genius song; and it's a beautiful song. And, a computer would struggle to make that leap of creativity. But maybe it could someday. I'm not saying it can't.

And, I think the challenge--you can react to any of that you want--but I think the other challenge is: those of us who are skeptical about the ability of AI to solve all human problems have to confront the fact that we said similar things in the last 10 and 20 and 30 years, and AI solved them all. 'I don't think it'll ever be as good as a human chess player.' Oh, yes, it is. 'Oh, not Go. Go is too complicated.' Well, it did that easily. And, now we're saying things like, 'Yeah, but it'll never write a great sonnet.' Well, it writes some pretty good sonnets. 'Yeah, but not a really great sonnet.'

And so, you're saying--and maybe you want to say more--but right now what I hear you saying is, 'Okay, AI is pretty impressive, but what it won't do is come up with problems that are outside the box, solutions to problems that are outside the box. It won't think of a pit as a way to catch animals. It'll only try to think of how to catch them more effectively--run faster, start earlier in the morning when it's not as hot, etc., etc.' There is a worry that in 10 years you're going to look like an idiot, Teppo; maybe 10 months.

Teppo Felin: I'm a user of AI. I love these models. I think they're fantastic, but I still am pulling for Team Human, I guess. And, I think eventually it's going to be human and AI hybrids. And, in the end, these models are building on human knowledge. It's all the text that we as humans have written, and it's people programming these models, these neural networks and transformer models and so forth.

I mean, at this stage--and there have been studies--Alison Gopnik, who is at Berkeley, a developmental psychologist, looked at large language models versus three-to-seven-year-olds and looked at how innovative they are. And, her paper is fantastic. It just highlights how three-to-seven-year-olds are more innovative in terms of how they think about tool use, for example. And, she has many, many examples, and we can link to that.

I think that there's something--this contrast between kind of a digital intelligence and a biological one--I still think that there's something fundamentally different. Geoff Hinton--who is one of the AI pioneers--has said recently, in several talks, that digital intelligence beats biological intelligence. And, I don't think that's the case. I think biological intelligence has this propensity to kind of hypothesize, to theorize, and to problem-solve in a way that's different. And, we now have models of causal AI that are trying to get at some of this logic.

I'll add one other person to this. Judea Pearl--he's a skeptic of kind of just correlational and associational approaches. And, that's inherently what AI is: it's a correlational, associational approach. Whereas he says, 'We humans try to figure out some kind of causal logic for intervening in the world, for thinking counterfactually.' Right?

And, I think we can maybe prompt AIs around those types of things, but I still think that the human actor is important in terms of coming up with how to do that. And, we're very much forward-looking. And, again, AI is building on past data.
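
A minimal sketch of the distinction Pearl draws: a purely correlational model estimates P(Y | X) from observed data, while a causal model asks what happens to Y if we intervene and set X ourselves--Pearl's do-operator, P(Y | do(X)). In the toy simulation below, all variable names and numbers are invented for illustration; a hidden common cause drives both X and Y, so conditioning and intervening give different answers.

```python
# Toy example of Pearl's correlation-vs-intervention distinction.
# A hidden confounder Z drives both a "treatment" X and an outcome Y,
# while X itself has no causal effect on Y at all.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

z = rng.binomial(1, 0.5, n)            # hidden common cause
x = rng.binomial(1, 0.2 + 0.6 * z, n)  # X is much more likely when Z = 1
y = rng.binomial(1, 0.1 + 0.7 * z, n)  # Y depends only on Z, never on X

# Correlational / associational view: condition on the X we happened to observe.
p_y_given_x1 = y[x == 1].mean()        # ~0.66 -- X looks strongly "predictive"

# Interventional view, do(X = 1): we set X ourselves, which cuts the Z -> X link.
# Y's mechanism is unchanged, so P(Y=1 | do(X=1)) is just P(Y=1).
y_under_do = rng.binomial(1, 0.1 + 0.7 * z, n)
p_y_do_x1 = y_under_do.mean()          # ~0.45 -- intervening on X changes nothing

print(f"P(Y=1 | X=1)     ~ {p_y_given_x1:.2f}")
print(f"P(Y=1 | do(X=1)) ~ {p_y_do_x1:.2f}")
```

The gap between the two numbers is the point: a model trained only to predict from past observations would recommend manipulating X, while the causal view shows that doing so accomplishes nothing.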

So, I don't think I'll look foolish in 10 years or 10 months from now, or whatever it is. I think that the exciting things will be the hybrid AI-and-human solutions that generate new scientific insights. And, there are lots of scientific disciplines where AI is fantastic.

I'll just say, on the poetry bit: I've talked to my daughter, who is getting a Ph.D. in modern poetry at Oxford. And, if you read her the poems that ChatGPT [Chat Generative Pre-trained Transformer] comes up with--if you read them to a person who's an expert in poetry--an expert will not be impressed by them. And, I'm sort of a fan of Emily Dickinson and Wallace Stevens. And, I've tried to get it to generate sort of interesting poetry. And I have to say: I'm kind of skeptical. I've seen examples on Twitter or X and elsewhere, but I think it's sort of mediocre poetry. It's okay. But, if you go to Wallace Stevens or Emily Dickinson, you find just tremendous insight and ways of saying things.

And, again, the AI obviously can then build on that. You could create a specialized large language model that's just trained on the corpus of a particular poet. And, I've seen people build a Shakespeare generator--generative AI to generate more Shakespeare plays. And so, it can riff off of that. But, I do think that there's something there that humans are doing, and people like Judea Pearl and Alison Gopnik and others are getting at that. And in the economic context, I'm very interested in how entrepreneurs and startups and managers do this, because I feel like this notion that AI is going to take over, or should take over, our decision-making--I find that just far-fetched.

48:03

Russ Roberts: I want to confess, I'm on your side. I want Team Human to do well. And, I worry that some of my skepticism is just driven by that cognitive bias. I don't want to live in a world where we have nothing to contribute. That world doesn't seem like a pleasant world to live in. That's one reason.

I do think it's important, though, to defend the Team Human perspective intellectually rather than emotionally. Emotionally, I think it's totally fine to root for Team Human. It might be wrong. But, intellectually, I think the case for Team Human is that we don't understand how Andrew Wiles solved Fermat's Last Theorem correctly in that second attempt. He doesn't understand it, and neither do we. We don't understand how Emily Dickinson wrote the poetry that she did. We don't understand--well, Beethoven, I think, claimed that it was easy: he wrote the note that came next. He just wrote it down. It was not hard. He wrote the note that was the right note. And, that sounds like a beautiful example of the sort of algorithm that large language models use: it looks for the next word using a stochastic process, and it's really amazing at it. But, was that what Beethoven did, with a smaller data set? No. He did something else. And, how he did that, we don't understand.
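
As a rough sketch of the 'next word by a stochastic process' step described here: a language model assigns a score to every token in its vocabulary, turns those scores into probabilities, and samples one. The snippet below is a toy illustration only--the vocabulary, scores, and temperature are invented, and a real model computes its scores with a large neural network over tens of thousands of tokens.

```python
# Toy sketch of stochastic next-token sampling.
# The vocabulary and scores below are invented; a real language model
# computes scores (logits) over a huge vocabulary with a neural network.
import numpy as np

rng = np.random.default_rng(42)

vocab  = ["note", "chord", "rest", "silence"]   # hypothetical tiny vocabulary
logits = np.array([2.1, 1.3, 0.2, -1.0])        # hypothetical model scores

def sample_next(logits, temperature=0.8):
    """Softmax the scores into probabilities, then draw one token at random."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())        # subtract max for numerical stability
    probs /= probs.sum()
    idx = rng.choice(len(probs), p=probs)
    return idx, probs

idx, probs = sample_next(logits)
print({w: round(p, 3) for w, p in zip(vocab, probs)})  # 'note' gets roughly two-thirds of the probability
print("next token:", vocab[idx])
```

Run repeatedly, the sampler usually picks the highest-scoring token but sometimes wanders--which is the "stochastic" part, and also why the same prompt can yield different continuations.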

So, I guess my defense of Team Human is that until we have a better understanding of how Team Human produces its greatest works--another would be Michelangelo's David, which is supported by a little tiny stump on the ground, a fraction of the size of what other sculptors were using. He was taking a 17-foot-high piece of marble and supporting it with--kind of like the Wright Brothers--a scientifically unattractive stump, sculpted out of the same marble, to keep it from keeling over. If you look at the back, you'll see it there. It's holding it up, in theory. We don't understand those things. We don't understand the process by which we come up with the pit instead of 'race harder, train better,' etc.

If we understood that, I guess we'd be able to model it more effectively using, perhaps, non-biological machines--as opposed to biological machines, which we are. So, I don't know. That's where I'm at.

Teppo Felin: Yeah. I feel like the essence of what humans do is this--I like this quote by Søren Kierkegaard, the philosopher. He says: 'Life can only be understood backwards, but it must be lived forwards.' And, I think there's something human about looking to something else or having a belief about something. And, again, those beliefs are the basis of irrationalities and things like that, but they're also the basis of progress and of new ways of seeing things. And, I think that it's inherent and intrinsic to humans.

And, I think that, again, the AI art and things that are coming out, those are tremendous things. I think they're amazing tools, but I think that Team Human is always looking for new variables that we just hadn't thought were very important before. And, I don't think it's a computational process, that thinking. I think that there's something else that's going on. And, that's, I think, the beauty of cognition, human thinking.

Russ Roberts: Give me the Kierkegaard quote again.

Teppo Felin: 'Life can only be understood backwards, but it must be lived forwards.'

Russ Roberts: So, we look back at our lives and we see certain things--choices we made that led to certain outcomes. Some of them we were happier with than others; some we might be outright unhappy with. So, we start to understand something about how the world works through reflecting back on the past. That reflection, I believe, helps us lead a better life going forward, which is how we do have to live. We don't relive our lives and then tweak our decisions. History doesn't repeat itself; it only rhymes. And so, when we come to a situation--an encounter with a certain type of person--we think, 'Oh, yeah, when I dealt with that kind of person before, I did X, and that turned out to be a mistake. I won't do that again.' And, you can learn something from your past that tells you something about how to live in the future.

But, of course, those are things that are like--that example I gave is an example of an encounter that's like one I had before. What do you do when you have one that's not like any that you had before? And, one answer would be, of course: Oh, it is like, you just don't realize it. And a machine would recognize it and realize that it's similar.

But, I do believe, fundamentally--and I have a book on this, Wild Problems--that for many, many of the decisions that we make, it's not just a question of not having enough data. There's no amount of data that would tell you whether you're marrying the right person, or whether you should have a child or a second child, or even what you should major in in college, or what career you should go into. You can have some information about it. But, this romance that we have, almost religious, about the value of data and facts for making decisions is, it seems to me, fundamentally irrational. So, that would be my claim.

Teppo Felin: I mean, I think in some sense, once you have a belief or commitment, you start to create the facts. And, part of the problem is that there's this confirmation bias. So, if you're picking a spouse, for example, you can make salient certain things that are good or bad. Right? And so, it's a commitment in some ways. And, I think the same thing happens with a startup. They're taking this leap, but then they're going out there and they're trying to experiment and create the conditions under which they make it true. And, again, it could be that it doesn't become true. It could be that Airbnb doesn't become this large company worth whatever it is--$70 billion right now. But, early on, different investors said that this basically isn't a business that is going to actually create value. And, Fred Wilson famously invested in Instagram and Kickstarter. And, lots of different investors said, 'This is just for couch-surfers and hippies.' But, nonetheless, these founders decided that if we solve certain problems, we can create this reality.

And, actually, William James--he is beautiful on this. He has this book, The Will to Believe, as has James [Charles--Econlib Ed.] Peirce, around how we can create the conditions under which we find the facts and create those realities in a way that makes them true where they weren't previously.

Russ Roberts: Did you mean Charles Peirce? Did you say James Peirce?

Teppo Felin: I said James Peirce. Yeah. William James and Charles Peirce.

55:53

Russ Roberts: It reminds me of a story, which I think is actually true--I've probably told it before on this program--of Fred Smith. So, Fred Smith supposedly writes a paper for a course where he outlines an idea for overnight delivery, and famously gets a C, when a C was a bad grade. It's an even worse grade now, of course, but it was still a bad grade then. So, Fred ignores the grade, decides his professor doesn't understand. This professor is like the guys who said heavy birds can't fly--50 pounds is the cutoff.

So, Fred Smith starts the company; and it fails. The first night, he had two packages, and one of them was a birthday present to his mom. So, he had one customer the first night. And, weeks or months go by--I don't know how long--and he can't make payroll, so he borrows more money. And then, finally his bank--his bankers in Chicago--says, 'Nope, no more. It doesn't work. All the data--we have lots of data--it is not a viable project.'

And of course, the difference between a startup and a marriage is that whether a marriage has failed or not is an inherently subjective assessment by at least one, if not both, of the parties. Whereas if you don't make your payroll, your business has failed--or maybe not yet. I mean, maybe so far it has failed, but it could, in theory, recover.

So, Fred Smith has this will to believe. He rejects the data. He believes in the concept. He's full of cognitive bias, I am sure, by the way. He's overconfident beyond what any outsider would tell him. And I'm sure--I don't know if he's ever written about it--I'm sure all his friends and family, including his sisters who shared a trust fund with him, said that he was wrong, that he had made a mistake and he needed to shut this thing down. And, he's got to fly back to Memphis to tell his staff that they haven't been paid for a while and that it's over; they need to go find another job. And, instead of going back to Memphis, at the airport he sees a sign for Reno--a flight going to Reno. He goes to Reno, goes to the blackjack table or the roulette wheel, I don't know which. Takes his last remaining money--I think out of his sisters' shared trust fund--and rolls the dice, literally and figuratively, and sustains the business for a little longer. And, it makes it. It doesn't just make it: it becomes an iconic transformation in how we think about delivery and the world.

And so, who was right? Was Fred right or wrong? Before he knew the outcome--it's almost like Schlesinger's Cat [Schrodinger's Cat--Econlib Ed.]--FedEx is both a failure and a success. It turned out that, because of his irrationality, he was able to generate the data that proved it worked, when no one else believed in it. He was insane. He was literally an insane, crazy man, not rational. And yet, he was much smarter than anyone else--at least on that one. Many other companies have had a similarly insane founder who lost all their money. They did fail.

Teppo Felin: Absolutely. I mean, I think luck can definitely play a role.

I don't know if it's Napoleon or somebody like that, but there's this quote that luck favors the prepared mind. And so, there's still something that Fred Smith needs to do to figure out the problems that he had with the business. And so, how do you develop this underlying causal logic of how it is that we quickly get packages to their destinations? Clearly, that was a problem--or set of problems--that he kind of compiled and pulled together as an architect and was able to create value from.

But, yeah, luck also plays a role in these types of situations. But, I do think that humans have an ability to think about those underlying causal logics of how it is that we can solve certain problems that will enable them to, in this case, create economic value.

Russ Roberts: Well, the crazy part about Fred Smith--and Vernon Smith, no relation as far as I know, pointed this out, I think on an EconTalk episode ages ago--is that the hub system is a genius idea that looks like stupidity. So, if you're flying a package from San Francisco to Oakland, it flies to Memphis, changes planes, and then goes out to Oakland. And, that seems remarkably inefficient and stupid. It's a genius idea because it means you have to have many, many fewer planes in the air at night. I mean, it's such a clever idea. It's the equivalent--not to insult Fred Smith--it's the equivalent of figuring out that if you dig a big hole in the ground, you're going to maybe be a more effective hunter. It's so outside the box. And, I'd love to know how he came up with that--whether he saw an analogy somewhere or saw something that made him think of it, or whether it was just what we call a flash of insight. I don't know.
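
The 'many, many fewer planes' point has a simple back-of-the-envelope core: connecting n cities directly requires a route for every pair of cities--on the order of n squared--while routing everything through one hub requires only about n routes. The sketch below just counts routes; it ignores plane capacity, distances, and scheduling, and the city counts are arbitrary.

```python
# Back-of-the-envelope route count behind the hub-and-spoke idea (a toy
# comparison only; real network design involves capacity, timing, and cost).
def point_to_point_routes(n_cities: int) -> int:
    # one route per unordered pair of cities
    return n_cities * (n_cities - 1) // 2

def hub_routes(n_cities: int) -> int:
    # one route from each non-hub city into the hub (e.g., Memphis)
    return n_cities - 1

for n in (10, 50, 100):
    print(f"{n:>3} cities: point-to-point {point_to_point_routes(n):>5}, "
          f"via hub {hub_routes(n):>3}")
# 100 cities: 4950 direct routes versus 99 through the hub.
```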

Teppo Felin: I mean, just to turn that on its head: when Southwest Airlines started, most airlines had a hub-and-spoke system by then. And, when investors were looking at Southwest Airlines, they said, 'Well, you're flying point-to-point. That makes no sense. We know that the best model is a hub-and-spoke system.' And so, they were systematically undervalued for a long time--there's good research on this--by investment analysts who said, 'This is not how we do things.' But Southwest said, 'No, we have a system. We have a causal logic for why we're serving these underserved markets, where we don't have to pay as high airport fees, because we're flying from Lubbock to wherever it is in Missouri.' So, they upended that and offered a contrarian belief and model of the world in that setting, which ended up creating tremendous amounts of value. But, it was contrarian at the time. Again, they were systematically undervalued.

And so, I think that that is what humans do. The data says, 'No, all airlines use this hub-and-spoke system.' They say, 'No, we're not going to do that. We're going to do things differently.' And, I think that's at the heart of what humans do; and the heart of economic value creation and strategy is uniqueness and difference.

Russ Roberts: So, the lesson of today's episode, dear listeners, is: It's easy to be successful. Just think of things that no one else has thought of--that work.

Teppo Felin: That's right. Yeah, yeah, yeah. Yeah. I mean, I think it's something that--I'm involved in studies where we're trying to elicit this from entrepreneurs and startups. And so, I think there are things that we can do. 'Think different' or 'be creative'--those are catchy shorthand for an underlying process of carefully thinking about how to be contrarian and different in the midst of everybody else thinking a certain way. And so, I think that there are ways of eliciting that out of people. And, that's certainly what we thought when I was in venture capital: that there are ways in which we can influence these entrepreneurs to think about their context in a different way and come up with contrary beliefs that enable them to create value in ways that others just didn't envision. And so, I do think there are things that can be done there. But yes, 'Think Different' sort of summarizes that.

1:03:19

Russ Roberts: And we'll close with this. I think what's so hard about being a thoughtful human being is that it's so easy--because of our cognitive biases, no doubt, or our evolutionary inheritance--it's so easy to do what everyone else does. It's so easy to do what we've always been doing, because it usually works. And, the idea of finding ways to be reflective and to think deeply about why the patterns of living and the choices that you've developed for yourself might not be the best way to live--that's really, really hard. And, for me, to the extent that I've found ways to do that, it's not through systematic thinking about it.

This is the irony of this conversation for me: to the extent that I have found ways to think outside the box, I think that's mostly the result of experience--which is basically the accumulation of data that my non-conscious mind works on without my help--and of creativity; and that's really hard to teach.

But, I think the point I'm trying to make, and I'm not saying it very well: It's really hard to break out of one's own habits of mind and thought, and to stop, and to reflect. Life is so hard. It's moving so quickly.

One analogy would be the idea that you're going to just head the bus in a different direction, when really all you're trying to do is change the tire--repair the flat--while the bus is moving. That's mostly what we're doing: just keep that bus going. It's really hard.

And, taking the time to think about different ways to behave and make decisions is just really hard, it seems to me.

Teppo Felin: Yeah, I mean, I'm at a university. I've been at universities for several decades. And I teach science: I teach things that most people believe. Like, we're all teaching the same things, things that are established facts in some sense, right?

But, what I'd like to teach is sort of a higher level that says, 'Okay, these established facts are constantly evolving.' At some point somebody said, 'I'm going to think about this differently.' And, that's not very--you know, knowledge is kind of justified belief; it's justified by evidence. But, what we ought also to teach is this idea that there are beliefs that are ahead of their time, potentially; and that can lead to all kinds of delusions and confirmation bias and so forth. But it can also lead to novel ways of seeing things, like with Galileo, or novel technologies, like with the Wright Brothers.

And so, I think that any mechanisms we can incorporate into our teaching and into how we think--for our students and, more generally, as human beings--to build in that kind of meta-level of being critical and so forth, are useful and important.

Russ Roberts: My guest today has been Teppo Felin. Teppo, thanks for being part of EconTalk.

Teppo Felin: Great. Thanks for having me.