Erik Hoel on Consciousness, Free Will, and the Limits of Science
Jul 24 2023

Neuroscientist and author Erik Hoel talks about his book, The World Behind the World, with EconTalk's Russ Roberts. Is it possible to reconcile the seemingly subjective inner world of human experience with the seemingly objective outer world of observation, measurement, and science? Despite the promise of neuroscience, Hoel argues that this reconciliation is surprisingly difficult. Join Hoel and Roberts for a wide-ranging exploration of what it means to be human and the limits of science in helping us understand who we are.

Explore the audio transcript and further reading that will help you delve deeper into this week's episode, and the vigorous conversations in our comments section below.

READER COMMENTS

Ethan
Jul 24 2023 at 4:23pm

I see this as two separate critiques.

Self-Referential Issues – I am skeptical that this has so much bearing on the limits. My favorite example is the Halting Problem. Results like it are very clever and quite important descriptions of the limits of logic. But while it is appealing to show that this could exist in the brain, I don't think the brain is a mapping or a self-referential machine. It can abstract out. I can understand the inner workings of my body.
The other critique is chaos. There is a reference to this in a Taleb episode where he talks about Russian scientists proving the incalculability of some things based on billiard ball movements. This is where I agree with Erik. Meaningful prediction about consciousness is impossible, just like meaningful prediction about the weather. Maybe it is Hayekian, maybe it is chaos theory, or complexity theory. But it is where I get off the bus with the social sciences and many biological phenomena like consciousness.

Stephen Eastridge
Jul 24 2023 at 5:08pm

I was mesmerized by this discussion. I've been an EconTalk listener since the late oughts. Podcasts like this are the reason.

William Mayer
Jul 24 2023 at 10:21pm

This was an excellent discussion on a topic I find deeply interesting, but I couldn't shake two thoughts while listening to this episode that add some important context.

There is a large body of literature around paradoxes and self-reference. Some interesting paradoxes are generally accepted not to be caused by self-reference; a good overview is https://arxiv.org/abs/math/0305282 for anyone interested. This literature helps ground some of the ideas discussed and makes them more concrete.
Some of the allusions to the limits of science have been addressed in alternative ways. Robert Rosen, a theoretical biologist, wrote many books about the need to expand science beyond strict mathematical models dealing only with syntax, to models capable of rigorously describing semantic content. This project was largely undertaken using category theory, as an alternative to the well-known limits of set theory.

Shalom Freedman
Jul 25 2023 at 6:18am

Outstanding talk. Hoel and Roberts are deep in understanding beyond mine.

Still a few isolated thoughts. Isn’t it naïve to expect complete understanding of anything?

As is pointed out, reality and experience are changing all the time, so how can we expect any mapping of the future to include its laws and details?

Isn’t wisdom in hoping for improvement of any understanding rather than in total explanation and understanding?

As Hoel and Roberts both point out, all kinds of physical systems simpler than the human mind, it is understood, cannot be understood completely.

Isn’t the basic internal-external distinction itself simplistic and incomplete?

Isn’t it better to know a little more than to know once again that we humans can never know everything about ourselves and our future?

More difficult even than reconciliation of omnipotence and omniscience is their reconciliation with omnibenevolence.


Mike B
Jul 25 2023 at 8:12am

I found parts of this episode very interesting but ultimately it left me very frustrated with both the host and guest. I do have a degree in neuroscience and found most of what Dr. Hoel said to be convincing up until a certain point.

He said that due to the complexity of the brain and possibly due to paradoxes inherent in self-representational systems it might be impossible to build a model that can determine the future. He uses the weather as an example of a similarly complex system (no matter how good science gets we will never be able to predict the weather one year from now). This is a good argument for dismantling determinism but it doesn’t create any room for free will. Just because we can’t predict the weather doesn’t mean we somehow imagine that the weather is free to choose what it would like to do. In the same way, just because we may never be able to create a predictive model for how a brain works doesn’t mean that the brain isn’t still beholden to physical laws. Neuronal firing is a physical process and no amount of structural complexity can escape this fact.

William Mayer
Aug 23 2023 at 4:34pm

Yes, neural firing is the underlying physical process, but as discussed in the episode, neuroscience has failed to establish an exact correspondence between neural states and actions/decisions. This leaves room for other factors to intervene or contribute to the process that gives rise to consciousness. This is where all the debate occurs. Structuralism is one way this is theorized to occur; rather than replacing neural processes, it should be thought of as an additional level of explanation, although a strict structuralist would argue you can ignore the neurons since they contribute nothing important.

Saja
Jul 25 2023 at 12:51pm

This was one of the most frustrating EconTalks I’ve ever listened to. So many strawmen everywhere. You don’t need to be able to perfectly predict the future to dispel the notion of libertarian free will, just recognize that the universe is made up of random and/or determined events, and none of them act contra-causally (this is Sam Harris’s argument).

Also the encoding problem is that you can’t perfectly represent the entire island in a map without the map being an entire copy of the island (thus the absurdity of putting a copy of the whole island on the original island). But a paper map can still be handy to navigate. Think of the saying: “all models are wrong but some are useful.”

Maybe Russ can have the neuroscientist Jeff Hawkins on to talk about his theory of how the brain works.

Ruth Fisher
Jul 25 2023 at 2:52pm

I loved the discussion on the importance of having the appropriate language for being able to explore new areas. If concepts haven’t been defined and/or you don’t have the appropriate language to describe those concepts, it’s difficult to communicate about those ideas. David Wootton discusses this in The Invention of Science:

A decade before Galileo’s telescopic discoveries, William Gilbert, the first great experimental scientist of the new age, had acknowledged: Sometimes therefore we use new and unusual words, not that by means of foolish veils of vocabularies we should cover over the facts [rebus] with shades and mists (as Alchemists are wont to do) but that hidden things which have no name, never having been hitherto perceived, may be plainly and correctly enunciated.

A revolution in ideas requires a revolution in language.

Grant Castillou
Aug 1 2023 at 1:10pm

It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

Sandy Mount
Aug 8 2023 at 1:53pm

Edward Feser has written a few articles that touch on many of the issues raised in this podcast.

He would be a good guest on EconTalk, as I feel a lot of scientists and philosophers talk past each other on this subject.


https://edwardfeser.blogspot.com/2012/05/kripke-contra-computationalism.html

https://edwardfeser.blogspot.com/2015/02/accept-no-imitations.html

https://edwardfeser.blogspot.com/2018/06/godel-and-mechanization-of-thought.html


Comments are closed.




AUDIO TRANSCRIPT
Time | Podcast Episode Highlights
0:37

Intro. [Recording date: June 21, 2023.]

Russ Roberts: Today is June 21st, 2023, and my guest is neuroscientist and author Erik Hoel. You can find his essays on Substack at The Intrinsic Perspective. They're fantastic. His latest book and the subject of today's conversation is The World Behind the World: Consciousness, Free Will, and the Limits of Science.

This is Erik's third appearance on EconTalk. He was last here in April of 2023 talking about the threat to humanity from artificial intelligence.

Erik, welcome back to EconTalk.

Erik Hoel: Thank you so much. It's an absolute pleasure to be back on here, Russ.

1:11

Russ Roberts: Now, you start your book discussing a really fascinating thesis about our self-knowledge as a species, which is a topic I've never thought about. Like, how do we know about ourselves and when did that begin? But, this thesis comes from Julian Jaynes in a book that came out in 1976, The Origin of Consciousness in the Breakdown of the Bicameral Mind. What was Jaynes' idea?

Erik Hoel: Jaynes' idea was that if you went back and you looked at very early historical texts--particularly for him, the Iliad--the way that characters talk about their own minds and their own drives is extremely unusual, or you might say not at all modern. And, an example would be that they're constantly sort of driven by the will of the gods and they don't seem to have much emotional understanding of their own selves.

And, he thought that a very interesting interpretation of this would be that it's really that they actually are hearing, say, the commandments of the gods when they go to do actions. But, what's actually going on, given what we know about the specialization between the hemispheres of the brain, is that essentially one side of the brain, one hemisphere of the brain is really communicating to the other. And, full human consciousness had not really been established at that time.

And, he marshals what's actually quite an impressive array of textual evidence. And, the book itself is very well written. It's a cult classic. I read it when I was young. It was one of the things that got me into consciousness research as a science.

But, it was always a bit of a wild theory. Right? It was always extremely evocative and it had all sorts of counterintuitive implications. For example, he thought that consciousness would therefore necessarily require language, and so he sort of had to be like Descartes where he thought that all animals were automata. And, there's all sorts of counterintuitive outcomes of this.

And, I think as a whole, the book has, like, a really interesting place in the intellectual development of thinking about consciousness because a lot of people have read it and enjoyed it. But a lot of people have also pointed out this critical fact that beyond that textual evidence, we just don't have a huge amount of evidence or really any evidence that human consciousness sprung into existence during the Homeric ages.

3:59

Russ Roberts: But, the reason I think it's interesting--and the reason I like that it's how you open your book--is it forces you to think about the fact that the way we think about the world as moderns in 2023 is not the same. Now, it's an extreme claim that they didn't think about it at all at the time that the Iliad was written. But, the idea that how we see ourselves--our role in the world, our place in the cosmos--is not a constant. And, it's hard to remember that. And, it's particularly important when we go back to works of art like the Iliad or the Bible or even in the Middle Ages, that it's not consumed in the same way as we consume it today.

So, I want you to react to that, but also talk about--a lot of people challenged Jaynes's thesis by saying, 'Well, wait a minute, what about this old poem? It looks pretty self-reflective. It looks pretty much like people were aware they had emotions and drives and urges. And, how could he claim that this isn't true?' And, I think the very way he defends it also forces you to think about this evolution in our self-awareness.

Erik Hoel: Yeah, absolutely. So, he has a very clever defense against this.

And first, let's just go over briefly some of the criticisms. At the time there was a philosopher, Ned Block, who is probably one of the most prominent current analytic philosophers of mind. He's at NYU [New York University]. He's sort of one of the last greats, I think, of late 20th century analytic philosophy. And, back when Jaynes' book first came out, he wrote a review in the Boston Globe immediately pointing out, Jaynes seems to have confused something here, which is that it's not necessarily the case that the ancients had minds that were themselves incredibly different. It's maybe that the understanding that they had of those minds is what is so different. So, they had a very different interpretation.

And, Jaynes does have a pretty clever defense against this. And, that's that: translation, particularly modern translation, often reads in more of what I call the intrinsic perspective, which is the view you can take of individuals that examines, like, the depth of their mind, the richness of their interiority. When we talk about minds, we're using the intrinsic perspective. And there is just--and Jaynes points out that while you can find some instances of that, some of it is probably just added in to make the translations more literary and more modern.

And, I actually have an example here, and that's from a poem from a series called "The Beginnings of Songs of Delight." It's housed at the British Museum. This is from about 1,300 B.C., so this is extremely old. And, it's a call and response between a male and a female, the poem. So, it's a very contemporary technique.

And, it was actually read at my wedding, and I'll just read it very briefly:

And thou art to me as the garden
Which I planted with flowers
And sweet smelling herbs.
I directed a canal into it
That thou might dippeth thy hand into it
When the north wind blows cool
The beautiful place where we take a walk,
Where thy hand rests in mine,
With thoughtful mind and joyous heart
Because we walk together.
Love poem c. 1300 B.C. Source: Reprinted in Erik Hoel, The World Behind the World, p. 19.

So, that's a very contemporary love poem. You might think, 'Okay: the fact that someone in 1300 B.C. is writing this is sort of disproof of Jaynes' point.'

Russ Roberts: Not just because it's a love poem, but because it has this rich metaphor and--right?

Erik Hoel: Exactly. It has--it almost--'Thou art to me as a garden' implies an internal topography. Right? So, it's like: Well, you're really talking about the richness of someone's mind.

And, it's funny because when I was researching this book, I found another example of a translation of this poem that is not quite as nice for a wedding, right? So, this is a different translation about 100 years later. Not the poem itself: the translation is a more modern translation. So, it starts with,

I belong to you like this plot of ground
That I planted with flowers
And sweet-smelling herbs.
Sweet is its stream,
Dug by your hand,
Refreshing in the northwind.
A lovely place to wander in.
Your hand in my hand.
My body thrives, my heart exalts
At our walking together.

You know, it's still okay but it's not quite as nice for a wedding. Right? Just, like, "I belong to you like this plot of ground." It makes it clear the analogy is really extremely physical and materialistic.

And, even things like, "My heart exalts." Are they thinking about heart as the physical heart or are they thinking about it as the mental heart? Right? Like, it's unclear.

And, that's an example of--the translations is this art form. And, it's very difficult to figure out exactly what these ancients thought about minds.

9:21

Russ Roberts: I'm sure there's more than one Ph.D. thesis comparing, say, translations of the Iliad from Pope--in I think the 18th century, Pope? I'm pretty sure. I think that's right. Versus Fagles in the 20th century or early 21st. I can't remember. But, recently. And, it's not just that: Oh, it's a, quote, "better translation"--that we know more about what those words mean now. It's that it's written for a different type of head--

Erik Hoel: Yes--

Russ Roberts: A 2023 head instead of a 1723 head. When you start to think about that, it's kind of mind-expanding.

There's a wonderful book by George Steiner called After Babel, where he talks--he opens the book with a bunch of--he opens with Shakespeare, I think. And he says: 'Can you understand this?' And, the answer is: 'Well, some of the words are archaic and we don't know what they mean now, but we can look them up.'

But, he makes the point that it's so much more than that. It's that many of the words that we know now didn't mean then what they mean now. You need multiple translations.

And, he then brings it forward to a passage from Jane Austen, then a passage from Noel Coward. And, he shows you that even Noel Coward, who is writing in the early part of the 20th century, is referencing linguistically, not events that we don't remember, but usages that have changed. And, language is dynamic. And that's in English. This is not translation: it's just understanding.

And, his theme of course, is that everything is translation. All language is translation. When you're talking in English to a contemporary. And, it's a brilliant and deep idea that haunts me. Your example, Jaynes' examples, and your discussion at your wedding forces you to remember that language is so complex. It is so extraordinary.

Erik Hoel: And it's not even--it's definitely language. And it's also the concepts behind the language that we have now. And, that's actually a big part of this book--of my book--which is saying that--listen, everyone acknowledges it's a story incredibly well told, that we've made a huge amount of progress as a civilization in developing the scientific worldview. And, the scientific worldview traditionally views everything as extrinsic and materialistic. By extrinsic, I mean like an engineering diagram. That would be something that's extrinsic. When you think about your car's engine, you're thinking extrinsically about your car's engine. You're thinking about all the different parts and their causal relationships.

And, everyone knows that we've gotten a lot better at understanding that side of the world. But, we've also gotten better at understanding consciousness and minds. And, that's very obvious if you go back and you look at these early texts. I do think that this is what Julian Jaynes is picking up on, that there just aren't--like, go find strong emotional reactions in really ancient literature. It's extraordinarily difficult. And then, when you do, it's often then a debatable subject of translation. They just don't have a rich language for describing minds.

And, an example that I give in the book is one of the oldest tales that we have, called "The Story of the Shipwrecked Sailor," which goes back to around 2000 B.C. And of course, this story of a sailor who gets shipwrecked on an island with a monster crops up again and again across human history. It's sort of a tale that's retold.

So you can look at this one tale and see: Well, how is it told in ancient Egypt? How is it told in ancient Greece? And then: how is it told now? And, in the ancient Egyptian one, the character is confronted by this snake that's incredibly huge and long and has a beard and rears up over him with gem-like eyes. And, there's just nothing about his emotional reaction to this creature. It's just, like, 'Oh, I saw this huge snake.' Right? They either didn't have a great way of communicating that terror or something, or they just didn't feel it was really worth noting in their stories. But, either way, they just don't seem like they have this really well-developed.

And, if you go over to the Odyssey, which is basically the same tale where they land and there's the Cyclops and they have to confront the Cyclops, already there, the terror of the men at encountering the Cyclops is clearly mentioned, although it's not really dwelled upon.

And then, you can go to, like, a really modernized version of that, which is James Joyce's Ulysses, which is technically a retelling of the Odyssey. So, sort of the same events occur in this highly literary way. And, the equivalent passage in James Joyce's Ulysses is just a guy going into a pub and seeing this giant with an eye patch who he's going to end up getting some drinks with and sort of getting into a political argument with. And, he sees a dog--which is the monster--and he has this pretty emotional reaction of like, 'Oh, I hate this dog. I'm scared of this dog. This dog makes me nervous,' and so on.

And, that transition really highlights how much more well-developed we are at telling stories about minds, where we can take even this really inconsequential event--just going to a bar--and fill it with all sorts of fantastical wonder. And, when you actually meet the huge bearded serpent, there's just nothing when you go back and look at ancient Egyptian literature.

15:32

Russ Roberts: Well, the power of this, for me, is that the causation runs in both directions. I don't know if you agree with that, but I think you do. Human beings didn't have language by definition for every emotion or the subtleties of various emotions--the nuances--in ancient times. And, modern literature, which you reference very thoughtfully help develop our understanding of ourselves and used words in ways that hadn't been used before, which helped us understand how we were feeling. At the same time, how we were feeling and our awareness that we had different kinds of feelings encouraged us to develop those words.

So, I'll read a paragraph from the book, which I love. And, just to be clear, going forward in our conversation, the intrinsic perspective is the things going on the inside of us, the extrinsic perspective is the material world outside of us. And, that's the contrast that you talk about a lot in the book. But here's the paragraph:

While the richness of our consciousness, what it is like to be us, overflows our ability to express it, we have a language around consciousness that allows us to fluently talk about minds. We regularly refer to thoughts, feelings, memories, inclinations, emotions, sensations, perceptions, confusions, illusions--these are not just the building blocks of our daily lives and the minutiae of our streams of consciousness, but also the material out of which the greatest artists and writers make their art. A modern human is fluent in these concepts, able to deploy them to discuss their friends, their family, their enemies, themselves.

End of quote. And of course, I would argue that to some extent, poetry--which you don't reference much in the book--poetry is the way we express things that we don't always have words for.

Erik Hoel: Oh, that's interesting. That's interesting. I like that a great deal.

I think one of the things I wanted to get across when I was discussing this development of the intrinsic perspective, a lot of which I think occurs in literature, is precisely this point you made earlier wherein we now have concepts about ourselves--concepts of character development, concepts of emotionality--that allow us to so fluently talk about minds. And, it's not like a coincidence that we have those. Those were built. Just in the same way that science had to develop all sorts of concepts like mathematics and causal relationships and so on, and developing these concepts enriched our understanding of the extrinsic world, so we had to develop a huge number of sort of cognitive concepts, and that enriched our understanding of the intrinsic world. And, one of the reasons that minds get portrayed so much more richly in contemporary literature, or by the time you get to the Enlightenment and some of the first early modern novels is because those concepts had been introduced. And, it's difficult to go back and imagine what it would be like to not have a bunch of understanding of exactly how minds work--which we all sort of very naturally have and possess now--and then go say, write about minds very early on in history before some of those concepts are around.

Russ Roberts: Here's another paragraph you wrote that I really like a lot. You say,

Rather, the mature intrinsic and extrinsic perspectives had to be constructed, sometimes laboriously, over millennia. It would take two discoveries, that of literature and that of science, to mature them fully. Humanity started off instead with merely a baseline view on the universe consisting of whatever was useful to hunter-gatherers. It was the naïve perspective of primates, who cared only about using tools for personal advantage and about manipulating social hierarchies and that's about it. So it's a civilizational achievement to be able to extrinsically see the universe "from the outside." It is also a civilizational achievement to be able to intrinsically see the universe "from the inside." The two perspectives are the sources of our greatest triumphs, like our ability to observe galaxies light-years away, and also the elegance and beauty of the stories we tell.

Erik Hoel: What was interesting to me and what I didn't really quite understand when I started researching this book and taking this sort of--this is sort of extremely big history. It is extraordinarily--I'm talking about entire millennia in one chapter. So, admittedly this is an--and the historical part is really just the first part of the book. And so, I fully admit that.

But, one thing that I did not expect was that I think in general there's a pattern wherein both perspectives leap forward at the same times in history. So, everyone knows that science is developing in ancient Greece, but the literature is also developing in ancient Greece at around the same time. And, you have the first plays and the first in-depth description of characters. Medea was called one of the first real modern characters in literature.

And, these early plays are all sort of occurring around the same time. And, I think that this is, like, a general pattern. And, it keeps repeating. Even up into the 1900s: Einstein introduces his sort of geometrical perspective of space/time in 1905, and Picasso generates cubism in 1907. And, both of them were supposedly influenced by this book by Henri Poincaré called Science and Hypothesis in which it talks about how to draw and understand the fourth dimension. And so, for Einstein, of course, the fourth dimension ends up being time. That's how he interprets it. But, for Picasso, he's so interested in this because he's, like, 'Well, I want to draw it from all dimensions.' And, what is cubism but a view from this out-of-time, layered perspective? And so, Picasso was paying attention to the science of the day.

And so, I think you see this concurrency where these two perspectives on the world jump forward at the same times and then often in the same places historically.

22:40

Russ Roberts: So, I'm going to try something insane about economics. You can take it or leave it. We'll see how it goes. I just thought of it, so we'll see what comes out.

I have--as a young economist, and I think most economists still today are very focused on the material. They focus on what is the extrinsic perspective? Standard of living, command over goods and services, wealth: stuff. And, GDP [gross domestic product] is just a dollar denominated or currency denominated measure of stuff--goods and services--as a measure of the productivity of the economy. And, a lot of people don't like economics and they make the point, 'Yeah, but there's things besides stuff. We don't just care about stuff.' And economists, 'Oh, of course, you're right.' And, they always, 'Yeah, yeah, yeah.'

But, what the problem is--for me, in my view of this--is that economists certainly will concede that there are things other than stuff, but they have trouble incorporating the non-stuff stuff into their math. So, they kind of just say, 'Well, that's something else.'

Like, so, for example, the despair of losing your job. And, most economists are not going to focus on that. They're going to focus on the fact that your material wellbeing will take a dip downward. If you have savings, you can insulate yourself, insure yourself against some of that. You might have friends you can rely on and so on. But, the despair part or the fact that you don't feel like you have a meaningful life because you're unemployed--that you've let down your family, say,--that's not in there. That's for other fields. And, I think that's a fundamental misreading of the human condition. It's the biggest shortcoming of economics.

And, I think the scientist can make the same point about your pairing the intrinsic and extrinsic perspective. They'd say, 'Okay, yeah, yeah, yeah, yeah. Sure: there's literature and there are advances in our understanding of ourselves, the different words. And that's lovely. Or Picasso, that's lovely. But, come on, that's just--that's not real. The real stuff is we understand now about the red shift. And, we understand now about how to make a computer, and a cell phone, and a car, an internal combustion engine, and a steam engine, and a train. And those are the achievements of humankind. That other thing you're talking about--the intrinsic perspective--that's just what goes on in our heads. That's not important. That's not even real.' It's just a bunch of--well, chin music, if you were a Woody Allen fan or somebody. It's an old Yiddish kind of thing. 'It's just nonsense. It's fluff. It's not important. We shouldn't even be studying that. We shouldn't even be thinking about it.'

And, the humanist responds--and I know you have both sides of your brain active because you wrote a novel--the humanist says, 'Well, no, no, no. Wait a minute. If you really have a ton of stuff and you can fly to any part of the world, but your life doesn't have any meaning, that inner part of you that's nagging at you about the meaningless of your life, it actually might be more important.' So, react to that.

Erik Hoel: Well, I think it's certainly a point that gets made, particularly the skeptical side of: Well, why even talk about this sort of thing?

But if you look historically, science focuses primarily on the extrinsic because it was decided that would be the case. So, Galileo Galilei was one of the first people to really sort of officially say this--and this is right when science is getting established, and they had this problem of, like: where do you put the soul in science? And, the solution that all these people effectively accepted was, 'Well, let's just bracket that problem aside.'

So, all these qualities--which now we might use the term experiential qualities. So, things like, you know, the redness of red or how beautiful a sunset looks, or all these things. He said, 'Listen, we'll just take all that out and we'll bracket it aside and we'll just focus on the rays of light as the sun sets.' Right?

And, this is saying that science should be basically committed to the extrinsic perspective solely. And so, he thinks--and that's why science is in the language of mathematics. And, he thought science should focus on essentially a small number of properties of matter, like size, shape, location. Right? This is, like, the billiard ball view of the universe.

And, that was massively, historically successful. Right? I mean: that bracketing-aside where it said, 'Well, let's just not really worry about this half of the world, the intrinsic half of the world. Minds--we'll just put that aside.'

And, the problem is that, of course, human beings have physical brains. And then, at some point scientists began to think, 'Well, I want to know how the brain itself works.' So, they start to apply this extrinsic thinking to the brain, and that's how you get contemporary neuroscience.

But, if you could somehow--Philip Goff, who is a contemporary philosopher, makes this point in one of his books--but, if you could somehow bring back Galileo to the current age and show him contemporary neuroscience, which is trying to figure out how consciousness occurs from firings of neurons, he would probably say, 'Well, wait a minute, what are you doing? The whole point of how I designed science is that it's only supposed to deal with physical quantities. It's not supposed to deal with the qualities of your mind. This was a purposeful choice that allowed us to proceed.'

And, what I've always been interested in, and what this book is ultimately about, is the paradoxes that occur when you do try to fundamentally merge these two perspectives. Right? We have this really well-developed perspective of how minds work and we have this really well-developed perspective of how the physical world works. And then, what is neuroscience? Current contemporary neuroscience is the attempt to sort of explain one within the other.

And then, the question is: Well, how is that going? And, I think it's like a land of paradoxes. I think it is a place with massive epistemic shifts and pits and traps, and it's very difficult to figure out. And there's probably a good reason for that, which is that science wasn't, at least in its initial design, really supposed to even be looking at that.

29:30

Russ Roberts: Yeah. Well, let's turn to that, but before we do, I just want to add a couple of things. The listeners may remember that I'm a big fan of Arcadia by Tom Stoppard; and Arcadia is really about the tension between the extrinsic and intrinsic perspective. I'd never thought about it that way. And of course, written in the late 20th century, Arcadia has two different time periods: present and past. But, in the present part of it, it's a time when the extrinsic perspective has dominated the intrinsic perspective. Right? The success of science has emboldened it so dramatically that the humanists--the specialists in the intrinsic perspective--are on the run. They have the intellectual low ground. They don't get the big grants. They don't get the prestige. Maybe in Galileo's time it was, like, half-and-half. Now it's more like 90-10, the world we live in.

I also want to reference the conversation I did with Iain McGilchrist about his book, The Master and His Emissary. He's trying to explain this historically: as we've emphasized--he doesn't call it this, but in your term--the extrinsic perspective has taken up more and more of the oxygen in the room, more of the bandwidth. And, I think that's the world we live in right now.

And, I see your book in some sense reclaiming--not in the sense that you're an advocate for the humanities--but reclaiming the importance of each perspective and the respect we should have for both of them.

Erik Hoel: Yeah. And, that's something that came out of really realizing how difficult it's been for science to explain human consciousness. And, I don't just say this as a layman and an author. You know, I have a Ph.D. in neuroscience from--I worked at probably the top lab in the United States for trying to understand the neural basis of consciousness. And, my conclusion from about 15 years in the field is that we're no closer today than when I entered the field in terms of understanding it.

And, this is a pattern that repeats in neuroscience a lot.

I just saw a very interesting paper--just, I think, yesterday--that someone shared on Twitter, about how cancer survival rates have been increasing slowly but gradually as scientific investigations into cancer have been making incremental progress. So, there haven't been any major breakthroughs in cancer research, but your chances of surviving cancer are much better than they were even 15 or 20 years ago.

There is not really an incremental understanding of the brain that we can point to that neuroscience has been consistently generating. It's sort of a field of boom and bust narratives.

And, your listeners might remember some of them. Right? Things like: Oh, in the early 2000s there was mirror neurons. So, mirror neurons were going to be this massive explanatory thing. You had really famous names writing about how this was going to unlock the key of understanding what made human beings special.

And, if you go back and you actually look at what happened with these hypotheses, they mostly kind of just fall apart and complexify to the point of explaining nothing.

And, I refer to neuroscience as a land of, like, dead narratives. Right? It's just: this new thing comes in, and it could be a methodology like brain imaging or so on. And, I'm not talking here--just to clarify--because there is this response, which is that, 'Well, our understanding of how neurons work, and molecular biology is much better now.' So that the aspects of neuroscience that deal with the individual, like, neurons of your brain--you know, that aspect of neuroscience has sort of increased. We know much more about how neurons individually function.

But, realistically, that's almost never what anyone wants explained by neuroscience. Like, we want the big picture. We want to understand how does thought occur? How does cognition occur? How does consciousness occur?

And, these are things that--if there's one story of neuroscience over the last 20 years, it's just that: Wow, the brain is really more complex than we thought. And that's basically--I hate to say it--basically it. I've asked people to try to explain what is the major advancement of neuroscience in the past 20 years. And, there are some that I myself could put forward. Like, figuring out that the brain gets washed with cerebrospinal fluid when you sleep--I think that that's really important.

But, there's no real serious progress on sort of the questions that motivate people to become neuroscientists to begin with, most of the time.

And, a big part of that is because I think there's a strong claim that neuroscience is the most difficult field in all of science, because it's the field where you have to put back in the intrinsic perspective and it meets the extrinsic one.

And, we can sort of bracket aside all of these other qualities in every other area of life, but you can't really do it in the brain, because there is a fundamental truth that people have a stream of consciousness and they're experiencing things; and we want some sort of explanation of how and why that's occurring and under what sort of laws experiences are related to neural events. And, the simple truth is that does not exist in the field.

35:46

Russ Roberts: The way I think about it--and tell me if you think this is a good way to say it--our knowledge of the physical universe we live in has grown. Our knowledge has grown continually--with, of course, a few steps back, but many more steps forward--in the last 500 years, or 400, depending on when you want to start.

We learn more and more. We know more about cancer. Not enough that we can avoid it, but we know more. We know more about the universe. We know more about our bodies. We know more about geology. We know more about everything.

Economics is a separate question, by the way. I once heard Robert Skidelsky say, 'It's not a progressive field.' He didn't mean ideologically, he meant we don't make any progress. That it's more like neuroscience, maybe.

But, basically we learn more and more.

And, we're the only part of the universe that, as far as we know, has any thoughts about it. We wonder about why it's here and we learn more and more about it.

But, the only part of that universe that we don't make much progress on is the piece that we use to understand the universe. That is: our brain.

And, that's awkward. And, you could say, 'Well, but it's just such a small part.'

And, yet it is incredibly ironic to me that the part that's doing all the work is the least-understood part. And, that is disturbing. It is deeply troubling. It is thought-provoking.

Erik Hoel: Yeah. And, you know what? That view--so this--that view I sort of land on in the book is something I call 'scientific incompleteness.' And, the idea is that maybe science really is fundamentally incomplete. Like, there really are some things that can't be known. And, it's precisely because of this troubling aspect, as you put it, that the very system that's trying to figure it out, suddenly needs to include itself in the calculation.

And, actually, I think possibly the first person in history--as far as I can tell--to make this argument, was Friedrich Hayek, who wrote a little-known book in 1952 called The Sensory Order. And, it's Hayek's take on psychology.

And, he seems to sort of imply this. And, I actually have a quote here. So, he says: The mind must 'remain forever in a realm of its own... we shall never be able fully to explain or to 'reduce' [it] to something else.'

And, he sort of lays out his reasoning more clearly in a different paper where he says, 'Any apparatus for mechanical classification of objects will be able to sort out such objects only with regard to a number of properties which must be smaller than the relevant properties which it itself must possess.' Or, to put it differently: an apparatus for mechanical classification must always be of greater complexity than the objects it classifies. And he says, 'If so, then the mind can be interpreted as a classifying machine, which would imply that the mind can never classify and therefore never explain another mind of the same degree of complexity.'

And, I thought that that's a very interesting thesis. I don't think that it's, like, the best or the most accurate way to sort of put it, but it was very interesting to do the research and try to find the earliest instances of people really talking with this language and really putting their finger on the problem. And, I think Hayek might actually be the first.

39:23

Russ Roberts: Now, I assume that people in the rest of your community or the neuroscience community would say, 'Well, okay, not right now.' Right?

Erik Hoel: Yeah.

Russ Roberts: It's only a matter of time. And, I think that, by the way, is an unbelievably common view of everything. You're talking about scientific incompleteness. Most people would say, and certainly most amateurs--smart people who aren't scientists--they would say, 'Oh, there's no such thing. Everything will fall before the power of the scientific method, and it's just a matter of time.' Is that commonly said? And, do you agree?

Erik Hoel: Well, it's probably commonly said. I think it's sort of, like, already been proven wrong, in that there are literally proofs in science itself, particularly within physics, that some physical properties are undecidable. So, I won't go into the deep details of some of these physical properties, but there's this question of what's the spectral gap of a physical material, which is a particular physical quantity. And it has been proven--it was proven in a Nature paper just, like, a couple of years ago--that it's just an undecidable property. There's just no actual scientific answer to this frameable physical question. Right? And it's due, by the way, to self-reference. It's because the people were clever enough to figure out--and I can't speak to the exact details of how they did this--how to get one spectral gap to, like, refer to another spectral gap. And it created this weird paradoxical cycle of self-reference, because that's where almost all of these incomplete, undecidable results come from. They all come from self-reference.

And so, we know this for all sorts of things. Take the card game "Magic: The Gathering": there is no fundamental winning strategy to that card game, because it's provably undecidable what, like, the best move of "Magic: The Gathering" is.

And so, if you think about it like that--if you think about: Well, listen, we know that there are these very simple systems where these paradoxes crop up--it would be relatively unsurprising to think: Well, when we just talk about reality as a whole, should we expect it to be paradoxical or non-paradoxical? Right? When we talk about the system as a whole?

And, I think it's quite reasonable to say: Well, we should really expect some paradoxes.

And, I think the scientific reply to the physical properties being undecidable has so far been, 'Well, those things don't really seem to matter that much.' You can have other heuristics for calculating things like these undecidable properties. They just don't really crop up so far in physics in ways that we really care about, so it's fine. But, I suspect that there's something really fundamental going on there when it comes to observers themselves. Right?

And, you know, if you think about what the neuroscience of consciousness is, it is an observer's attempt to explain themselves from within the system in which they exist. And, that looks a lot like some sort of self-referential problem.

And in those instances of self-reference, it's very easy to get paradoxes. Like, the very famous liar's paradox: 'This sentence is a lie.' Well, if I'm telling the truth, then I'm lying; and if I'm lying, then I'm telling the truth. Either way, there's no answerable conclusion from that. And these things are very easy to create. So, I think just broadly, it would be incredibly surprising to me if something as complex as reality and science didn't have some weird paradoxical holes in it. We know that even mathematics does, and mathematics is one of the things we're, like, most sure about in the world.
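A toy way to see that the liar sentence has no answer (my sketch, not from the episode): treat the sentence's truth value as a variable and check whether any assignment can satisfy the sentence's claim about itself.

```python
def liar_consistent_assignments():
    # "This sentence is false" asserts that its own truth value b
    # satisfies b == (not b). Check every possible truth value.
    return [b for b in (True, False) if b == (not b)]

print(liar_consistent_assignments())  # → []  (no consistent truth value)
```

The empty list is the paradox in miniature: neither True nor False makes the sentence's self-description come out right.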

43:41

Russ Roberts: I'm going to read this passage--I don't 100% understand it--from your book, but I love it. You can add anything you want to it, then I'm going to find you a different one. A different example of this similar thing. You say, you write:

A metaphor. Imagine a perfect map of an island. And, I do mean perfect--even though it need not be as large as the island, it is exactly to scale, such that every rock, tree, and even grain of sand is represented on the map, in incredibly fine detail. Astounding, but still, at first, conceivable. Now imagine that the map is on the island itself. What happens? The observer is now in the observed. For if we think on that perfectly detailed map, we see that it must contain, within it, a map of the map. And, that map must also be perfectly detailed, and contain a further map of the map. An infinite recursion. And what is a brain, if not a map of the world? Like maps, brains represent the world around them, creating a world model. But the brain is part of the world.

I love that. You could add anything you want to it. I'm not sure I fully understand it, but it's one of these self-referential paradoxes.
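The map's infinite regress can be sketched as a short program (the names are illustrative, not Hoel's): each level of a 'perfect' map must contain a map of the map, so no matter how much detail you build in, the innermost copy is still unfinished.

```python
def build_map(detail):
    # A map of the island at a given level of detail. Because the map lies
    # on the island, a faithful map must also contain a map of the map.
    island = {"rocks": 3, "trees": 12}
    if detail > 0:
        island["map_of_the_map"] = build_map(detail - 1)
    else:
        # no finite construction ever closes the loop
        island["map_of_the_map"] = "...regress continues"
    return island

m, depth = build_map(3), 0
while isinstance(m["map_of_the_map"], dict):
    m, depth = m["map_of_the_map"], depth + 1
print(depth, m["map_of_the_map"])  # → 3 ...regress continues
```

However large you make `detail`, the deepest nested map is always the incomplete placeholder: perfection would require infinite recursion.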

Erik Hoel: And, you'll note the broad takeaway from looking at paradoxes--perhaps the most famous and canonical being Gödel's proof. Right? If you look at them just broadly--so, without getting into their specifics--they are almost always based--in fact, as far as I can tell, they're essentially always based--on some sort of self-reference that gets introduced into the system by some sort of encoding.

So, in Gödel's Proof, this is called Gödel numbering. In the case of the island and the map, that's when you place the map back on the island. And, very broadly speaking, people don't conceptualize science as this formal system. So, I'm not saying it should necessarily be thought of as a formal system. But, one could imagine thinking of science as sort of this big formal system that's sort of proceeding by these rules or laws of empiricism and sort of grinding out axiomatically to find the truth about the world.

And, if you conceptualize science that way, then you have to find a spot for the observer in there. And it seems very possible that when you do that, you are triggering some sort of deep recursion or self-reference. And, we know that self-reference just destroys the epistemic purity of systems.

Like, Bertrand Russell spent decades trying to figure out ways to get it so that mathematics would be purified of self-reference. He was terrified of self-reference because he had come up with one of the first really concrete examples of the liar's paradox in set theory: consider the set of all sets that don't contain themselves. Does that set contain itself? If it does, it doesn't; and if it doesn't, it does. He was terrified of this, and he had very good reason to be.
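Russell's diagonal move can be mimicked over a tiny finite universe (an illustrative sketch, not Russell's actual formalism): build the 'set of all sets that don't contain themselves' from a small collection, and notice that the resulting set always escapes the collection it was defined over.

```python
from itertools import combinations

def powerset(atoms):
    # all subsets of the given atoms, as hashable frozensets
    return [frozenset(c) for r in range(len(atoms) + 1)
            for c in combinations(atoms, r)]

universe = powerset({0, 1})
# The 'Russell set' over this universe: members that don't contain themselves.
# (In Python no frozenset can contain itself, so R collects everything.)
R = frozenset(s for s in universe if s not in s)
# The diagonal construction always lands outside the universe it started from:
print(R in universe)  # → False
```

In full set theory there is no "outside" left to escape to, which is exactly why the construction explodes into contradiction there.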

And it's sort of funny: When you think about neuroscience as a whole, the part that's self-referential is our attempt to understand scientifically where consciousness comes from. Why is it that particular experiences are related to particular neural firings? And, we're having so much trouble trying to figure that out. There is no well-accepted, leading, scientific theory of consciousness that anyone can put forward. You can sort of put forward: maybe we know a bit about attention or maybe we know something about how concepts are manipulated in the brain or something. But, fundamentally, if you just give a neuroscientist the most detailed map of neural activity of a human brain you could, there is no theory they could apply to figure out exactly what that person is thinking or feeling or experiencing at that moment.

They could throw some machine learning at it or something, but there's no formal theory the way that there are physical theories that tell us--okay, we have the calculus that explains how the leaf falls. There is no equivalent of that in neuroscience that explains why it is that this particular neurodynamic would be associated with this particular experience.

And, we seem to just be having a huge amount of trouble in this very specific case of self-reference, so it makes me very suspicious that maybe something is going on here.

Russ Roberts: I'm not sure the falling leaf is the best example--

Erik Hoel: Yeah. There's a lot of complexity in there--

Russ Roberts: subject to air currents, chaotic jostling, and so on.

48:42

Russ Roberts: I'm going to read you--I think you'll like this if you don't know it. This is a short parable. Supposedly an Arabic tale. I've seen a version of it in the Talmud. But, the most eloquent version of it is told by Somerset Maugham in a play that he wrote in 1933 called Sheppey.

Now, this excerpt is Death narrating. Death speaking. Okay? So, this is a little monologue by Death. If you want, you can imagine it as a skull carrying a scythe and, so on:

There was a merchant in Baghdad who sent his servant to market to buy provisions. And in a little while the servant came back, white and trembling, and said, 'Master, just now, when I was in the marketplace, I was jostled by a woman in the crowd, and when I turned, I saw it was Death that jostled me. She looked at me and made a threatening gesture. Now lend me your horse and I will ride away from the city and avoid my fate. I will go to Samarra and there Death will not find me.' The merchant lent him his horse, and the servant mounted it, and he dug his spurs in its flanks, and as fast as the horse could gallop, he went. Then the merchant went down to the marketplace and he saw me, Death. He saw me standing in the crowd, and he came to me and he said, 'Why did you make a threatening gesture to my servant when you saw him this morning?' 'That was not a threatening gesture,' I said. 'It was only a start of surprise. I was astonished to see him in Baghdad, for I had an appointment with him tonight in Samarra.'

And, of course, that's the paradox of free will, right? The servant thinks he's escaping death. Death knows the future, but the servant feels like he has free will. He doesn't realize that his attempt to escape his fate only fulfills it. And if he had stayed in Baghdad, he wouldn't have died because Death would have looked for him in Samarra.

So, anyway, I think our anxiety and uncertainty about free will, as you recognize in your book, is somehow tied into this self-referential challenge. I don't know how important it is. You concede as much, as well--about whether these peculiarities, these holes in our knowledge caused by these paradoxes, are important or not. But, again, I come back to this point that our feeling about who we are seems--you'd want science to understand that a little bit, but it seems to be struggling.

Erik Hoel: Yeah. I would phrase it as: there's almost a sense in which people have gotten over their skis, in that most of the arguments, say, against free will are very much assuming that we will have this a priori, perfect, complete understanding of the world. Right? They'll start with something like--there was actually a great case of this with Alex Garland who did the TV show Devs. Alex Garland did--he also did Ex Machina which was the sci-fi movie about the Turing Test--

Russ Roberts: I really like his stuff--

Erik Hoel: Yeah. He's a great director.

But, he has a TV show called Devs. The plot is around this quantum computer that can sort of perfectly predict the future, and people are going to have to reconcile with the fact that it's very obvious that they don't have free will because they can see themselves doing things that they're going to do. And it's sort of the psychology of it. So, it's a great TV show.

But if you actually think about the machine that could somehow, like, perfectly model the world--it includes people's reactions to what they see on the machine.

So, therefore the machine would need to have a perfect copy of itself also sort of running in the background, generating the future that the people it is simulating are seeing; and then that machine would need to--so you get it, right? So, you would immediately have this infinite regress of computation for such a machine.

So, it's not even clear if that's a conceivable thing. It sounds like a conceivable thing. It sounds sort of a priori obvious. But when you really start thinking about it, you're, like, 'Well, wait a minute: This machine seems like it really quickly jumps into a paradox of somehow it would need infinite computation because it also needs to model what it's modeling in the future and how the other people are reacting to it and so on in order to predict the future correctly.'

And so, oftentimes people will just assume when they're talking about free will, 'Oh, we have some sort of perfect--just imagine this perfect physical model of the universe and we just run it forward in time, and we see everything you're going to do,' and so on. And it's like: Well, it's not even clear that that's really a logically fully definable thing. Because, once you look at that and make reactions off of that, you need some way to predict what the thing itself is going to do. And so, again, you sort of end up in these paradoxical traps.
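The regress in the Devs-style predictor can be made concrete with a toy sketch (hypothetical, just to show the shape of the problem): a machine that must simulate a world containing itself ends up simulating itself simulating itself, without end.

```python
nested = {"count": 0}

def predict_world():
    # The forecast must include people's reactions to the forecast,
    # so the machine must simulate itself producing that very forecast...
    nested["count"] += 1
    return predict_world()  # ...which must simulate itself, and so on.

try:
    predict_world()
except RecursionError:
    print("gave up after", nested["count"], "nested self-simulations")
```

Python cuts the regress off at its recursion limit; a real predictor embedded in the world it predicts would face the same regress with no limit to save it.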

And, I think that these crop up more than people realize. And that we really shouldn't--until we really fully reconcile the extrinsic with the intrinsic--until we really do have a full understanding of how that's reconciled--I think it's just a bit too early to say something like, 'Well, you don't have free will,' or, 'We can very clearly--we'd be able to predict your [?8-year?] actions ahead of time,' or something like that.

It's like--well, but first you have to complete science. You can't have an incomplete science--as it currently stands. And, maybe it's fundamentally incomplete, anyways. So, then you really can't ever complete it.

But, certainly I don't think you could argue that it's completed now, with the state of neuroscience being what it is and the difficulties that it has in explaining how we have experiences. Certainly, I don't think you can say that science is complete.

55:20

Russ Roberts: Well, it's not very different from--I knew you were very worried about AI [artificial intelligence]. But, one of the worries about AI is it will, quote, "Figure everything out," and then know exactly what you would do in different situations and be able to manipulate you.

And, I think that's--again, as I said back in 2014 when I talked to Nick Bostrom--that's the medieval view of God. God, as seen in many religions, is omnipotent, omniscient. Knows everything that there is to know about the past, present, and the future. And, I am skeptical that a human being can create a system that could be omniscient: that computer that knows where every atom is at every point in time in the universe.

Now there's quantum arguments for why that's not possible. And then there's a debate about whether that really changes anything in terms of prediction, and so on.

But, these recursive self-referential problems seem to me to be relevant.

Erik Hoel: Yeah, and it's funny you mentioned that. You know, there's a small section in the book about Boethius, who was a Roman consul and philosopher--this is around the Fall of Rome and the beginning of the Dark Ages. And, he writes a really famous book, one of the most famous works of medieval literature, called The Consolation of Philosophy, while he's awaiting his execution. And in it, he gives this argument--he's trying to figure out: can we still have free will with an omniscient God? And, one of the things that--

Russ Roberts: An old problem--

Erik Hoel: Yeah. An old problem.

And, one of his ideas is that: Well, God is outside of time. So, it's sort of a mistake to think that God is jumping ahead and seeing what happens. Right? And then saying: Okay, I can predict what you're going to do. Because God has to see everything that happens all at once.

And so, He's not really making the same claim that you would make, that you're imagining you would make, if you could somehow see all of time. You wouldn't really make claims about--okay, I'm going to predict that this is going to happen because everything is happening concurrently.

But, I think there's even--we know so much more now. There was this revolution in terms of understanding of Chaos Theory. And, one of the big takeaways of Chaos Theory, and something that science as a whole still really hasn't dealt with I think--I think we're still figuring out exactly what this means and how to relate to it--is that there are some systems where there is no simplified model that you can use to really predict where they're going to be significantly far ahead of time in the future.

This is why weather is famously, like, unpredictable.

And it's just: no matter how good our science gets, our predictions of the weather will never be so good that we know, say, exactly what the temperature will be on this day next year. That is mathematically, provably impossible. Right? We know we cannot figure that out because of these tiny variations in the system. The only way you could figure it out would be if you had an exact duplicate.
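A standard toy model of this sensitivity (my illustration; weather models are vastly more complicated) is the chaotic logistic map: two starting points that differ by one part in a billion soon land on completely different trajectories.

```python
def trajectory(x, r=4.0, steps=60):
    # iterate the chaotic logistic map x -> r * x * (1 - x)
    path = []
    for _ in range(steps):
        x = r * x * (1 - x)
        path.append(x)
    return path

a = trajectory(0.200000000)
b = trajectory(0.200000001)  # a one-in-a-billion 'measurement error'
gap = max(abs(p - q) for p, q in zip(a, b))
print(gap > 0.1)  # the tiny initial error has grown to order one
```

Since any real measurement carries some error at least this large, long-range point forecasts of such a system are hopeless no matter how exact the model.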

And so, the easiest way to talk about this is in terms of programs. And so, one way to say this is that there are some programs that the only way to figure out what the program is going to do is to run it.

You can't, sort of, look at it and cogitate ahead of time, 'Okay, exactly what's going to be the outcome of this program?' You just have to run the program, and then see what happens.
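A classic example of such a program (my example, not one Hoel gives) is the Collatz rule: no known formula tells you in advance how many steps a number takes to reach 1; you simply have to run it.

```python
def collatz_steps(n):
    # Apply the Collatz rule (halve if even, 3n+1 if odd) until n reaches 1.
    # No known closed-form shortcut predicts the count: you just run it.
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(6), collatz_steps(27))  # → 8 111
```

Nearby inputs behave wildly differently (26 finishes quickly; 27 wanders for 111 steps), which is what 'computationally irreducible' feels like in practice.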

And, this brings us to the question of: 'Well, what is our brain more like? Is it more like a program that you have to run to figure out what it's going to do, or is it like a program that you can figure out ahead of time?'

I think we probably have pretty good reasons to think that it's the sort of program that you have to run to figure out what it's going to do.

But, then you're in a very strange circumstance, right? Because all you're saying is, 'Well, I can predict what you're going to do. But, see, the way I'm going to do that is that I create a perfect duplicate of you. And then I watch that duplicate of you do stuff. And then I go back to you and I laugh and I say: Ha ha; I know what you're going to do now.' Right? And, it's like: Well, is that even worth the word 'prediction' anymore? Are you really predicting what I'm going to do? Right?

It's like this weird case of observation plus time travel. It's not really prediction in any fundamental sense.

And, you certainly couldn't predict what the instance you created of me was going to do. So, it seems like if we think about free will, it's like all you did was sort of shunt the free will over to the first clone. Right? And that clone is doing stuff that you never could have predicted.

And now you're coming back here and telling me I don't have free will. And now I can just sort of--I don't know, be like a--well, I have an identity with the first clone, so, and you couldn't know what I was going to do then, so I still have free will.

And, I don't think that modern understanding has really grappled with how deep the damage chaos theory has done to prediction, and how bled of any real substance the notion of prediction has become in these systems that are fundamentally chaotic.

1:00:23

Russ Roberts: Going back to economics now, it reminds me of the Lucas Critique, which is basically saying that policymakers are fundamentally constrained by the fact that what they try to impose on the population has a reaction on their part that they can't always anticipate, and therefore they can't really get to the outcome they think they can get to. I'm not sure I said that right, but we'll put a link up to Lucas Critique to treat it fairly. But, it's basically the idea that policymakers might think they have all the levers and dials of the economy, but it's an inherently impossible thing.

And, I think the Austrian Economics approach, which is Hayekian and Ludwig von Mises are the main proponents of it--most famous proponents--what they're arguing is that the knowledge of ourselves is in our brains. It can't be written down and it can't be put in a book.

Meaning: if I asked you, 'If the price of such and such went up, how would you respond to it?' Well, you don't know until you've experienced it. And, what prices do is coordinate the information that we have about ourselves that gets produced through the experience of price changes; and that in turn leads to a price out in the world that sends signals to both producers and consumers. I didn't say that very well, but the best version of it is actually written by James Buchanan. We'll put a link up to it. It's where he refers to what he calls the market process. It's not a set of mathematical equations that lets you predict what prices are going to be in advance, because the knowledge that you need to find those prices gets produced through the process of the change itself--that's a little bit better.

And, that's very similar to your clone example, and whether we could predict what a person's going to do. Well, we could, of course. It's like saying in economics, 'Yeah, when we have the equations--now, we don't have those until after the fact--but then we do have them.' But then, what's the point?

Erik Hoel: Yeah. Precisely. And, I think even if you look at somebody like Stephen Wolfram who wrote a book called A New Kind of Science in which he really dives into these new notions of chaos theory and whether or not systems are reducible--whether or not they are programs that you need to run to figure out what they're going to do or whether or not you can understand them ahead of time. He even--and this is a physicist--talks about free will in his book, where he says--this is a quote from him:

In traditional science it has usually been assumed that if one can succeed in finding definite underlying rules for a system then this means that ultimately there will always be a fairly easy way to predict how the system will behave....

But now computational irreducibility [Hoel: 'which is another term for this,'] leads to a much more fundamental problem with prediction. For it implies that even if in principle one has all the information one needs to work out how some particular system will behave, it can still take an irreducible amount of computational work actually to do this.

In other words, if you had a perfect model, you would still have to run the model to figure out what's going to happen. And then, we get into this weird state of, 'Okay, so in what possible sense--is this really the sense in which people use the term prediction?' And, if someone says, 'Oh yeah; physics can predict what you're going to do,' and what they really mean is, 'Well, if there was some sort of perfect clone of you in a different physical universe and we froze our universe in time and we ran that other clone forward in time, saw what it was going to do, we could then return to our universe, and you would do the same things.' It's just not a notion of prediction that seems very strong. I mean, there's all sorts of problems with that. And, I don't think that a lot of the arguments against free will have really grappled with how complexity theory has changed our understanding of complex systems and what it means to predict them.
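[An editorial illustration of the computational irreducibility Wolfram describes: his Rule 30 cellular automaton. As far as anyone knows, there is no shortcut formula for the center cell at step n; the only way to know it is to run all n steps.]

```python
# Wolfram's Rule 30 elementary cellular automaton, a minimal sketch of
# computational irreducibility (added editorially, not from the episode).
# Each new cell is: left XOR (center OR right).
def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

cells = [0] * 31
cells[15] = 1  # start from a single "on" cell in the middle
center = []    # record the center column over time
for _ in range(20):
    center.append(cells[15])
    cells = rule30_step(cells)

# The center column looks effectively random; to get step 20 you had to
# compute steps 1 through 19 first.
print(center)
```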

Russ Roberts: To me, it's like: Draw a map of the island, and when you draw the map, make sure you also draw the fact that the map is on the island. And, then you are saying--and that can't be done. That can't be done. It means a map within a map, within a map, within a map. And, although part of me says that's just a silly paradox, maybe it's not silly. Maybe there's a little more to it about how the world actually works; and maybe our brains can't master those things. I don't know.

1:05:19

Russ Roberts: Let's close with William James's depression. That's a nice way to end, I think. For three years he had a little trouble getting out of bed. Tell us how he snapped out of it.

Erik Hoel: Yeah. William James was--he was a young man at the time. I think he was bedridden for--not quite bedridden--for three years, but he was in and out of this deep existential depression. And, it was because he found it completely inconceivable how anyone could have free will. And so, he questioned his own will, his own purpose, his own meaning. A lot of the work that he later did, like The Varieties of Religious Experience, was all based off of this really deep, deep depression that he suffered.

And, one day he basically just came to the realization that this is just impossible to solve. You can marshal endless philosophical arguments one way or another.

Now, he didn't have the benefits of our modern mathematical understanding of causation, or chaos theory, or any of these other things, right? So, I think maybe we can actually go a little bit further than that.

But, he came to believe, 'Listen, this is just impossible. And so, if it's unanswerable, I'm just going to choose.' And, he wrote in his diary: 'My first act of free will shall be to believe in free will.' And, he just sort of chooses, in a sense, to come out of this depression and get on with his life. I think that there's a lesson in that.

Russ Roberts: My guest today has been Erik Hoel. His book is The World Behind the World: Consciousness, Free Will, and the Limits of Science. Erik, thanks for being part of EconTalk.

Erik Hoel: Thank you so much, Russ. It's a pleasure to be on again.