Russ Roberts

Byers on the Blind Spot, Science, and Uncertainty

EconTalk Episode with William Byers
Hosted by Russ Roberts
William Byers of Canada's Concordia University and author of The Blind Spot talks with EconTalk host Russ Roberts about the nature of knowledge, science and mathematics. Byers argues that there is an inherent uncertainty about science and our knowledge that is frequently ignored. Byers contrasts a science of wonder with a science of certainty. He suggests that our knowledge of the physical world will always be incomplete because of the imperfection of models and human modes of thought relative to the complexity of the physical world. The conversation also looks at the implications of these ideas for teaching science and social science.

Size: 32.3 MB
Right-click or Option-click, and select "Save Link/Target As" to download this file as an MP3.


Highlights

0:36 Intro. [Recording date: May 2, 2011.] What is the blind spot of science? What do you mean by it? I guess it is the notion that models or mathematics or theory captures reality completely and that it's sort of an accurate picture of what's going on; that is the most general version. And, I go on to talk about the nature of our rational consciousness--this idea that it can in fact provide us with information which is--I'm not saying the information is not useful--but the idea that it provides information which is absolute, objective, and will never be changed: that is a blind spot for me. I think that is the way a lot of laypeople and some scientists think about their disciplines: we just push forward and we eliminate ignorance and we light up the darkness as we move along; and it's just a matter of time before we figure everything out. You are suggesting that's a little bit incorrect. I think it's simplistic. In the book I draw this analogy: we think that science proceeds as if you are exploring some new territory, and you just conquer one bit of territory and move on to the next. I claim that actually you never totally conquer any territory. You can always come back to questions which were raised in science in the past and understand them from a different direction, deeper. This happens all the time. Often we understand something from a certain perspective, with a certain paradigm in mind. When the paradigm changes, we go back to the old things we were talking about and find that they look different. I was struck by that; in fact, earlier in the book you talk about the elusiveness of deep concepts. One of the examples you gave was randomness. You can look up a definition of randomness in the dictionary, but you never really understand it. You can always go deeper. Talk about that idea. Randomness is a wonderful idea. I love it myself.
When I was starting to think about it--I've now written about it in various ways in two books--I originally consulted many of my mathematical colleagues and statisticians and probabilists who use it as a foundation for their subject; and I asked them: What is randomness? Interesting: it is almost an embarrassing question, because people have a lot of trouble making that concept concrete. So, on some informal level, everyone knows what it means; but on a very concrete mathematical level, there are actually multiple definitions of randomness. And I want to say that not only do people not agree on what randomness is, but it means something slightly different as you move from discipline to discipline--from the theory of evolution, for example, to analysis of the validity of an experiment, to what the IBM researcher Gregory Chaitin calls algorithmic randomness. Everyone's using the same word, but everyone's using it in a slightly different way. The thing about randomness which is so interesting is that it's like an infinite progression. You can always understand randomness better, because it seems to me that there is a sort of primal randomness which, in a certain sense, is inarticulate. Once you articulate it, then you are talking about articulated randomness. And you can work with that definition, do wonderful things with that definition. But it's always possible, because randomness is that kind of open-ended concept, to go back to it and look at it from another direction. A lot depends on what you are interested in and what you are making of that concept. Even mathematical concepts are in some sense open-ended in that way. I don't think that is the normal way people think about mathematics.
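Chaitin's algorithmic notion can be illustrated very roughly with off-the-shelf compression: a sequence is "random" to the extent that it has no description shorter than itself. The sketch below is my illustration, not Byers's or Chaitin's formalism; it uses zlib compression as a crude, computable stand-in for the (uncomputable) algorithmic complexity.

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Size in bytes after zlib compression: a crude, computable
    stand-in for the (uncomputable) algorithmic complexity."""
    return len(zlib.compress(data, 9))

# A highly patterned sequence: it has a short description, so it compresses well.
patterned = b"0110" * 2500            # 10,000 bytes

# A pseudorandom sequence: zlib can find no short description for it.
random.seed(42)
noisy = bytes(random.randrange(256) for _ in range(10_000))

print(compressed_size(patterned))   # tiny: the repeating pattern IS the description
print(compressed_size(noisy))       # roughly as large as the input itself
```

The contrast is the point: both byte strings are the same length, but one is algorithmically simple and the other is, as far as zlib can tell, incompressible.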
6:01 Here's what you wrote in the book, lengthy quote: "You cannot understand a definition by parsing it. You acquire...." Of course, you are referring to mathematics, probability theory, statistics, and science; but in economics I've been struck--and the reason I underlined that passage in the book, and it rang true to me very deeply, is that for me there are a handful of concepts in economics that are primal, that I feel the same way about. They are deep, they can be explored over and over again; and I find that my own understanding expands even when I understand them fairly well. I always find I can learn something. I want to give a couple of examples of that, which listeners will be familiar with from previous programs. In the Wealth of Nations, by Adam Smith, written in 1776--you'd think we'd have mastered it by now--he writes about the division of labor and exchange and trade. That concept--you can squeeze it into a 2x2 table, you can talk about gains from trade, you can draw a little diagram about it; but for me it is a rich concept, and I am always finding I can go a little bit deeper. Another example, which I hope we'll come back to, is emergent order--complexity--which you write about later in the book in the context of science; but of course it's a very important concept in economics. It's implicit in Adam Smith--it's the idea of the invisible hand--but the way it's been used in economics in the modern era by Friedrich Hayek and others is such a deep concept for me that I learn something new about it, I think, every week or month. I see it in different guises; I see it from different perspectives. It's richer to me than it was when I first learned about it 5-10 years ago. I think that way of thinking about science and understanding and depth is so valuable.
I think you put your finger on a key point: in this entire book I basically, in part rhetorically, compare different attitudes that one can have toward science; and this latter attitude, the exploration attitude, is an attitude which regards science as a continual work in progress. Often one thinks about the extent of science in these terms: there is more stuff to learn, but what we know, we know. But I guess what I am saying is what you repeated: that in science, we actually work, as we do in mathematics, with terms that are undefined. That doesn't mean we are totally ignorant of them. Terms like time, space, number--these things do not have precise definitions. We have some sort of response to these things and we have a great deal of experience with them. But the idea that science is closed--even that an area of science is closed--I think is actually a dangerous idea. I recommend this open-ended attitude, that one explore the discipline; and then one can always explore further. That gives us a totally different attitude toward the results that we get and the systems that we create.
10:48 Another way you frame this in the book is to contrast a science of wonder with a science of certainty. Talk about that distinction. We've just been talking around it, and you cleverly led me up to this. In fact, the science of certainty is the idea that the theory is identical to what it's a theory of, and that the two fit--so that if you have a Newtonian theory of mechanics, that actually is the world, in a certain sense; the theory is interchangeable with the world, and it's over. A science of wonder is the idea that one is an explorer. One loves the terrain, one loves what one does, and one is trying to get deep insights into the nature of whatever reality it is that your particular discipline is involved with. I think these attitudes are considerably different. A science of wonder emphasizes unending creativity. The science of certainty emphasizes that one day we'll all be replaced with machines or supercomputers that will in fact be programmed with the latest theory; and we can stand back, and human beings will not be needed any more, because the theory is so absolutely perfect and complete. I think that's an illusion--an illusion and mythology that has existed since the time of the ancient Greeks: people had this dream--I call it the dream of reason--that you can totally capture reality, pin it down. My colleague here at George Mason University, Robin Hanson, was on this program a few months back, and he said: The brain is just chemistry, so it's simply a matter of time before we figure it all out; and then we'll be able to exploit machinery and technology to create better and bigger brains. It's the idea behind the technological singularity concept--the idea that there's going to be an enormous explosion in technological ability once we can mimic a brain with a computer. I take it you are a skeptic about that. I'm totally skeptical.
What I do is trace that dream back; and I think it first emerged in ancient Greece, in mathematics and in Euclidean geometry--the idea that we could in fact capture things like this. People think this is remarkably new, but actually it's an old mythology with a new technological manifestation, namely the computer. Now we are in love with computers and we think their potential is limitless. I always think that the technology we create is not really new. What it does is take a certain human ability and accelerate or amplify it. So, a telescope amplifies the sense of sight, and a computer amplifies our rational faculties. My claim is that human intelligence is more than reason. Reason is of course important, but human beings are larger than reason. Recently, in The New York Times, David Brooks had a column, I think called "The New Humanism," where he claimed that one of the problems with our culture is the reliance on rationality, in his words, to the detriment of things like emotion. But I would say in general, because we believe so strongly in this mythology--this mythology that the machines are coming down the line that will totally replace human beings--we tend to downgrade human capacities. We don't realize that when we have a model, we feel in some sense that it will replace qualities like judgment. In my mind, you can never replace human judgment. I'm operating from what you might call a humanistic perspective, which says that computers are wonderful--I love mine--but they are intrinsically limited in what they can do. What could I say to your colleague? In my mind, it's a projection of human inability to deal with uncertainty. Let me play Robin for a minute--which I'm a little uneasy about. One of the things I asked him was: Is a computer going to create Beethoven's 10th Symphony? What is the role for creativity?
I think Robin and others would say: Well, we romanticize that. It's still just neurons firing. A great chess player we think of as an artistic genius, but a computer has now surpassed almost all of the great chess players; it's again just a matter of time before we eliminate this romance, this idea that there is something more than the brain. I have a number of comments. One is that when we say--to take the particular example we always use--that computers play great chess, first of all I would claim there are actually two games there. There's computer chess and human chess, and it isn't exactly the same game. Humans play chess very differently than computers. In a certain sense you are comparing apples with oranges when you make that claim. You want to stop and explain that? What happens is that computers, because they have such a large memory capacity and do very simple operations so quickly, can go through literally millions of possibilities in a very short amount of time. Human beings cannot do that, so they have to rely on other capacities they have that the computer doesn't have--things like intuition, judgment, and feel. Things which are difficult to quantify. In fact, human beings have all these capacities which are essentially non-quantifiable: that is to say, we have parts of our brains that operate non-digitally.
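The brute-force style of machine play Byers describes can be seen in miniature. The sketch below is my toy example, not anything from the conversation: it exhaustively searches the game tree of a simple Nim variant rather than chess, but the principle is the same--enumerate possibilities instead of relying on intuition or feel.

```python
from functools import lru_cache

# Toy stand-in for the chess point: Nim with a pile of sticks, players
# alternate taking 1-3 sticks, and whoever takes the last stick wins.
# The machine "plays" by exhaustively searching the game tree -- the
# brute-force enumeration the conversation attributes to chess computers.

@lru_cache(maxsize=None)
def player_to_move_wins(sticks: int) -> bool:
    if sticks == 0:
        return False  # the previous player took the last stick and won
    # You win if ANY legal move leaves the opponent in a losing position.
    return any(not player_to_move_wins(sticks - take)
               for take in (1, 2, 3) if take <= sticks)

# Classical Nim theory says the player to move loses exactly when the
# pile size is a multiple of 4; the exhaustive search rediscovers that.
print(player_to_move_wins(21))  # 21 is not a multiple of 4
print(player_to_move_wins(20))  # 20 is a multiple of 4
```

Nothing in this search resembles a human player's "feel" for the game; it simply visits every reachable position, which is exactly the apples-and-oranges contrast being drawn.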
18:11 I'm going to derail you a moment here because I think it's such an important point. One of the things you discuss in the book is idiot savants--people who can multiply large numbers instantly, who can see patterns in numbers instantaneously. I think the assumption of many is that they just have a really good hard drive--that they are just doing the calculations really fast. But your point is that no, they are not: they are doing a different thing. They are not going digit by digit and doing multiplication or looking for prime numbers or factors or things like that. It seems to be true, from the reading I've done and the testimony--some of which is in the book--of people who are savants but not idiots (that is, people who have high intelligence but also have savant abilities, and there have been a lot of scientists like this), that these people think differently about numbers than probably you and I think about numbers. Certainly than I do. I give this example of the wonderful Indian mathematician Ramanujan, who had this feel for numbers. Very large numbers were like people to him; they had individual personalities. Do you know that story, when he was visited in the hospital by G. H. Hardy? Hardy mentioned the number of his taxi, and I think he says: It's 1729, not a particularly interesting number. And Ramanujan says: On the contrary. It's the smallest number that can be written as the sum of two cubes in two different ways. Unbelievable. What an intimacy that man had with the natural numbers. The question about that example is: When he heard the number, had he looked at it before? If you asked me whether 13 is a prime number, I'd say: Sure it is. Because I know that. I don't have to think about it and ask what goes into 13. But 1729--had he thought about that before, or did he see it in a flash?
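Ramanujan's claim about 1729 is easy to check by the mechanical route a computer would take--exhaustive enumeration, precisely the opposite of whatever flash Ramanujan had. A brief brute-force sketch of my own:

```python
from collections import defaultdict
from itertools import combinations_with_replacement

def smallest_taxicab(limit: int = 20) -> int:
    """Smallest number expressible as a sum of two positive cubes
    in two different ways, searching cubes up to limit**3."""
    ways = defaultdict(list)
    for a, b in combinations_with_replacement(range(1, limit + 1), 2):
        ways[a**3 + b**3].append((a, b))
    return min(n for n, pairs in ways.items() if len(pairs) >= 2)

print(smallest_taxicab())  # 1729 = 1^3 + 12^3 = 9^3 + 10^3
```

The program confirms the anecdote, but only by grinding through every pair of cubes; it says nothing about how a mind could hold that fact ready in an instant.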
It seems like a cheap parlor trick, as if Hardy texted him before he got there and said: I'm in cab 1729; make sure you can think of something cute for me when I get there. That's not what he did. But is it just something that he saw instantly? This is pure speculation on my part, but let me give you another example, which is in one of Oliver Sacks's books, about idiot savants who were really idiots. That is, they had low IQs, of approximately 60 or less. And these people could tell you whether a 6-digit number was prime or not. The way Sacks describes it, these two twins who were his patients didn't even really have a concept of what multiplication was. And yet they could say--not instantaneously, but somehow, through some process that went on; and they were clearly not dividing by every smaller prime number. Something was happening which allowed them to draw the correct inferences. So, I just think myself that number is one of those really primal concepts. I think it's primal not only in mathematics but in every area where mathematics is used. Something very basic, connected to our ability to make order out of the world, which is what an intelligence does. And mathematics is connected to that ability to order our experience. Clearly it's built into us genetically in some very basic way; and when you study these idiot savants you realize that perhaps there is another form of intelligence that they have access to, since the normal intelligence that's measured in IQ is clearly deficient in these people. It's interesting to speculate. And I think what happens in some very great mathematicians--if there is more than one form of intelligence, they have both forms of intelligence, developed to a very great degree. When you do research you have a mental map of what's going on, and not everyone has the same mental map, the same picture of a very complex situation.
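For contrast with whatever the twins were doing, here is the mechanical route to the same answer, in a sketch of my own (not a description of the savants' process). Trial division needs candidate divisors only up to the square root of n, so even a 6-digit number takes at most about a thousand divisions:

```python
from math import isqrt

def is_prime(n: int) -> bool:
    """Primality by plain trial division: if n has any factor,
    it has one no larger than sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime(999983))   # a 6-digit prime
print(isqrt(999983))      # 999: the largest divisor we would ever try
```

A machine answers the question with at most ~1000 tiny divisions; the puzzle in the anecdote is that the twins apparently reached the same answers without anything like this procedure.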
I know, when you do research, that there are people who have much more subtle, much richer mental intuitions of a situation than others. Of course, this kind of talk creeps some people out. Not me, but it does make some people very uneasy, because it sounds mystical, a little bit like religion; you are suggesting there is something supra-rational, supra-natural about our abilities. Oh, no, I'm not saying that at all. I'm just saying that we have different mental abilities, and I'm advocating that we tap all of our mental abilities. A very simple way of talking about it, which I mention in the book, is to think of it as left-brain versus right-brain abilities. When we talk about intelligence, we usually talk about the left brain. But the right brain is also functional. Some people think one side is more like digital and the other side is more like analog. That's just a vague way to try to grab it; those are just words that don't have clear meanings. But deep intuition is a fact. You said some things just sound mystical. I think sometimes mystical is a word that's used in situations where there is a limitation on the rational intelligence--that is to say, if things can't be reduced to a kind of rational system, then that by definition is mystical. I don't think it's necessarily mystical at all in the sense of being other-worldly. It just means that the brain is more complicated than we think it is. Or that there are other aspects--as you talk about many times in the book, certain aspects of reality are not amenable to what we think of as the standard techniques of logic or certainty. We started to talk about this when we said that things like randomness or continuity don't have definitive definitions. Even randomness is open-ended in this way. That doesn't mean that you don't try to define randomness. Of course you try to define randomness.
And if you get a really good definition, you get a really good theory as a consequence. But if you want to, you can come back someday, because you are interested in something else, and get another definition of randomness. A lot of the definitions of randomness were originally connected to games of chance. But with the advent of the computer, these other ideas, like algorithmic randomness, were invented. In response to a problem, you revisited this idea: What does it mean? What does chance really mean? It's such a subtle idea. Exactly.
27:09 And I want to come back to that, because it's such a subtle idea. The idea that by defining it you can then say, well, I understand it now, suggests a recipe-like, cookbook nature to reality, which is certainly not true. I remember giving a talk on emergent order to a group of economists and it fell incredibly flat. They were unimpressed; they said: Well, we know all that. There's complexity in the world. I wanted to say: But no, you don't. It's so rich; you don't see it the way I do. I want to switch gears for a moment and talk about teaching, which is not something you talk about in the book in particular. As a teacher, in economics, I'm often torn between the science of wonder and the science of certainty. It's very tempting to teach equations and graphs with neat, clean answers that make for good exam questions. But as I get older, I find myself really interested in conveying a sense of wonder. So, when I talk about the price system--which Hayek, by the way, called a "marvel," really capturing your concept of the science of wonder--it's much deeper and more important for my students to be amazed by the price system than for them to be able to answer an exam question about it. But then again, I have to give exams. And so, pedagogically, I'd love to give a set of wonder-full lectures rather than analytical lectures, so I do a mix of both. I'm curious whether that tension arises in your teaching of mathematics. My wife's a math teacher and she and I talk about these issues all the time. I'd love to get your thoughts. Absolutely. In fact, mathematics is, in a way, a very difficult field, because for many years the prevailing philosophy of mathematics was something called formalism, which said essentially that mathematics is nothing but its formal content--its theorems and definitions and proofs. And when you proceed in that very formal way, of course it's easy from the point of view of setting examinations.
But you find after a while--two weeks after the final exam, when you bump into your students and realize that they don't recall a thing of what happened all semester--that you've failed to give them a grounding in the fundamental concepts, which would allow them to develop a kind of intuitive feel for the formalism they are dealing with. So my answer is: you have to develop the other side, and I grant that this makes teaching into an art and not a science. It's easy to convey the facts, but it's hard to convey the wonder of the situation, how remarkable it is--that even in mathematics, these are problems that actually intelligent human beings found intellectually exciting and stimulating, and not just a set of techniques that you have to memorize. Or a set of blocks you have to arrange in a certain order. It's like building a Lego house: as if a proof is just showing the different steps, and when you are done you have a house. I think you lose the poetry of the discipline, for sure. That's precisely my point. In my first book, which was about mathematics, I made the point that every proof is based on an idea. You mention this wonderful analogy of a Lego system. We think of a proof as a kind of Lego system, but actually, if you look carefully at any argument in any subject, you'll see that in fact it's built around an idea. If you've got the idea, in my opinion, you can forget about the details and reconstruct the argument. But when you study a subject from a formal point of view, no one tells you what the ideas are. Funny how that works. That's something you talk about and evoke very nicely--the sense of exploration, the sense of discovery. Obviously, if someone tells you how to get from A to B, it looks rather mundane: you just go here, take a left here, take a right there. But the actual journey, for the person who first mapped it out, is almost an unparalleled human experience. I'll give a math example of it.
One of the most beautiful things that I've ever seen is the documentary on Andrew Wiles's proof of Fermat's Last Theorem. It tries to capture, imperfectly--the irony of all this is that it can't be captured perfectly; that's almost what we are saying--the flavor of that creative endeavor. And it can't. But it gives you the flavor of it. It gives you a sense of it. I find it so moving. It's a beautiful story; there's a lot of poetry to the story. But it moves me close to tears when I watch it, because there is something extraordinarily human and unique about that form of creativity. And it's true in every field. It's not as dramatic in economics, but it's there in economics, in science, in art, in music. When you see something you haven't seen before, when you solve a puzzle that you thought you might not solve--that emotion, and the power of that--there is something intrinsic to our humanity in it. Not everybody has the gift of being able to do it, obviously. Extraordinary thing. I couldn't agree more. It's interesting that we were talking about teaching and have moved into talking about creativity; but for me, learning--when a student learns, they kind of recapitulate, go through the same stages as, the person who created the subject. I don't like the idea that it's just a bunch of geniuses who can be creative and that the average person has no access to that. Because it seems to me that when you are describing your emotional response to a situation of major creativity, in fact in some sense--there is a resonance. I lost you there. That does what? It resonates. Your experience resonates with the creativity of the situation. It is not different. It is of course not of the same depth. But the idea that you can appreciate it means that something in you goes off and you say: Aha, I know what that is, on some level. And that reaction is not an intellectual reaction.
It's something very deep--one is moved by these stories of extraordinary creativity even if one understands them imperfectly, as everyone does with Wiles's proof. I think there is something about creativity which is self-validating. You don't have to ask why. You just have to observe your own reaction to that situation. And I think, if learning is at base a form of creativity, then everyone is creative. It's not just a form of amassing facts--not a Lego set, but actually a creative experience. If there is no creative insight when a student is learning something, if there is no "Aha" experience, then it isn't true learning. I agree with that totally. I really think that great teaching is almost like replication: Come take my hand and follow me, and you can see the wonders of this tour of exploration that the originator saw and that I sometimes can see. You talk about some interesting synergies--that's not the right word--the richness of mathematical concepts. For me, as an amateur, a mere economist: I remember taking a high-level statistics class in which we proved the Central Limit Theorem. It was a beautiful proof. I don't remember the details, but it was a beautiful proof. But then the professor said: Well, let's prove it a different way. And I think we proved it three ways. Do you know what we would have used--some kind of functions, eigenfunctions? When we saw it proved again using a totally different mathematical framework, it was so moving again. It was so: Wow. To go back to Fermat's Last Theorem--even more amateur on my part--there's the Taniyama-Shimura Conjecture, which Wiles used to prove it, and which on its face seemed to have nothing to do with Fermat's problem. And suddenly it's like: Oh my gosh, this window opens up a different view into that extraordinary world. And that moment of human--you have to call it creativity--is awe-inspiring. That is the essence, to me, of creativity: this opening up of a totally new view.
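The Central Limit Theorem mentioned here can at least be watched in action, if not proved, by simulation. A small sketch of mine, using only standard-library Python: averages of many uniform draws, suitably standardized, cluster like a standard normal even though a single uniform draw looks nothing like one.

```python
import random
import statistics
from math import sqrt

random.seed(0)  # fixed seed so the run is reproducible

N_MEANS, N_DRAWS = 2000, 48
# Uniform(0,1) has mean 1/2 and variance 1/12.
mu, sigma = 0.5, sqrt(1.0 / 12.0)

def standardized_sample_mean() -> float:
    """Average N_DRAWS uniforms, then center and scale so the CLT
    predicts a standard normal distribution."""
    m = statistics.fmean(random.random() for _ in range(N_DRAWS))
    return (m - mu) / (sigma / sqrt(N_DRAWS))

z = [standardized_sample_mean() for _ in range(N_MEANS)]

# If the CLT is doing its job, z should look like a standard normal.
print(round(statistics.fmean(z), 2))   # near 0
print(round(statistics.stdev(z), 2))   # near 1
```

This is the simulation counterpart of the proofs the speakers recall; the characteristic-function or moment arguments explain why the bell shape appears, while the sketch merely exhibits it.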
37:32 But you know, what I would like to add is that in order to have these kinds of creative insights, you have to be prepared to pay the price. And the price means that you have to be prepared to grapple with difficult ideas. It's not all smooth sailing. In Wiles's case, he totally isolated himself from the universe for 6 years. It's brutal. People think genius is a question of raw intelligence--just some people are smarter and some people are dumber. But I think there's a lot to be said for what I call frustration tolerance: the ability to stick with a difficult situation until it resolves itself. You can't say that you personally resolve it, because of course it's unknown; you don't know which way it's going. It does sometimes seem to resolve itself in that flash: Aha, now I see it. I'm thinking in particular of Isaac Newton, who seemed to have the ability to hold these problems, as he said, in front of his mind's eye for days, weeks, months perhaps. And in Wiles's case, for years. Until somehow that flash came. Now, which of us have that ability? My students, if it doesn't work out in the first five minutes, are inclined to give up. Is it a lack of self-confidence? Hard to know exactly what it is. But how many of us are prepared to live with an unresolved situation over a long period of time? That's hard to do. In the case of Fermat's Last Theorem, there was some chance it wasn't true. Of course; and why would you want to work on it otherwise? It was kind of a tour de force--you kind of assume it's true, and then you've got to make it ironclad. But there's a chance it's not true. Well, there was a lot of numerical evidence, you know. I understand; but there's always some uncertainty. Absolutely. That's the whole point. When you are studying an unsolved problem in mathematics, you have to guess: What does your gut tell you? Is it true? If your gut tells you it's not true, you look for a counterexample.
If it tells you it's true, you look for an argument. So, you are moving in two totally different directions depending on what you guess; and you have to make that guess before you start. Let me say one more thing about learning; what you said before is important--that it is both a creative act and that it requires patience and tolerance of frustration. One thing I tell my students, and maybe it's not very helpful, is that the way you learn economics--meaning the intuitive side--is by looking off into the distance and thinking about things and struggling with problems and puzzles and weighing different possible answers and looking for holes in them. There are different ways to do that. You can do it by looking off into the distance. That's very hard. A cheaper way for most people, and a more tolerable way, is to argue about it. So, I always encourage my students to get into study groups. And part of the reason I say that is that's the way I learned in graduate school; study groups were important. Absolutely; I couldn't agree more with that. In fact, I think it's true for many scientific fields, and it's certainly true for mathematics, that people seem to think it's something you have to do by yourself. I've had students tell me they've never discussed mathematics with anyone--students who have studied for years and years. They think it all happens in isolation, at home at their desk. But in fact, I also always encourage students to form study groups and talk to one another. Just trying to explain how you understand it to someone else is very beneficial. That's why it's great to write a book--because you are trying to figure it out for yourself. I'm going to now posit three ways you can learn. One is you stare off into the distance and think real hard, which is very hard for most of us to do. Six years would be a really long time, but even ten minutes is hard for some of us. The second way is to talk about it; and that's what this program is.
When it goes well, it's two people sharing ideas, and the listeners are experiencing that through our ears and voices. The third way is to write it down; and that's why I really believe it's important to write clearly--people who do not write clearly often don't fully understand what they are talking about, even though they think they do. By writing it down, especially for somebody who is not an expert, you are forced to come to grips with it yourself. We learn by talking and explaining it to others, which can be done by writing as well. For me, that's why blogging has been so useful. It's a therapeutic way to explore ideas in print, with a little bit of a check, because you know other people are reading it. I agree. My experience with writing--which I came to relatively late in life, writing in words rather than mathematics--is that it's a very interesting process. I think it's a form of thinking. I never write as though what comes out in the first draft is what ends up on the page. It's continually revised, throwing stuff out. I learned something that has been wonderful for me. When I started to write, I always thought that it was a question of either succeeding or failing: either you wrote wonderful stuff or you wrote garbage. Sometimes I would have this terrible day--well, fairly frequently--where I would not get one decent sentence down on paper. When I looked at it the next day it was just embarrassing and bad. I've gone to school and been brought up in a very competitive intellectual atmosphere, and I was thinking it was a win-or-lose situation. What hit me when I started to write--and it struck me like a thunderbolt--was that the good day was connected to the bad day. I thought the bad day was a total waste, but it wasn't.
Somehow stuff was still going on in my mind overnight--I slept on it, and the next day, as though by magic, the whole thing had worked itself out. And that was a really good feeling. To go back to students and learning, I think one thing students have to do is have a little bit more self-confidence in the process, in their own processes. I know that in mathematics students often quit because they are judging themselves. If there is a certain very small problem and they don't get it, all of a sudden they think: I always knew I was too stupid to do this.
45:57I'll tell a story that always makes me a little bit sad, but it's a powerful story. One of my professors at the U. of Chicago was Sherwin Rosen, a wonderful economist, a slightly outside-the-box thinker, and for that reason all the more thought-provoking. He told me once that when he had taken the Core exam as a student there--I don't know if this is true verbatim, but this is what he told me--after the exam was over, Milton Friedman had talked to him and told him that he should become a plumber. I don't know whether that was a joke, but I don't think Sherwin took it as a joke. I think it was a devastating comment, and I think it created, for better or for worse, a lifelong insecurity about his ability. Which we all have. One of the most remarkable things about skill and ability and mastery--Bill Russell, one of the great basketball players of all time, used to throw up before every game because he was nervous. You'd think: How could one of the greatest basketball players of all time be nervous? The answer was he was nervous about living up to his own expectations and the expectations of the people around him. It doesn't matter how great you are. Maybe being nervous was part of what made him such a great basketball player. Oh, there's no doubt about it. So, when my students or friends tell me about their experience, that they're really scared, I say: That's good. Exactly. These negative feelings--I think the trick is to be able to channel them in a positive way. It's not a question of feeling insecure or not; it's what are you going to do with that insecurity? Are you going to give up or redouble your efforts? I think if a teacher can lead a student to accept these negative things that happen and learn to use them, to work with them, rather than just giving up on the situation, that's a life lesson that's beyond mathematics or economics. It works for everything. In my experience, that's the key to creativity.
Not expecting yourself to come up with something that's brilliant every time, but using whatever comes up in a productive way, even the so-called failures and negative states. There are no failures. That's a dramatic claim. I'm not sure that's always true. Depends what you mean. I can give you a counterexample. And it was meant to be dramatic in a way, just to counterpose it to the normal way of learning, which is algorithmic. If you are doing a multiple-choice exam, then failure is obvious. It's defined by what percentage of the answers you've ticked off the right way. Once you start talking about creativity and creative insights, you are in a totally different universe. I just want to say, self-referentially, that it's one of the themes of your book, this idea that the observer and the participant sometimes have totally different ideas, and sometimes as participants we step totally outside. So here we are having this conversation about thinking and writing as an evolutionary, dynamic process, and of course in this podcast we are doing the same thing. When I read your book I had a lot of ideas along the way; it provoked thought while I was reading it. I didn't think about teaching while I was reading it. But in our conversation suddenly something, an Aha, happened; and I realized that it illuminates this tradeoff between formalism and wonder. I hope it will help remind me to be a better teacher in the next class I teach. But this conversation in itself is evolving in a direction I didn't plan, which is more interesting probably--I hope for the listeners, and certainly more interesting for me. I think it's very interesting, and thank you for your comment. What I think you are saying is that this conversation, and any conversation, is a creative activity. And it's so obvious, but people don't think about that--a dialog; we haven't rehearsed this. It has a certain spontaneity.
When I respond to a question you ask or a comment you make, I don't think it out ahead of time. It just comes out in a kind of spontaneous way. There are different ways to run interviews. I've been interviewed where it was clear the person doing the interview wrote down 8 questions and just went through them. And when they get to the eighth question, they're done. And you could have given me 8 questions, and I could have written out the answers; and you could have called me up and I could have just read my answers. The conversation would have been qualitatively, totally different. I find when I lecture that if I am insecure and I read my lecture, it is invariably boring. I have to allow myself to enter into the uncertainty of the situation. Of course I think about it, but I never know what's coming out. It leads me in one direction or another. That's where the life of these talks comes from; and if you allow that to happen, people respond in a much more positive way. Depends who the lecture is in front of. A lot of students resent that. I've seen the course evaluations. They say things like: Professor Roberts goes off on too many tangents. But to me, those are the best parts of the lecture. They are not really tangents. They are unplanned. They are moments where I have a spontaneous insight on the spur of the moment about something I am talking about, something I hadn't seen before. I think for some students that's the best part of the class. But for others, it's like: You ruined my notes. I've had students say to me about professors they like: This one gives a really good set of notes. I think these students have been programmed by years and years of education to expect a certain thing; and also by what you were mentioning before, by the examination system and how we test knowledge. We don't test the ability to apply the knowledge we've covered in class to a new situation. That is a different story than just testing for the acquisition of knowledge.
I've had many discussions about computer-based knowledge--taking a course on the computer, what that is, and whether that is education compared to a class with an actual human professor. Then there's distance learning, which is somewhere in between--you are watching a professor as opposed to being in the room; it's harder to interact. A course on the computer would be an algorithm: once you understand this, you move on to that. That works for some types of knowledge, but for the most interesting types, that's not a good model. It's not going to be very good at conveying the insights. You are not going to get an Aha moment. You are going to amass a certain amount of formal, factual knowledge in that way, which is probably a basis. But if you don't take the next step, you never get a feeling for what's going on. I think a good way is this: if you are exposed to some sort of technical argument--detailed, going on for pages--you have to ask yourself the question: What's really going on? That connects to what you were saying about popularizing material and being able to understand material well enough to explain it to a non-specialist. I think that in fact there is a huge danger with what you could call the cult of expertise--that there are these really smart experts out there who can do the thinking for all these ordinary people. I think people are too willing to give up their human capacities of judgment to some sort of anonymous authority. It's also true that just because a whole bunch of people are smart does not mean they won't all subscribe to the same orthodoxy. Groupthink is a big problem. Sure it is. And if you get promoted, or get big money--why would you ever want to think differently? We've raised this issue in a couple of podcasts--Freeman Dyson and Ariel Rubinstein.
55:37Talk for a little bit about the implications of this both for the public perception of science and for the act of being a researcher in science. One thing you are saying is we should be a little skeptical about claims of certainty. What else are some of the implications? What I find is that scientists are so uneasy with this debate--so-called debate--between religion and science. If you say anything about science being uncertain, you are giving ammunition to the enemy. And the enemy is the anti-rational, so-called religious folks who are anti-science. For me, I don't see any conflict between the two, but I do think that scientists make bolder claims than they are entitled to simply because they don't want to even open the door to this ambiguity and uncertainty which you are talking about in your book. I agree completely. It is a problem. It is, by the way--although it's an international problem, speaking to you from Canada, I don't think it's as acute a problem in Canada as in the United States. There is a certain, particular debate that is going on, and it's been interesting for me to get responses to my books from Americans. You described it perfectly. People are terrified that if we talk about any kind of limitations to the scientific method, we are embracing a kind of religious fundamentalism. Which is bizarre. I think that's a terrible mistake. I think religion has insights to contribute to the human experience, but we'd get off on a tangent. Whenever we have an ideology and we don't allow people to question it, it's a negative thing. I think turning science into an ideology is not going to help anyone. Of course, it's clear which side of that argument I'm coming down on: I want to loosen up what people feel science is all about. I want to speak on behalf of what I think is the larger human potential.
I do not believe, for example, that when it comes down to evaluating nuclear energy--to just pick something out of a hat--it's just for nuclear engineers to make that decision. Someone has to explain it to the population in such a way that they can think about it and make an informed judgment. I think mathematics plays a big role in this question of giving up your ability to make judgments because something is "scientific" or mathematical--it did in the financial crisis and in general--because people have the feeling that if there are equations, mathematical formalism, they will never understand it. And therefore it's beyond their abilities to even enter into the question. So people very often opt out of decisions that are vital to their lives. And they assume that those equations, because they have Greek letters and inequality signs, must be truth. One of the things that I flog all the time is the misuse of mathematical formalism--the idea that by converting a very vague situation into mathematics, all of a sudden it now becomes truer. We all know that your models are only as good as your assumptions. Every piece of mathematics is based on assumptions. You have to go back and question the assumptions. And the assumptions are generally not discussed. In a mathematical situation there are always various axioms floating around. Look at the axioms. The way the mathematician wins arguments with his wife is: you get her to agree to a set of axioms, and then you show that the axioms lead to where you wanted to end up in the first place. There's something a little unfair about that kind of reasoning; in computer science they know it very well, with the expression: Garbage in, garbage out. In economics we do the same thing. We want to get to a conclusion; we figure out what the model has to be to get there, and that's what I think is a destructive way of having intuition about the tools.
But often, you know, that's the way it's done. People don't realize. What you actually do is you reason backwards, and then you recreate it forwards. And that gets to your point about the observer and the participant. This conversation was a conversation where, although we were talking about science and about economics, in a certain sense it was from the point of view of the observer. Yet, we ended up discussing our own experiences as teachers and as researchers, etc. That was from the point of view of the participant. I think actually you get to a deeper truth by accepting the fact that we have these two ways of looking at things than if one just denies the existence of one or says that one is just not part of science. We had Ed Leamer on the program, a thoughtful econometrician who wrote a paper called "Let's Take the Con out of Econometrics," where he makes the important point that when you have lots of freedom to reject equations whose conclusions you don't like, the standard statistical tests used to evaluate the statistical significance of any equation you do accept no longer hold. He's suggested--I'm going to give an extreme version of his ideas--that you could imagine researchers videoing themselves looking at all the different possibilities, and consumers of the research watching the equations, the models, the formulations that were rejected--not just the one that ends up as the final result, but all the ones that were rejected and why they were rejected. That's an outlier; got to throw that one out. Of course, you don't see that in the process. It's an argument that we ought to let our fellow researchers into the kitchen, alongside us, to see how we did the cooking. That's a radical idea. And it's very time-consuming. The simple version of it is to make sure people can at least replicate your results. Even that is hard enough--that's the tragedy in economics.
But what you're saying is really important, because in fact the negative experience that people have, you know, is vitally important. Everyone knows that if we did an experiment and we threw out all the outliers, of course it will look statistically significant. A posteriori, you know you can make anything into something that's significant. I think what we need is a Journal of Insignificant Results. Because that's the real problem--you can't get a lot of fanfare for publishing something that shows that something doesn't have an effect; and yet that's an example of failure in some dimension that's very educational, and you learn a lot from those as a researcher. But by not sharing those with the reader you make the whole process look like it was this seamless, beautiful journey that had no zigs and zags. Straight line, just went from A to B. Maybe that's an argument for giving scientific publications an alternative format. Maybe in our emphasis that truth equals formalism and compression, we lose the human essence of what's going on in research. Yeah. Now that we have the web, even though these articles would be a lot longer with an appendix showing all the results that didn't work out, it's a lot cheaper to do that. So maybe we can head in that direction.
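The specification-search problem Leamer raises can be made concrete with a small simulation. The sketch below is my own illustration, not from the episode: every candidate regressor is pure noise, yet if a researcher tries twenty of them and reports only the best-looking one, a large share of searches still turn up a "significant" result at the nominal 5% level.

```python
# Toy illustration of specification search: regress noise on many
# candidate variables, keep whichever looks best, and the nominal
# 5% false-positive rate is badly understated.
import numpy as np

rng = np.random.default_rng(0)
n, k, trials = 100, 20, 500    # sample size, candidate regressors, repetitions
R_CRIT = 0.197                 # |correlation| giving p < .05 (two-sided, n = 100)

false_hits = 0
for _ in range(trials):
    y = rng.normal(size=n)                    # outcome: pure noise
    X = rng.normal(size=(n, k))               # 20 unrelated candidates
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(k)])
    if np.max(np.abs(r)) > R_CRIT:            # "publish" the best-looking one
        false_hits += 1

print(f"share of searches reporting a 'significant' regressor: "
      f"{false_hits / trials:.0%}")
# With 20 independent candidates, roughly 1 - 0.95**20 (about 64%) of
# searches succeed by chance alone.
```

This is exactly why reporting only the surviving equation, without the rejected ones, makes the standard significance tests misleading.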
1:05:08You are one of a string of guests I've had on lately who confirm my biases--and what we're talking about here is confirming one's biases. So, I have a strong belief--I think not just a bias but a view--that our knowledge is limited. It's a very Hayekian view--listeners know I'm a big fan of Hayek, and he talked a lot about what he called, in his Nobel Prize lecture, the pretense of knowledge--the idea that we have things that we pretend are true. All of what you are saying resonates deeply with me and confirms this view that I have. So, I want to challenge you to think about: What would somebody say on the other side--someone who disagrees with you, who thinks you are overstating the case? Surely reason is all we've got--shouldn't we just plow forward? Isn't your whole perspective just too squishy and unscientific? Absolutely, and you know, it's interesting. I don't have to respond to that--I can respond to the part of myself that agrees with that. After all, I have a Ph.D. in mathematics. Why did I go into mathematics? Because I had this view of a perfect world where you could rigorously say what is true and what is false; and I'm still excited by scientific theories of everything, etc. There is a part of me that really resonates with that other point of view. I myself think now, after a lot of reflection, that that view is a form of wish fulfillment; and I think I understand why people hold it--because it does make you very comfortable. There are clearly certain very basic things about life that are not captured in words. Like one's aesthetic experiences; one's emotional response to music. I think it's vitally important that we include the entire human being in our discussions of science, and not just a kind of truncated version of what the human experience is. As a mathematician, I'm probably more aware than anyone of the power of reason.
But I think that even for mathematicians, when they want to discuss the sources of their creativity, they are not talking about reason. I want to say that the sources of reason are themselves not reducible to reason. So there is something there; and as many, many famous scientists, like Albert Einstein, have said, ultimately there is a certain wonder and mystery associated with the natural world that you are either receptive to or not; and I think the great ones are receptive. And then they try to work with that and put it into some more concrete form--which includes rational theories. But when you look back to your sources, then you are looking not beyond the theory so much as before the theory. Before reason. So in that sense, things cannot be captured definitively, and things are a little bit open.

COMMENTS (29 to date)
Mark writes:

I couldn't help but think of Gödel's Incompleteness Theorem from the get-go.

Belief systems are merely the way the mind categorizes information. Only now, I'm beginning to see that those belief systems are holding us back from seeing other truths; and that breaking existing belief systems is how revolutionary ways of thinking are brought about, as well as experiences that mystically resonate with ourselves.

Bringing the concept to economics, we can take a few axioms and see where it leads. Take Adam Smith: add a little salt of "the sole purpose of production is consumption," a pinch of pepper of Say's Law and Ricardo's Theory of Comparative Advantage, and you've got economies of scale.

With economies of scale you've got a very fragile system, as Nassim Taleb would say. Nature evolves through redundancies and variation. And as the old saying goes: "The bigger they are, the harder they fall."

Unfortunately, life reads like a Tom Wolfe novel with belief systems as the slave master of people's minds. Where it goes no one knows.

Eric Falkenstein writes:

He has a good point, but it's the subtlety that makes all the difference. Of course models and theories aren't complete pictures of what they represent--any good theorist knows this--and it's not a good point when there aren't any real people taking the other side. Knowing a theory's relevance is common sense, always recognized as important. But if you want theories that capture everything, you basically get journalism, which is a tendentious presentation of reality via the necessarily reductionist limitations of an essay.

Demaratus writes:

I'm shocked that the word "induction" was never mentioned. Much of the discussion on how models imperfectly represent the world is at its heart a discussion on the usefulness of inductive reasoning. A model that is claimed to represent the world accurately, but turns out in fact to have a flaw which means the model obviously doesn't represent reality, is simply an example of a failed effort to reason inductively.

Aristotle discusses this, as does Hume. I love economics and enjoy listening to economists discuss interesting topics, but clearly in this case the host and guest could have used a broader understanding of the philosophy of science than was exhibited in this podcast.

This is not to say that I didn't enjoy the podcast; it was quite thought-provoking, and some parts were better than others. But this discussion could have been so much more, and instead it came off as a bit amateurish at times.

Social science is at its heart an attempt to understand the world using the inductive method of chemistry and physics; the great failure of social science is that oftentimes complex systems of intelligent beings cannot be accurately understood using an inductive method applied to aggregates.

Also, now that I think about this a bit more, I've remembered that Mises discusses the dichotomy between inductive and deductive reasoning near the beginning of _Human Action_. I find his argument that serious economic reasoning should be focused on deductive understanding to be quite persuasive; praxeology is simply developing through deductive logic what follows from a few basic postulates about human action and exchange.

And, isn't this why Micro has such better conclusions than Macro? Because it isn't fudging things at the high, aggregate level and making leaps of inductive assumptions?

[edited per comment below--Econlib Ed.]

Demaratus writes:

Allow me to amend my comments above and apologies for my slight jerkiness. The discussion was great, I guess I just wish they had gone into more detail and used what I view is a better framework for the discussion--simply frustration that the discussion didn't go as far as it could have.

Keep up the good work, Russ. Maybe you can do another podcast that is more explicitly about induction? =)

Jeff writes:

Yay! Finally a Canadian guest on the show!

Lee Kelly writes:

I agree with a lot that Byers has to say, but mostly because his conclusions are not controversial. Despite agreeing in many cases, I don't buy any of his arguments. Byers makes really bad arguments -- he probably could convince me that I do not exist merely by trying to argue that I do.

chitown_nick writes:

I really enjoyed this podcast, as usual. I also feel that inaccuracy of models is somewhat of a dead horse. Maybe that's because I (and apparently a few others leaving comments) try to apply appropriate deference to reality when I model something.

Perhaps this is another division between physical sciences (my realm) and social sciences (at which I'm guessing the conversation was aimed). While it is true that there is more to discover, even in the physical sciences, there comes a point where there is sufficient understanding to make reasonable predictions within certain bounds. As Mr. Taleb (a previous guest) might say, physical sciences often exist within Mediocristan, not Extremistan.

Perhaps more importantly, in systems modeled within the physical sciences, things that can make results spiral out of control are categorically kept away from the designed system. Social sciences do not have this benefit of design.

Back to point, I agree that the big picture should be considered, human intelligence and intuition should be used as a check against modeled results, and it should be individuals' responsibility to do this for the purposes of their own prosperity in practice - and academics I suppose should impose these standards on each other as well. Not too sure where the major dissent on this issue comes in. Maybe just a call back to reality and asking people not just to confirm their biases?

Hootan writes:

Thanks Russ! I enjoyed the podcast. I really like the fact that these podcasts are much more than just pure economics and Keynes/Hayek.

Just as a suggestion, future podcasts could benefit from involving a third person who is an expert on the matter. For instance, in this case, someone familiar with the arguments both for and against Hume's ideas could really enrich the discussion.

rovesciato writes:

curiously, after listening to this podcast (rather, after work) i bought Henri Bergson's "Laughter, an Essay on the Meaning of the Comic" and read this in the first paragraph

The greatest of thinkers, from Aristotle downwards, have tackled this little problem, which has a knack of baffling every effort, of slipping away and escaping only to bob up again, a pert challenge flung at philosophical speculation.

A charming summary of much of the talk here. I would mention that much of this topic is ultimately bound up in the Platonic figure of Socrates, whose dissection of Courage in the Laches (I think) and assertions 'know thyself' and that wisdom comes only to those who understand that they know nothing really embody the whole question of epistemology, particularly since all knowledge that means anything to humans must be at least filtered through an actual human body. We know much more about courage than we did before reading the Laches, but are also much more aware that we don't even have a foundation from which to talk about it definitively, let alone reason logically from it as a premise. It is fitting, I think, that Socrates was a great lover of wine.

This also touches, I think, on the power of Shakespeare. No systematic philosophy of anything can be taken from the plays as a whole, and yet they illuminate almost every topic they touch upon before slipping toward a darkness just past our reach. Our chemistry and our genes can only address this darkness by shutting down our awareness or physiological responses to it; in short, they can only make us wise by removing us from interaction with the subject of the inquiry.

Matt Asher writes:

Great podcast Russ! I particularly like the interviews where you discuss issues related to epistemology or how we live.

The functions you were trying to remember, related to the proof of the Central Limit Theorem, are called characteristic functions. Cheers.

Russ Roberts writes:

Matt Asher,

Thank you. And the other approach, I think, that I was looking for, was eigenvalues.
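For readers curious about the technique Matt Asher names: the characteristic-function proof of the Central Limit Theorem is a standard textbook argument, and (as an editorial aside, not something spelled out in the episode) it can be sketched as follows.

```latex
% The characteristic function of a random variable X is
%   \varphi_X(t) = \mathbb{E}\bigl[e^{itX}\bigr].
% For i.i.d. X_1, \dots, X_n with mean \mu and variance \sigma^2, let
%   S_n = \frac{1}{\sigma\sqrt{n}} \sum_{i=1}^{n} (X_i - \mu).
% Independence turns the sum into a product:
\varphi_{S_n}(t)
  = \left[\varphi_{(X_1-\mu)/\sigma}\!\left(\frac{t}{\sqrt{n}}\right)\right]^{n}
  = \left[1 - \frac{t^2}{2n} + o\!\left(\frac{1}{n}\right)\right]^{n}
  \;\longrightarrow\; e^{-t^2/2} \quad (n \to \infty),
% which is the characteristic function of N(0,1); Lévy's continuity
% theorem then upgrades this to convergence in distribution.
```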

Lauren writes:

There was a very interesting interchange in this podcast around the 1:00:57 time mark, or in context a little before:

[Roberts:] In economics we do the same thing. We want to get to a conclusion; we figure out what the model has to be to get there, and that's what I think is a destructive way of having intuition about the tools. [Byers:] But often, you know, that's the way it's done. People don't realize. What you actually do is you reason backwards, and then you recreate it forwards. [Roberts:] And that gets to your point about the observer and the participant. [Byers:] This conversation was a conversation where, although we were talking about science and about economics, in a certain sense it was from the point of view of the observer.

I think Byers was making a very important point in this section about the way creative thought happens. It was not a point about the way econometrics is sometimes re-iteratively engaged in to achieve an intermediate goal of an individual's getting published results in a tenure-worthy academic journal.

Creative thought--in mathematics, economics, and many other fields, from writing a novel to physics to psychology to art to technology to invention--involves having a goal.

The first thing I want to do in thinking about achieving a goal is to think about what would have to be true to make that goal happen.

That means I work backwards from the goal to what conditions would be required to make that goal happen. And, simultaneously I work forward from wherever I can get to. And then, next, I try to make the two ends meet.

In other words, you work backwards from the goal and forwards from where you are, and you try to find a middle ground. This is true in any daily or longer-term intellectual pursuit.
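Incidentally, Lauren's strategy--work backwards from the goal, forwards from where you are, and make the two ends meet--is exactly the structure of bidirectional (meet-in-the-middle) search in computer science. A minimal sketch, offered only as an analogy, with a toy graph of my own invention:

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """BFS from both ends of an undirected graph; stop where frontiers meet."""
    if start == goal:
        return [start]
    fwd_parent, bwd_parent = {start: None}, {goal: None}
    fwd_q, bwd_q = deque([start]), deque([goal])
    while fwd_q or bwd_q:
        # expand one node in each direction per round
        for q, parent, other in ((fwd_q, fwd_parent, bwd_parent),
                                 (bwd_q, bwd_parent, fwd_parent)):
            if not q:
                continue
            node = q.popleft()
            for nbr in graph.get(node, []):
                if nbr in parent:
                    continue
                parent[nbr] = node
                q.append(nbr)
                if nbr in other:          # the two searches have met
                    return _reconstruct(nbr, fwd_parent, bwd_parent)
    return None                           # no path exists

def _reconstruct(meet, fwd_parent, bwd_parent):
    # walk from the meeting point back to the start, then out to the goal
    path, node = [], meet
    while node is not None:
        path.append(node)
        node = fwd_parent[node]
    path.reverse()
    node = bwd_parent[meet]
    while node is not None:
        path.append(node)
        node = bwd_parent[node]
    return path

# toy undirected graph: a chain a - b - c - d - e - f
chain = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
         "d": ["c", "e"], "e": ["d", "f"], "f": ["e"]}
print(bidirectional_search(chain, "a", "f"))   # ['a', 'b', 'c', 'd', 'e', 'f']
```

The payoff in search, as in thinking, is that two short explorations meeting in the middle cover far less ground than one long one.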

Working backwards to try to find the conditions necessary to achieve one's goal is constructive. It is most certainly not in and of itself destructive.

Reiteration in statistical results, though, is way more complicated. There are rewards for publishing certain kinds of statistical results and not others.

That there is some kind of analogy between reworking publishable statistical results and going back and forth in one's mind about truth should not cast aspersions on the fundamental intellectual process of working both backwards and forwards to try to find the solution to an unsolved intellectual problem.

Judge Glock writes:

I appreciated William Byers's discussion of what he calls "frustration tolerance," or the ability to survive uncertainty, as an essential component of genius and creativity. Fortunately, someone else already discussed the same inclination in a different context. John Keats called it "negative capability." In a letter to his brothers in 1817 he said:

"Several things dovetailed in my mind, & at once it struck me, what quality went to form a Man of Achievement especially in literature & which Shakespeare possessed so enormously - I mean Negative Capability, that is when man is capable of being in uncertainties, Mysteries, doubts without any irritable reaching after fact & reason."

Although as a Romantic he seems to argue against a reliance on facts in general, the work of people like Richard Holmes ("Age of Wonder") has shown how important romantic wonder was to early scientists like William Herschel and Humphry Davy. For writers and scientists alike then, negative capability is an admirable though difficult characteristic to cultivate.

David B. Collum writes:

Great podcast. I really sat up at the definition of genius (being willing to persist long enough for the problem to solve itself). This gets at a concern that many of us have. Maybe I am getting jaded, but as a teacher of fine science students (Chemistry, Ivy League) it is my conclusion that a willingness to completely immerse in the subject is now relatively rare. I now have my best insights as a scientist after enormous time immersed in chaos and confusion. You cannot skip that critical step. You could blame pedagogy because it is old-fashioned, but the previous students who were presented this pedagogy were willing to immerse. I sincerely believe this is a societal trend, not just a grumpy old fart talking. I am now attempting to find out if immersion in economics, markets, and politics can take me any place special.

Mark Sundstrom writes:

What a terrific podcast. Thank you! Off next to purchase the book.

Benigno writes:

Good podcast. It's incredible how you never mentioned epistemology or philosophy during the interview.

Mort Dubois writes:

This may be slightly off topic, but Russ introduced it into the discussion. Why is everyone so impressed that computers can play chess? Chess is the least human of all games. When computers can win a soccer match, or fix my leaking toilet, I'll be worried. So many discussions of artificial intelligence ignore the extent to which human minds exist in a body which moves in space, and that a huge percentage of human economic activity is about manipulating a physical environment which is designed by and for human bodies.

Mort

Bryan Sloss writes:

I enjoyed the podcast. I'm a high school teacher, and I think the part of the conversation that was most interesting to me was the discussion of teaching, creativity, and aha moments. Although the experience is probably different at different levels of education, I've noticed that those aha moments come in a lot of different shapes and sizes. I've also noticed that one of the most frustrating things for new teachers is that high school freshmen don't get aha moments over the same things the teacher does. Which makes sense to me. I've been coaching football for 20 years. It seems ridiculous that I could expect a 9th grader to have the same depth of understanding that I've acquired over many years. So, I agree that teaching isn't like building with Legos. It's more like working with Play-Doh. We see what we're working with and try to tailor aha moments of insight to our students. We try to give them that deeper understanding and appreciation for what we love. But we've also got to be aware that the small victories, the great moments, for a 5'4", 135 lb. football player are going to be different (not necessarily better or worse) than for a 6'2", 225 pounder. These are the things that make our job challenging, interesting, and exciting. And, sorry if I got a little off topic, but the podcast really got me thinking about these things.

emerich writes:

Enjoyed the podcast but thought that the core issue was by now conventional wisdom. I know next to nothing about the history of science, but who doesn't know that the "Newtonian model," for example, was shown to be incomplete by Einstein and relativity, which was shown to be incomplete by quantum mechanics, for which refinement is being sought in string theory or whatever. And doesn't every listener of Econtalk know that economic models and their assumptions are at best simplifications that facilitate understanding, or at worst, lead to confusion and policy blunders? We can't think without using models in our heads that are by definition simplifications of reality. Some are relatively useful simplifications and better representations of reality (e.g. the germ theory), some useless or damaging (the theory of humours). (Even the original "germs cause disease" theory is incomplete; it turns out our bodies are filled with bacteria, but most are benign or even beneficial.)

James writes:

I enjoyed the first part of this podcast, but then it began to sound too skeptical to me. I'm not sure what things are like in economics, but in my field (education) mathematical models are a much needed source of clarity. Not truth, clarity. The purpose of math is to spell out clearly what one is talking about, not to model reality perfectly.

I found it ironic how Byers used the computer as an example of the limitations of math and logic. That is one way to look at it. The other way is to point out that modern computing, including the internet, is quite simply the greatest miracle man has ever invented. And it was all made possible via careful, precise, mathematical modeling. You can't get very far designing a computer using prose.

Maybe math has limitations when it comes to science, but it has driven technology well into the realms of what people living just 200 years ago would call magic. TECHNOLOGY, not science, is the ultimate vindication of the rational world view.

TGGP writes:

From what I've heard, neuroscientists reject the pop-science account of "left" and "right" brain styles of thinking.

MF writes:

The part about randomness really made me think. In particular, it's a cool thing that randomness in computers is probably achieved with a complex algorithm rather than some mechanical system. But if an algorithm actually produces the randomness, the random numbers are not that random after all.

AHBritton writes:

@MF

Random number generators in computers are not really random at all. They create the impression of randomness by applying an algorithm to some arbitrary seed value (I believe usually the date and time). This works relatively well as a random variable because of its complete arbitrariness, just as if you were to pick the second digit in the Dow Jones, multiply it by the third digit in the NASDAQ, divide it by the temperature, and so on. It won't be random in the sense that it could be replicated, but unless you know the whole process, it appears random.
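The deterministic scheme this comment describes can be sketched with a textbook linear congruential generator, one of the simplest pseudorandom algorithms. This is an illustrative sketch, not what any particular system actually uses; the constants are common textbook values, and the clock-based seed mirrors the comment's point that the seed is merely arbitrary, not truly random.

```python
import time

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: each output is fully determined
    by the previous state, so the sequence only *looks* random to an
    observer who doesn't know the algorithm and the seed."""
    state = seed % m
    while True:
        state = (a * state + c) % m
        yield state / m  # scale the integer state to [0, 1)

# Seed from the clock, as the comment describes: arbitrary, not random.
gen = lcg(int(time.time()))
sample = [next(gen) for _ in range(3)]

# Reuse a seed and the "randomness" is exactly replicated.
g1, g2 = lcg(12345), lcg(12345)
assert [next(g1) for _ in range(5)] == [next(g2) for _ in range(5)]
```

The final assertion is the commenter's point in miniature: knowing the whole process (algorithm plus seed) lets you reproduce the sequence perfectly, so the numbers are pseudorandom rather than random.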

Justin writes:

Maybe I'm way off base, but a lot of this podcast seemed to deal with the role of intuition in science, and the first thing that came to mind for me was the Ian Ayres podcast on "Super Crunchers".

Even in the example of deciding whether to try to prove Fermat's last theorem, one could ignore intuition and use a statistical analysis of other theorems of his to find a probability that it was right or wrong, and go from there. It doesn't have to be left up to intuition. Surely if I came up with a similar theorem, others would have little or no interest in trying to prove it due to my not having any record of being right about anything. Anyone who decided to spend their life trying to prove it because their "gut told them it was true" would probably be laughed at. And most of the time, they'd have wasted their time either proving it was true, and no one caring, or proving it wasn't, and it not mattering at all.

Any time I start to feel like intuition told me something, I can almost always go back, review the situation and prior situations, and find quite a bit of evidence that pointed towards my conclusion. I think it's a little silly to credit intuition the way this podcast seems to. But then, maybe I just need to listen to it a few more times.

Now, the role of beauty in science, or "poetry" as Russ likes to call it, that I totally agree with. I feel just as creative when solving a difficult problem as I do playing my saxophone. Maybe even more so!

Frank Graves writes:

Perhaps I am missing something, but I found both Byers and Russ Roberts to be very shallow in this exchange. To me, Byers articulated a very implausible straw-man characterization of science and mathematics as being occupied by a bunch of naive idealists who somehow think they have finalized all prior work and theory and are just looking for new frontiers. Nothing could be further from the truth, as should be readily apparent even to a very casual observer of those fields. Any cover of Discovery Magazine, Scientific American, etc. clearly shows that foundational concepts are constantly being challenged and revisited, and I have never met a mathematician or physicist of any rank who is not well aware of the limits of our prevailing knowledge. What could be more fundamental than "mass", "time" or the dimensionality of space? -- and there are no good theoretical physicists who take these for granted as well understood. Likewise, no math student who has reached the undergraduate class in Analysis is unaware that "number" is partly a formalistic concept that depends on what axioms are used to define it. Finally, the notion of randomness, though hard to understand metaphysically (e.g., how can a process of generating numbers or events be a "random process"?), is also not hard to work with safely and usefully as simply the noise remaining in a model due to missing variables. Other than a lot of "gee whizzing" going on in this show, I heard no insights.

Demaratus writes:

I wrote the following above, but made a horrible mistake and wrote the wrong word in a key place: "I find [Mises] argument that serious economic reasoning should be focuses on inductive understanding to be quite persuasive;". I meant to write "deductive" here instead of "inductive", as the independent clause after the semicolon indicates.

[comment fixed; also changed "focuses" to "focused" in the same sentence.--Econlib Ed.]

Floyd writes:

Fascinating conversation with William Byers, Russ! For me, it was one of the most engaging shows you've done (and they're all riveting). It was interesting to hear how you both have grappled as professors with what it means to learn, what it is to create, how the two are related etc. You are in a great position to have some insights into the issue as you are really there on the frontlines. I know that I spent a great deal of time as an undergrad and in graduate school thinking about many of the issues you discussed. A "journal of blind alleys and discarded results" is something that I'll bet anyone who has done original research of any sort has yearned for. I know this is very true in mathematics, where there is a strong tendency to follow luminaries such as Gauss and "throw away the scaffolding".

Every physics student who has been perpetually uneasy about textbook answers to seemingly simple questions such as "what is energy?" will appreciate what you guys were saying about the profound and often under-appreciated differences between the world as it exists "out there" and the world as it exists in our theories.

On another head, I'll tell you that I was reading Dawkins' "The Blind Watchmaker" just before listening to your podcast and was specifically impressed with his ability to convey a sense of wonder about the biological world and the marvelous intricacy and interest of the evolutionary story. That you discussed the very same issue was a nice surprise. The inability of educators to convey this sense of wonder (and perhaps they don't even feel it in many cases) is I think responsible for the epidemic of scientific illiteracy in our country. Embracing uncertainty is certainly key to conveying this sense of wonder.

Last point, and it is a bit political. You were right to say that scientists in this country (particularly climate scientists), having been under heavy fire for a very long time from segments of the population unfriendly to some of the results science produces, have perhaps been too eager to downplay uncertainty. This is a cultural effect. Everyone starts to behave like a soldier in a culture where every discussion becomes some sort of pitched battle: dig in, put your head down, show no weakness. President Obama has often been criticized for being "too professorial" and too willing to see both sides of a question. Well, this is one of his most attractive attributes for those of us who are weary of the irrational certainty of many political figures. It's a powerful antidote to the cultural sickness that drives us to reflexively demonize those who seem to disagree with us. In fact, I'm sure it's quite rare to find two people who are entirely disjoint in their beliefs. You share many views with your opponents. The recognition of this fact is always the basis for civil debate. Mr. Obama, like many of those of you who have chosen a career in the academy, understands this on a visceral level. It would be a great boon to this country if there were many more high-profile public figures who understood, as you, Mr. Byers, Mr. Obama (Hayek, Mandelbrot, Taleb ...) seem to, that certainty is quite often misguided.

Cowboy Prof writes:

Just got around to this one two weeks late.

Best. Econtalk. Ever.

This will be required listening for my undergraduates, graduate students and any colleague I run across in the hall.

Pennrad writes:

I really enjoyed this conversation and feel like I really got a lot out of it. I just want to comment on the science-v-religion debate: I think Dr. Byers was spot on that too many scientists are tempted to ignore the uncertainty inherent in what they do. But I think that it is extremely unhelpful to characterize science and religion as "not really being in conflict".

In theory and very generally, yes, science and religion seek answers to different questions. But in practice there are often legitimate conflicts between the two and, people being people, both sides get worked up about it and both look bad. But to dismiss these conflicts, I think, debases both sides.

Comments for this podcast episode have been closed
Return to top