0:33 | Intro. [Recording date: July 19, 2018.]
Russ Roberts: Before introducing today's guest, I want to mention I'm planning to do at least two EconTalk episodes on the book, In the First Circle, by Aleksandr Solzhenitsyn. So, feel free to read that in advance and follow along if you'd like. I haven't decided whether those are going to be regular EconTalk episodes or bonus EconTalk episodes. But, if you want to be prepared for those by having read the book, now would be a good time. It is a 741-page read. So, be aware what you are getting into. I announced this on Twitter yesterday, and Amazon is now sold out of the paperback. The Kindle version, of course, is still available. But I recommend the paperback. There's a lot of characters, and even though there's a list of characters and you can highlight them on the Kindle, I think a paperback is easier. I read it on the Kindle and found it somewhat challenging. I also want to mention there are two versions of the book. The original version is The First Circle. You want to get In the First Circle. Solzhenitsyn self-censored the first one to get it published. And we'll be discussing the fuller one, called In the First Circle.
Russ Roberts: And now for today's guest. Teppo Felin.... Our topic for today is an essay that you wrote, "The Fallacy of Obviousness," which you published at Aeon.co. We'll put a link up to that, of course. And before we get to that essay and some of the academic work that's behind it, that you've done, I want to encourage listeners to watch a YouTube video that's called "Selective Attention Test." You can find that video at EconTalk.org at the page for the heading Delve Deeper, where you can find things related to our conversation. Or you can just Google "Selective Attention Test." It should be the first video that comes up. It is a minute and 22 seconds. So, if you are not driving, I encourage you to pause this conversation; watch the video; and you'll get a lot more and enjoy this conversation more if you do that in advance. And you will potentially avoid a Spoiler. So, I'm going to count to 5 to give you a chance to pause and watch before we start talking about it. 1. 2. 3. 4. 5. Okay. You're back. We're back. So, Teppo, this video, which now has about 19 million views, was created by Daniel Simons and Christopher Chabris, two psychologists. Describe the video, and tell us what conclusions Simons and Chabris draw from it, and what other people have said about it.
Teppo Felin: So, in the video, as you mentioned, you are asked to watch something. It's called the Selective Attention Test. And, on the first screen--for those of your listeners who did the test--it asks you to count the number of basketball passes made by the team wearing black, I believe. And in the video, what you see is two teams, one wearing white shirts, one wearing black shirts, passing a basketball. And, essentially it turns out to be a relatively, sort of, attention-heavy task. And so you are trying to count these basketball passes. I actually just did this with my father-in-law, when he came to visit at Oxford last week. And, sure enough, he managed to count the right number of basketball passes made by this team wearing black jerseys. And, in the clip you have two teams, like I said, and a total of 6 players, 3 on each team.
Russ Roberts: And they are both passing basketballs around, right? So you have to kind of focus carefully on the team wearing the right color--
Teppo Felin: Exactly. [?] And so, my father-in-law turned to me and he said, 'Teppo, I nailed it. I got it exactly right. It's 21 passes.' And so, I asked him, I said, 'Did you see the gorilla?' And he just stares at me. And now I've done this exercise with students in years past as well. And it turns out that some proportion of people--they run different conditions in terms of which team you are paying attention to, how fast the gorilla moves, whether it stops in the middle and so forth--but it's some range between, you know, 20-30 to 70-80% of people who miss the gorilla, essentially. And, the inference that Christopher Chabris and Daniel Simons draw from this is that people are--well, they call it 'inattentional blindness.' And so the argument is that, because we are paying attention to something else, we miss things that are also happening on the screen that you'd think we would catch, somehow. But it turns out that we don't see the gorilla. And, like I said, it sort of surprises most of us. And, I guess the interpretation, or re-interpretation, that I try to highlight in the Aeon essay is that this--you know, we can look at this test in a couple different ways. One is that, you know, people are blind. And I actually, in the essay, anchor a little bit on Kahneman's interpretation of this exact experiment. So, in his book, Thinking, Fast and Slow, he says: This tells us two fundamental things about the mind, namely that humans can be blind to the obvious, and that we are also, sort of, oblivious to this blindness, essentially.
Russ Roberts: Teppo, before you go on, we should make it clear, for those of you who did not watch the video: There's nothing subtle about the gorilla. It's not like he just sort of jumps on the screen for a second and disappears. He wanders around. It's a human being in a gorilla suit. And when you watch it the second time, he's blindingly obvious. He's very, very present. It's not like a trick. Well, it is a trick, but it's not the trick that you might think if you are hearing this described for the first time. It's shocking that anyone misses the gorilla.
Teppo Felin: Absolutely. Yeah. Yeah. And so, the question is: What's the interpretation? And you can sort of read Chabris and Simons' interpretation. So, this was published in the journal Perception--in fact, it's the most highly cited piece in that journal. And, their interpretation waffles a little bit between sort of saying, 'Humans are blind,' or 'It's just an example of focus.' But, if you look at what they're emphasizing and actually measuring, it's the fact that, you know, many people miss the gorilla. That's sort of the surprise. And, like I said, that's been interpreted in different ways. For me, it sort of highlights a certain ethos or zeitgeist around what we're looking for in terms of, you know, human nature--and particularly the interpretation that Kahneman emphasizes, which becomes quite important throughout his book and, for me, more broadly in behavioral economics--this sort of, you know: Humans are blind to the obvious. That we miss some really fundamental things in our visual scenes. And this has been taken by others, like Steven Levitt, as sort of a culminating summary of what it is that behavioral economics is after, as well. And, I guess, you know, my challenge to that is that it's a little bit of a Rorschach test, in terms of what we can say with that data. So, I don't have any issues with the finding itself. We could run all kinds of reproducibility efforts and go and replicate this finding, and I think that any number of people, just like my father-in-law, would also miss the gorilla. The question is: What precisely is it telling us about perception? About awareness? And, I think more fundamentally, about human nature? And that's what's most important to me. Because this [?] sort of gorilla study has been kind of a big jumping-off point for the authors themselves--Chabris and Simons wrote a book called The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us.
And so there's a strong emphasis on, sort of, humans being blind, susceptible to illusion, and so forth. And my challenge would be that it essentially illustrates something different--slightly more mundane in some ways, but, I actually think, quite powerful and important. And it's sort of that angle that I think is important and that I try to sort of push in that essay, and then in the associated academic pieces that we also published.
Russ Roberts: And I want to say: On the surface, this topic--of the gorilla and our blindness to the obvious, or some other interpretation--seems like kind of a narrow thing to have an EconTalk episode about. But, I actually think it's quite deep. And, I hope in the course of our conversation that we can tease out some of the implications of your interpretation of this very specific social science experiment, and lead to some implications for economics writ large and how we think about data. And how we understand the world. When you say, 'human nature,' I really think of it as broader than that. Like, that's not broad enough. But, it's really--I think your insight into this, which we're going to get to--your insight tells us a lot about just the whole enterprise of being human and trying to understand the world in its complexity.
Russ Roberts: So, you, in giving an alternative interpretation, talk about the fallacy of obviousness. What do you mean by that?
Teppo Felin: Yeah. So, I think obviousness is a little bit of a trap. Many things are sort of obvious in retrospect. But, obviousness is never sort of a priori evident unless we're looking for something specific. Right? And so, I guess the issue that I have with the test--and something that it sort of highlights more broadly in terms of, like I said, human nature and other issues that we'll talk about--is that my concern is that the interpretation that humans miss the obvious isn't the right one. Rather, the right one is that people respond to questions. So, we tend to our visual scenes with questions in mind; and then we focus our attention. And that's partly what Chabris and Simons say in their argument. But, then, if you look at the subsequent emphasis in terms of what Kahneman says and in terms of where behavioral economics has sort of taken this ethos, the emphasis is largely on biases and mistakes, and sort of, 'Isn't it fun?'--
Russ Roberts: failure--
Teppo Felin: Yeah. Failure. And, like I said, it sort of led to other directions I talk about in the [?] essay, which is sort of denigrating human [?] more generally and saying, 'Well, we'll solve these problems with artificial intelligence; and nudges; and other types of things.' And, like I said, the more powerful point to me--which isn't the emphasis of the article--is that, you know, there's a set of questions, whenever we're attending to visual scenes, that direct our awareness. And so, when I'm looking for my keys--I tend to frequently lose my, you know, my cellphone and my keys--I have sort of an image in my head in terms of what I'm looking for. And I'll miss any number of other things as I'm scanning, until I find, sort of, the answer. In the same way, you know, subjects who are doing this test will attune themselves to basketball passes. But they'll miss any number of obvious things. And so, in the Aeon article I talk about how, you know, in that clip, there are many obvious things. There's, for example, the gender composition of the teams passing the basketball. What color hair they have. What color the carpet is. Or, it turns out that actually in the background there are two letters spray-painted, and so I could ask people afterwards, 'What were the two letters that were spray-painted?' They are very obvious in the clip, right? But, I would miss them because I'm paying attention based on cues, primes, prompts, questions--problems that I have that then direct my awareness to certain things in any visual scene. And so, it's sort of emphasizing that powerful focus and awareness that we have based on what's in our mind, essentially. And, Chabris and Simons, I think, sort of highlight that; but like I said, the emphasis is on the blindness. And that's what they measure: how many people miss the gorilla, and not how many people attended to x, whatever it might be in the visual scene that we're looking at.
Russ Roberts: I think it's really a deep insight into the nature of reality--which, again, I think is larger, even, than human nature. Reality is really that complicated. And our brains relentlessly do two things that are contradictory: They fill in stuff that we might not be seeing, because we want to make it easier to interpret that complexity. So, my favorite example: we're driving to campus one morning, and my son's on his way, too. I'm driving with my wife to my summer work here at the Hoover Institution at Stanford. And my son is riding a bicycle. He doesn't want to drive; he wants to ride his bike. And he's going to arrive at roughly the same time for his first day of work, on campus. And, I'm worried he's going to be late. And, he's a teenager; he's never had much of a job before. This was a couple of years ago. And he's going the other direction. As we pull up, I see him bicycling away from where his interview is. And, one of the things I'm worried about, of course, is: This is his first day. He's not going to know where to go. And, I jump out of the car--I had to get out--and I yelled his name. And, to my horror, he didn't respond. So, I yelled it even louder. And then I was puzzled. And disturbed. Because he was wearing, clearly, a backpack that wasn't his. And of course that's because it wasn't him. It was somebody who looked vaguely like him. And my mind filled in and told a story that said, 'My son is going the wrong way.' Ridiculous. Absolutely ridiculous. My wife actually saw him later that day, the same guy. He actually looked a lot like my son, which helped make this story work. But, my mind told a ridiculous story, filled in details, made it him when it wasn't him.
Teppo Felin: Right.
Russ Roberts: At the same time, I will tune out a thousand things that I see every day in my house, to ease--not consciously--to ease my perception of the world. And I always like to point out: If you put a bunch of things on your wall because you think they are beautiful, your guests will enjoy them because they notice them. After a week, a month, a year, 5 years, 10 years, you won't enjoy the artwork in your house so much, because your brain won't see it. It just tunes it out. And I've used this example before--you know, government warnings on cigarette packages, on bottles of alcohol for pregnant women, about operating machinery. You know, the first time you see it, it's like, 'Whoa!' The 50th time, you literally don't see it. And that's part of what I think is really interesting and important. But it doesn't mean we're blind.
Teppo Felin: Right. Yeah, yeah. And I guess, part of the fallacy of obviousness for me is precisely about the nature of reality. And so, there's a sense in which obviousness is sort of given by reality itself, right? So, reality itself has certain characteristics, in terms of size or color and so forth. And, this is actually the tradition that Kahneman comes from. So, he was trained, in the 1960s, in an area called psychophysics, which was an area that sort of tried to map environmental stimuli onto the mind. And, in his Nobel speech, he sort of talks about what he calls 'natural assessments,' which are sort of size, loudness--you know, essentially the world tells us what's obvious. And so, if you take this into the gorilla clip, you'd say: How would anybody miss this massive, moving thing going across? That's very surprising, because it should be--it should obviously be a thing that we recognize and pay attention to, and so forth. And, so it comes from this notion that reality sort of gives us what's relevant or meaningful or obvious. And the argument of the essay is that there's no such thing as obviousness per se. Obviousness is a function of what's in our mind: the questions that we have, the problems that we have; and we sort of attune to the world. And so it's sort of trying to flip the importance of--rather than looking at nature out there, looking at the organism and the questions that it has. And there are actually some really interesting insights from biology that sort of, you know, verify this as well.
Russ Roberts: I was struck by the parallel to the EconTalk episode with Iain McGilchrist, where he talks about two different ways of thinking: The left side of the brain is very focused on a task. Like, he gives the example of a bird trying to separate a piece of grain from the dirt and gravel that may be around it. Got to really pay attention to that little piece of grain. And he's going to be oblivious, potentially, to a predator. So, with the right side of his brain, he's kind of looking around all the time. And you can see animals do this. You can see animals, especially birds, because they are very prone, I guess, to being eaten--they are constantly on alert. They look very stressed out all the time.
Teppo Felin: Right.
Russ Roberts: And they are constantly--and animals feeding will often do this as well--four-legged animals do this when they feed. They don't just chow down. They chow down nervously. They are always trying to scan the horizon--for a gorilla. For a predator. Something large. Sort of obvious. And we do alternate, I think, as human beings, between these two modes: the more focused mode, and the more integrated, take-in-the-whole-scene, how-do-I-fit-in mode--what McGilchrist calls connectedness or betweenness: my relationship to all kinds of things around me. And those--reality isn't one of those things or the other. It's both of those things, right? The other thought I had is: the world is full of data. Lots of data. And there's no sense in which, 'Oh, that data's important and this isn't.' I don't know which falls into which category in advance. Ex ante. You tell me, 'Look for the gorilla,' I won't miss it. You tell me, 'When does the gorilla come on screen?' I'm not going to be blind to the obvious. I'll get it. Right? But it all depends on what my task is--
Teppo Felin: exactly--
Russ Roberts: what's important.
Teppo Felin: Right. Yeah, yeah. I guess that's an issue that I reflect on in the essay as well: there's this sense that, yeah, the world will host[?] what's obvious; and the whole--all the attention on big data and so forth seems to reinforce this idea that, you know, data will tell us the truth. But, from my perspective, you know, data is only as good as the questions and the theories that we bring to it, essentially. So, it's not the amount of data. Rather, it's the questions that we ask of it. And so, I have a co-author who's here at the Alan Turing Institute. And they run these mega, mega datasets. And you can run all kinds of correlations and so forth, but, you know, you don't get anything out of it until you have some kind of informed guess and theory about what types of relationships we might, you know, expect. And so that's why--there's this Wired piece that sort of talked about the end of theory--how the deluge of big data will sort of replace science, or something like that. And, I just find that to be the biggest misnomer. I think that, you know, theory and problems and questions are more and more important in this environment[?], because data never has some kind of meaning or relevance attached to it. It's only useful in answering questions. In the same way as that gorilla clip sort of illustrates.
Russ Roberts: Yeah. I've mentioned this story before. But, I have heard a number of young economists--and I don't mean literally young, but just trained a lot more recently than I was--say, 'We don't need theory. We just listen to the data.' And I always want to say, 'The data doesn't speak. The data is silent. The only way you get the data to talk is to have a way to think about it: a perspective, a lens, a theory.'
Teppo Felin: Right. Right.
Russ Roberts: And I've got a quote: Sam Thomsen, a commenter on something I wrote a long time ago, said something very profound which I always like to quote:
The universe is full of dots. Connect the right ones and you can draw anything. The important question is not whether the dots are really there but why you chose to ignore all the others.
And, those correlations in big data--there are zillions. You've got to decide which ones are meaningful, which ones are replicable, which ones are causal. That's what matters for human affairs, not patterns in the data. Big data is really good, AI [artificial intelligence] and machine learning is really good at finding patterns.
Teppo Felin: Yeah--
Russ Roberts: Even better than I am. I'm really good at it, as a human being. It's one of my weaknesses, right? So, AI is even better. But that's not a selling point. That just means it has a bigger flaw than I have.
Teppo Felin: Right. Right. Yeah, I guess--the other issue is that I don't think that more data, in some situations, will even sort of tell us the truth--even if we had more data on this gorilla experiment. I think this is--so there's this sort of reproducibility and replication crisis. I think that what's there is a crisis of interpretation and theory, in terms of: What are the types of questions that we are asking? And, if we have this a priori focus and fetish with looking for bias and blindness and boundedness, then we'll probably find some--against some kind of all-seeing standard. But I think that what's remarkable is human nature in terms of what it has accomplished. And if you look at--I study innovation and creativity and things like that. And it's hard to sort of argue with the data, in the sense of the world that we live in currently and the amazing conveniences and things that we have around us that have been accomplished, despite the fact that we miss the occasional gorilla or what have you. And so, I think that there's a kind of crisis of interpretation that I'm trying to channel a little bit in that essay, and in associated academic pieces, in terms of: What are the questions that we are asking? And maybe this a priori focus on blindness is leading us to sort of craft these bias-centric outcomes that--you know, they tell us interesting things, but I don't know if they tell us something fundamental about human nature, and the nature of reality, in the ways that we could.
Russ Roberts: Well, I want to turn to one of those academic papers, which has the title "Rationality, Perception, and the All-Seeing Eye." You published it in Psychonomic Bulletin &amp; Review. And you co-wrote it, again, with Jan Koenderink and Joachim Krueger. And in there you critique Herbert Simon and Kahneman, both of whom are very dismissive, or critical, of human rationality. And they are eager--in much of their careers, the part they are most famous for, most known for, is their insights into our irrationality. As you pointed out earlier, a big part of the ethos of behavioral economics is to point out our irrationality. And you point out that there is a certain, what I would call, cheating underlying what they assume. So, explain what they claimed. Talk about bounded rationality, from Simon, and what he was trying to do, and Kahneman's twist on that. And why you think they are missing something.
Teppo Felin: Yeah. So, first, I wouldn't call it cheating. I don't know that this is unethical, deliberately by any means--
Russ Roberts: Yeah. It's a bad choice of words. I apologize.
Teppo Felin: Yeah. But I think that it's--I guess I should recognize Jan and Joachim on this as well--we have some sort of concerns about where that has taken us, essentially. And so, Herbert Simon is essentially one of the early people to introduce different sorts of ideas from psychology into economics. Certainly Hayek and others were there as well. But, Herbert Simon's first two pieces, published in Psychological Review and in the Quarterly Journal of Economics in 1955 and 1956--arguably those were sort of the basis of his Nobel Prize, which he got in 1978. And, it was interesting, as I traced the history of this: In those papers, Simon makes the argument that the sort of, you know, hyper-rational, omniscient, rational-expectations actor--he says, 'Listen, it's not a real thing.' And so what he does is he coins a term: the boundedly rational actor. And this has turned out to be really influential. So, it's essential to sort of Oliver Williamson's transaction cost economics; but it's also become really important in artificial intelligence--you know, Simon was a pioneer in that. Williamson and Simon were both at Carnegie Mellon together. But, when you go back and read those original articles, it's interesting that the emphasis there is specifically on perception. So, it's a similar foundation to where Kahneman starts in the 1960s; and this is sort of the 1940s and 1950s. And he basically says--so, imagine an organism sort of just looking for food--this is like your previous example. Rather than sort of omnisciently seeing all food sources on some kind of landscape and going to the best source, he says: Animals, organisms, are bounded by, you know, some kind of range of vision--what they can see immediately around them. And so he coins this term, 'bounded rationality,' that's very sort of, you know, perception-centric and perception-focused.
But, it comes from this perspective where the emphasis is on the boundedness that scientists themselves, who sit sort of in an all-seeing position, specify, saying, 'Look, in this environment humans make these types of mistakes because they can't see everything. But, we, as scientists, can see everything.' And this, if you follow the breadcrumbs, has led to the work that Kahneman did, and the subsequent work on nudges, and other types of things. But, the argument that we make is that perception--in terms of specifying it as sort of this range that's bounded or blind--is directed by something else. And, that something else is sort of foreshadowed in that essay; but in the work that Jan and Joachim and I did, we sort of tried to delineate the actual theoretical arguments, which say that, no, we need to look specifically at the organism, in terms of--what is its--the language that sort of works quite well comes from this ethologist who lived in the 1930s and 1940s, Jakob von Uexküll, who talked about animals--humans as well--having what he calls a Suchbild, which is sort of a search or seek image. And so, what you have in mind is you are looking for something. And that's what guides your sort of awareness and attention. In the case of humans, those are the questions that we're sort of prompted with or primed with or that we have in mind, that direct our attention. So, just to give you one example of this, from sort of a biological context: If you have a frog that's sitting right in front of a food source--so, it's got a cricket right in front of it--but the cricket doesn't move at all, the frog won't recognize it. Because the frog's Suchbild or search image is a certain-size thing moving at a certain speed--then it snaps. Then its tongue goes and gets the thing. And so, the frog would starve in front of a perfectly good food source unless it moves.
And, that's sort of the analogy that we try to bring in, which is that we need to understand the Suchbild--or the theory, the problem, the question--that economic actors have when they are attending to their worlds. And the direction that the sort of behavioral angle has gone is more focused on the boundedness and the blindness. And we have any number of--if you go to the Wikipedia page that lists cognitive biases, it's in the hundreds. There are so many of them. And so, there's a lot of emphasis on this, and I guess part of the worry is that it's sort of taking up the oxygen in terms of how we think about, again, human nature and reality, if the emphasis is so strongly on that boundedness. And, like I said, I don't think that there is any sort of cheating involved. I think that there are models in which sort of this bounded rationality works and could be useful. But I also think that it's now taken us to a place where that emphasis on the blindness means we're missing some really fundamental things about human nature. Which, actually, you know, some social scientists like Adam Smith talked about a long time ago. And so, I've found some really nice insights from those types of places, because I think they give us a better conception of how we think about human actors and their environments, rather than sort of setting up scientists as these omniscient, all-seeing beings that sort of point out human failures and foibles and so forth.
Russ Roberts: Well, I think it's incredibly important, especially this point about omniscience. Just that observation--one of the things I learned from reading your work is to think about just that concept: that the scientist, the outsider, the policymaker often acts as if they are omniscient--as if they have all the information. Which, of course, they can't. They don't. So, there's sort of two levels to the claim about blindness, right? One is: You are missing a food source I know about that's better. Okay? So, you are out foraging--you are the animal foraging for food--and you didn't realize that over the hill there's this fantastic stuff. You couldn't imagine it, even. You didn't even think to imagine it. So, you missed it. And I know, therefore, that you have a suboptimal performance, and I'm going to therefore subsidize your climbing of the hill: I'm going to make it cheaper. I'm going to level the hill and allow you to get to the "better" food source. And that "better," of course, is defined by me. So, one level that's strange is, obviously, it may not actually be better. I've decided, as the scientist, that you've made a suboptimal choice. And we see this in development economics all the time--we see outsiders giving locals advice on what to plant without realizing the complexity; or they introduce a piece of technology not realizing that it won't be used because it conflicts with the cultural norms of that local culture. We see it whenever we say, 'You're flawed. You're imperfect.' And yet the scientist claims to know. One of the ways this manifests itself that drives me crazy is[?] risk-taking. You know: people say, 'You're making the wrong choice,' on a piece of uncertainty, neglecting the fact that a choice I make that has uncertainty around it can lead me to sleep worse.
That I turned down a risk because, 'Oh, but don't you realize the expected value,' and I'm thinking, 'Why would you ever make that judgment on behalf of another human being?' Well, people do--we do it as policy makers and as social scientists all the time. So, I think this idea of omniscience is important. And this--you mention this--how do you pronounce Jakob's Uexküll's--how do you pronounce his name?
Teppo Felin: It's "Uks-kool."
Russ Roberts: We'll try to find a link to something. His last name, which is tough for Google, is Uexkull. You mentioned one of his concepts. One I liked was 'umwelt'--by which--and I'm quoting you, now,
by which he meant the context of existence. He noted that [and this is Uexküll--Russ Roberts] "every animal is surrounded with different things, the dog is surrounded by dog things and the dragonfly is surrounded by dragonfly things." [And this is you, commenting--Russ Roberts] These Umwelten or surroundings are not objective, but they comprise what the organism attends to, sees, and ignores. Hence, Umwelten vary across species and even across individual organisms within a species.
And, of course--I know you agree with this--people make mistakes all the time. People are flawed. They have lots of cognitive biases. We talk about them all the time on this program. But, the idea that I can tell you what yours are, is arrogant. I'll never forget the world-class lawyer who told me that we couldn't abolish Social Security or forced retirement plans because his secretary would never be able to make those decisions for herself. This was a man who confessed to me that he picked individual stocks, didn't have any indexed mutual funds. And I suggested that maybe he suffered from the same problem--this was a crazy idea for him. So, he had no problem looking down on his secretary; I had, of course, no problem looking down on him. I might be wrong as well. But, the idea that somehow there's this objective truth out there, that the scientist or expert or policymaker can know and that others cannot know, seems to me a very dangerous idea.
Teppo Felin: Yeah. Yeah. This played out[?] actually in really interesting ways. I ran into it just a couple of months ago: in the Journal of Political Economy, Armen Alchian has this piece on, sort of, uncertainty, equilibrium, and so forth.
Russ Roberts: Great piece.
Teppo Felin: It's a great piece, yeah. And Edith Penrose wrote this response in the American Economic Review. But she sort of calls out Alchian exactly on this issue. She says, and I'm going to find the quote here real quick--she says, "For the life of me, I can't see why it is reasonable on grounds other than professional pride to endow the economist with this unreasonable degree of omniscience and prescience, and not entrepreneurs." And basically, she's saying that we're looking at markets and saying that there's no opportunities in markets, and we can sort of prove this through various formulas and so forth, with our math. But, nonetheless, this omniscience that we give ourselves as scientists--why can't we just assume that there's something about these actors themselves in terms of the theories that they have? She doesn't use this language; I've sort of imposed it on top of her argument--but, the theories that they have. Because, they're trying, in uncertain environments, to make sense of the situation as best they can. Right? And, this turns out to be important. I think the part that might be really interesting to your listeners is this notion of omniscience, the way it plays out. I kind of like the sort of thought experiment about $500 bills on sidewalks. And so, it's one that's used by, I don't know, Akerlof and Yellen[?] and Romer and many sort-of economists. And basically the argument is that there are no $500 bills on sidewalks. If there were, someone would have picked them up already. It's almost the equivalent of this notion of natural assessment that Kahneman has: that, sort of, the world is obvious. In the case of economics, that things have labels on them. They have a price. Right? And so that price tells us how valuable something is. And, if it's really valuable, it will get picked up, essentially.
But, the argument here is that there's no way to sort of value and label and create relevance and meaning for every single thing out there in the world, essentially. So, economic actors--or, sort of, the Hayekian man-on-the-spot--are constantly looking and finding new uses and sort of affordances for various items that are novel. And, I think that's a really important point: to think about that omniscience and how our models might need to change to recognize that these economic actors and agents are also sort of theoretical beings, in some sense; and they are trying to make sense of the situations that they are in. Certainly they are making mistakes--which is inherent to what they are doing, what an entrepreneur is doing. But, giving them the benefit of the doubt, I guess in some ways.
Russ Roberts: I think another way to think about it is that the $500 bills that are lying around are disguised. They don't say '$500 bill' on them. I think about Fred Smith, when he started Federal Express--I think I mentioned this recently--the first night it was open they delivered 2 packages. One was a birthday present he was sending to his mom. The other was their only real piece of business. And they were pretty discouraged. And after a few weeks, or maybe a few months, they realized they weren't going to make it. Smith went to Chicago from Memphis to try to get one more loan. They turned him down. He's coming back to close the company. And if that had been the way it played out, people would have said, 'What a stupid mistake he made, trying to deliver packages overnight. It's not profitable. Didn't he realize that?' And, instead, he went--I think he saw on the board at the airport, he saw a flight leaving for, I think it was Reno. Somewhere in Nevada. And he put all of his money, maybe his sister's money--got sued by his sister for raiding their family trust fund--and put it all on the roulette wheel on red or whatever it was. And happened to win. And made payroll for another week or a month, and therefore had a chance; and they started to grow. And they made it. And now everybody can say, 'Well, that was a giant $5-billion, $50-billion bill lying around. Overnight delivery. And he was the only one. Why did it take so long? Why didn't somebody pick it up earlier?' And the answer is: Because it's not announcing itself. It's not beeping. They don't beep, right? It's just such an important point about the way innovation takes place.
Teppo Felin: Yeah. The example I think that I like--there's probably too many Steve Jobs examples of sort of, like, everything--but I quite like this example: So, when Steve Jobs was sort of creating the original Macintosh--this is captured in Walter Isaacson's biography but also in several other places--when he walked into Xerox PARC--so, he knew the CEO [Chief Executive Officer] of Xerox, who I believe was based in New York, but he was walking through Xerox PARC there in Silicon Valley; and he had a chance to see this sort of dormant, latent technology that Xerox wasn't using because they were busy with the copier business and so forth. And the way that that's sort of described: When he saw the graphic user interface and the mouse, there's--there are several sort of quotes on this--there's all kinds of lightbulbs going off and he's saying, 'Holy cow.' Like, 'This is it. This is going to solve a major problem.' And, any number of other people had sort of scanned through and seen what was happening there. So, Todd Zenger, who is my co-author, actually spent time at Xerox PARC around the same time, walked through it. And for him, no alarm bells: nothing went off when he was looking at this technology. Because, he doesn't have the Suchbild or sort of seek-image--a problem. And then, so in that case, it didn't have a label. It didn't say, 'This costs $1 million,' or--there wasn't a market for it. This wasn't being auctioned. There was nothing for it. In fact, the arrangement that Steve Jobs made was that Xerox was allowed to--I think it was--invest $2 million in Apple, and then they sort of took the technology--I can't remember what the exact, you know, arrangement was. But, basically, things that are incredibly valuable aren't--they aren't priced, necessarily.
And so, it's sort of the Suchbild, the questions, the problems that economic actors have, that then sort of helps those lightbulbs and helps create that salience, I guess, for things that for other people might be just trash or this is just engineers just sort of messing around, and this technology won't be relevant to anything. Right? And so, it's important to think through the Suchbild and the theory that these economic actors have. So.
Russ Roberts: I like tying it to the first example. So, Steve Jobs watching the video, and he sees the gorilla and he goes 'Oh, my gosh, there's a gorilla among these basketball players.' And for him, that graphical user interface was the gorilla. It was so blindingly obvious to him that that was something of incredible value. No one else saw it.
Teppo Felin: Right.
Russ Roberts: And, so what does that--does that tell you they were irrational, they didn't realize? No. He had a different way of seeing. He knew what, as you say, he knew what question to ask. Which was a question he'd been asking, and he saw this as a solution. If you don't ask the question, it's not a solution. It's just a toy. Something that people had fooled around with. It was a cool bit of achievement that you'd show off and had no practical application. He saw that it had a practical application. But, the idea of seeing innovation, entrepreneurship as--I think of it--this is a subtle point that you make, and I think it's hard to understand, but I think it's there: There's a difference between perception--certainly visual perception--I want to go back to the foraging example. I look all around. I'm very thorough. I don't see any food. I look all around. But I don't realize I can go over the hill. Or, I don't realize that if I climb up the tree--if I just look at the base of the tree, I'm missing it. And so, the person who comes along and has that innovative insight is able to see, perceive, in a richer, different way. But you wouldn't want to call the first person 'blind.' They see everything. They've got everything. So, I think it's the whole idea that perception sort of--what's in your field of vision is not really the interesting question.
Teppo Felin: Right. Exactly. Yeah. And I guess--that's part of the concern I have, is that there's sort of this discounting of people's beliefs and other types of things. The psychology that we've introduced is, for my taste, really one-sided. And so, if you look at the original work of people like William James and others, it's a far richer conception of human beings--human beings are driven by their beliefs, for example. And so you have beliefs that will then lead you to see certain things. And this, for me, ties into people like Adam Smith. And so, a book that I quite like, along with the original work, obviously, of Adam Smith, is by Emma Rothschild, who is an economic historian at Harvard; she has this book called Economic Sentiments. In it, she quotes Adam Smith as wanting to get into the sentiments and minds of the actors. And then she has this sort of summary that I like. It's not Smith's own words, but her summary of what Adam Smith was after. Which is that: Adam Smith was after a theory of people with theories. And, I thought that was just beautiful; it was just a beautiful sort of conception--
Russ Roberts: Explain that.
Teppo Felin: Yeah. So, it's a conception of human nature that gives them the same sort of proclivities that we as sort of scientists have about them. And so rather than sort of observing human beings as automata, you know, on some kind of chessboard or whatever, that we can manipulate and move in certain ways, rather, we give some dignity to those actors. Recognize that they're acting in uncertain conditions. But, these people also have theories that guide--and models that guide their activities. That then lead to, hopefully, great things, like iPhones and cars and so forth. And I think that that conception is missing. There's a little bit of it in economics. Penrose had a little bit of that intuition. Recently, I visited with Eric Van den Steen, who is an economist at Harvard, and he's talked about sort of beliefs and measure[?] of vision, and I kind of like that notion as well. And, with co-author Todd Zenger we've kind of tried to flesh that out into a theory of thinking about, you know, the role of economic actors: sort of having these theories about how to create value rather than starting with the premise that they are sort of blinded, and so forth, or mistake-ridden. Which they are. But can we also sort of develop, I guess, a model that gives them the same capacities that we have as well. So.
Russ Roberts: Yeah. I--this has, obviously--you mentioned earlier the relevance for Behavioral Economics. One criticism of Behavioral Economics--not one that we've made--is that it's just criticism. It just says, 'Well, economics--the reductio ad absurdum of homo economicus: rational, all-knowing, perfectly informed, a perfectly rational calculating machine maximizing utility--is inaccurate.' Which, of course, it is. No thoughtful person would disagree with that. But, what the Behavioral Economics and Psychology literature have done, to some extent, is just accumulate shortcomings of that model. But, as you point out, it doesn't tell us how people actually behave. It doesn't--so far, at least--as far as I understand; maybe I'm being unfair to it. I don't know. But, you are suggesting that we should get into the--Smith's style, Adam Smith's style--and try to figure out what theories people are using to understand the world. Imperfectly, of course, because they can't--there's no such thing as a perfect understanding. It's one of the lessons of what you've written.
Teppo Felin: Yeah.
Russ Roberts: And to think more deeply about that--or maybe it's not possible. I don't know. What are your thoughts on that?
Teppo Felin: Yeah. I mean, that's the problem, is that any sort of beliefs, particularly if they're radical beliefs about something, can look like delusion to other people. And so, the example I like--so, I actually worked in venture capital before I got a Ph.D. And we were trying to invest in, you know, things that are the next big thing, right? And, the people pitching the next big thing tend to go in herds. There was a lot of similarity around what they were pitching rather than truly novel beliefs about things. But, a window into this that I think is quite interesting: one of the venture capitalists--I don't remember his name right now--one of the venture capitalists who invested in Instagram and then Twitter and many very successful companies--there's this interesting exchange where they had a chance to invest in Airbnb. And, in this real-time exchange when they are talking about whether to invest in this company, they are like, 'Well, we are not sure that this couch-surfing thing is really going to blow up and become a big, you know, hotel chain or anything like that. There's no way that you can compete with a sophisticated hotel market.' And, 'It sort of goes against the intuition--do we really want, you know, people that we don't know staying in our homes? Like, why would you want to rent out your home? Why, when you go to New York, if there's a sophisticated hotel market, why would you stay in somebody's place rather than in a hotel?' And so there's this intuition where it's sort of counterintuitive and contradictory to common sense, almost, right?
And so, you can actually find this email exchange that captures some of this--not exactly in those words, but it highlights this issue, which is the beliefs that entrepreneurs might have: on the whole, funders and the public, looking at them, might say, 'This is nuts. I wouldn't put my house up for anybody to use.' But they stuck with it and they said, 'No, we think this is a thing.' And it turns out that now it is the biggest: it has the most rooms in the world. It's bigger than Marriott and any number of other chains. And: They had what looked like a delusional belief; and once they solved certain problems--like, you know, verification of people's identities, and using an eBay-type recommendation and reputation system that made you feel comfortable renting to someone you've never even met--they solved all those problems. And then it became something. Right? And I think Uber is a similar thing as well. We are all told you shouldn't ride with strangers and so forth. But, they solved a host of problems; held on to this theory and belief that this could be something. So, now when I travel--it's sort of a mess with my kids, and getting a number of hotel rooms doesn't work--we always just use Airbnb, because it's a very simple solution. But, had I been asked to invest in it at the time, I probably would've said the same things as some of these venture capitalists, which is: 'No, you know what? Couch surfing is fine for hippies and whatever. But, this won't be a mainstream, value-creating activity that deserves venture investment.' So, I do think that, yeah, getting into the theories and models that people have, and thinking through who they need to convince, what funding mechanisms--public markets aren't always very good at funding things that are counterintuitive.
And so maybe you have to find more patient investors--people who buy into the theory and the model that will then enable you to actually realize it in a way that others don't think is possible, essentially.
Russ Roberts: Yeah. I encourage listeners to check out the episode we did with Nathan Blecharczyk, one of the cofounders of Airbnb, and also with Sam Altman, who describes--I think it was Sam--how they were accepted into Y Combinator to get help and funding even though it was a ridiculous idea, because they thought the founders were really creative; and that story revolves around cereal boxes. You can listen to that. As well as Marc Andreessen, who I think talks about why he passed on Google--a "mistake." Or, I don't know what you want to call it. I wouldn't call it a mistake, but it's a decision he made that he wishes had been made otherwise, obviously. What's interesting to me is that a lot of those--as you say, a lot of those startups seem implausible. Certainly Airbnb is ridiculous--saying I'd let strangers stay in my house, or I'd stay at a stranger's house. And obviously, there's two kinds of rentals on Airbnb--you can get the whole house, but a lot of times you are taking a room in someone's house. That that would work is crazy. But it does. So, in the early days of that kind of venture capital, people were just so skeptical of those kinds of ideas. Now I think they've gone the other way. It's like, 'They'll figure it out. It'll work.' So, the driverless car is thought to be inevitable. And I have to concede: I sort of think it is inevitable. But it has a lot of challenges. And, it's not obvious: If you had to bet on when it's going to be the dominant form of transportation, that's not a bet I'm happy, eager, to make. I have no idea. I see all the selling points; and I have assumed, foolishly, that all the technical challenges will be solved; all the regulatory barriers will be overcome; that we're going to put the 3-5 million people in America who drive taxis, Ubers, and trucks out of work, eagerly, to avoid people dying in accidents; we assume that no one--or maybe only a handful of people--will die in an accident if there are driverless cars.
So, all of those problems are going to get solved somehow: it's just inevitable. And certainly, enormously large bets are being made that it will happen by more than one company. Which is crazy. Fascinating.
Teppo Felin: Yeah.
Russ Roberts: You want to say something about heuristics? Because heuristics, which are rules of thumb that people use to make decisions to get through life, are an example of something that I think is often called irrational or foolish. And, I think it's an example of where the omniscience of the outsider is making a mistake.
Teppo Felin: Yeah. I mean, there's sort of a debate between the biases literature, folks on biases, so Kahneman, Tversky, and others; and then there's this sort of strain of research in psychology by Gigerenzer and many of his colleagues that focuses on heuristics, which say that these things that look like biases are roughly rational. So, the all-seeing eye article that you talked about, it actually led to a debate where Gigerenzer sort of responds to some things, and so forth. I have some challenges with that notion of heuristics, because the emphasis that they have is on very general heuristics, and I'm much more focused on sort of very specific heuristics almost as questions. And so, for me, a simple heuristic--and this comes from Michael Polanyi, is sort of this Suchbild-friendly[?] way of thinking about it, which is: Searching for an object is, for me, a heuristic. And so, when you have something in mind that you are looking for--like, you are looking for a solution to make computing, personal computing, easy--and so if that's your model, then you are going to quickly identify graphic user interface [GUI] and mouse. Versus, if you are IBM [International Business Machines] and you are saying, 'It's never really going to be a thing. It's going to be these big mainframes,' and so forth, you are not even looking at the world in the same way. And so, the heuristic is this sort of search for, given a certain model of the world that then guides your activities. And so, that's why I like the notion of Suchbild, which you can sort of translate into some kind of heuristic or seek/search image where you are trying to wrestle with a problem that then lets you quickly see something that others aren't seeing. And, again, if our emphasis is that the world tells us what's obvious, then we'll never get to those things. 
And so, it's the sort of difference in, you know, me, coming to a painting and crying because it has some meaning to me, or it answers some question or whatever; versus somebody else just walking by and saying whatever: All I see is--
Russ Roberts: a bunch of color--
Teppo Felin: Yeah. A bunch of color, or a landscape, or what have you.
Russ Roberts: Shapes.
Teppo Felin: Exactly. And then, so, rather than focusing on or thinking that the world is going to give us this relevance and meaning, we need to impose that with the questions that we have. And I think that goes for the arts just as it goes for entrepreneurship or any other activity, because it guides our behaviors in powerful ways, and leads us to, like I said, novel, interesting, creative innovation and so forth.
Russ Roberts: Wouldn't you say, though, that so many of the most important innovations of the last 50-100 years came from people who didn't look for something that would solve that problem? I mean--I guess part of it's semantics about what you mean by 'look for,' and what you mean by something that would solve the problem. But, it strikes me that--one of my favorite examples of this is the slide rule. So, the slide rule is this fantastically amazing human creation, which many of our listeners--by the way, I mentioned in a recent episode that a lot of our listeners are 25-34. They're not more than half, but they are the largest group--about 36% in the survey that we did--larger than, say, the 36-45 group. We have a lot of young listeners, is all it means; not the dominant group. But those young listeners, and even the ones under 44, which is also a fairly large group--they have never seen a slide rule. You probably don't know what one is. And it was a computing device. It was a way to make pretty accurate--not perfectly accurate, but pretty accurate--calculations of various kinds: trigonometry, large multiple-digit math problems--calculations that, until about 1970 or so, had to be done either with a book with a bunch of tables in the back that you looked things up in, or with a slide rule. And my dad had such a book. I don't know why he had it. He wasn't a STEM [Science, Technology, Engineering, and Mathematics] kind of guy. I think it's because he was a Psych grad student and had done some statistics in his time. So, you had to use that book, or you had to use a slide rule. And every engineer had a slide rule in his pocket, or somewhere in his briefcase. And the biggest slide rule company was Keuffel and Esser, K&E--I think they were the largest. And, you'd think: Well, how do you make a better slide rule? We'll make it out of more durable material. We'll make the marks finer so you can make the calculations more accurate.
But what kills it is the pocket calculator, of course. Which ends up being not only more accurate than the slide rule but ultimately much less expensive. Which is really mindblowing. And that comes out of nowhere, right? It's the kind of innovation that disrupts an industry, because they just can't imagine--they can imagine, but they didn't know where to look for it. So, I would think that's an important part of the story. It's a different kind of looking, at least it seems to me.
Teppo Felin: Yeah. I think that there's something intriguing about thinking about the nexus of the questions that organisms or people or entrepreneurs have, and then serendipity. I think of the story of Archimedes. And so, Archimedes was given this challenge by--was it the King? I can't remember--'How do I measure the volume of an irregular shape?' Essentially. I think it was a crown or something like that.
Russ Roberts: Yeah, it was a crown.
Teppo Felin: And he says, 'No, this isn't possible.' And so, he goes home and he lowers himself into a bathtub; and he notices that the bathtub, in sort of commensurate fashion, the water raises. And he says, 'Holy Cow! Eureka!' he runs out onto the streets of Syracuse [Greece] naked and says. And so, it's sort of where--is that serendipity or is it this question that's been posed to somebody who is smart, who has a theory, and then observes and says, 'Wait a second. This is different.' Because all of us have lowered ourselves into bathtubs, but we might not associate that with that question, say, right? And then, so, I think there's an important role in the questions that we pose. And, when those meet certain observations out there in the world then we come out with insights. And so, just to give another example: So, Newton observed--he actually told this story to a friend and it was sort of captured--he said he observed the apple falling. Any number of people have observed things falling, right? But we don't have sort of a theory of gravitation that immediately pops to mind. It's with the right question and theory that things start to take on new meaning and relevance. In the same way, he didn't have Big Data to highlight how white light actually is sort of composed of the rainbow; and he'd observed rainbows just like a number of other people had observed rainbows. But it was with the question and the interpretation that these observations then took on new meaning. And so, no Big Data would really tell you anything about that, right? So you could run Big Data and observations; and in some ways, we had. All of us had seen things falling, or throughout history, things had fallen. Or, we'd seen rainbows. But it's only once we have the right question and problem to solve that these things take on meaning and become quite powerful. 
But, absolutely: there's some form of serendipity in sort of the question meeting this encounter with the graphic user interface or the apple or the rainbow--what have you--that then creates tremendous insight about what might be possible.
Russ Roberts: Well, I also want to suggest--and we'll close on this--I also think there's something mysterious--not necessarily mystical, but it might end up being mystical--but something mysterious about innovation and insight and perception. So, you mentioned Archimedes: you said any number of people lowered themselves into a bath before. Well, so had Archimedes. So, if you ask, 'Why this time?' he couldn't have answered that question. Part of it, he could have, because he said, 'I'm thinking about the crown.' But he couldn't have understood--most of us can't understand how we come to that insight at that moment. My son is just reading Oliver Sacks's book, An Anthropologist on Mars--no, it's The Man Who Mistook His Wife for a Hat. Both are great books. And in The Man Who Mistook His Wife for a Hat, there are two autistic--I think they are sisters, twins. And Sacks spills a box of matches on the floor. And they both say, immediately, the two of them together, 'Oh, there's 117 matches on the floor.' Then they both say, '39.' And 39 is a third of 117. And the incredible thing is, they can't multiply 39 times 3. If you'd given them that problem they couldn't solve it. But they can see, in some fashion we can't understand. [Note: The actual number in the book is 111, which is 37 times 3. The latter is a better story because the twins are able to see and come up with prime numbers.--Russ Roberts, from an emailed correction.] And so, I think the tendency to model the human brain as a computer, and to assume that all problems will be solved by computers because computation will get better, assumes that all problems are computational. And I think many problems are not computational.
My favorite example of this is Andrew Wiles--Archimedes is pretty good, but Andrew Wiles who solves Fermat's last theorem, and then is on the front page of the New York Times for his insight, and then discovers there's been a mistake in it, and spent a little over a year trying to re-prove something that he assumes is true. And then one day he just says--he sees--and he can't explain how he saw it. It's not like he worked on it in a different way, or his brain--he just tried harder. Something just clicked. And some part of the human experience is that clicking that we don't understand. Maybe we'll come to understand it someday. It's possible. But I don't know.
Teppo Felin: Yeah. I think just the comfort[?] with uncertainty and that type of serendipity and so forth is important. And I guess, with a lot of science, we have all kinds of certainty about what's obvious and so forth. And I think, over time, the people that are comfortable sitting back and maybe questioning some of those foundations might then yield interesting insights that are just fundamentally different. I've been working with this co-author, Stuart Kauffman, who is himself an atheist but wrote a book called Reinventing the Sacred; and he said essentially what we've done with science is we've taken out the mystery, in terms of being comfortable with uncertainty and emergent and other types of dynamics. And I think that this issue of perception highlights that. And I think we can get a window into some of this by thinking about the Suchbilds that we have--in terms of where we look for meaning, where we look for insight, and what types of problems we're trying to solve--and that's part of what the essay and these pieces with Jan and Joachim[?] aim to develop.
Jul 23 2018 at 2:22pm
All attention is selective attention. Long ago, William James pointed out that, if it wasn’t, everything would be a “blooming, buzzing confusion.”
I encountered this video years ago on a Jon Kabat-Zinn led meditation retreat. Like most people I got the pass count right but was amazed to find I had missed a gorilla. In what may well have been a transparently self-serving interpretation, I demanded to know why this result shouldn’t be viewed as a triumph of one-pointed attention.
Jon answered with the example of a quarterback who needs to stay focused on his receivers while also noticing pass rushers. He is a superb meditation teacher, but I thought that was an unresponsive answer. The quarterback was coached to keep track of two things. We were coached to keep track of one thing.
There is something that could change my mind on this and I’m surprised I have never seen the data on it despite reading about this experiment many times since. Did those noticing the gorilla pay a price in counting accuracy? I don’t know but I suspect they did. If anyone knows please post the information here. If gorilla noticers do count passes as well or better, then we non-noticers really do have an impoverished level of attention. If not, I’m claiming success for our group.
I enjoyed this podcast but thought it went way too far in attributing claims of “omniscience” to behavioral economists and policy makers. I have literally never heard either claim omniscience. If you can really identify some who do, I will happily join you in denouncing them. In the other cases I suspect a straw man.
It is hard to design an experiment where the experimenter doesn’t know more about the situation than the subjects. That’s very far from a claim of omniscience.
And a belief that you will get better results by implementing some government policy than by not implementing it is no more a claim to omniscience than the reverse. Agnosticism on the matter, not libertarianism, is the position of humility.
Jul 28 2018 at 11:32am
When I did this test in a group, the one person who did see the gorilla was bored and deliberately wasn’t counting. I am curious whether the 20% who see the gorilla just aren’t following the directions.
Jul 23 2018 at 9:30pm
The ability to focus on what matters and shut out extraneous noise is a feature, not a bug. At any given instant, there are thousands of sensory signals all vying for a person’s attention. The brain needs some way to filter out most of those sensations so that the one or two really important signals get through. Humanity would not have lasted long if the sound of birds chirping overrode the low growl in the bushes.
One of the reasons it took decades before a supercomputer could beat a grandmaster at the game of chess is that grandmasters ignore the noise – that is, the literally billions of “garbage moves.” Instead, they concentrate on the moves that will lead to a strategic or material advantage. Computers, by contrast, don’t “understand” chess and must run through every possible move – good as well as bad. When its time is up, the computer selects the move for which it has calculated the highest relative value.
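The exhaustive search described here can be sketched as a bare-bones minimax routine. This is a toy illustration with a made-up game tree and hypothetical helper names, not any real engine’s code; real programs such as Deep Blue add pruning and hand-tuned evaluation to cut the search down:

```python
# A minimal brute-force minimax sketch (illustrative only; the tree,
# values, and function names are made up for this example).
# The engine explores every line of play down to the leaves and backs
# up values; it has no notion of which moves are "garbage."

def minimax(node, maximizing):
    # A leaf holds a numeric evaluation of the position; an internal
    # node is a list of child positions reachable in one move.
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def best_move(children):
    # Score every candidate move (the opponent replies by minimizing),
    # then pick the one with the highest backed-up value.
    scores = [minimax(child, maximizing=False) for child in children]
    return scores.index(max(scores)), scores

# Three candidate moves; after each, the opponent picks the worst reply for us.
tree = [[3, 5], [2, 9], [0, 1]]
move, scores = best_move(tree)
print(move, scores)  # → 0 [3, 2, 0]
```

Note that the routine visits every leaf regardless of merit, which is exactly the contrast with the grandmaster drawn above.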
Jul 24 2018 at 6:40am
“The ability to focus on what matters and shut out extraneous noise is a feature, not a bug.”
The potential problem is not with the ability to filter out extraneous noise, but rather with the fact that selection of which noises matter and which are extraneous isn’t perfect. In the Steve Jobs example, lots of successful people filtered out the GUI, for example.
Jul 24 2018 at 5:30pm
True. However, we have a surprising amount of control over what we filter out and what we let in. For example, say I’ve decided to buy a new car of a given make and model. Suddenly, the world seems filled with those cars – cars that I’d never noticed before. We’ve all had this type of serendipitous experience. The trick is learning how to consciously “program” our mental filters.
Jul 25 2018 at 1:48pm
In his books and instruction, renowned tracker (of animals and people) Tom Brown covers the scanning process and, in particular, the problem of “learning to see” tracks and signs of activity.
In particular, he relates the story of a Special Forces officer who worked with the Montagnards in Vietnam. Allegedly, the officer was told one can’t spot tripwires and the like in the jungle. And, because of that mental conditioning, he never did – though his indigenous bodyguards had no problem spotting them. As the story goes, the soldier tried an experiment when he returned stateside: he mentally conditioned himself to see golf balls. (How? Why? Dunno.) Eventually, he began seeing them everywhere he went, amassing tens of thousands of golf balls over the years.
Whether or not this anecdote is exactly true (Tom is prone to some B.S., though it’s a weirdly specific story to make up), the principle is sound. A person or animal will leave some measurable, if minuscule, evidence of its passage through almost any environment. And with enough training, one can condition the reticular activating system to filter for the proverbial needles in fields of hay.
Jul 24 2018 at 8:42am
Umwelt psychology is interesting. Objectivity is hard!
Perhaps underscoring this, as far as I can tell the stories about FedEx and Fred Smith are apocryphal.
Apparently Smith himself wrote that he did indeed use the idea of computerized package delivery in business school, but doesn’t actually remember the grade.
I haven’t found any indication that the story of saving the company at the roulette table is actually true. I suppose, though, that if it were true, it could help illustrate how much luck plays into business.
I did read, however, that Smith (presumably accidentally, although maybe recklessly) killed two people while driving.
The actual Fred Smith seems to be a well-connected, well-born guy, with perseverance, luck, and less worry about how far he’d have to fall than the typical guy.
Jul 24 2018 at 8:57pm
I cannot vouch for the Las Vegas story (and various tellings differ on the ambiguous point of whether the gambling was blackjack, craps, or roulette; blackjack is the only one not dominated by chance in the house’s favor), and I certainly do not endorse that choice.
That said, it seems the story was publicized in the book Changing How the World Does Business: FedEx’s Incredible Journey to Success – The Inside Story by Robert Frock, a founding executive of FedEx.
Jul 27 2018 at 9:07am
Thanks for the book reference. This is better than word of mouth, and roulette winnings are possible. I’m sure you understand my skepticism though, book or not – given that so many founder stories come out as untrue – for instance, Steve Jobs’ garage.
I suppose though, that it can be taken as a proverb to more directly illustrate that business involves risk.
Jul 24 2018 at 11:21am
A quick comment: apologies for all my verbal tics. As someone poignantly mentioned to me in an email, I somehow managed to pack in a distracting number of “sort of”s into the conversation. A quick (cmd-F) search of the transcript on the right suggests a whopping 95 instances – along with all kinds of other verbal fillers. Ouch. I clearly need some media training, if I am to do something like this again.
Anyways, I’m hopeful that interested listeners can look past that, and find some gems in the discussion. And interested listeners may also want to check out the associated essay in Aeon Magazine and related articles in Psych Bulletin & Review (with Jan Koenderink and Joachim Krueger) and Strategy Science (with Todd Zenger). These pieces are less tic-filled, I think (though I have my share of writing-related tics as well). See the “Delve Deeper” section on the right for links to these articles.
Jul 24 2018 at 6:34pm
I noticed the verbal tics the way I noticed the gorilla in the selective attention test, which is to say not at all.
I will note, however, that decades ago, I was sitting with my graduate advisor for a seminar, and at the end of it he criticized the speaker by saying that he [the speaker] said “umm…” 146 times! For the next few talks after that, I started noticing everyone’s verbal tics, and the whole experience of going to a talk became much less enjoyable. Fortunately, I stopped noticing after a while.
Jul 25 2018 at 7:51am
Michael: good point – yes, many others have mentioned the same thing to me (i.e. didn’t notice tics – wasn’t looking for them). On Twitter one commenter said the tics weren’t part of their Umwelt (well, the Suchbild-Umwelt relationship is fascinating). Alas – by pointing out the tic, I’ve now managed to create a “sort of”-Suchbild for any future listeners.
Jul 26 2018 at 7:26am
On the plus side, “verbal tics” (or at least non-lexical stumbles like “um” and “uh”) may aid listener comprehension. (See, e.g., “The disfluent discourse: Effects of filled pauses on recall.”)
Related fun-fact: Even ASL has signs for “um” and “uh” (presumably for translation purposes).
Jul 27 2018 at 12:52pm
The EconTalk episode with John McWhorter discussed the power of undefinable interjections like “um” and “like” that we use to communicate nuance, interest, and attention.
Jul 29 2018 at 12:36pm
Neat. John is fantastic. It’d be interesting to see Bryan and John talk about the value, if any, of foreign language instruction in high school and college.*
Aside: One of the fun things about the “Invisible Gorilla” experiment is the follow-up results (i.e., what happens when people who know about the first video watch a very similar, but ultimately different, video).
Jul 24 2018 at 11:30am
It is relevant to notice that a very serious problem and persistent limitation of artificial intelligence is that we don’t know how to give an AI a good general ability to direct selective attention toward what is relevant. As commenter Richard Fulmer said above, “The ability to focus on what matters and shut out extraneous noise is a feature, not a bug.” It is a feature we have not been able to add to AI in a general way. Here is an introduction to the frame problem for AI. (Also note the now ironic use of “obvious”.)
All our success in AI comes from programmers building into the software algorithms for sufficient focusing guidance for specific defined problems. If the problem to solve (i.e. the question, so to speak) is changed significantly, the programmers must write new code. The AI doesn’t understand the problems in any way that would let the AI arrive generally at how to focus and find solutions.
It’s obvious that this was a great episode of EconTalk. 😉
Jul 27 2018 at 5:07pm
Great comment. Yes, I don’t know how all the AI-beats-human-rationality stuff manages to ignore the limits of computation and indeed the frame problem (discussed by McCarthy and Hayes years ago, in 1969). This problem doesn’t even get mentioned in the rationality context (this 2015 Science piece is a good example). Even Charles Babbage and Ada Lovelace anticipated these problems. Here’s Lovelace:
Yet many continue to use computation and calculation (NP-hard/complete, etc) as the main/key metaphor for rationality (as a side note: with Stuart Kauffman et al we wrestled with what this means for entrepreneurship-innovation here).
I like your “problem to solve”-angle. Michael Polanyi had a nice 1957 article titled “problem solving,” which also has a definition of heuristics that is useful (I struggle with the generalized cue-based focus of Gigerenzer et al) and links to questions. And the species/human-specificity of all this (perception and Suchbild)—amidst all the generalized, computational models—never gets any attention.
Jul 27 2018 at 9:52pm
Mind Matters (cf. mindmatters.today) is a podcast and a news and commentary site operated by the recently established Walter Bradley Center for Natural and Artificial Intelligence. (cf. centerforintelligence.org). Neurosurgeon Michael Egnor offered some thoughts during a panel discussion at the launch event and later elaborated on them in an article at mindmatters.today.
Here’s a short excerpt that touches on your comment.
Jul 26 2018 at 12:21pm
Just to highlight the innate ingenuity (or rebelliousness?) of the human race, I can report that when I watched the video the first time, I took the title “selective attention test” as a clue and predicted that at the end I would be asked how many passes the black team had made, so I started counting those and immediately saw the gorilla. Needless to say, I failed at both tasks, being so bemused by the gorilla…
Jul 27 2018 at 9:05pm
Interesting. I’m not surprised that you saw the gorilla when you changed your focus.
It might seem arbitrary whether the test asks you to count one side’s passes or the other, but my guess had been that it matters. Looking for passes by the players in white means filtering out black clothing (and a gorilla with black fur), whereas looking for passes by the players in black should involve filtering out white clothing and seeing the gorilla – which you did!
Jul 29 2018 at 11:04am
This episode calls to mind the episode with Iain McGilchrist and the two modes of attention, left brain versus right brain.
Also this writing from 1971:
In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.
Simon, H. A. (1971) “Designing Organizations for an Information-Rich World” in: Martin Greenberger, Computers, Communication, and the Public Interest, Baltimore, MD: The Johns Hopkins Press, pp. 40–41.
Jul 30 2018 at 2:14pm
This is yet another great episode. A quick remark. As noted in the conversation, the “selective attention test” has several structural defects. The framing bias introduced by the initial instruction – count the number of passes – is probably the most serious of them. On top of this, I’d say that the name of the test is also an error-prone feature. Any competitive person without knowledge of what “selective attention” means will take the test as a generic attention exercise, and proceed on the assumption that the challenge is attention diversion and that the winning strategy is focus.