Russ Roberts

Phil Rosenzweig on Leadership, Decisions, and Behavioral Economics

EconTalk Episode with Phil Rosenzweig
Hosted by Russ Roberts

Phil Rosenzweig, professor of strategy and international business at IMD in Switzerland and author of the book Left Brain, Right Stuff: How Leaders Make Winning Decisions talks with EconTalk host Russ Roberts about his book. The focus of the conversation is on the lessons from behavioral economics--when do those lessons inform and when do they mislead when applied to real-world business decisions. Topics discussed include overconfidence, transparency, the winner's curse, evaluating leaders, and the role of experimental findings in thinking about decision-making.



Highlights

0:33 Intro. [Recording date: March 18, 2015.] Russ: We're going to talk about your book today, and the general issue of how you make wise and good decisions. The framework of your book is that many of the experimental results that we often hear about relative to decision-making are incomplete or not completely useful for real-world decision-making. Give us an overview of that literature, mainly in behavioral economics, and what it is missing. Guest: Okay. Well, first of all, good to be with you; and when you announced me as being from Switzerland, people might expect a somewhat different accent. I'm originally from California, but I've been here for quite a while. And I have been a professor in a business school, so I begin with managers making decisions in real-world settings. I'm not a cognitive psychologist. So I come at decision-making in a somewhat different way. The really good news is that for the last number of years there's been outstanding work done by cognitive psychologists about the ways people make judgments and choices, and we've learned a lot about what people do well and what they do less well. And a lot of that has now been made accessible to the general public and reached a popular audience. And I'm a big fan of that work. So, my book is not a critique of that. My critique, though, is of people who have taken that work and generalized the findings to settings that can be very, very different. And one of the problems we have is that if you are trying to do really good work about the way people make judgments and choices, you often try to do that work in controlled, experimental settings. That's great for certain kinds of decisions. But when managers, when leaders are making decisions in real-world settings, those kinds of controls don't exist. You are not just selecting options that are presented in front of you; you can actually change the options. You are not just picking a or b; you can maybe improve a to be a'.
And you oftentimes also have a competitive dimension. So, a lot of that work is very good. As I say, I'm not trying to debunk it. But we need to be a little bit more careful in how we apply it. Russ: So let's start with one issue that you hear a lot about, which is overconfidence. There's a lot of experimental work that shows that people are overconfident. Explain why that's a very, very incomplete summary. In particular, it has limited implications for how you should make your own decisions. As you point out in the book, a lot of people learn from that literature and say, 'So don't be overconfident.' Besides the fact that it's hard not to be overconfident--or better to say it: it's hard to change your behavior when exhorted. But that's not the only problem. Guest: Well, there are a lot of things I think are problematic with the word 'overconfidence'--the first is the way we tend to use it in everyday speech. We tend to apply it retrospectively when something has gone badly. So, 'Gee, this didn't work out. Well, I guess I was overconfident.' Ex ante, nobody thinks they are overconfident. And if something goes well, you say, 'You see? I had an appropriate level of confidence.' If something goes badly we say it was overconfidence. So that's one problem. That's not, by the way, the problem that good researchers have, because they know not to make those sorts of attributions retrospectively. Researchers have tended to say that overconfidence is a level of confidence that is excessive or unwarranted. Well, what do we mean by that? If I think I can do something that I've never done before, is that overconfidence? Well, by one definition it is. If I think I can run 10,000 meters, 10K, in a time I've never done before, am I being overconfident? Well, the thing is, I'm not simply predicting what I'm going to be able to do: I am the person who has to do that.
And when it comes to things that we actually have to do, where we're not just predicting something that somebody else will do, but we have to do it, having a level of confidence that is somewhat elevated, somewhat optimistic, can, for many people, actually lead them to do better. So, if it leads you to do better, well, is it overconfidence or not? By one measure it is. It's a level of confidence beyond anything I've ever done. But if it encourages me to do better, it's really not excessive. So, that's one way to look at it. I could go on. There are lots of other things in those words. Russ: Yeah; I want to talk about that--let me stop you there. You talk in the book about leadership in many examples. And when you ask, should you be confident in your ability to do something you haven't done before, that's a very important trait in an entrepreneur in a startup: you'd better have some confidence in yourself and in your employees, who look to you for inspiration. Guest: Right. And so, I spend quite a bit of time talking about leadership. And let me just point out--the subtitle of the book is How Leaders Make Winning Decisions. There are two really important words there. I didn't say, 'How People Make Good Decisions.' I'm talking not just about ordinary people--shoppers, consumers, investors, and so forth--but leaders. When you are in a leadership role you are guiding other people; you are trying to inspire them or mobilize their action. And I didn't just say 'Good Decisions'; I say 'Winning Decisions.' What's important there? A winning decision, to win, involves doing better than somebody else. There's a relative dimension. When we're talking about somebody who is inspiring others, leading a team or starting a business or something, you may need to convey to other people a level of confidence that they and you collectively can achieve something that hasn't been done before.
You get into a very curious aspect of leadership where, if you are brutally honest with everybody and you tell them, 'Gosh, the odds are against us; this has never been done before. I'm not sure we can do it,' you very likely won't do it. So, we like to talk about transparency and things like that, but one of the traits of a good leader is how they selectively convey information to their followers; and sometimes try to inspire them to do more than what they have done before. Is that deception and manipulation? By some definitions, yes. By other definitions, I think it's probably one of the highest traits of a leader. Russ: I think that's why most academics don't make very good leaders. They are prone to worry about, 'What about this?' and 'on the other hand'--caveats and hesitation, at least in some settings.
7:43 Russ: But you give the example in the book, really a very powerful and very dramatic example, of Apollo 13. Talk about the leadership that was displayed there by--I think Gene Kranz is his name. Guest: Right. Well, the person I focus on there is Gene Kranz. Many people will remember the 1995 movie: Tom Hanks played Jim Lovell up in space, but it was Ed Harris who played Gene Kranz, the flight director on the ground. The fact is they worked in shifts and he was not the only flight director, but he was the one focused on there and he's gotten a lot of the credit. He's the guy who said, 'Failure is not an option,' and so forth. But if you look at it, the odds were clearly very slim that Apollo 13 was going to come back. It certainly was very, very far from a sure thing. Yet he had to convey to people on his team, 'Don't talk to me about the odds; don't talk to me about the difficulties we have; we are going to get this done. Failure is not an option.' And so on and so forth. And in fact, I contacted him and asked him about this, because so much of what NASA (National Aeronautics and Space Administration) was about was extremely good, objective analytics about probabilities--you know, 3-sigma and all these things about trying to get the chance of failure very, very low. So, clearly the people there were thinking in a very good, detached, analytical way before the mission, planning for the next mission. But at that moment in time when those guys are up there and you are trying to bring them back, you have to convey to people that the odds don't matter; we have to get this done. Russ: And they might not have. In which case he would have looked overconfident. As you point out, it would be cherry-picking. But the other point I thought was important was, he didn't use that confidence--I would call it confidence, not overconfidence--he didn't use that confidence to sort of sit back and say, 'Oh, we'll work it out.' They relentlessly worked, systematically, at reducing the odds of failure.
Guest: Yes. Exactly right. Because his job there is not to sit back and say, 'Gosh, I wonder what the odds are.' His job is to say, 'Let's make it happen. Let's do better.' I'll give you a slightly different analogy; I don't know if you'll like this one or not. It's not in the book. But I was reading quite a bit about the approval decision to go ahead with the mission to kill Osama bin Laden. And there was a lot of time spent at the White House, between Panetta of the CIA (Central Intelligence Agency) and Obama's team, about what's the chance that this person we see in this house in Abbottabad is in fact bin Laden: is it this percentage or that percentage? And at some point, Obama apparently said, 'Let's just assume it's 50-50. We don't know.' And I think what he meant by that is: we may look at the intelligence a little more closely and realize it's 55 or 60 or maybe 45%, but you know what? That is not time well spent. We should be spending our time getting this mission to be successful rather than trying to think, can we be a little more precise? So, again, that's an example of somebody who says, 'Let's use our time not to assess the odds but to improve the performance that we are going to deliver.' Russ: Yeah. One of the interesting things--you talked about how in the case of Apollo 13, President Nixon wanted to know the odds. Which would have been very useful for him. He wanted to know, probably, how much time to spend getting ready for a catastrophe, because that was his natural concern--he couldn't do anything to bring them back more quickly. He had to deal with whatever the fallout would be if it failed. So he was curious, I assume, among other reasons, to know how much time to put into that. But I love that Kranz wouldn't answer the question. Guest: Right. And in the movie, Kranz just waves the question away, and the other people report back to the White House that it's, I don't know, 3-to-1 against. And I wanted to know if that had really happened.
So I managed to send emails to Gene Kranz. I had a few questions, and one of those was, 'In the movie, the White House asked. Did that in fact happen?' And I just had a brief exchange with him; it wasn't that lengthy. But he says, 'I think they may have. In any event, I wouldn't have given them odds, because I said to my team: The crew is coming back.' And again, that to me is an example of what I try to talk about in this book, titled Left Brain, Right Stuff. The rational, detached, analytical side says of course the odds are less than 100% that they are coming back. But we're only going to bring them back if we push ourselves to do the best we can. And sometimes you need to be somewhat optimistic, or insist that you can actually shape outcomes. And so this is one of the key ideas in the book--if we come back to this notion that the decision research, I think, sometimes falls a bit short: so many of the studies about judgment and choice involve evaluations of things that we can't shape--what the weather is going to do, are the Knicks going to win next week, who is going to win the next election--you can't really change-- Russ: the price of a good that we're thinking of buying. Guest: Precisely. And that's very good if you are doing an experiment. Because they can say, Russ thought it was going to do this and Phil thought it was going to do that; and we can ask a whole bunch of people and we get really good data. It's much more complicated to do research if you are asking people about things they can control, because you don't have an objective phenomenon that we are all trying to assess. There's a reason why scientists like to ask you to choose from a set of options that you cannot change, with certain parameters that you cannot alter, or make an estimate of something that you are not able to influence. However, in real life, of course we have those kinds of decisions; but a lot of what we do on a day-to-day basis we can actually improve.
I'm sure you prepared for this interview, and you thought, 'Well, it could go well or less well; how do I prepare to make it as good as possible?' And you are going to drive home at the end of the day, and you don't just get in the car and say, 'Shall I choose to drive safely or not?' You have to make it happen. And then you cook dinner; most of our lives, I would tell you, are about doing things. Making judgments and choices about things that we cannot influence is probably a minority of what we do. Russ: Yeah. I'm very confident this is going to go well, by the way. And we don't have very many objective measures of that, so I'm going to persist in that view even after it's over.
14:42 Russ: The other part of it I thought was so interesting--there's a number of things I don't like in the literature on this so-called overconfidence. 'How long is the Nile River?' is a typical question, along with 'How confident are you of your answer?' Well, we haven't thought about it that much. If I get it wrong I can look it up; I'd never guess if it was important. I'd look it up. So, if you told me, 'Take 30 seconds, and you can use any research you want--now, how confident are you?' I'm going to be very confident. And I'm going to be probably right. So a lot of these laboratory results are very artificial; as you point out, they are not things we can control. They are usually things that are on the outside. But more than that, they are not things we care about very much. They are not things that we are accustomed to making decisions about. And I thought one of the nicest points you make in the book is, you know, you ask people if they are a good driver, if they are above average. And, what's the number? 70% in some settings say they are above average. My joke used to be--I've made it before; I apologize to listeners: At least half of my fellow graduate students thought we were in the top 10% of our class. Maybe even the top 5%. And I like to think that a disproportionate share of macroeconomists think they have an inside shot at being Chair of the Fed. And that distorts their view of the Fed, unfortunately. So, obviously, sometimes we overestimate our abilities. But as you point out, there are many times we underestimate them. Guest: Right. And so, right there what you've done is you've taken the word 'overconfidence' and you've mentioned a few different experiments that are actually very different in their nature. And one thing I try to tease apart here are these. Now, I'm quoting here some work that was done by a fellow named Don Moore, who is a professor at Berkeley, and his colleague, P. J.
Healy, who I think is at Ohio State. They wrote an article a few years ago called 'The Trouble with Overconfidence.' And what they point out is that this one word has been used to mean three very different things. First of all, when people say, 'Oh, yeah, I'm 90% sure of this,' and it turns out actually they should be much less sure--they were too precise. We tend to be too precise in the ranges that we give, or in our certainty about certain events. A second thing--you mentioned the word 'overestimation.' There is that, too. If I think I can complete a project in 6 months and actually I can't, or if I think I can high-jump 6' and I can only do 4', that's overestimation. But then the third one, and you touched upon this: when I think I'm in the top 10% and a whole bunch of us think we are in the top 10%--that third one is overplacement. Now, let's just stop there. Overestimation, overprecision, and overplacement. If you look at the work on overconfidence, we tend to mean one of those three at any given time. And we kind of go back and forth among them, using evidence of one to claim another. What the research shows, regarding overprecision: yes, there is very robust evidence that most people are over-precise much of the time. When I say I'm 99% sure, it's not really 99%. Part of it has to do with just the way I use words. Russ: And the social convention, that it's sometimes awkward to concede that you don't know something for sure. Guest: There is that, too. And it becomes a figure of speech: 'Oh, I'm 99% sure.' Well, it doesn't really mean 99 times out of 100. But however you think of that, we do tend to be over-precise. The evidence on estimation and placement is nowhere near as powerful. There are some examples where people do tend to overestimate their abilities. But there are other examples where people underestimate their abilities. How many times have you heard somebody say, 'Oh, I'm no good at math.'
Or how many Americans say, 'Oh, I could never learn another language.' Well, you know, you live in Switzerland, where I live, and your secretary speaks four languages. And he or she does not have a great education; you just kind of grow up doing it. I was in South Africa recently; ordinary folks there speak 5, 6 languages. You can do it. So, you may overestimate some things; but we are also prone to some underestimation. And then as far as placement goes--this is about how many people think they are in the top 10%. I give the example of driving: How many people think that they are better than average? Well, most of us do. But then, in my research, when I do a survey, I also ask a second question: 'How good are you at drawing a picture, a sketch, a likeness of somebody?' And then I ask, 'How do you think you rank compared to others? Are you in the top 5th, the second 5th, middle, lower, or bottom 5th?' Now: If human beings truly have a consistent propensity for overplacement, you would think, not just in driving but in drawing, they'd all say, 'Yeah, I'm better than most people.' But the exact opposite happens. Most of us say, 'Oh, I'm not very good at drawing.' And you know there are good drawers out there, so you figure, I must be worse than average. But when we all think we're worse than average-- Russ: It's unlikely. Guest: It's unlikely. And so, what you find is that, yes, there is a tendency toward overprecision. But overestimation and overplacement can be manipulated by the questions you ask and how you solicit the information. And I would say the problem is not with the respondent. The problem is with the people who have been administering the surveys, because they have not had balanced designs. So, it's just a much more complicated area. And that's why I think the simplistic conclusion that 'people are prone to overconfidence'--there's really much more there than meets the eye.
20:56 Russ: I really like your analysis of the driving example. We don't have a lot of information about lots of other drivers, other than we see accidents and we're not in them every day. And it might be perfectly rational and reasonable--I don't like to use the word 'rational' so much--but reasonable to think, at least without more information, that you're better than average. When in fact, of course, you are not. Just two quick reactions to your examples. I love it when a baseball announcer talking about a right-fielder will say that he's got a better-than-average arm. They say that about pretty much every right-fielder. Right field tends to be the place where teams put their best throwers in the outfield. And they do have a better-than-average arm compared to the average American. They have a better arm than I do. But only about half of the right-fielders are better than average as right-fielders. And announcers almost never notice that. And part of the reason, by the way, reasonably, is that they all throw pretty close to each other. There are very few that have an extraordinary arm. So one way to capture that is to say, 'better than average.' Of course, it's not literally true. Guest: It's about the reference set. And you may well be right: they have a better-than-average arm compared to most outfielders; that's why they are in right field. But it doesn't follow that they have a better-than-average arm for a right-fielder. Russ: Correct. Guest: The reference point matters a lot. Russ: And then lastly: I got a C in art growing up. This was in the days when people didn't worry so much about self-esteem. But I grew up thinking I was a horrible artist. Which in some dimension I might be. I don't know. But I assumed I could never learn to draw. And about 10, maybe 15 years ago, my wife and I decided that we were going to learn how to draw, or at least give it a shot. And it turns out, you can actually learn to draw. Almost anybody. Even me.
Almost anyone can learn to draw a portrait of someone that looks somewhat like them. I didn't think it was possible. I had underconfidence. Guest: Well, yeah, you underestimated. But you mentioned the information we have about ourselves and others. So, rather than saying people are overconfident or underconfident, the word that I like better--and again, I borrowed this from Don Moore at Berkeley, who I think is a very, very bright guy--he says people are myopic. You see very clearly what's close to you, which is yourself. You know how good you are--as a driver, as a drawer, lots of things. And you have less information about everybody else. When you are very good at something--and most of us are very good at driving, and we know there are some bad drivers out there--it is not unreasonable, in fact it is reasonable, to imagine that I'm probably somewhat better than most. And we do the opposite on drawing: 'I know I'm not very good, and man, I know there are some really good artists out there, so I guess, given my tendency to make myopic inferences from limited information, I'm probably worse than average.' It doesn't mean we are all overconfident. Not at all. Russ: So, let's shift gears. Before we do that, why don't you summarize? Say I'm about to make a big decision--you use a lot of examples in the book of bidding on a contract or in an auction setting, or, you could argue, choosing a career path. Should I worry about being overconfident? Guest: Well, you should worry about doing what it takes to make the best decision. Now, for some of us, having a somewhat elevated level of confidence encourages us to do things we might not otherwise undertake. We will bring more energy. We will bring more commitment. If we meet with initial difficulties we will persevere. We will also persuade others to come with us. There are many examples of how overconfidence--a high level of confidence--will lead to better performance.
For some of us. Others of us are the kind of people who say, 'Gosh, it's when I'm afraid of failure, when I think maybe I can't, that that summons the best in me.' So, it's not having a certain thing in your mind that leads to results. It's how what's in your mind either does or does not translate into your actions. And so what you need to understand about yourself is, 'Am I the kind of person who achieves better when I'm somewhat more optimistic about myself and my prospects, or not?' So that would be the first thing I would say. The other thing I would say--and we talked a little bit earlier about the competitive domain, and that's why I talk about 'winning decisions'--is that in business or sports or the military or politics, you don't just want to do well--you want, oftentimes, to do better than others. One thing you find in a competitive setting is that the one who wins probably had a level of confidence that surpassed others'. And in that sense, I would say that a somewhat elevated level of confidence is not only useful--it's probably essential. And so you need to be concerned--a lot of the book talks about how you want to make sure you don't make the Type II error of failing to act when the spoils go to those who are willing to act. Russ: Yeah, the other thing missing, I think, from the literature--and it matters when I think about the implications for decision-making--is the possibility of learning. So, if I tried to draw for 5 years and my 5th year of effort looked very similar to my first year, I think I'd kind of stay underconfident about my drawing ability. And I think if you persistently mis-estimate the length of the world's rivers, you could eventually come to the conclusion that you weren't very good at geography. Or, at least, that your stock of knowledge of geography was limited. So I think a lot of that literature just ignores that possibility.
26:47 Russ: Let's shift gears. Let's talk about experience and expertise, and what we know about practice. And I particularly enjoyed the thoughts you had about case studies as a way of gaining expertise in MBA (Masters of Business Administration) programs. Talk about that. Guest: Well, you just mentioned drawing a moment ago. Drawing is an example--and I could give you many others: I talk in the book about shooting baskets, or you could bake a cake, or for that matter consider a surgeon. Atul Gawande talks a lot about how a surgeon needs a coach, because surgery, as important as it is, is a discrete event that takes a certain amount of time, at the end of which you usually have fairly clear feedback on how well you did. You can then take that feedback onboard and try again. I give examples about hitting golf balls, shooting baskets. Drawing would be another example, and so forth. So, this all comes under what we call deliberate practice. And there's been a lot of research showing that the way to develop expertise is through deliberate practice. Great. But every one of those examples is a sequential activity, typically not that long in duration, for which you can get concrete feedback and try again. That doesn't apply very well in, let's say, the business world, to decisions where you don't get rapid feedback. If I launch a new product, if I enter a new market, if I acquire another company, these are events that will take a long time to yield feedback. By the time I get the feedback, so many other things have intervened that it's very hard to know exactly what led to what. And so the idea that you can do something, learn, and try again is, I think, a bit illusory. So I'm not against the case method, in terms of discussion-based learning in classes. But let's not fool ourselves that a case study is like deliberate practice of shooting a basket.
Strategic decisions--military decisions, political decisions, big business decisions--do not lend themselves to deliberate practice, and therefore we need to think differently. We may need to say: rather than trying, seeing what worked, and trying again, maybe I need to spend more time trying to get this decision right, because I don't have the luxury of trying once, learning, and trying again. And so the main thing I'm trying to convey here is, I'm not against deliberate practice. I'm trying to get people to understand that it's extraordinarily powerful for some kinds of things; but it's rather irrelevant, wide of the mark, and perhaps even dangerous for other kinds of things. Russ: Well, I do think we have a serious problem with overconfidence--and you'll tell me which kind, whether it's overprecision, overestimation, or overplacement--when a complex decision takes place and then complex events unfold: it's very easy to fool myself into thinking I made the right decision, because there are so many data points I can choose and so many variables I can leave out. So there I think people do have what I would call overconfidence in how they assess their decision-making ability. I think most people in leadership roles think they are "good decision makers." Could be true, of course. They could be well above the average of the population. But I do think politicians in particular don't like to admit they made mistakes. And I even think they don't admit those mistakes much to themselves. I think they are probably pretty good at cherry-picking and fooling themselves. Guest: I think that's right. And I think what also plays into that is what we expect of a leader. One of the things that we want leaders to be is persistent and steadfast--we want them to persevere. And I have an example in the book--there is actually an interesting experiment.
If a leader persists, persists, persists, and always fails, they don't get credit for succeeding but they get credit for persistence. But if you change, change, change and win, they usually say, 'You were lucky.' Russ: It wasn't you. Guest: So I guess your success was not because you were adaptable and agile. You just got lucky. And I think another thing that happens is that as people's careers go forward, some people don't do well; they are selected out. And those who continue to do well, they keep getting reinforcement that says, 'You're good. You're good. You're very good.' And they tend to believe it because that's of course their experience. Russ: Yeah. It's a great point. There's a selection bias there. Even if they are good, some of them might actually be good decision-makers, but some of them are merely lucky and they turn out to look smart. Guest: That's exactly right. Russ: On the case study issue, when I was reading your book, this story resonated with me for a bunch of reasons; I think you'll like it. I had a conversation with a CEO (Chief Executive Officer) once. We were alone. He and I were in my office. And he said something remarkably honest. He was a former CEO, actually: his company had gone bankrupt and he'd lost his job. It was a big company, by the way, not a small company. So, this was--I didn't bring it up, I wouldn't have brought it up because I thought it was embarrassing. But he brought it up. And then he confessed to me--and he was a Harvard MBA, which is relevant--and he said, 'You know, when I had to make that key decision,' he said, 'I used the wrong case study.' And I thought, this is the greatest--I knew you'd like it. I mean, it's unbelievable, right? So, he shuffled through the files--he went through the mental Rolodex of cases and he picked the one he thought was analogous to the situation he was in--and--he picked the wrong one. Of course, I'm not sure if that's really a meaningful statement. 
But it is a danger of the case study approach: it's not shooting a free throw. A free throw is the same every time: 15' away; the ball's round; it's the same size; the basket's the same size. Guest: And that's why deliberate practice can be extraordinarily powerful for some things, and it really doesn't make sense for others. It's not the case that it can't be useful. By the way, when you teach the case method, the point should not be: in this circumstance, use this case. It should be something else. But anyway, you don't need to go down that path.
33:26Russ: Let's talk about a very interesting set of results that you talk about in the book which is related to the Winner's Curse. So, describe what the Winner's Curse is, and what are the lessons we should learn from it, compared to what people think the lessons are. Guest: Sure. Okay, well, the Winner's Curse is a very interesting thing because it's not a cognitive bias. It's not where any individual has made an error in estimation or in evaluation of something. It's actually the outcome of a process. If you've got a number of people placing a bid for something--for example, the classic is you put a bunch of nickels in a jar and you ask people to come up and estimate how many nickels are in the jar and what they'd be willing to pay for the jar--what they found--and they've done this a number of times, not just with nickels in jars but with other things--is that even if everybody tries to be a bit conservative and even if on average we all bid a bit low, the nature of a distribution is such that at least a few people will probably err on the high side. And the person who is the highest, the wildest on the high side, is the 'happy'--I use the term ironically--winner of the jar. They've overpaid. So the winner's curse says the person who wins the auction probably overpaid. Or if you think about it with a low bid for a contract--you get 5 different bids from contractors and the low bid wins, the one who is low enough to win the bid: congratulations, you've won the bid, but you probably bid so low you won't make money. So it's the winner's curse in the other direction. And there have been a lot of studies of this. The way this was formalized, back in the 1970s, was in bidding for oil tracts in the Gulf of Mexico: lots of companies were bidding, and they looked and they thought, 'We're losing money.' And then they looked across companies and everybody was losing money--because of the nature of the winner's curse.
So there's a few standard lessons, one of which is-- Russ: Don't bid. Guest: Well, if you can buy that item somewhere else. For example, there's stuff you can buy on, I won't mention the name but well-known Internet auction sites, where--don't go there thinking you can get a deal. You probably won't get a deal. Buy through another channel if you can. But one of the things I try to bring out in the book is that most of these examples--how many nickels are in a jar, so forth--they are what we call a common value auction. Which means the value of the nickels in the jar is common to all of us bidding. It's the same number of nickels, and each nickel buys you 5 cents and buys me 5 cents. So it's worth the same. However, be careful if you take the lessons of the winner's curse and apply it not to common value auctions but private value auctions. A private value auction is when the thing that is being bid for may actually have a different value for you or for me. So, for example, the oil tract. Well in some ways it's common because it's the same amount of oil in the ground whether you buy it or I buy it. But in fact, you may have an ability to more efficiently extract and process the oil than I would. So you might want to pay more for that because it has greater value for you. And then, if you think again about the oil field, you don't just capture that value immediately. It's probably an oil field that will have a 10, 20-year useful life. Now you have to ask the question: What are my capabilities of extracting and bringing out the oil not just next year but in 5 years and in 10 years, and how much better do I think I will get at this over time? Now we get back into this issue about control. Because this is no longer a common value auction. This is one where you might say, 'How much better do I think I can get?' Now, let's make this really simple. You and I are both bidding for a certain oil tract in the Gulf of Mexico, and you have your capabilities and I have mine. 
Neither of us wants to bid too much for it. On the other hand, if I'm not willing to bid somewhere beyond my present capabilities, I'll never win the tract. So you almost have to bid beyond today's capabilities. Now this brings us back to overconfidence. If you do that, someone might say, 'Ah, Russ, you are overestimating. You are bidding based on the level of capabilities you do not yet have.' And you'll say, 'Yeah, that may be true, but I know what my historic rate of improvement has been. I believe this will continue. And if I am not willing to bid somewhere beyond my capabilities today, I'm going to lose the bid to Phil, and then I won't win anything.' So, again, there are certain auctions you should not take part in. There are auctions where you should understand that it's a common value and you should be very conservative. But there are other settings where, in a competitive setting where you can influence outcomes and improve your performance, you must be willing to go beyond what you've done up until now.
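The common-value mechanism described above can be sketched as a small simulation. This is an illustrative toy model, not anything from the episode: the true value, the number of bidders, the noise level, and the bid-shading factor are all made-up parameters. The point is just that the highest of many noisy estimates is biased upward, so the winner tends to overpay even when every individual shades their bid conservatively.

```python
import random

# Toy common-value auction (hypothetical parameters for illustration).
# Everyone estimates the same unknown true value with noise and then
# bids a conservative fraction of their own estimate.
random.seed(42)

TRUE_VALUE = 100.0   # e.g., what the nickels in the jar are actually worth
N_BIDDERS = 20
NOISE_SD = 15.0      # spread of individual estimation errors
SHADING = 0.9        # everyone bids only 90% of their estimate

def run_auction():
    """Return the winning (highest) bid in one simulated auction."""
    bids = [SHADING * random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(N_BIDDERS)]
    return max(bids)

winning_bids = [run_auction() for _ in range(10_000)]
avg_win = sum(winning_bids) / len(winning_bids)
overpaid = sum(b > TRUE_VALUE for b in winning_bids) / len(winning_bids)
print(f"average winning bid: {avg_win:.1f} (true value {TRUE_VALUE})")
print(f"fraction of auctions where the winner overpaid: {overpaid:.0%}")
```

Even though each bidder shades 10% below their own estimate, selecting the maximum of twenty noisy bids systematically picks out the most over-optimistic estimate, which is the guest's point about the curse being a property of the process, not of any one bidder.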
38:53Russ: I think one of the problems with the literature is that you want to use nickels because then you can literally show that the person overpaid. Because you can count--as you say, it's a common value. Everybody knows what the nickels are actually worth, once you actually open the jar and count them. And so you can "prove" that people overbid. Now the implications of that, to me, are extremely uninteresting, because if you consistently overbid you are not going to have much money after a while. The market is going to correct that problem, and you're not going to be in the bidding pool. So, the more interesting questions are the ones that you talk about, which are where you have inside information. Meaning your own capabilities, the fact that it's going to take place over time. And then, of course, it's very difficult to evaluate at the time of the bid. The press will write about it. The sports example is phenomenal: when people hire a free agent, 'Oh, they overpaid,' or they say, 'Oh, they got a bargain.' They have no idea. It's just a form of entertainment, to give sports fans something to pass the time. I enjoyed reading that kind of thing, by the way. I understand the appeal of it. But it's not a serious exercise in trying to establish whether it was a wise decision or not. Guest: Well, you are absolutely right about the nickels. We can count them. And by the way, the whole experiment takes about 10 minutes; and then I can bring in another class and do it again and tweak a few things, and very quickly I get lots of data and I can publish a paper that meets all the demands of replicable, statistically significant empirical work. That's great. And to identify the fundamental phenomenon, I have no problem. The problem is when you take a lesson from a contrived experimental setting and you then generalize that to a setting that is very, very different. Be careful.
And so one of my concerns--you know, I don't mean to criticize the people who do the basic research. I criticize the people who say, 'And therefore here's what it means,' in a very different setting without recognizing the difference. So, I sort of say, what I should have called my book is Yes, But, because what I'm saying is, yeah, there's a lot of good stuff there, but be careful about deliberate practice. Be careful about how we think about the winner's curse. Be careful about decision models, and so forth. Because what we have now come to accept makes a lot of sense for these circumstances, but really should not be applied to those circumstances; and you, dear reader, I would like to help you begin to understand the difference. Russ: Well, I think Left Brain, Right Stuff is actually a better title than "Yes, But." So I think you made the right call there. I've often thought about writing a book called "All Those Books that You've Read About Decision-Making Are Wrong." That's not a title that's going to sell. No one likes to be told that their cherished--this happens to me all the time with casual friends--I'm not going to quote authors, but they'll quote some well-known author's clever result about, 'Oh, isn't this the greatest?' And I always just want to say, 'Here's what's wrong with the experiment. Here's why in your case your example doesn't apply.' Etc., etc. But no one wants to hear that. They just want to enjoy the novelty of it; and it's good cocktail party conversation. And I wonder if they really take it seriously anyway for their own decision-making. So it could be less dangerous than it appears. But who knows.
42:16Russ: Let's talk about new ventures. I really liked your discussion of the observation people make about new ventures--that "most of them fail." You challenge the empirical claim to start with, and then you challenge the conclusion that people sometimes draw from that. So, what's wrong with that claim, when people say, x number of businesses disappear after 5 years? Guest: And this has been pretty well documented; it doesn't matter if you look at the 1980s, the 1990s, or just recently. Gee, all these businesses that are founded, half of them are gone after 5 years and 80% are gone after 7 years--and therefore--therefore--most new businesses fail. And therefore the assumption is starting them up was a mistake. Russ: You're a sucker. Guest: And because it was a mistake--you thought you could succeed, you didn't realize this, and that it was against the odds. And you were overconfident, you neglected base rates, and therefore there's some kind of cognitive error behind it. So, again, here I'm going to quote my colleague Stuart Read, who is now at Willamette University in Oregon. Stuart works in the area of new business creation and he talks a lot about affordable loss and how it's true that a lot of these businesses don't persist--they don't survive. But many of them find ways to limit their costs and essentially only lose as much as they were willing to. And then they can learn from that and start again. And part of it--it's not just a rationalization--you ask people, 'Would you do this again?' 'Yes, I would, because I was able to meet my costs, not lose much, learn, and I can go on from there.' Russ: And it was exciting. I was my own boss; I was creating something from scratch; it was mine. There are a lot of non-monetary aspects to it, that you point out. Guest: Yes. Although I'm a little bit cautious there, because it ain't so exciting and fun if you are losing a lot of money. Russ: Good point.
Guest: The key thing here is that most of them find ways not to lose much money. Russ: Yeah. I'm talking about limping along. Or not going broke. [?] survival [?] pulling the plug. Guest: Yeah. I think that's right. Survival itself is not the only measure of success. You have to look at the wins and losses. And here, again, we come back to the issue of control. Starting a business is not like rolling dice and seeing what happens. Because when you roll a set of dice you can't actually--you shouldn't be able to--shape them or change them or influence them as they roll. But when you start a business, it's not a one-time choice where you just see what happens. You can actively do things along the way to lower your costs, improve your prospects, and so forth. So that's one thing. So people go from there and then say that because of that, people showed what is called 'reference group neglect'--in other words, I ignored how many people who have started similar businesses have lost out, and therefore I must have thought I was better than they were, that my prospects were better--and this is an example of overconfidence. And there are some experiments that have distilled this. But again, if you look at the nature of the experiments, you are not able to actually manage the venture. You are not able to improve your chances. You are not able to limit what you put in. And so they are rather contrived experiments. You find statistically significant results, but I think only by greatly restricting the degrees of freedom that managers actually have. One of the points I make in my book is that we have a lot of problems in our society about health care and violence and education. Nobody ever says 'A great problem in American society is we start too many new businesses.' Quite the contrary. We typically say the entrepreneurial spirit and climate for starting new businesses is a strong point of American society that most other countries want to learn from.
So, even at the level of face validity, there is something wrong when people say, 'Gee, so many new businesses fail; therefore it was a mistake to start them; therefore people are suffering from biases.' It's just not true. That doesn't mean, by the way, there aren't some people out there who are totally deluded when they start new businesses. I'm sure there are. But, in the main, I think that the picture is rather different from how it is often portrayed. Russ: Yeah. I have to quote Adam Smith here. He is talking about soldiers and those going to sea, but he could be talking about entrepreneurs. He says,
The contempt of risk and the presumptuous hope of success, are in no period of life more active than at the age at which young people chuse their professions. How little the fear of misfortune is then capable of balancing the hope of good luck, appears still more evidently in the readiness of the common people to enlist as soldiers, or to go to sea, than in the eagerness of those of better fashion to enter into what are called the liberal professions.
So there's no doubt that young people, for example, are more eager to start businesses. For a lot of reasons, by the way. Not just cognitive failure--as Smith is hinting at. But also the fact that the costs are lower: if you fail, you typically don't have a family sharing the burden of your failure. It falls only on you. You don't have to feel bad about that. And so on.
47:56Russ: The other point that you make, which I thought was really important, is adapting. Which is missing from the experiments. Talk about the importance of that adaptation. Guest: Well, that's the other thing you see when you look at new ventures that start up. I mean, even for some very successful startups, their success was not with exactly the product or service or market that they had in mind at the beginning. So again, what you find, when entrepreneurs start things up, is they have an idea; they do take some chances; they have a somewhat elevated level of confidence. But very quickly they take information from consumers, from competitors; and they will pivot. That's a term you hear a lot in Silicon Valley. They will adapt. They will try to keep their costs low. Again, my colleague Stuart Read uses the term 'effectuation.' It's the opposite of causation. A causal approach is when you have the end in mind and you know what to do to get to that end. An effectual approach says let's start with our means, let's start with our resources, and let's see how best to combine them for an interesting end. And when you look at many companies that are successful, they have taken this adaptive, effectual approach. One of the problems that we have is we don't teach that. We teach: come up with a business plan. A business plan typically says, 'This is the end I have in mind and here are the steps that are going to lead to that end.' Very few companies, even with so-called good business plans, ever live out exactly the plan that they had in mind. And usually an ingredient of success is that they abandon that plan rather quickly. What they do is play to their strengths. They know what they are good at. They take on feedback from the market. And they adapt and are agile, while also generally trying not to commit more than they can afford to lose. Russ: You quote George Bernard Shaw from Man and Superman, his play, where Shaw writes
The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.
That sounds great, by the way. It's dramatic, it's eloquent, it's a beautiful quote; it's somewhat inspiring. But it's not really right, is it? Guest: Well, it's not entirely right. You could say that progress is due to unreasonable men and women. You could also say that the greatest catastrophes are due to unreasonable men and women who inflict their grandiose ideas and hare-brained schemes, and other kinds of destruction, on their fellow people. And it's also true that, yes, you try to influence things and bend the world to your will. But a lot of times success is also about being nimble and adapting. And we see lots of examples of that in startups. Russ: What is the best overall strategy for success--given that there are challenges? You are not saying that it's easy to do a startup--that all you have to do is pay attention and work hard at it. But you have some general advice for success in startups. Guest: I think one of the things is you do need to have ideas that others don't have. You need to think of where a comparative advantage is. And it is important to be willing to take risks. But then, if you are going to fail, to fail fast. To try to engender in people around you a belief that you can do what has not been done. You need to inspire, but you also then have to have this realistic idea of taking in feedback from the market. So, I come back to the title of the book, Left Brain, Right Stuff. Left brain is you do need to have this absolutely detached, sober, and thoughtful view of what's out there. You've got to be realistic. But, right stuff, is you also have to be willing to push boundaries. So I think the two of them go together. Russ: One of the things you talk about in the book is the importance of risk management.
And in a way, there's a theme in the book--well, there are a lot of themes--but one of them is this idea that taking a risk that's really dangerous, that's imprudent, is a bad idea--but it's also a bad idea to miss an opportunity because you are not bold enough. And I think you talk very thoughtfully, and especially with respect to startups, about the importance of risk management. We have a romantic ideal that entrepreneurs, especially in Silicon Valley, are these wild-eyed dreamers who overcome all their skepticism and the skepticism of the people around them, people who say it can't be done. And they do it anyway. They take a big leap. But that's not the road to success. That's not why they are successful. It's not so much taking risk as dealing with risk. Guest: Yeah. That's right. And there is a section there. I had some very interesting conversations with folks out in Silicon Valley, and one guy there said a lot of it is risk management. There are a few different kinds of risk. There's technical risk. There's market risk. There's financial risk. And he thought about them very differently. He says, technical risk: if we are not able to overcome the technical risk then we shouldn't be in this line of business. That's within our power. It's what our added value should be. Market risk: that's everything outside that I can't control. If you ever say, my business depends on consumers being willing to buy something they've never bought before or prefer something they haven't preferred in the past, you are dreaming. So, there's a big difference between the internal technical risk that you should be able to influence and the external market risk. But the third he talks about is financial risk. And there it has to do with how quickly you burn through the resources that you have. And one of the things he talks about is, you want to be very cautious about spending early, because it shouldn't take all that much money to do certain kinds of technical startups.
And there are companies that fail because they burn through too much money too rapidly. However, there's also a transition point. There's a point where your product becomes successful, and if you are not willing to commit the funds to ramp up the marketing and the market presence, you will miss out, too. So there's a sort of crossover point. And this fellow was telling me a lot of companies get that wrong. So, risk management is a very crucial thing there.
54:38Russ: So, let's close by talking about this literature generally that you are, 'yes, but,' talking about. I'm probably more skeptical than you are about the replicability and generality of even the experimental results. You're willing to accept those, mostly, in the book; you are then being critical of what the implications are. I want to make an observation and get your reaction. It seems to me that the behavioral economics literature, or this decision-making literature that's experimentally based that you are mainly talking about, is not designed to help us make good decisions. It's designed by its authors to get publishable articles. Their incentives are not necessarily to produce experiments that lead to important lessons for making decisions. They are designed to produce clever and novel and startling results to get them into the latest journals. So, to some extent I think we've maybe asked too much of that literature. What do you think? Guest: Well, I'll come at this a slightly different way. I'm not going to criticize them so much because of, you know, the publishable-article thing. I would say, though, they are trying to do work that is interesting if your main concern is cognition--cognitive human psychology--if you want to know how the mind is working, and the applications in real-world settings are not really your concern. Now, I do believe a lot of their findings are applicable in certain kinds of decisions. A lot of consumer marketing decisions, a lot of financial investing decisions. I think there are pretty good applications there. But we begin to stray when we talk, number one, about not simply making choices from options presented to me but about settings where I can alter the options, where I can improve the outcomes, where positive thinking can matter--number one. And number two, where I'm trying not just to do well, but I'm trying to do better than a rival who is also trying to do better than me.
So, I'm fairly charitable with the basic research. I'm concerned about the applications. And I would say, I'm by no means a Marxist, but I do like the idea of thesis/antithesis/synthesis sort of dialectics. I think that the thesis that has been so strong in at least post-War economics has been to assume that humans are fundamentally rational and maybe not just fundamentally but that we should assume this in a lot of our theories and models. Fine. The antithesis of that in the last number of years has been to point out that actually people make some fairly predictable errors. Doesn't mean people are irrational. But it does mean under certain circumstances they make judgments and choices that diverge from the tenets of economic rational theory. Fine. But now, what I'm trying to do, and I think some others are trying to move from the antithesis to a synthesis, where we say: It's old news that I can show you that under certain settings people make decisions that run counter to rationality. Fine. We know that. But now let's try to say, how do we move to better understand the way people really behave in real-world settings. And the main point I'm saying is, there ain't one kind of decision. There are very different kinds of decisions we make. Some where you can shape outcomes, some where you cannot. Some that involve competition, some that are repeated, like deliberate practice. And what I want people to begin to understand is that to make the best decision, I really need to understand more about what kind of decision I'm making. And to be able to then learn how to respond in an appropriately versatile way to that. Russ: You give some examples in the book where you interviewed decision-makers after they made billion-dollar decisions. Big decisions. Not what peanut butter to buy on a Tuesday afternoon at the grocery. And you--I don't know if this was a straw man or this was just opening the conversation, but you asked them, 'Were you worried about overconfidence?' 
They said, 'Oh, sure we were. Of course we were.' I kind of go back and forth on these. For me, my pet issue is confirmation bias. I find it remarkable how aware I am of it and yet how hard it still is for me to remember to take it into account. So, let's close by talking about that: being aware of these things is useful, but it's still hard to take account of them. Guest: Right. It is. But in that particular setting, you can know that there are dangers of overconfidence. But if what it leads you to do is not take action, you are never going to win. And so there are many settings--could be a competitive bid, as in that example; could be something else in a competitive setting--where your fear of committing what we call a Type I error--that is, to take action and fail--is significant. But at least if you take action, you can win; and maybe you can actually do things to improve your chances of winning. You've also got to be very fearful of a Type II error, which is a false negative, which is a failure to act--an error of omission as opposed to an error of commission. So, if what we have learned from a lot of this decision literature is that the way to avoid errors is not to do stuff, then now you've got another set of problems. Because in the real world, you're going to do stuff. And if what you've taken away is, 'I'm going to avoid biases by not doing stuff,' I think that's not terribly helpful. And again, that's a bit strong: nobody's quite saying that. But if you look at a lot of this conventional literature you say, oh, I could make this kind of error; gee, how do I avoid confirmation bias and how do I avoid overconfidence. I'll tell you how to avoid those things: Don't do anything. But then you've got another set of problems. And so, it is good to be aware of these things before the fact. What you should try to do, I think, is understand them. Try to recognize them when you see them.
Try to improve your probability of success, always knowing that you'll never get everything right, and resisting then making a knee-jerk attribution afterwards: 'Oh, I guess I was overconfident.' No, actually, maybe that was an appropriate level; things just didn't work out. So, this is the third level I'm talking about--a thesis, antithesis, and now I hope a synthesis where we can say: Right, we know that people are not fully rational or fully reasonable all the time. But let's try to advance the sophistication with which we think about real-world decision making and make decisions that will be better rather than less good.


COMMENTS (5 to date)
Warren writes:

If you will permit, this is a question unrelated to the topic of this week's interview.

Adam Smith "distinguishes between productive and unproductive labor, where the former is involved in the creation of commodities and therefore of income, while the latter is involved in the provision of services; and that services, which are always maintained by the industry of other people, does not contribute to aggregate output." "Two points arise from this argument, first that the productive capacity of any society would depend on the proportion in which total income was distributed between revenue and capital, and secondly, that capital could only be increased through parsimony."

Today the proportion of our economy devoted to services seems very high, does this mean that we are poorer than when services were a lower proportion? Does it mean that our economy is much less stable?

[Warren: In the future, please give sources when you give quotes. I can't find any sources for your quotes. From whom or from what sources are you quoting? Certainly you are not quoting Adam Smith! Regarding your first quote, starting "distinguishes between productive and unproductive labor...", I so far can find no published citation or documentation for that quote. Regarding your second quote, "Two points arise from this argument, first that the productive capacity...", that might be a distorted quote from material written by Dugald Stewart in 1793 in a forward to a republication of one of Adam Smith's works. Please clarify the sources for your quotes. When our commenters quote from others, we expect links or documentation. Who said it, when, and where? Please supply your specific sources when you quote from others.--Econlib Ed.]

Corey Hunt writes:

"But in that particular setting, you can know that there are dangers of overconfidence. But if what it leads you to do is not take action, you are never going to win."

Of the cardinal virtues (prudence, justice, temperance, and courage), the Greeks considered courage the most essential, because none of the other virtues could be used to the best effect without courage. Without courage, life would be just a long type II error. Without courage, the process of discovery that people undertake to improve processes of production, service and exchange would not take place. I have found this episode interesting from an academic standpoint and encouraging as a business owner.

Matthew writes:

Great podcast.

But I was disappointed in the lack of attribution to Saras Sarasvathy of the effectuation ideas.

"Saras Sarasvathy explores the theory and techniques of non-predictive control for creating new firms, markets and economic opportunities.

Using empirical and theoretical work done in collaboration with Nobel Laureate Herbert A. Simon, the author employs methods from cognitive science and behavioral economics to develop the notion of entrepreneurial expertise and effectuation" http://www.amazon.com/Effectuation-Elements-Entrepreneurial-Expertise-Entrepreneurship/dp/1848445725

Gavin writes:

I found the discussion about overconfidence bias misleading.

It was suggested that if overestimating driving ability is an example of overconfidence, then underestimating drawing ability must be an example of something called "underconfidence". In fact, overconfidence bias does not refer to a tendency to overstate the level, but rather to an underestimate of the range within which the true value lies.

It is possible for someone to be as overconfident in his artistic incompetence as his driving prowess.

Robert Swan writes:

A good discussion sounding a note of caution on applying controlled research in an uncontrolled real world.

On the driving example though, Prof. Roberts' terminology was uncharacteristically loose. Assuming some standardised test, it's perfectly possible for 70% (or 99.9%) of drivers to be "above average". If nearly everybody scores 8/10 and just one person scores 2/10 then all but one are above average. Also applies if that one got 7/10, and flips on its head if the non-conformist got 9.

Tightening up the terminology to "top 10%" fixes this problem, but then what "standardised test" are all these self-assessing drivers applying? The best racing driver might not be very good at parallel parking. A good parker might not be very courteous. A courteous driver might be inclined to be a bit timid at merge points. Etc. Quite easy for everyone to truly be in the upper part of their own scale; no cognitive dissonance required.

For many years now, when I hear a comparative/superlative (better, worse, highest, etc.), I find myself automatically asking "along what number line?" This was a central point in another Econtalk interview, Kling on the Three Languages of Politics, applicable to far more than politics.
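[Econlib Ed.: The arithmetic in the comment above is easy to check. Here is a minimal sketch using the commenter's own hypothetical figures--nine drivers scoring 8/10 and one scoring 2/10--showing that nine of ten are indeed above the mean:]

```python
# Nine drivers score 8/10, one scores 2/10, as in the comment above.
scores = [8] * 9 + [2]
mean = sum(scores) / len(scores)        # (9*8 + 2) / 10 = 7.4
above = sum(s > mean for s in scores)   # every 8 beats the 7.4 mean
print(f"mean score: {mean}; drivers above the mean: {above} of {len(scores)}")
```

With a distribution skewed low by a single outlier, "most drivers are above average" is true, which is why "top 10%" or the median is the better benchmark for the overconfidence claim.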

Comments for this podcast episode have been closed