
Rubinstein on Game Theory and Behavioral Economics

EconTalk Episode with Ariel Rubinstein
Hosted by Russ Roberts

Ariel Rubinstein of Tel Aviv University and New York University talks with EconTalk host Russ Roberts about the state of game theory and behavioral economics, two of the most influential areas of economics in recent years. Drawing on his Afterword for the 60th anniversary edition of Von Neumann and Morgenstern's Theory of Games and Economic Behavior, Rubinstein argues that game theory's successes have been quite limited. Rubinstein, himself a game theorist, argues that game theory is unable to yield testable predictions or solutions to public policy problems. He argues that game theorists have a natural incentive to exaggerate its usefulness. In the area of behavioral economics, Rubinstein argues that the experimental results (which often draw on game theory) are too often done in ways that are not rigorous. The conversation concludes with a plea for honesty about what economics can and cannot do.

Size: 27.9 MB


Highlights

0:36 Intro. [Recording date: April 21, 2011.] Game theorist; recently wrote an afterword for the 60th anniversary edition of von Neumann and Morgenstern's classic work on the theory of games. You wrote it from the position of a skeptic. Excerpt from the essay: "So, is game theory useful in any way? The popular literature is full of nonsensical claims to that effect. But within the community of game theorists there is sharp disagreement over its meaning and potential usefulness...." So, this is kind of a shocking thing to write as an Afterword. What kind of reaction have you gotten from your colleagues? First of all, the way you were reading it--I liked it. The response has been very positive, though it would have been negative a few years ago. I am a skeptic, it's true, and I'm not ashamed of that--in fact, the opposite. I think that for an academic, skepticism is the fuel of academic life. One of the problems of economic theory in general, I do feel, is that there was too little skepticism and too much feeling that we are on the right track toward fulfilling goals that I personally did not find believable. Second, there is one point that I emphasized in the part you read, which is the fact that there are many game theoreticians who do want to find game theory to be applicable, applied. From the beginning of my academic life, I have never felt any obligation to do something which is useful, which is applied. I don't think that's a sin--not at all. If something that I do will pave the way to the moon or to a cure for cancer, I would be very happy, but I don't believe that's the case; and I think we should look in academic life in general for a much more modest kind of usefulness--usefulness from an abstract point of view, from an intellectual point of view, and not in the sense in which most people understand the word. Now, when you ask about the response of the people around me--well, people have known me for many years, and I did not hide my views.
It's true that probably I was less clear about my views when I was very young. I remember the first time, around the early 1980s: I was participating in some conference, and somebody told me he had done some experiments that were not very consistent, he thought, with the model I wrote about bargaining. And I didn't understand what he wanted from my life--namely, of course the model does not have to predict anything; it does not have to be verified in life; it's not a model like a model in physics is. And it was not supposed to be useful in any way. I called it a model of bargaining because I wanted to emphasize that there are many models of bargaining; my model is just one story; there could be many stories; and if it's useful, it's only in an indirect way. So, I think in time people got to know me better, and they understood that these are my views. It's true that it was only in the last 10 years or so, especially since the late 1990s, that I expressed my views in a more coherent way, more clearly, about the role, the usefulness, or the non-usefulness of game theory. Now, I have to say also that some people did not like it at all--it's not that I got threats to my life--and I fully understand why. They feel they should do something useful in life. I say it very positively about them: I think there are people who were educated, brought up, on the idea that they are not allowed just to be professors in universities, thinking for the sake of thinking; they need to do something that will be useful in the regular sense of the word. For them it's not easy to accept this position. Now, of course, I should also say--especially in the last 15 years or so; it was not like that 20 or 30 years ago--there are also economic interests around. Once game theory became applied, in the sense that there are companies and individuals becoming consultants and giving advice, and getting nice amounts of money in return, then of course interest also plays a role.
We economists are also human beings and economic agents; that is something we should also take into account. So, I am not accusing anybody of cheating--nothing like that; there are very moral and very nice people around game theory--but I think interests are also involved. Some people feel their work is very useful in a concrete way, whether driven by idealistic positions about academic life or by more human incentives and interests, like other people. So, I don't feel isolated.
10:04 One of the nice things about this position: It's not like every day I get responses or many emails from thousands of people. But it's nice to get very positive responses from young people, from students. It's nice because it's coming from something more pure. In general I believe in young people more than I believe in people my age. Listeners will hear some echoes of a recent interview we did with Freeman Dyson, where he talked about the value of being a heretic and made some similar comments about the reliability of science. We're talking now about social science and game theory in particular; but it is a fascinating thing. In my experience, when you confront people with the idea that their certainty is not as high as they claim it to be, their reaction is very angry and aggressive. They don't say: That's interesting; maybe I should be a little more open. Their actual reaction is that you don't know what you are talking about, you don't understand, you don't know the field. But in your case, they can't say that. You know what you are talking about, and you do know the field. Awkward. Some people must be very defensive. Some are; and some will probably not say it to my face, but will probably say some bad things about me. And I will not hate them for that. I understand that this is the game of life. For saying that, I don't need to know game theory. One point I would like to emphasize is that many of the game theoreticians, and economic theoreticians in general, could make a lot of difference to the world. But my argument would be the following. I think what happened is that economic theory attracted many smart people, many very smart people. The difference between game theory and economic theory on the one hand and pure mathematics on the other is that those people who come to economic theory very often also have a very good touch with the world.
Not only is their IQ very high, but to do game theory you also need to be in touch with the world, in the sense of understanding how people argue, what is important in a situation, and so on. Now, that's something you don't learn in game theory; but I think if you are a good game theoretician or a good economic theoretician, this is the sensibility you need to come with. I think what's going on is that many of the people who work in game theory and economic theory have this special combination of analytical ability and some touch with the world, which makes them very valuable independent of the theories or the concepts they come up with. I do believe that if you take an intelligent person--like many of the names; I don't want to mention particular names here--and you tell him that for the next year he will not do game theory but just consult for an American government agency, or the Israeli army, or some private company, these people will give much good advice; and also much very bad advice. But some of the good advice will be quite original from a social or economic point of view. But the fact that the people who describe themselves as game theoreticians--who are game theoreticians--are valuable in a concrete sense does not really mean, not at all, that they really use the theory itself. Of course, game theory, like logic, like the Talmud, like philosophy--these are fields that sharpen the brain; and in this respect they train people to contribute something to the world. To be a successful academic, you have to be original, and originality is not something you learn; it's something you are born with, or your mother gave you with her milk. That makes many game theoreticians quite valuable from a social point of view. But the one thing I disagree with is the claim that they really use game theory.
One of the things that motivates me in life is my hatred of pretension, my hatred of the use of authority in cases where there is no real authority to base it on. It's very tempting--in my own life I was also confronted with this temptation. It's very tempting to say: I know something that you don't know; I am a professor; I am a game theoretician; I know some people who won the Nobel Prize; etc.--and therefore, listen to me; I know something that the layman, the politician, doesn't know. Very seductive. It's tempting, but I think we should not be tempted by that, and personally I hope I have succeeded in resisting it. In the last 10 years or so, I have many times spoken at public gatherings, public lectures in Israel. And I've been in the newspapers in Israel from time to time; and one thing I emphasize from the first sentence is: Listen, I am talking to you as a citizen of Israel and of the world, but not at all as an expert in anything which is relevant.
17:08 That's a very deep observation. I've been struck by it in the current economic situation. I may have made this joke before, in an earlier program: In the middle of the crisis, people came up to me and said that people must be coming to you for answers. And I always say: Not the smart ones. Because I don't have any answers. I may have an insight or two from the study of economics, but people don't want an insight or two. They want the truth. They want the answer, the solution. And, as you say, there's such a temptation, a seductiveness, to being treated with honor and glory as if you are something special. Exactly; and you use the word solution, which people look to so much. Part of the success of game theory is the rhetoric that was used in game theory; and the term solution--we talk about solutions in game theory. We don't just talk about equilibrium; we talk about solutions. This word, in English or in Hebrew, I think triggers people to believe that here we give something very concrete: the solution to a dilemma, to some conflict. There's a famous example of a shipwreck--I think it's in a novel by Joseph Conrad, Typhoon--where all the jewels and boxes of treasure that people have put on board get mixed up together. And the problem now is: how do you get people to tell the truth about what was in their box? Everyone will have a tendency to exaggerate. So, there's a very elegant solution--we'll not go through it in detail here--that will encourage people to tell the truth. The idea is that you don't necessarily give people back what they say, but you compare what they say, and what everyone says, to the total amount. A beautiful, elegant piece--I think there was a little article on it in the Journal of Political Economy--but the idea that that would work the first time as a solution in the chaos, in the aftermath of a typhoon, when people are traumatized--that they would all just act like the game theorists say they would--is stupid.
There's a logical beauty to it, but no captain would use it. He'd be a fool to use it. I think this is true about many of the so-called game theoretic solutions. It's like logic. It shows something; we learn something from it, something which might or might not be useful. But to say: I wrote the model, I have some implementation problem, and I found some mechanism in which the Nash equilibrium--or the unique Nash equilibrium, or the unique whatever-equilibrium--is something I would actually like to implement; and to present that as something close to a recommendation for the world--I wouldn't call it stupid, but I would call it pretentious, without a good justification.
20:21 The other thing I'd like to observe about your point, which I think is very true--that people like to be treated with seriousness--is that, I think, most economists would generally be insulted if told that most of their insights come from the fact that they are really smart. Yes, it's useful to have studied economics; it hones the mind. I would even go farther than that--it sensitizes you to unseen effects, to the effect of incentives, to the role of market forces which are sometimes difficult to be aware of; and as a game theorist you are sensitized to strategic interactions. And those can be very important. And yet no one says: This had nothing to do with my training; it's just me; I'm smart. That might be a nice identity. But as you point out, people don't want to sell themselves that way. They sell themselves as the keeper of the flame, the person who has this insight from the club, from the cadre of insiders called economists or game theorists. I find that psychologically interesting. I think many people would love to believe that they are smart, but it's very difficult to say: I'm smart, my IQ is such-and-such, listen to me. It's very easy to say: I have 50 papers in Econometrica, and therefore I am an expert; listen to me. In general I don't believe in [?]. Somebody wants to get it, somebody wants to pay for that, somebody wants to listen, somebody would like to cover himself. The worst case for me is when people are presenting political views--in my own country, where some people are supporting their political views with a sort of scientific justification. That only happens in Israel. Doesn't happen in the United States. We're all just scientists here. We have the same problem here. The worst part is, it's dishonest. We pretend we are scientists, but we are not. We are cloaking our biases in scientific rhetoric. I would be softer than you on one point: I would not say it is dishonest.
I would say we are deceiving ourselves: the desire of people to make claims which are not just their views, not just derived from their ideological positions, is somehow so strong that people simply persuade themselves. I don't want to impute any evil, any dishonesty; I think some of the people who do it are honest people, fine people, but I just think they are wrong in their positions. I agree with that softening. In our series of podcasts on the Theory of Moral Sentiments, there's the line of Adam Smith, paraphrasing slightly: Man wants to be loved and to be lovely. We want to have people think highly of us, and we want to earn that respect and honor. So, we have a tendency to delude ourselves that we've actually earned it when in fact we haven't. The basic impulse is good--it's good to be liked for genuine reasons--but we have a tendency to self-deceive. As you say, it comes with the mother's milk.
24:29 I want to talk about a rather iconic example from the behavioral economics literature that you've critiqued in a paper on behavioral economics; it happens to be an example that came up in the podcast with Dan Pink and may have come up another time--I've forgotten which podcast. It's the now-famous example of an Israeli daycare center. They had a problem: people were coming late to pick up their kids, taking advantage of the daycare center. As an experiment they instituted a fine: if you were late picking up your kids, you had to pay an additional fee. They found that it actually increased the number of people who came late; and this was seen as some sort of potential refutation of downward-sloping demand curves. I think a lot of the claims for this example are overblown, but I want to get your reaction to the science and objectivity of it. I prefer to talk in general about experimental economics, its methods and assumptions. My interaction with experimental game theorists in economics started when, years ago, I met Amos Tversky. He became quite a friend of mine until he died in 1996. In one meeting, I told him: Amos, I don't understand what you are doing. What you are doing is completely obvious. I don't understand why you need experiments to support your observations. You understand how people reason; you have interesting observations about that; it's enough that you write those observations down, like philosophers do. Philosophers usually publish without supporting their observations with experiments. The interaction with Amos over a few years, including one paper that we wrote together, taught me a lot about the meaning of experimental economics. I still believe that most of the interesting stuff in experimental economics comes from people with understanding, with good observations, the ability to observe how people think and reason, and of course the ability to make some abstractions from that.
Once they come up with some observation, if the observation is good, then usually you hear it and you say: Yes, that's [?]. Introspection, which I believe is the main tool of the experimental economist or the cognitive psychologist, is overall the most important tool in this field. Nevertheless, we require, from a social point of view, that it's not enough to say: I believe that, or I think that, and I checked with my friends or my son or my wife; we need a little more than that. And then it becomes quite an art. I think people like Amos Tversky or Daniel Kahneman were like artists to me: they had this wonderful ability to take such an observation--from introspection, from observing other people thinking and reasoning--and somehow invent some sort of experiment that demonstrates the point very well. That's something that is not easy. Once we see the experiment, sometimes we think it's so easy that everybody could do it. But that's not the case. In the paper I wrote with Amos, we went through 20 or 30 pilots until we came to the final phase of the experiments that we did. It was not I who did it--it was Amos, after so many years of experience. It was not easy at all to find the right way to ask the question, to demonstrate the point we wanted to demonstrate. The point was very clear; we didn't have any doubts that the point exists in human reasoning. So, my point is one of the following two. If we say economics is really like philosophy, and it's enough to say some people reason in this way, or there is some element of reasoning that is common to many of us, then we don't need experimental economics. If we do the experiments, then we need to do them in a very careful way, and it's not enough to claim that the conclusion is correct. In some sense, the conclusion is very clear from the beginning; in some sense you don't need the experiments to believe in the conclusion.
In this particular case that you mentioned, common sense tells us that if you know you don't have to pay anything for some service, then you may actually be more considerate of the interests of the guy who gives you the service. On the other hand, if he charges you something--and it's a little bit annoying that he charges you this extra--then you start to do your maximization. It might be that if the kindergarten teacher does not charge anything, I will be careful to come on time; and if he charges a lot of money, I will be very careful to come on time; but if he charges peanuts, a few dollars, I might say it's worth it for me to be half an hour late and pay the $10. This is something which, if you ask people in the street, they probably haven't thought about with this abstraction; but if they think about it, they will agree that that's more or less the case. So, if you want to say that that's the case, and that incentives are not monotone in their influence, then-- And also that there are non-monetary influences--cultural disapproval, other forces; it's not so complicated; that's part of economics, too. Right, but what I'm saying is that if you want just to say that that's the case, then I don't think you need to do any experiments. If you want to do experiments, then you have to judge the experiments and not the conclusions. I think the general mistake that many of us make is to say: Aha, here's somebody who--and I don't refer to this particular case, but in general to many papers in experimental economics over the last 20 years--where people were actually quite impressed by the conclusion, probably because the conclusion was useful for constructing new models. But in some sense you didn't need the experiments to be persuaded.
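The non-monotone incentive effect Rubinstein describes here can be sketched as a toy cost comparison. This is a hypothetical illustration only, not the model from the original daycare study; the assumption (labeled in the comments) is that lateness with no fine carries a social-norm cost, and that introducing any fine "crowds out" that norm and leaves the fine as the only cost.

```python
def parent_choice(fine, value_of_lateness=30, norm_cost=100):
    """What a cost-minimizing parent does, in a stylized model.

    Hypothetical assumption: with no fine, being late costs the parent
    `norm_cost` in guilt and social disapproval; once any fine exists,
    the norm is crowded out and the fine is the only cost of lateness.
    All dollar amounts are made up for illustration.
    """
    cost_of_lateness = norm_cost if fine == 0 else fine
    return "late" if value_of_lateness > cost_of_lateness else "on time"

# No fine: the norm keeps the parent on time.
# A small fine: cheaper than the norm, so lateness increases.
# A large fine: deters lateness again -- the incentive is not monotone.
for fine in (0, 10, 200):
    print(fine, parent_choice(fine))
```

The point of the sketch matches the conversation: the parent's response to the fine is not monotone, because the fine both prices lateness and removes a non-monetary deterrent.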
33:00 Now, as I said before, if you do the experiments, then I would like to require that they be done in the right way. How do we usually judge experiments? Usually by using some statistical tools. You use a statistical test, you run the program, you get t = whatever; and it's better if you get p below 5%; and if you get 6% it's a disaster, and if you get 4% you are happy, and that's the end of the game. One of the things which, as a profession, we hide or don't take into account is that the big concern about results, in my opinion, is not the uncertainty which is measured, but the uncertainty about the way the data are recorded, the uncertainty about the way the experiment is done. This is something about which, my feeling is, economists are not critical enough. We don't check the details; it's not politically correct to say to somebody: I don't think you have done the experiment according to the protocol you are reporting. And therefore, without getting into details about case A or case B, in general I am much more skeptical about the significance of these experiments. Let me be bold; let me shock you again. Most of the facts that are reported--I don't believe the facts. I believe the conclusions, but not the facts. One thing that I have learned in my life is that we are not angels. There are very few angels in the world, and I am not an angel. Working in experimental economics requires, in some sense, being an angel--namely, it's very easy to deceive yourself, to do the wrong calculation and then not check your calculations if they go in your direction. I give lectures about neuroeconomics, and one of the things that I do in these lectures is show some data about eye tracking--some work with some friends of mine from the Weizmann Institute and a student from Tel Aviv U. I show some data, nice pictures, recordings from the eye-tracker. What is eye tracking? The story is that in what is called neuroeconomics, we try, in general, to understand the way people reason.
We try to do it usually by measuring things that come from your brain, your body, and give us some hints about the way you reason before you make your decision. This is one of the most important, most interesting developments in economics in the last 5-10 years. A lot could be said about the way this was developed. Personally, what I did with some friends: we ran an eye-tracking experiment in which we got people to choose between two lotteries--like x dollars with probability p and y dollars with probability q--and we wanted to see how people reason about it, using the eye tracking, which allows us to follow the eye movements: namely, whether they compare the dollars to dollars and the probabilities to probabilities, or whether they made some calculation, like multiplying the x dollars by the probability p versus the y dollars by the probability q, etc. So, we did this experiment. I do these experiments mainly because I am interested in the methodology of economics and I wanted to understand better how people do these sorts of things. And one thing that I have learned in my life is that the best way to criticize something, and to understand how people do something, is to do it yourself. When you do it yourself, then on the one side you find the dirty tricks of the field; on the other side, you also become more sympathetic to the difficulties other people face when doing their research. In any case, we have some very nice movies where you can really look at the eye movements and see how they went; and you say: Wow, it's very clear what's going on in his mind--something quite funny, quite excellent. Now, I show these movies to a crowd, and then I tell them: Look, the entire data that you saw is wrong. I have shown it already in 20 or 30 lectures, and nobody until now has realized why the data were wrong. People could have seen it, because they had the entire information and the entire background needed to draw the conclusion; and they don't see it.
And actually, we--my co-authors and I--were watching this movie for days and weeks until some accident caused us to realize that the data were wrong. What happened in this case was that there was a mistake in the program, and everything up/down was actually reversed: whatever was up was down, and whatever was down was up. And people could have realized it, because all the eye movements started from the bottom. And it's very strange: people started to look at the screen from the bottom and not from the top--even in a country like Israel, where we write from right to left, we don't read from bottom to top. Nevertheless, we don't criticize enough: we take for granted that what we see is correct, that the way the data were recorded and collected was fine, and we don't search for fundamental mistakes in the way the data were produced before we analyze them. We do run statistical tests, which actually are very good tests of the reliability of the data--assuming that everything before you ran the statistical test was correct. So, one thing I have tried to encourage, in my research and my lectures, is for other people to be skeptical about the data in experiments: not just to check the statistical validity, but to go further, to the very fine description of the experiment. Details are extremely important and may make a lot of difference.
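The lottery-choice experiment described above was designed to distinguish two reasoning procedures, and the contrast is easy to sketch in code. Both functions below are hypothetical illustrations, not the procedures from Rubinstein's actual study: one multiplies prize by probability, the other compares dimension by dimension, with an assumed tie-break on probability when neither lottery dominates.

```python
def choose_by_expectation(x, p, y, q):
    # Multiply within each lottery and compare the expected values.
    return "A" if x * p >= y * q else "B"

def choose_by_dimensions(x, p, y, q):
    # Compare prizes to prizes and probabilities to probabilities.
    if x >= y and p >= q:
        return "A"          # A is at least as good on both dimensions
    if y >= x and q >= p:
        return "B"          # B is at least as good on both dimensions
    # Assumed tie-break: when neither dominates, take the safer bet.
    return "A" if p > q else "B"

# A = $1000 with probability 0.1, B = $50 with probability 0.9:
# the two procedures disagree, which eye movements could in principle reveal.
print(choose_by_expectation(1000, 0.1, 50, 0.9))  # expectation favors A
print(choose_by_dimensions(1000, 0.1, 50, 0.9))   # dimension-wise favors B
```

The interesting cases for an eye-tracker are exactly the pairs where the two procedures disagree: which quantities the subject's eyes shuttle between hints at which procedure is in use.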
41:25 But this is a general problem. You pointed it out very eloquently in the case of experimental economics--in your paper, which we'll put a link up to, you give some nice examples of what you might call casualness in the collecting of the data, or imprecision in it. The problem I have--and this is true of every type of empirical work, not just experimental economics--is that the researchers don't reveal what they actually did. They tell us an ex-post story: we tested this, it came out, and we were right. You don't know how many times they ran the regression, how many specifications they used, how many times they didn't like the outcome and said: Well, we'd better try it in log-log form instead of in levels. So, when I read these pop psychology books that purport to teach us some new thing about economics, I always ask myself: Can you replicate it? If I did the test, would it come out that way every time? Did you just find it once? How many times did you do it? There's a classic example in a book I like a lot, James Surowiecki's The Wisdom of Crowds--a lot of interesting things in that book. One of them is that when you have people guess something, the mean of the guesses is sometimes very close to the right answer even though no individual guesses accurately. Sometimes true, sometimes not true. But for fun, I did an experiment with my class. I filled a big jar with beans, walked around, and asked people how many beans were in the jar, to see if the mean was anything close. It turned out that almost every single person underestimated how many beans were in the jar. Let's say there were 800; most people said 500, 600. But of the 30 people I asked, one person said something like 8000. It was impossible that there would be 8000; he just looked at the jar, gave a wild guess, and was totally off. But when you included that 8000, the mean came out very close to the true number. Now, does that confirm the theory?
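The arithmetic behind the bean-jar anecdote is easy to check. The numbers below are hypothetical, chosen only to echo the story: roughly 800 beans, a crowd of underestimates, and one wild guess of 8000.

```python
# 29 underestimates plus one wild overestimate (hypothetical numbers
# echoing the anecdote; the real class's guesses were not reported).
guesses = [500] * 15 + [600] * 14 + [8000]

true_count = 800
mean = sum(guesses) / len(guesses)

# Median of the 30 guesses: average of the 15th and 16th sorted values.
middle = sorted(guesses)[len(guesses) // 2 - 1 : len(guesses) // 2 + 1]
median = sum(middle) / 2

print(mean)    # 796.67 -- close to 800, but only because of the outlier
print(median)  # 550.0  -- the "typical" guess is far off
```

The sketch shows why the anecdote cuts both ways: the mean lands near the truth only because one implausible outlier offsets the crowd's systematic underestimate, so a researcher who "cleaned" that outlier would have reversed the result.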
And if I had been trying to disprove the theory, I would have said to myself: Do you really think there are 8000 in there? Come on. And then he would have said 800. And then I would have shown that the theory was wrong. And as you point out: in the protocol, was I careful not to do that? Did I keep every outlier, or did I say: Well, that's an outlier--and pick and choose? Did I run it three times before I got that result? These are the questions we need to ask all the time of experimental economics and empirical economics, and we usually don't, because we are not angels. We need to confirm our biases. In connection with that, let me add two or three points. One thing that certainly needs to be changed in economics--and I have heard many people agree with this point--is that we lack a culture of replication. Suppose you do something original; I am not sure about it; I run it again. The chances that I will be able to publish the replication, whatever the results, are very small. That makes it very rare that people will replicate; and if they do replicate, usually they have an incentive to confirm the result and then to make some sort of secondary experiment--under somewhat different conditions, with some changes--so that they find some other phenomenon; but basically they have to replicate and confirm the original experiment. I think this culture is very bad. There is room, I think, for giving some credit to people who do replications, whether the replication succeeds or fails. Of course it has to be valued less than some big invention, but the profession does not give any incentive for people to do that. Second, there is a big problem with the protocols that experimental economics uses. At the moment, experimental economics requires certain protocols which have become very rigid, and it is extremely difficult--whether you are a famous person or a young person--to publish stuff which does not follow these protocols.
The protocols require certain incentives--namely, giving the subjects some money--and they require what they call laboratories, a nice word for a bunch of computers. The outcome, in my opinion, is devastating for academic production. First of all, the incentives are not serious: there are papers showing that the amounts that are given do not really incentivize people in a way that makes much difference between this protocol and one in which you tell people: you are not going to get any money, but let's imagine that you are in a situation where such-and-such thousands of dollars are at stake, etc. That's because people are very good at fantasies, at putting themselves in imaginary situations; and then I actually believe their reports about what they would do. Of course it's not exactly what they would do, but it gives us some information--in my opinion, better than what we get when we pay $5 to students for sitting thirty minutes at a computer and playing some very boring game. The worst of it is that, at the moment, it makes it nearly impossible to do very large experiments via the web. In the last 8-9 years I've done a lot of experiments on the web, and I think the results I get are no less valuable--in my opinion even more so, because I am talking about many thousands of subjects and participants. I think the results are no less reliable, at least for many of the games or decision problems I am working with. But it is very difficult to publish them, to draw the attention of the community, because the community requires these sorts of protocols. Now, why does the community require these sorts of protocols? Let me again push back on the position that economists cannot consistently take: that everyone else is an economic agent who behaves according to incentives, but we are not; that firms create cartels, but we do not. I think the economics profession, at least in certain ways, behaves like a cartel, and a cartel has barriers to entry.
That's the basic strategy of any cartel. The cartel of experimental economics puts up a barrier to entry, because in some sense it sounds very easy to do experiments. This is one of the things that misleads graduate students. Many graduate students who don't succeed at theory, or at elaborating macro models or whatever, feel they can finish the Ph.D. quite easily: they will get some money from their supervisor, run some experiment, usually get some results, publish them, and get the Ph.D. What the cartel does is put up barriers to entry to prevent too many people from doing this sort of thing. But these days people have good access to the web, to subjects around the world, and can gather information about the way people reason in general, including in economic situations--much better than putting guys in a lab for 30 minutes.
50:22Two comments about this discussion. Recently there was an article by Jonah Lehrer, I think in the New Yorker, about a psychologist who had found a surprising, unintuitive result. He became very honored for it, in demand; his career took off. But over the years he has realized that his finding has not stood up. People tried to replicate it; over time the effect got smaller and smaller, and now it basically disappears when people try to test it. I believe this is not an indication that people have changed--human behavior has not changed over the decades--but rather more honesty about what the experiment revealed. The second observation I would make: You made a very interesting point about the power of introspection--that we accept the conclusions first and then we do the experiments. I have a friend who is reading one of these pop psychology books that tries to show all these novel, counterintuitive results; he's loving the book. There are many books like this--I'm not going to name them--that take the psychology literature and craft a book around these new paradoxical findings. And I said: You know, a lot of those experiments--I'm not sure they can be replicated; I'm not sure they are reliable. And he said to me: Don't worry! I only believe the ones that make sense to me. So, that's a dangerous road, also. There is a challenge--I'm tending more toward philosophy as I get older and less toward mathematical economics--but there is a challenge that your introspection is prone to confirmation bias. And here I am interviewing you--you are confirming all the biases that I have about experimental economics and game theory. So I have to be careful, too. One more issue on which you've been a contrarian, or maybe an outspoken person, brings together a few things you've been talking about. I think there is one thing I have been trying to emphasize in the last few years in my writings: What is an economic model?
I wrote a book in Hebrew which has not been published in English, at least not yet; its Hebrew title means Economic Fables. Stories. Tales. The basic idea of this book, which I also emphasize in some other writings, is that economic modeling in general is a fable. It's a fairy tale. It's not like my image of what a model is in the sciences. I treat the models that I build or invent, and other people's models, as stories. A good model for me is a good story. The difference, at least, is that a good story in the economic literature has to be written in formal language, which has pluses and minuses. The plus is that it makes the story in some sense clearer. On the other hand it makes it less clear, because this formal language is not necessarily understood by many of the people who are going to use it. But in any case, a good model--I think about it as a story. It has to have much of the rhythm of a good story: usually presenting the characters, the economic agents, the motives; and it goes on till the end of the story--what people usually call equilibrium, solution, and so on. But here is the big difference between the fairy tale and the formal economic model: in the formal economic model you need some rules; you are bound. You cannot just go from the beginning to the end in whatever way the writer wants; you have to justify the move from the beginning to the end using some abstract concept, some solution concept, and so on. In many respects these are the same thing--both a good fairy tale and an economic model are very unrealistic, because from the complexity of the world they take very few elements and put them in a very stylized environment. And in this stylized environment we play--in the fable and in the economic model.
And then, at the end of the story--the model--if the fable is interesting, we feel that we have gotten some sort of insight about the world. Not a recommendation to the Prime Minister about what to do, not advice on how to behave; but nevertheless we have some sort of feeling that we learned something about the way people reason and the way the social world goes. I think the same is true of the economic model. If the economic model is good, then it tells us something about the concepts, about the way that we reason. And it should also be enjoyable. It should entertain a little bit--the more dramatic, the more surprising, the better. I interviewed Ed Leamer, who has a book called Macroeconomic Patterns and Stories: this is what we are. We don't like to admit it. We like to think we are doing science, but we are doing something closer to literature and philosophy. I think some people in the profession, when they hear that, bristle; they get angry, resentful, and say you are wrong. Then there are other people on the other side who say: Okay, so there is nothing to economics. I think that's not the conclusion. For me the conclusion is not that economics is useless because these stories are just stories. Economics is a powerful way of thinking. It's a way of organizing your thinking in ways that are not obvious. Economics helps you see things that are hidden, that are unseen. Some of them are magnitudes we can't measure. We are not measuring the gravitational force of a mass; we are talking about magnitudes we can't always measure. But we can see forces and see insights into human behavior that are not obvious and that you wouldn't have noticed before. But to pretend we can use those, then, to run the economy or to design a mechanism for a public policy is often a charade. So for me there is often a middle ground. Economics is a useful way of thinking, or I wouldn't be running this podcast every week. But I think we oversell it.
And when we do, we get into what Hayek called scientism, not science: the use of scientific language and technique to give an aura of certitude that it doesn't deserve. I absolutely agree. Let me just finish by saying that from time to time I talk with young students, and sometimes colleagues of mine criticize me, saying I deter them from studying economics because I tell them exactly what I have been saying in this interview. But I have learned from my experience that sometimes the opposite happens. It may be that I do deter some students, or future students, who want to come to economics because they believe that if they study economics they will make money; maybe they conclude it is better for them to study business administration or finance, or just to start a business. But on the other hand, I think the students economics needs--those with fresh minds, openness to new ideas, and an eagerness to come up with new solutions, in the real sense, to the problems of the world--those more idealistic students find this approach to economics more attractive than being told false facts about the usefulness of economic studies to their pockets or to their ability to give good advice to the finance minister.

Comments and Sharing




COMMENTS (11 to date)
bill greene writes:

This is an excellent example of the humility we need to see in our experts, economists, and legislators. The grandest plans frequently fail and, worse, have unfortunate unintended consequences. Is it possible that the more abstract and complex an idea is, the less chance it has to predict or influence activity in the realm of human behavior?

Rubinstein admits that most economic hypotheses and experiments "provide insights into human behavior that are not obvious and that you wouldn't have noticed before. But to pretend we can use those, then, to run the economy or to design a mechanism for a public policy is often a charade. So for me there is often a middle ground. Economics is a useful way of thinking . . . But I think we oversell it. And when we do, we get into what Hayek called scientism, not science. The use of scientific language and technique to give the aura of certitude that it doesn't deserve."

He indicates that all the presumed scientific "accuracy" of game theory is suspect, and that is true, because examples like the Prisoner's Dilemma are based on an artificial setting that is not readily transferable to real-world settings. And then there is the fact that humans do not always make rational economic choices in reciprocal altruistic interactions, so the application of presumed logical thinking is counterproductive.
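The commenter's point is easier to see with the game written down explicitly. As a hypothetical illustration (the payoff numbers below are the standard textbook ones, not anything discussed in the episode), a few lines of Python can enumerate the strategy profiles and confirm that mutual defection is the unique Nash equilibrium, even though both players would prefer mutual cooperation:

```python
from itertools import product

# Standard textbook Prisoner's Dilemma payoffs (years in prison, negated so
# that higher is better). C = cooperate (stay silent), D = defect (confess).
PAYOFFS = {
    ("C", "C"): (-1, -1),
    ("C", "D"): (-3, 0),
    ("D", "C"): (0, -3),
    ("D", "D"): (-2, -2),
}

def nash_equilibria(payoffs):
    """Return the profiles where neither player gains by deviating alone."""
    strategies = sorted({s for profile in payoffs for s in profile})
    equilibria = []
    for s1, s2 in product(strategies, strategies):
        u1, u2 = payoffs[(s1, s2)]
        best1 = all(payoffs[(d, s2)][0] <= u1 for d in strategies)
        best2 = all(payoffs[(s1, d)][1] <= u2 for d in strategies)
        if best1 and best2:
            equilibria.append((s1, s2))
    return equilibria

print(nash_equilibria(PAYOFFS))  # [('D', 'D')]
```

The formal solution is unambiguous; whether any real-world interaction actually has these payoffs and this structure is exactly the question the formalism cannot answer, which is the commenter's point.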

He does indicate a liking for the Wisdom of Crowds, the supposition that ordinary people often are more correct than the experts--a wonderful endorsement of the free and open marketplace--a marketplace happily uncorrupted by enforced mandates of expert policy makers. This topic led Rubinstein to cite the work of Kahneman and Tversky on how people make decisions (work for which Kahneman won the 2002 Nobel Prize; Tversky had died in 1996). They found that many people are very prone to errors of judgment and that decision-making skills vary greatly, are largely unpredictable, and are based on the unique nature of a person's cognitive make-up.

What is more insightful from Kahneman's work is the finding of only a small-to-medium correlation between IQ test performance and one's capacity for rational decision making. It is my belief that the reason "crowds" of ordinary people make good guesses (and start almost all new small businesses, which is the real fruit of economic activity) is that they think in "concrete" ways--a cognitive process that enhances decision-making. Quite differently, economists and academics operate with abstract and conceptual cognitive machinery that is less applicable to understanding real-world economics and in many cases may be more prone to errors of judgment.

Lee Jamison writes:

Both of you make excellent comments on economics as a part of the "fabling" of culture. Economics lies on a continuum of human storytelling that can be traced back beyond the dawn of written language. Rubinstein makes a significant contribution by staking the modeling of economics back to the ground--and to the fact that, to the extent the field claims to be scientifically predictive in the sense that physics is predictive, it is kidding itself.
There is no small amount of humor in the fact that a man who is essentially saying economics is a part of wisdom literature is writing in Hebrew.

emerich writes:

Yes, more humility in "experts," economists included, would be welcome. But if this had been my first EconTalk, or my first exposure to economics, I would have concluded that economics is abstract theorizing with no connection to the real world. I don't think Russ really believes that. The people who would most benefit from such a realization are also the least likely to arrive at it--politicians blithely foisting vast new legal and regulatory structures upon the rest of us, always comfortable that they've got experts backing their actions.

ric caselli writes:

Incentives in schools and Academia are biased towards narrative as opposed to training for good judgement...

It seems to me that economics is "the sick man" of science in this respect, with the current debate all about competing narratives and very little work done in complex modelling.

Hopefully a new generation of economists with better mathematical tools and less of a religious attitude will come first to define the limits of predictability and then to contribute useful insights and a more flexible vision for a complex world.

Lee Kelly writes:

I haven't studied game theory, but I have studied formal languages a little.

I always make a distinction between a model and a scientific hypothesis.

A model is an abstract mathematical toy, a formal object with tenuous connections to reality. It is not, in and of itself, a proposition about reality any more than an algebraic equation is. Models may describe logically possible worlds, but by themselves they imply nothing about the actual world.

A scientific hypothesis is usually a claim about a model, e.g. "model x corresponds to phenomena y". By replacing variables within the model with measured quantities, predictions can be derived and hopefully tested. But one should be careful to distinguish between the modeler and the scientist, even though they may be the same person in two different roles.

In any case, the problem with economics is that models which have anything more than an ad hoc or "rule of thumb" usefulness require extraordinary degrees of complexity and superhuman abilities of observation to test satisfactorily. Our ability to test scientific hypotheses about such models is extremely limited compared to physicists or even biologists (who face similar complexities).

When it comes to empirical testing, it is up to the individual to put his own views on the line and to resist the temptation to "explain away" recalcitrant evidence (which is especially easy when dealing with complexity and so many variables). One must take risks to learn, after all.

Lee Kelly writes:

I forgot to explain what motivated the above comment:

Rubinstein's remark about how someone had "tested" his model and found an example where it did not hold, and Rubinstein's response that he didn't understand what the point was, because his model was just one among many. He wasn't proposing a scientific hypothesis about particular phenomena. The model cannot be tested; its worth is not a product of its predictive content, because alone it has none. The researcher had mistakenly assumed that Rubinstein was making an empirical claim, because that is the prevailing culture. It seems to me that Rubinstein wishes to reassert the difference between modeler and scientist.

Rider I writes:

Game theory is interesting. I wonder whether a game-theoretic model has been created to show how the Communist Chinese are following exactly in the Soviets' footsteps--everything from creating a bloc (now called the BRIC) to trying to implement a single world currency, to literally going on a resource-domination campaign.

n Nomeni Patri Et Fili Spiritus
http://rideriantieconomicwarfaretrisii.blogspot.com/

Rider I

Berel Dov Lerner writes:

A comment on the experimental testing of obvious truths about human nature (Prof. Rubinstein seems at some points in the interview to imply that such truths are so overwhelmingly obvious that there is no point in testing them empirically). People often make fun of social and behavioral scientists for seeking empirical evidence for apparently obvious truths about human nature. The problem is that often a number of such "obvious truths" apply to a given situation and point toward different expectations of how people will behave. The trick is to figure out which "obvious truth" will carry the day. That is exactly the problem faced by people trying to study cases such as the Israeli day-care experiment, in which the imposition of a fine on parents who show up late to pick up their kids should *obviously* serve as an incentive for them to arrive on time, while the fine system also *obviously* neutralizes courteousness as a motive for showing up on time. How could we know which of these opposing "obvious" factors would determine the outcome without running the experiment?

Charlie writes:

It would have been a more interesting podcast if you had actually gotten to some concrete examples of how applied game theorists are deceiving themselves or others. EconTalk, for instance, has had Bruce Bueno de Mesquita on several times, and he argues that the method of game theory is very useful in real life. Is he wrong? Is he deceiving himself, and if so, how?

AHBritton writes:

I have many thoughts regarding this podcast, but as often happens my desire to be thorough is often the enemy of my desire to present something in a timely enough manner that people will actually respond to it. In an effort to avoid that I am just going to toss out a couple of quick ideas, though it's possible the conversation has already died down here.

Game theory strikes me as very similar to praxeology, the term used mostly by the "true" Austrians out there (slightly sarcastic) who follow the tradition mostly from Mises through Rothbard and others.

The claim being that we can determine economic truths merely by using basic principles and deducing from them logical consequences (similar to the study of math and logic).

This is obviously oversimplified. It would be interesting to speak to someone about this on the podcast, such as maybe Hans Hermann Hoppe.

Secondly, I got very contradictory messages from this podcast. In short: economics isn't very useful; it doesn't matter whether those using economic models are really describing systems that exist in any real sense; confirmation bias is bad, but if you hear a good theory you "just know it"; most contributions are made by people who happen to be smart and were likely helped little by any training, systems, etc. We can figure out a lot by just thinking about what sounds right.

I felt very little cohesion, and although I would likely agree on many issues if I was able to clarify what was being meant, I had a hard time determining what exactly that was.

James writes:

I completely agree with the points made in this podcast about the distorted incentives researchers face when it comes to publication and discussion of results.

The whole root of the problem is the "journal article publication" model of scientific progress. It really makes no sense in most fields to write a 20 page manuscript with arguments, theory, methods, discussion, references, etc, plus all the hassle of getting past reviewers, revising, paying fees, formatting tables and graphs, etc. just in order to tell the world that you observed X causing Y, or X not causing Y. It is way, way, way too much trouble. It means that something has to make quite a "splash" before you will even consider writing it up.

I have multiple datasets on my hard drive right now, full to bursting with X-causes-Y or X-correlates-with-Y results. I know people who run dozens of experiments every single day, filling lab notebooks with data the world will never see.

What is needed is a giant WIKI-SCIENCE or WIKI-Meta-analysis that all scientists can easily and painlessly update with data, methods, and results on a DAILY basis, regardless of whether the observations are considered "interesting" to the current fashions of the day, whether they are statistically significant in isolation, etc.

This would produce a tidal wave of information, and lead to the creation of entirely new fields of inquiry that do nothing but meta-analyze the work of others. The current prose-based journal article is just massively inefficient and biased.

Comments for this podcast episode have been closed