Paul Pfleiderer on the Misuse of Economic Models
Sep 8 2014

Paul Pfleiderer, C.O.G. Miller Distinguished Professor of Finance at the Graduate School of Business at Stanford University, talks with EconTalk host Russ Roberts about his recent paper critiquing what Pfleiderer calls "chameleon models"--economic models that are treated as explanations of the real world even though little attention has been paid to whether their assumptions are accurate. Also discussed are Akerlof's market-for-lemons model, Friedman's claim that assumptions need not be realistic as long as the model predicts what happens in the real world, and the dangers of leaping from a model's results to policy recommendations.

RELATED EPISODE
Sabine Hossenfelder on Physics, Reality, and Lost in Math
Physicist Sabine Hossenfelder talks about her book Lost in Math with EconTalk host Russ Roberts. Hossenfelder argues that the latest theories in physics have failed to find empirical confirmation. Particles that were predicted to be discovered by the mathematics have...
RELATED EPISODE
James Heckman on Facts, Evidence, and the State of Econometrics
Nobel Laureate James Heckman of the University of Chicago talks with EconTalk host Russ Roberts about the state of econometrics and the challenges of measurement in assessing economic theories and public policy. Heckman gives us his take on natural experiments,...

READER COMMENTS

Greg G
Sep 8 2014 at 9:30am

Excellent podcast and I agree with just about everything that was said about the limitations of economic models.

A lot more could be said though about the limitations of humility in economics. Every economic policy choice and policy preference is an implicit prediction. Every policy preference is a prediction that that policy will produce better results than the alternatives. Every policy preference is based on some model of how the world works.

The true measure of humility in economics is how much doubt you have about your own policy preferences, not how much doubt you have about someone else’s model of how the world works.

Keith Vertrees
Sep 8 2014 at 11:09am

"Well, in the end, I'll make the trite observation--we do have to make assumptions. Janet Yellen has to have an opinion about how to do things and all the people working in the Fed do as well, and those voting in the Congress do. The IMF (International Monetary Fund), the ECB (European Central Bank)--everyone is having, is in a position where they have to take a stand. There's a decision to be made, maybe not at the end of the day, but perhaps at the end of the week or the end of the month."

These entities only have to have opinions because they have been given unwarranted and immoral control over the economic lives of others.

I would prefer that every person be subject to his own models–not to those of the IMF or a central bank.

chitown_nick
Sep 8 2014 at 12:40pm

Thanks for the good discussion – the comments early on, including “My criticism is when we somewhat blindly take those models off the shelf and immediately or without too much reflection apply them to policy.” reminded me of a sketch I came across in grad school. Not a whole lot to add substantively here, but as an engineer, I even see this happening with the “hard” sciences, which I imagine is only more frustrating when applied to something that requires much more nuance to convey correctly.

mtipton
Sep 8 2014 at 3:54pm

Human, ALL too human. I think it's great to raise these kinds of topics as often as possible to encourage skepticism among the general population. Lately I've been involved in discussions with people who sit at the extremes of the left and the right. I am amazed at the degree of certainty a lot of these people have regarding economics, philosophy, psychology, politics, etc., areas in which they aren't experts. This doesn't stop them from dismissing all the scientists while at the same time claiming that they are scientific. They are extreme AND dogmatic. In this context I've come to appreciate serious scholarship with all its flaws. I don't think economists being more humble would have much effect on the population at large, given the gap that exists between scientists and most people. It would probably help advance the science, though, and that is a good in itself. Economic science, while imperfect, at least keeps its disagreements bounded within a reasonable framework. There's plenty of room for disagreement about particular policies; however, you don't have many economists advocating total control of the means of production, or anarchism. I guess I am trying to say that opinionated people abound, and I would rather have experts be opinionated in their own fields than have lay people be opinionated.

mtipton
Sep 8 2014 at 4:07pm

I guess you could say I am battling the extremes. In that world, economists (left, right and center) come across as united in their views, reasonable, sensible, and even humble. If you saw the arrogance and certainty exhibited regarding all the fields mentioned above by people who don't have a PhD in anything, scholars would come across as almost saintly. They can of course do better; we all can.

Stéphane Couvreur
Sep 9 2014 at 5:20am

I think some discussion of Bayesian theory would be useful. Remember that the basic Bayesian way of thinking is the following:
- if my priors (theory, model) have probability X
- and the empirical data has probability Y
- then the following prediction has probability Z (by a simple formula)

In classical logic, we assume X and Y (minor and major premises) to deduce Z (conclusion). It is the usual syllogism: if A then B; A is true; therefore B is true. In Bayesian theory it is the opposite: we can revise our priors using empirical data: if A is true then B is likely to be true; B is true; therefore A is likely to be true.

This applies to the present discussion, because the problem is to choose between various possible models. I think what P. Pfleiderer calls 'filtering assumptions' could be described in Bayesian terms.
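
One way to make the comment's sketch concrete (an illustrative aside, not part of the comment or the paper) is Bayes' rule for weighing two candidate models against the same data. The Python snippet below uses made-up probabilities, and `posterior` is a hypothetical helper written only for this example:

```python
# Illustrative only: Bayes' rule applied to "filtering" between two mutually
# exclusive candidate models, M1 and M2, after observing data D.
def posterior(prior_m1, p_data_given_m1, p_data_given_m2):
    """Return P(M1 | D) given P(M1), P(D | M1), and P(D | M2)."""
    prior_m2 = 1.0 - prior_m1
    evidence = prior_m1 * p_data_given_m1 + prior_m2 * p_data_given_m2
    return prior_m1 * p_data_given_m1 / evidence

# Agnostic prior: data that is likely under M1 (0.8) and unlikely under M2 (0.2)
# shifts belief strongly toward M1.
print(posterior(0.5, 0.8, 0.2))   # 0.8
# A strong prior in favor of M2 damps the shift considerably.
print(posterior(0.1, 0.8, 0.2))   # ~0.31
```

Read this way, 'filtering assumptions' amounts to asking whether the prior placed on a model's assumptions is grounded in evidence rather than in the modeler's preferences.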

Chris
Sep 9 2014 at 4:54pm

I’m an astronomer by training, and this whole episode reminded me of the strong parallels between astronomy and economics. Without the ability to conduct actual experiments (keeping conditions fixed and manipulating only a few variables to observe the outcome), we’re stuck using models to infer relationships and behaviour. It brings to mind one of my favorite quotes from a paper in the astronomy literature:

“Interpretation is cheap in astronomy because any ad hoc assumption consistent with our extensive ignorance and limited data may be used.” (Condon et al., 2002, Astronomical Journal, 123, 1881).

Seems rather apropos to the discussion of economic models.

Jason Thomas
Sep 9 2014 at 11:16pm

This was good. More of this, less of the topics of the last several weeks.

tt31
Sep 10 2014 at 1:03pm

A few random reactions. 1) These ideas are familiar from my long-past undergraduate philosophy of science class. The distinctions made between econ and hard science therefore seem a little off to me – a lot of these ideas have been brought up about the history of the hard sciences. Given the work done in the philosophy of science, I’m sometimes disappointed that epistemological discussions about economics are often aimed at showing that economics doesn’t match a vision of science that was already discredited anyway. For this podcast in particular, I’m sure the interviewers and interviewees know a lot about the more sophisticated philosophy of science, but it doesn’t always come through in the dialogue. Would you ever consider doing something close to a basic history of philosophy of science 101 that might help set context for these types of discussions in the future?
2) I often think good arguments for skepticism are followed by lazier thinking about the implications of skepticism. The conclusion here is that the right reaction to Pfleiderer’s argument is humility/humbleness. But is it? And why? Similarly, (related to but not about this podcast), I often see a sophisticated case for skepticism followed by lazy conclusions that the skepticism somehow supports one’s prior point of view. I’d love to hear some deep thought about what to do in the face of skepticism – maybe something you can include in a future podcast?

steep
Sep 10 2014 at 1:11pm

At a recent workshop I attended to learn about modeling river systems, the instructor gave us some good advice for our modeling work. “All models are wrong, some are useful.”

I wish I could remember the professor he heard it from to give the proper credit.

Chris
Sep 10 2014 at 5:39pm

The quote “All models are wrong, but some are useful” is often attributed to George Box. E.g. see here

philipp
Sep 11 2014 at 5:33am

Somehow this reminded me of this quote attributed to Kurt Tucholsky, “Historical materialism’s task is to explain to us how everything must play out, and if it doesn’t, why it could not play out this way.”

Somehow modern economics doesn’t seem to have developed much past that point.

steep
Sep 11 2014 at 11:37am

Thanks Chris.

I should have known it would be out there, somewhere on the web.

Simon
Sep 11 2014 at 2:30pm

Within the confines of the topic, this was an interesting discussion. However, I think “humility” is the wrong lesson to draw from this. That word includes an implicit assumption that there is validity to economic modeling and that we should just be more accepting of its limitations. I think the real lesson is “futility”. Modeling the economy assumes it is a machine and that we just have to understand the way it works so we can manipulate its constituent parts. But this is fiction. The “economy” is a term that captures millions of unique individuals with very different objectives making multiple decisions daily about how to exchange what they have but don’t need to obtain what they want but don’t have. It is futile to believe all of these decisions and actions can be modeled.

Moreover, the only reason for economic modeling (the source of its demand) is to give the individuals in government ammunition to use when setting policy. That makes the implicit assumption that these people should be setting policy for the rest of us. Too few people question that assumption. Let’s remember, “setting policy” means a few individuals coercing millions of others to do with their property and bodies what those few individuals decide should be done. What if government only provided defense, courts and police (or less), and left the economy completely alone? Think of all the economists and models that would be made redundant!

I was surprised to hear the guest, who is clearly well-credentialed in economics, say that there is no alternative for economists but to model. What about Austrian economics, which sees economics as an a priori science based on logic (compared with the rest of economics, which is empiricist)? Hans-Hermann Hoppe lays out the difference in his short book “Economic Science and the Austrian Method”. Austrian economics starts with describing how humans act and builds from there. It uses this logic to explain how an economy works, and has a superior record of predicting economic outcomes. In fact, as noted in the discussion, one of the problems with empirical economics is lack of large data sets for experiments. Austrian economics uses the world’s largest data set: the observation of human action over hundreds of years of history. For instance, what Russ and his guest called “gaming the system” is a pejorative term for humans acting in response to government-provided incentives to satisfy their highest priority needs. People use this term all the time to describe actions they themselves take every minute of every day but which, when taken by others, are somehow worthy of criticism and attempted suppression by those in government. Such reactions to artificial incentives are entirely predictable, although those in government and those supporting them with models fail to acknowledge this (intentionally or negligently). The problem is not human action; it is the incentives put in place by those in government.

Kevin
Sep 11 2014 at 4:35pm

Good discussion. Regarding tt31’s first comment, although that theory of the hard sciences may have been disregarded, the big difference is that the hard sciences produce predictable results that are tested in millions of ways. We can, in fact, build iPhones, so our knowledge of the physical sciences involved must be pretty valid. We cannot predict the macro economy. So, despite whatever reservations we may have about the hard vs. soft distinction, the two clearly have greatly different abilities to produce consistently actionable knowledge. Skepticism about our understanding of electronics, except at the bleeding edge, seems extreme. Skepticism of even the mildest macroeconomic thesis seems legitimate.

Brian
Sep 12 2014 at 8:00am

As to what can be done about the low quality of Western science in general and of the softer sciences in particular, indeed change the incentives.

No government funding of research.

keatssycamore
Sep 12 2014 at 10:03am

Enjoyed this podcast very much. Thanks. I especially liked the attack on Friedman’s ‘as if’ argument.

Coincidentally, I heard this episode after finishing two books that really put the believability of studies/models under the microscope. The first (though many of the experiments described in it are presented as proving things that I’m not so sure they have) is The (honest) Truth About Dishonesty by Dan Ariely, and it tells you why & when people lie (because they can & whenever it’s slightly in their extremely broad interests to do so). The second was Bad Pharma by Ben Goldacre, and it explains how studies are deliberately manipulated on a systemic basis in a way that’s obviously applicable beyond the pharmaceutical industry.

Recommended for all the people who walk around angry that everyone is so “anti-science”.

Seth
Sep 13 2014 at 6:53pm

I 2nd Simon’s remark. It seemed like the guest ended with, ‘it is hocus pocus, but all we have is hocus pocus and people expect us to do stuff with the hocus pocus, so we have no choice.’

There are alternatives. Economists can start saying, “I don’t know” and “We don’t know” much more.

They should hold each other accountable. When one makes a prediction that turns out correct, we should know about all the predictions that didn’t so we can have a better idea if it’s just luck.

mtipton
Sep 14 2014 at 12:33pm

I second ‘tt31’ on having a Philosophy of Science 101, to get into the basics:

“Would you ever consider doing something close to a basic history of philosophy of science 101 that might help set context for these types of discussions in the future?”

Thanks for the podcast!

William
Sep 15 2014 at 1:57am

The problem with complex adaptive systems is that no model will be able to account for all possible outcomes.

Every action by a market participant will provoke reactions from other market participants, just as a chess player’s move will draw different moves from his opponent; and since humans are often “irrational,” they often make suboptimal reactions.

Since an economy does not exist in isolation (i.e., it requires different market participants working together to create an economy / market), we cannot simply study a single firm and expect the whole economy to behave the way that single firm behaves.

This is the equivalent of trying to predict the behaviour of a human being by studying the way the carbon atoms in our bodies behave.

David Zetland
Sep 15 2014 at 12:01pm

Really fantastic discussion on a topic that’s worried and annoyed me for years (esp. as a PhD student facing ridiculous models). I especially appreciated the 1 million pool shots vs the CFO’s guess on bond market movements.

This paper — and the econtalk — should be mandatory for PhD students and (especially) for professors and researchers.

Chris
Sep 15 2014 at 6:16pm

Thanks to Russ and Professor Pfleiderer for the intriguing and thought-provoking podcast. Are we lacking adequate standardization in the way we assess economic theory? If not, please explain. If so, how can we contrive a standard that blends the “science” of economics with its fundamentally “artistic” component? What basic elements would comprise this standard? Is it appropriate to assess economic theory the same way physical and natural science theory is assessed? I’m curious to hear everyone’s thoughts.

William
Sep 15 2014 at 11:56pm

Chris, on your question: “Are we lacking adequate standardization in the way we assess economic theory?”, I thought (as a lay person) that the problem is the other way around.

So many neo-classical economists dominate the universities that the world thinks neo-classical econ is the one and only econ, and anyone who thinks differently is excluded from the establishment.

But even more fundamental than that, I think the study itself, important as it is, seems to be filled with unfalsifiable theories – and thus it is closer to pseudo-science than a “blend” of science + art.

Imagine 5,000 years ago, two humans saw thunder in the sky. One does not care about it and goes on with his daily life. The other wonders and thinks deeply about it, and after years of theorizing, modelling and deep thought, finally concludes that there must be a God throwing spears in the sky, causing the thunder.

That is, in summary, what I think economists are… people who think too much about the thunder and come up with theories that satisfy their “thirst for knowledge,” while most people just go on with actually doing economic activities without speculating about the invisible hand of the market.

DaveJ
Sep 19 2014 at 6:38pm

I’m curious what Dr. Pfleiderer thinks of the work of Alex Rosenberg on this same topic, including his book and this recent article in 3am Magazine (which is where I came across his work).

Chris McClain
Sep 24 2014 at 8:26am

It seems to me that this discussion relates to a common confusion of the terms hypothesis and theory in science. There is absolutely nothing wrong with continuously tweaking models, but we should recognize that this exercise does nothing but generate hypotheses. Hypothesis generation is an important part of science, but hypotheses must be tested for their predictive ability before conclusions can be drawn. Until the model routinely predicts future results that are then confirmed by data, we can’t exactly call it science or theory.

We have the same problems in science with nutritional recommendations based solely on “observational studies” and in physics with “string theory”. We shouldn’t criticize the application of “science” to economics on the basis of economists who aren’t doing the whole job, and therefore not really conforming to scientific standards. We should criticize such economists if they claim their model is a verified theory rather than a hypothesis generated by a snapshot observation of particular data.

Ron Crossland
Sep 28 2014 at 3:58am

Enjoyed this discussion very much. Humans automatically create mental models. We very frequently, and unconsciously, formalize these models. Sometimes we recognize that we have, but most often we don’t.

The biggest problem with all our model-making bias is how rigid a model becomes once we create it (regardless of the model’s complexity). I was surprised that this podcast didn’t mention some basic neurobiology along the way as many of the points made are products of our cognitive biases.



AUDIO TRANSCRIPT

 

0:33 Intro. [Recording date: August 28, 2014.] Russ: Before introducing today's guest I want to mention that I have a new book coming out. It's now available for pre-order at Amazon. The title is How Adam Smith Can Change Your Life: An Unexpected Guide to Human Nature and Happiness. It's my attempt to apply Smith's Theory of Moral Sentiments to modern life, lessons related to work, parenting, marriage, virtue, and even possibly happiness. Long-time listeners will remember the 6-part series with Dan Klein on The Theory of Moral Sentiments. You can check that out in the Archives. And there will be an interview about the book with me coming soon, with Mike Munger as interviewer, who I'm sure will grill me mercilessly.
1:13 Russ: Now to today's guest. He is Paul Pfleiderer. He is the C.O.G. Miller Distinguished Professor of Finance at the Graduate School of Business at Stanford University. We're going to be talking about a very interesting paper he's written recently called "Chameleons: The Misuse of Theoretical Models in Finance and Economics", a topic that runs through many EconTalk episodes. Paul, welcome to EconTalk. Guest: Thank you very much for having me. Russ: Let's start with the role that assumptions play in the modeling process in economics and finance. You describe something called 'Theoretical Cherry-Picking,' an idea I had not heard before, or at least not called that, and I thought it was really interesting. What do you mean by the phrase? Guest: Well, I'm actually referring in part or harkening back to the problem that plagues empirical work, which is cherry-picking the data. If I wanted to show that a particular result might occur, one way that I might disingenuously do it is to cherry-pick the data. In other words, choose the cases that confirm the hypothesis and reject those that don't. There's actually a rather amusing story--I don't know the exact details; I can't remember them, but here at the Stanford Research Institute many years ago, which is not actually affiliated with Stanford now, but at the Institute they were testing the ability of certain psychics to basically display ESP (Extra Sensory Perception) behavior. And what they did is they gave these psychics a machine that they could actually take home, and the machine would randomly choose something and then the psychics, the purported psychics, would have to guess what it was. And it produced a tape that would show the ones that they'd gotten correct and the ones that they hadn't. But what occurred was the psychics or purported psychics could basically take the tape and tear off the misses and keep a big segment of the hits, and come back and show that they had this ability. So that's an example of the problem, which we all know exists in the realm of empirical work. In theoretical work, you can potentially do this as well. I make the claim in the paper, which is--I make it with a little bit of qualification--that with any set of assumptions, you can produce a particular result. In other words, if you want to produce a particular result, you can choose a set of assumptions that will give rise to that result. Now, that's a little bit too strong. But almost any result-- Russ: Not too much. Not so much. Guest: That's probably right. I think almost any result that doesn't rely on a logical contradiction, you can make a certain set of assumptions that will give rise to that. So, we're all in some sense aware of that. And we know that there is that power out there. But in theoretical work what we do, I think, is give a fair amount of latitude to people developing theoretical models, because it's actually quite difficult to come up with a tractable model. And it's certainly impossible to come up with a model that embraces all the things that are going on in the world. We choose a subset of things to model, a subset of forces that we think are important. And we make some assumptions for tractability or to abstract from certain things that we think are not important. And what that gives, of course, is a fair latitude to focus on some things and not put other things in the model, and to put in assumptions that we know are probably somewhat contrived, but make the model tractable.
What that opens the door to, I think, is the ability to start with the idea that I'm going to produce a certain result; I want to show that something is important. I want to show that if you do more of something, something bad will happen or something good will happen. Or I want to show that something that I see out there is an optimal solution to a problem. And I reverse engineer: I go back and see what set of assumptions I can contrive to give that result. And, if I then take this model and say that it is actually telling me something about the real world, I'm engaged in a bit of theoretical cherry picking in the same way that if a psychic comes with a set of successes on a tape and says, 'Look at my ability,' I have to be careful to ask whether that really is representative of the psychic's ability or whether it was cherry picked. So I introduce the notion in the paper of what I call 'bookshelf models,' which is not meant to be pejorative at all, but simply an exploration of what happens when we make a certain set of assumptions--what conclusions we get, basically a logical exercise. And there's certainly a lot of examples in economics where we have models--I can mention one, the lemons model, for instance, by Akerlof, which is a model that makes a simple set of assumptions about asymmetric information between a buyer and a seller and has some very profound insights into how the world might work. So these models are obviously very important for understanding the phenomena that we generally look at in economics. However, what we have to be careful about is taking those models off of what I'm going to call the bookshelf and applying them immediately to the real world without passing them through a filter to determine whether the assumptions that we've made are really ones that we have reason to believe are operative. Because, we know that with theoretical cherry picking someone can come up with a set of assumptions that produces a result that may logically follow from those assumptions, but if the assumptions really don't have much traction in the real world, that result really doesn't have much to say about what we are actually looking at.
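The psychic-tape story gives a feel for how powerful selective reporting is. Here is a minimal simulation of it (an illustrative sketch, not from the paper or the conversation; all numbers are arbitrary):

```python
# Purely random guesses, but reporting only the kept segment of the tape
# makes the apparent hit rate perfect.
import random

random.seed(0)
# Each trial: guess one of five symbols; a hit occurs by chance 1 time in 5.
trials = [random.randrange(5) == random.randrange(5) for _ in range(1000)]

honest_rate = sum(trials) / len(trials)
kept = [t for t in trials if t]                  # "tear off the misses"
reported_rate = sum(kept) / len(kept)

print(f"true hit rate:     {honest_rate:.2f}")   # ~0.20, i.e. pure chance
print(f"reported hit rate: {reported_rate:.2f}") # 1.00 after discarding the misses
```

The analogous freedom on the theory side is the choice of assumptions: with enough latitude, the 'hits' can be engineered rather than discovered.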
7:23Russ: So, let's take that lemons model for a sec, and then we'll come back to the more general issue. In the lemons model, the seller knows more about the car than the buyer does. Which is at least a good place to start. I think that is somewhat realistic. And the reason I want to go into it is I think a lot of people on my side of the ideological fence--which is the fence that tends to be respectful of what markets can achieve--they, I think, mischaracterize the Akerlof paper by saying the implication that Akerlof draws on that bookshelf model of information, the fact that the seller knows more than the buyer, is that the used car market can't exist because the seller knows too much; the buyer can't trust the seller, can't find out the information. And of course, there is a used car market. So, obviously this asymmetric information problem has been overcome in some dimension. And I think the virtue of bookshelf models--I'll be critical of them later--is that it tells you where to look to understand why in this particular case this market does work. Somewhat well--not perfectly, of course; no market does. But what are the underlying market forces that make that market possible? Obvious examples are people find other sources of information than the seller for the quality of that car. And it allows the buyer to buy the car with some trust. Or warranties are provided sometimes for cars. That's another way that the used car market is sustained. So I think that's I think the best case story you can make for a bookshelf model. Am I being fair there? Guest: I think you are. So, first of all, obviously, used car markets do exist. But perhaps there are a lot of used car markets that don't exist that we don't see because the Akerlof phenomenon tends to be too severe in those markets. But you are absolutely right. I should point out that in the Akerlof model it's not the case that these markets won't necessarily exist. If the asymmetric problem is too large and the gains to otherwise trading and selling your car too small, then the market will break down. But if there is asymmetric information but at the same time there are legitimate reasons for selling your car for instance for moving then those markets can exist. So, the Akerlof model is a great example of a bookshelf model, I'll call it that, that gets us to think about what's important and what the tradeoff is going to be in terms of whether a market can exist or not. And explaining spreads. That Akerlof model certainly applies to trading in a financial market where there's asymmetric information between the buyer and the seller, and that explains in part the spreads that we see in transactions. So, it opens up a host of insights, and Akerlof certainly deserves the Nobel Prize for that modeling because it did give rise to a lot of intuitions about how the world would work and how warranties might play a role and things of that sort. Though--and I brought up the Akerlof model as an example of a bookshelf model just to show that I wasn't using that in a pejorative sense. Russ: Right. 
Guest: I think it's actually extremely important that we reason about economic phenomena by asking what under a certain set of assumptions will occur and what won't occur, because markets are somewhat complex, economic phenomena are somewhat complex; and the whole reason for modeling is to make sure that we're not just thinking about things on the surface and that we actually model things, look at the logical implications of something and actually detect in many cases some second-order effects that might actually be very important, or some unintended consequences, and all those types of things. So, economic modeling is hugely important for our understanding of anything in the world. And I don't mean to disparage modeling as an exercise. My criticism is when we somewhat blindly take those models off the shelf and immediately or without too much reflection apply them to policy. In fact, my real criticism is that there is sort of an ontological standing that some of these models have, in the following sense: Someone has a model that they've written down and let's say published in maybe even a top-tier journal that shows that something can happen in a certain situation. And it's a bookshelf model because it makes a certain set of assumptions and then shows what follows from those assumptions. But it acquires an ontological status that it doesn't deserve when other people in a policy debate, for example, simply say that, oh we have to be concerned about this because x has in a model shown that this happens. Well, we may have to be concerned about that, but before we get concerned about that, or before we conclude that this is a legitimate phenomenon, we'd better look at that model and see if it applies to the world that we're actually interested in. In other words, are the assumptions reasonable? And so on and so forth. So, what I call a 'chameleon model' is a model that is sort of straddling two worlds. On the one hand, it's a bookshelf model. It's simply a logical exercise of seeing what follows from a given set of assumptions. But because it exists in that logical realm then some other people, maybe the author of that model but maybe some other people somehow think that because this is proven it applies the conclusions of the model to the so-called real world. And that's a big jump, because before we know that it applies to the real world, we better look at what the model is assuming and make sure that it's captured the important things that are going on. But what happens sometimes is when you end up challenging that model and criticizing it because of its assumptions, someone will just basically defend the model by saying, Well, it's like any model. We make assumptions. And sort of put it back on the bookshelf. In other words, say that that's unfair criticism; it's just a logical exercise that we're going through. So the model is a chameleon; it's made into a chameleon because it straddles those two worlds. One, just a logical exercise that helps to give us some intuitions potentially, and another where we think that it actually applies to the real world, without really passing it through the filter that we should pass any model through, asking: Are the assumptions really ones that we believe are important in explaining or potentially explaining some phenomenon.
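The trade-off described above--the market unravels when the information asymmetry is severe relative to the gains from trade, and survives otherwise--can be captured in a few lines. This is an illustrative sketch with invented parameters, not Akerlof's own formulation: quality q is uniform on [0, 1], a seller values a car at q, and a buyer values it at g times q.

```python
# At any price p, only sellers with q <= p offer their cars, so the average
# quality on offer is p / 2 and buyers will pay at most g * (p / 2).
def equilibrium_price(g, steps=200):
    """Iterate the price until buyers' willingness to pay for the average
    offered quality stops changing (capped at 1, the price of the best car)."""
    p = 1.0                              # start with every car on the market
    for _ in range(steps):
        p = min(1.0, g * (p / 2.0))      # pay the value of average offered quality
    return p

print(equilibrium_price(g=1.5))   # ~0: gains from trade too small, market unravels
print(equilibrium_price(g=2.5))   # 1.0: gains large enough, every car still trades
```

That is the sense in which a bookshelf model earns its keep: it points to the forces--here, the size of the gains from trade relative to the asymmetry--to examine when asking why a real used-car market does or does not function.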
14:19Russ: So, we'll come back to that question about assumptions because we are going to talk about Milton Friedman's paper in a little bit. But I want to rephrase or reframe what you are talking about. I think the fundamental question here--and I think it should be fundamental question of all economics and I think it often is not, which I take to be your point. But the fundamental question is, have we learned something about the real world from this application of the model? And I think too often--first of all, it's ironic. Or weird. It's not just that empirical work is important, it's the model [?], so therefore people claim, Oh, that shows that these assumptions are true. It's also sometimes just a simulation, within the model, using somewhat realistic measures or sizes for elasticities of substitution, say, or other pieces of the model. And I find that really unbelievable, that we've come to the point in economics where the simulation of a theoretical model somehow tells us something about, say, immigration or taxation of capital or similar areas. So, to me, I think there is a natural tendency, which I think you are pointing out is wrong, to say, 'Well, the model predicted well. If it captured something about reality, that means that my assumptions must be capturing something about reality.' And I take your big point to be: that doesn't follow at all. Guest: That is exactly my point. And just to basically reiterate what you said, because you said it so well, the problem is that I can observe something happening in the real world--let's call it B. B is a set of phenomena or something about asset pricing, something about the contracts we see out there, the effects of taxes, whatever it might be. I see something in the real world called B. It's something I want to explain. And then I go back and I come up with a set of assumptions, call it A1-A10--usually several assumptions have to be made--and I ask, if I make those assumptions and then calibrate it, because I think what you were referring to is sort of exercising calibration, if I calibrate the assumptions, how big is risk aversion? How big is this? How big is that? And I do some simulations. Do I come up with B? Well, yes, if I have enough degrees of freedom in choosing those assumptions--this is the cherry picking--and I have enough degrees of freedom in the calibration exercise, I shouldn't be all that surprised if I'm able to come up with B. And the problem, of course, is that there's also another set of assumptions, not A1-A10, but A11-A22, that, if I made those assumptions--or call them A'1-A'10, whatever--a different set of assumptions, and calibrated those, I could probably also come somewhat close to explaining B. And this exercise could go on for a fairly long time. So you are exactly right: The fact that I could come up with a set of assumptions and calibrate those in a way that gets me to approximate what I see out there doesn't tell me that those assumptions are correct. And doesn't give me the right to take them associations into some other phenomenon and say, Oh, this is how the world works; now let's see it's what's going to apply there. Russ: I say it's worse than that. I want to take an example, and I'm going to push you. You open your paper defending models. You just about four minutes ago defended them again in general. You are talking now about what you think of as the misuse of modeling. But I think the hard part is--I'm going to push you back into a corner. 
The question is: Are you going to be left anywhere out in the room or are you going to just be back in the corner? So, one of the obviously key issues facing economists today, and policy-makers, is the labor market. It's not doing very well. It has not rebounded after this last recession the way I think a lot of people expected it to, would have predicted it to. And so people naturally look for explanations. There were numerous explanations, based on various ways of looking at the world. And some of those are formal models, the kind we're talking about. Some of them are informal, really--people try to jazz them up and make them formal with some math. But basically, one set of views says there's a lot of uncertainty and people are having trouble making decisions. A second view says, well, we've distorted the labor market through a set of policies that we might like, but one of the implications of that is that, because of the high marginal tax rates that we've imposed on workers and on firms that hire them, we've made it harder for the labor market to expand the way it normally would. A third argument says, Well, the real problem is lack of aggregate demand; we should have spent more as a government, should have borrowed more; we shouldn't have worried about this or that, etc. And each side is totally capable of providing evidence, which seems to confirm the underlying assumptions of the model. And I would argue, even though I'm very sympathetic to one set of some of those views and very unsympathetic to others--what's the basis for my sympathy or unsympathy? Where is the science in any of that other than cherry picking both assumptions and empirical evidence? Guest: There's no doubt in my mind that people don't come to these problems with a clean slate, a tabula rasa. We all have our ideological predisposition, probably is a good way of putting it. And people, I think, then gravitate toward certain explanations. And indeed, what one can do, I think, is look at this one-time event--because this particular event is similar perhaps to what happened in the Great Depression but very different in many other ways--and try to figure out what is operative here, and try to calibrate things. But invariably you are making a certain set of assumptions, probably ignoring other things. And again, I think it's exactly right, that with a certain set of assumptions, you can come up with a result; with another set of assumptions, you can probably come up with the same result; and given how difficult it is to empirically test these things--we've only got 1 financial crisis of this particular sort; there are other ones that are maybe a little bit similar, but laws were different, all kinds of things were different--we don't really have the latitude to decide these things definitively. And I guess that makes some people say economics can't be the science that obviously some of the other sciences can be, where you can do experiments in very controlled environments and have treatment effects and hold other things constant. The one promising aspect of empirical work these days is that empiricists have gotten pretty good at looking for natural experiments, and looking for cases where we can perhaps detect the direction of causality. But I think it's well understood there that those natural experiments are oftentimes few and far between and not always directed at the big questions. 
So, I've heard some colleagues of mine lament that some of the empirical literature on natural experiments is driven not so much by looking at the important questions that we want to answer and then looking around for a natural experiment, because a lot of times we can't find one; but rather seeing some natural experiment that occurs and then doing the analysis [?]-- Russ: Aha! A paper, a published paper [?] Guest: That's it exactly. But I think that's a little bit too cynical. I think that our search for natural experiments to try to determine, when we look at for instance a state boundary where the law is different on one side of the state boundary than the other and it's pretty much to be taken as exogenous perhaps whether one is living on one side of the state boundary or the other because one didn't locate there because of the law, we can conclude some things there. But obviously if you look at the current debate, the natural experiments haven't decided in favor of any particular view of what's causing the situation we're in now with labor markets being where they are. And people can hold to various weightings of how much it's due to lack of demand, government policy, structural problems that still haven't been ironed out-- Russ: Technology, I would mention that one. Guest: Yeah, exactly.
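The earlier point about assumption sets A1-A10 versus A'1-A'10 can be illustrated with a toy calibration exercise (a sketch with invented data, not from the paper): two incompatible "assumption sets" are each calibrated to the same observed series B, and both reproduce it essentially perfectly, so reproducing B cannot tell us which set of assumptions is right.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
B = np.exp(0.4 * x) + rng.normal(0, 0.005, x.size)   # the "observed" phenomenon

# Assumption set 1: B grows exponentially; calibrate a and b in a * exp(b * x).
b1, log_a1 = np.polyfit(x, np.log(B), 1)
fit1 = np.exp(log_a1 + b1 * x)

# Assumption set 2: B is a cubic polynomial in x; calibrate four coefficients.
fit2 = np.polyval(np.polyfit(x, B, 3), x)

for name, fit in [("exponential", fit1), ("cubic", fit2)]:
    rmse = np.sqrt(np.mean((fit - B) ** 2))
    print(f"{name:12s} RMSE: {rmse:.4f}")            # both fits sit at the noise level
```

Matching the data is therefore a necessary but very weak test; the filter on the assumptions themselves has to come from somewhere else.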
22:59Russ: So, let me take--some of those natural experiments, of course, are very creative. People do some interesting work. A lot of times they have a different kind of theoretical cherry picking--it's a specification cherry picking problem, how the empirical work is carried out about those assumptions. But let me take a natural experiment that took place after WWII, which was the collapse of the size of government spending in the aftermath of the war. Which Paul Samuelson in 1943 had written that it seemed reasonable at the time that if the war ended and government spending fell dramatically, as it did, we would be engulfed in a major depression. A lot of other Keynesians were worried about that. But as we all know, very little happened to prevent that. Government spending fell, I think by about 60%. And there was no recession. Certainly not the worst depression--Samuelson had said it would be the worst depression of American history. There was a recession about a year and a half, two years after the war ended, but it doesn't appear to be related to the drop in government spending. So, what's fascinating to me is, you'd think that failure to predict that correctly would have led people to reassess their understanding of the underlying assumptions. It didn't. As far as I can tell, very little changed. The Keynesians found reasons to explain why they didn't change their views and why that experiment was not really decisive. It's just extremely hard it seems to me in empirical work in economics, maybe impossible, to disconfirm, to disprove, to reject a theory the way it is in the physical sciences. And to me it says we shouldn't be doing that. It just seems--I know that's what people want from us. But we don't really, I don't think, necessarily advance any knowledge by the way we talk about these issues. Guest: I do have to agree with you there. In a world where we are learning from data, we would start with some priors. Now, it's an interesting question why we might start with different priors. But perhaps if we were all born into the same situation and had the same configuration in our minds we would have the same priors. And then we see data and we update those priors to a posterior. And you're certainly right that after an event that seems to contradict, and I think there are examples that all sides can give here of some prediction that was made where it wasn't actually born out, we should presumably revise our priors and put more weight on one explanation or another. Or very least become somewhat less sure that our explanation which has just been contradicted is the true one. But I think here's where you have the great creativity that we can have with assumptions--we can go back and explain, if our prior is strong enough, that our original explanation, be it Keynesian or non-Keynesian or whatever it is, if our prior on that was strong enough then we can go back and explain why something didn't occur that was predicted, because something else was operative. Put in another assumption and tweak the model, if you will, to produce the result that we actually observe. So, that, I think most people would talk about scientific method would say is not playing fair. The only defense of that, I would suppose is that the world is so complex that what we do need to look at, perhaps, when something happens that is not conforming to our models, perhaps there is something important at play that we overlooked and our overall model may still be correct but we missed something important. 
So I suppose in the case of what you are talking about with demand being not diminished after WWII ended, the obvious answer that people can give is, well, there was all this pent up consumer demand because people hadn't been able to consume during the war, and that was unleashed. And to a certain extent, of course, we see that people started buying television sets and cars and all kinds of other things. Family formation went up. The baby boom, of which cohort I'm a member. But the exercise is one that's dissatisfying if our view of the world is one that we'll never reject, because we can always come up with a mitigating factor or some tweak to the model that preserves our most cherished assumptions. Russ: The problem is, of course--no one anticipated, it's bizarre that no one anticipated the pent-up demand. That should have been part of the model in theory. And then, as you say, people then put that in. The question then is, if you do that every time there's a large natural experiment, one should start to question one's beliefs.
28:16Russ: I want to come back to something you said a minute ago, because I think it's really a nice way to think about it. When Ricardo Reis was on this program a few years ago, he argued that we'd kind of mastered monetary economics but fiscal theory and stuff we still didn't quite understand. And I said, well, again, I guess 80 years isn't enough; we just need a little more data. And he was--actually believes that. And I think most economists do. But as you pointed out, if every case is different--Well, this financial thing, it's true that this is like the Great Depression because it had a financial meltdown but it's not exactly like it because we have shadow banking and they didn't. If every case is unique, you are just an historian. You can't be an economist. If we really need 30 more Great Recessions like the one we just had in 2008, some people could be optimistic and say: 'Then we'll have enough data. Then we'll be able say, these kinds, we can predict what's going to happen, say, to the labor market.' But I think that's unrealistic. Guest: Well, in the end, I'll make the trite observation--we do have to make assumptions. Janet Yellen has to have an opinion about how to do things and all the people working in the Fed do as well, and those voting in the Congress do. The IMF (International Monetary Fund), the ECB (European Central Bank)--everyone is having, is in a position where they have to take a stand. There's a decision to be made, maybe not at the end of the day, but perhaps at the end of the week or the end of the month. So, I think we--my emphasis here is on what I would call 'critical analytical thinking.' We have a course here at the Graduate School of Business called Critical Analytical Thinking, that all of our MBA (Master of Business Administration) students are required to take, which is to approach any issue of this sort with a healthy dose of skepticism, make sure that the arguments are logical, and make sure that one is asking every chance you get: Where is the evidence for that? If an assumption is, or a premise of an argument is not well supported, observe that it's not well supported, and if it's key to the argument, realize that your argument is not well supported. That discipline no doubt--or maybe this is just a belief of mine, and others--that discipline, one would hope, let's put it that way is going to lead to-- Russ: It serves him well-- Guest: lead to better decision making in the end. But of course the discouraging thing is we have very bright people who have access, presumably to the same data, who come to radically different decisions. And the hope would be that, put those very bright people in the room, and rather than shouting at each other, let's have a discussion. What's the evidence that supports your position, what's the evidence that supports mine? What's missing here? Go through a critical process. Perhaps at the end of the day or end of the week, end of the month, we're a little bit better off in making a decision. Russ: I want to come back to your Janet Yellen example. Because if we had private money, for example, she wouldn't have to make those decisions. Or if we had a monetary rule of a constant monetary growth her decision-making power would be much narrower. But I think the more important point in the real situation we are in now is one that I think comes out of your paper--maybe you want to hedge it a bit--which is: It's not so much that, well, she has to do something and she has to use the best available evidence. 
It's: The sociological tendency of our profession to endow what we do with more scientific merit than I think it deserves. So, it's not so much that, 'Well, of course she's got to make a decision and she does the best she can.' But to pretend it's somewhat scientific--which I think she has to do, she tends to do, she has the incentive to do--is the problem. Because it gives it a grandeur it doesn't deserve. Guest: I agree with that, but--and there's a comma there, and then the word 'but'--I think we have to be careful that we don't go to another extreme. So, I completely agree that we know less in terms of what we can conclude from evidence, the fact that there are lots of models that produce the same results so as a logical proposition we can't necessarily simply use logic to eliminate 9 out of the 10 models and, look, here's the surviving model; the others are illogical; here's the one that works. No. All 10 of those models are logical. And yet they have very different implications. So, we've got a limited ability to do that. We've got a limited ability to use that evidence. The world, unlike the world of chemistry and physics, the world of economics is highly non-stationary. The laws change. Unlike molecules or electrons, people actually think. They read what even economists are saying and they react. So if economists say people are probably doing this, it's going to lead to this, maybe some people read that and then they don't act that way. They act a different way. So we've got a much more difficult problem than what the chemists and what the physicists have, even though they've got a hugely difficult problem as well, no doubt. But if we go to the other extreme and say that modeling and logical thinking and particularly use of evidence is not going to get us anywhere, then I think we are just left with a complete shouting match where anything goes. I think we need to try to incorporate as much of the scientific discipline into our thinking as we can. But at the same time recognize that that gets us somewhere down the road, but it doesn't get us all the way. That, we can't pretend to know more than what we know. And so, here's where I completely agree with you: When someone has a model with a bunch of variables and they've got some nice tables and they've got some nice graphs, maybe 20 graphs at the end, showing what happens when there's a shock and exactly how things are going to play out, one has to look at that-- Russ: Don't forget the Greek letters. Guest: Oh, Greek letters are very important. You are absolutely right. One has to look at that and realize what's behind that enterprise. A bunch of assumptions have been made that produce a tractable model. Things have been tweaked. And yes, buried in that may be some insight. But it's also possible that it's totally vacuous and we need to look very carefully at that exercise to see what we actually learned. But it does have the patina of scientific truth behind it, especially when it's looked at by lay persons. And it carries, therefore, more than what it should in the debate. And that's where I'm coming in; and it sounds like you're in complete agreement with that, as well. Russ: I want to give an example from Finance. Because I think it's fascinating. A lot of people blame Value at Risk--VAR--VAR models for some of the risk-taking that firms took during the crisis. Let's put that aside for the moment, whether that was crucial or tangential or whatever. But certainly firms did use this model. 
And Nassim Taleb has criticized the use of that model, saying it's very inapplicable. And when I talk to people in the profession about it, they tend to say things like, 'Well, of course we know it's not perfect. We know it's based on these unrealistic assumptions about the distribution of returns, say. We know it's prone to black swans. We knew that all along. But you have to use something. And isn't some information better than nothing?' My worry--and it's the same with Janet Yellen--is that, after a while you kind of forget. It's a human problem. It's a human failing. You tend to overestimate--in part, maybe it's the Greek letters--but you overestimate the value of that information and you tend to forget the fact that it's "just a model." Do you think that's true? Guest: I think that's true but I would emphasize something else. So, if I'm sitting up here in my office as I am, looking out the window, and I see someone doing something out there and I want to ask them what's their risk, it might be that a Value at Risk calculation is a good measure to look at how risky whatever their activity is. I'm speaking somewhat metaphorically here. But, as soon as I go to a different situation where I'm telling that individual who has an incentive to take risk, I'm telling him, Here's how I'm going to measure your risk-- Russ: Yeah. Guest: Here's the benchmark I'm going to use. Then I'm in much greater difficulty. Because as we know many of these things can be gamed. And I think that a lot of our regulation is of that sort, where we like to have models, risk weights and capital requirements, value at risk, model risk, various models that are used to measure risk. And if we are simply standing outside and asking, let's just measure these from the control tower just to see where we are, that could be problematic. But it's much more problematic when I'm telling people who have incentives to take risk or incentives to deviate from what we would like them to do--here's how we're going to measure you. And especially when we get people who are very, very smart, as have been attracted to certain industries, especially finance, who are very good at gaming those. So, I agree with the first proposition that relying on simple measures like value at risk or risk weights and capital budget--capital requirements--is problematic. But it's especially problematic when there's an incentive on the other side to game the system. Russ: Yeah. It's a great point. I always like to say: The regulatory regime said--Triple-A is safe. And then you look around and say, 'Well, there's not much Triple A, so we'll have to invent some.' So they did. It was very creative, very smart people. Guest: There's a good example. So, Triple-A--when one is rating corporate bonds, one is basically looking at a situation--and the rating agencies were doing that. A corporation has some ability to change the risk of those corporate bonds. But that's not really on their radar screen. If it's Hewlett Packard, they are making printers. They are not looking at sort of adjusting their risk, their corporate risk, to affect the rating on a particular bond that's really third order in what they are doing. But when we talk about securitized products, as we had in 2006, 2007, what we saw was that there was an incentive to get a Triple-A rating, or a particular rating. And there was this ability to fine-tune it, in cahoots, I think in some cases--maybe that's a bit too strong a way to put it, but-- Russ: Well, the incentives align. Guest: The incentives were aligned. That's right.
And you have control over the risk and what gets rated Triple A. So of course what we had was everyone just getting right over the margin. And so instead of things being at the average level, we had things at margin, and a lot of tweaking that created more risk than what you would have had with corporate bonds that were rated Triple A. Of course, not too many of those left.
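The Value-at-Risk worry raised a few paragraphs above can also be made concrete (an illustrative sketch only; the return process and every parameter are invented, and this is not any model a firm actually used): a 99% VaR computed under a normal-distribution assumption, compared with what a fat-tailed return series actually does on its worst days.

```python
import numpy as np

rng = np.random.default_rng(1)
returns = 0.01 * rng.standard_t(df=3, size=100_000)   # fat-tailed daily returns

# Parametric VaR: pretend returns are normal with the sample mean and std. dev.
mu, sigma = returns.mean(), returns.std()
var_normal = -(mu - 2.326 * sigma)                     # 2.326 ~ 99% normal quantile

var_empirical = -np.quantile(returns, 0.01)            # what the worst 1% really cost
tail = returns[returns <= -var_normal]                 # days beyond the model's VaR

print(f"normal-model 99% VaR:      {var_normal:.2%}")
print(f"empirical 99% VaR:         {var_empirical:.2%}")  # noticeably larger
print(f"avg loss beyond model VaR: {-tail.mean():.2%}")   # tail losses exceed the model
```

And, per the point about incentives, this is the benign 'control tower' case; once the measured party can choose positions that look safe to the model, the gap between the reported number and the true exposure can be made much larger.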
40:01 Russ: Let's talk about Milton Friedman's classic article that is related to these questions, "The Methodology of Positive Economics," a paper I read in graduate school, which I thought was brilliant and wonderful, and now I'm not so sure. You're very critical of it. Friedman basically argued assumptions don't have to be realistic at all. All that matters is predictions. And it's not necessary that assumptions be realistic as long as people act as if they were. Talk about the 'as if' and why it gets misapplied in, say, finance models. Guest: So, first just a little bit of an overview. Milton Friedman's article is I think one of the most cited articles in the realm of sort of philosophy of economics, which is actually a field. And so, I'm not going to pretend to be an expert on that subject; it's certainly not what I've devoted all my career to. And there are people out there that have parsed the instrumentalism or whatever it is that they attribute to Milton Friedman very carefully, and lots of ink has been spilled over that. My discussion of that article was simply to say that his reasoning doesn't allow or shouldn't be used to allow chameleons to exist. In other words, it shouldn't be the case that one can simply say, Oh, you are criticizing the application of a model to a policy question on the basis of its assumptions' not really having any traction or any intersection with the real world; you can't do that because Milton Friedman said we don't judge models by their assumptions but rather just by their predictions. It's that line of reasoning which I consider false reasoning, that I'm actually questioning. But in terms of the 'as if' argument, I've been disturbed about this for a long time, at least its use in finance and more generally in economics. So, Milton Friedman introduced the 'as if' argument when he talked about a pool player, a billiard player, who is perhaps someone who dropped out of high school, doesn't know anything about geometry or physics, but is an expert pool player. We can't assume that he--or she--is solving complex problems in the dynamics of billiard balls, or even the geometry of the pool table. But we could nevertheless predict what the pool player is doing if we used those laws of the dynamics, the physics, of a billiard ball plus the geometry of the table. If we assumed that the billiard player was solving those equations and solving the geometry, then we could predict what the billiard player was doing. And indeed that's right. Russ: It's a very clever example. Guest: It is a very clever example. But there's something very specific about this example. I asked one of my colleagues here who actually is someone who plays pool semi-professionally how many shots he thought he took in an hour, just in terms of practice, and he said probably somewhere between 60 and 100. So, if we take the notion that someone becomes an expert after 10,000 hours of practice, then a pool player has probably taken upwards of perhaps a million shots. And it's all in a very structured environment. I'm shooting the ball; I get instantaneous feedback or almost instantaneous feedback as to whether I hit that ball correctly; did the cue ball send this other ball into the pocket or not? And this is true for all kinds of things.
If you think about a baseball player running and catching a fly ball, it's really quite an amazing feat when someone runs and they end up precisely where they need to be when that ball is falling; and the problem that they're actually solving in their head, in some way, to run to where that ball is going to land and to be able to make the catch--something we see every day if you watch major league baseball--is really quite incredible. But again, that's something that happens probably because they've played the game since they were in Little League, and they get this continuous feedback. If you don't run in the right direction, you are going to miss the ball. So, we take that 'as if' argument, which I think works in billiards and in my other example here of baseball because of the repetitive nature of the game that's being played and the very quick feedback that you get, whether you succeeded or not; and then we look at another realm, like--and I use in the paper the example of capital structure decisions that are made by firms. I could have used all kinds of examples. But capital structure is one area that's studied a lot in finance: How does a firm determine when to change its capital structure, or, initially, what its capital structure should be? How much debt, how much equity, what type of debt? And the problem that we have is that we see firms that are very similarly situated in the same industry, similar risk, whatever, having very different capital structures and evolving in different ways, and we realize that there's no simple explanation for it. So, some of the models that we find in the literature, and I actually have the equations for one of them in my paper, involve solving incredibly complex dynamic programming problems that no doubt take the researchers themselves, who put these papers together, several months and a lot of numerical work and programming to actually solve. Now, no problem: perhaps CFOs (Chief Financial Officers) and those that are making these decisions are exactly like the billiard player. They can't solve these complicated dynamic programming problems, no doubt. But somehow they've learned to do it. But wait a minute. How often does a CFO make a capital structure decision, whether to issue some more debt and buy back some equity or whatever the decision might be? Not all that often. Certainly not a million times, as an expert pool player would. And what is the immediate feedback that you get? In the case of the pool player you see whether the ball went into the pocket or not. But in the case of the CFO you change your capital structure--and do you see whether that was a good thing? Well, maybe there's a stock price reaction. But that assumes that the stock market can figure out whether that was the right thing or not, and that it is solving these complicated equations. Or maybe you wait and then you see if you avoided bankruptcy. But you only get a few draws here. You don't get that continuous or almost continuous feedback. So, the 'as if' argument is being applied in a lot of places in economics, when you really have to step back and ask: Wait a minute; are these situations like the pool player or the baseball player, where someone is getting repeated feedback in a very structured environment, or are these cases where someone is solving a very complicated problem in an economic environment that we don't even know how to model?
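To give a concrete, if deliberately toy-sized, picture of what "a dynamic programming problem for capital structure" means, here is a minimal sketch. Everything in it is hypothetical: the earnings states, the leverage grid, and the tax-shield, distress, and adjustment parameters are invented only for illustration; the models referred to in the conversation are far more elaborate.

```python
# Toy dynamic program for a capital structure choice (all numbers hypothetical).
# State: (earnings regime, current leverage). Control: next period's leverage.
# An adjustment cost for changing leverage is what makes the problem dynamic.

EARNINGS = [0.5, 1.0, 1.5]                                        # low / mid / high cash flow
TRANSITION = [[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.1, 0.3, 0.6]]  # P(next regime | regime)
LEVERAGE = [0.0, 0.25, 0.5, 0.75]                                 # debt as a fraction of value
TAX_SHIELD, DISTRESS_COST, ADJUST_COST, BETA = 0.30, 1.2, 0.15, 0.95

def payoff(earnings, lev, new_lev):
    """One-period payoff: cash flow plus tax shield on debt, minus a distress
    penalty when debt service outruns earnings and a cost of changing leverage."""
    p = earnings + TAX_SHIELD * new_lev
    if new_lev > earnings:
        p -= DISTRESS_COST * (new_lev - earnings)
    if new_lev != lev:
        p -= ADJUST_COST
    return p

# Value iteration: V(s, lev) = max over new_lev of payoff + beta * E[V(s', new_lev)]
V = {(s, lev): 0.0 for s in range(len(EARNINGS)) for lev in LEVERAGE}
for _ in range(500):
    V = {
        (s, lev): max(
            payoff(EARNINGS[s], lev, new_lev)
            + BETA * sum(p * V[(s2, new_lev)] for s2, p in enumerate(TRANSITION[s]))
            for new_lev in LEVERAGE
        )
        for s in range(len(EARNINGS)) for lev in LEVERAGE
    }

# The resulting policy: the leverage the firm would move to in each state.
policy = {
    (s, lev): max(
        LEVERAGE,
        key=lambda new_lev: payoff(EARNINGS[s], lev, new_lev)
        + BETA * sum(p * V[(s2, new_lev)] for s2, p in enumerate(TRANSITION[s])),
    )
    for s in range(len(EARNINGS)) for lev in LEVERAGE
}
print(policy)
```

Even this caricature needs a numerical solution; the models in the literature that the guest describes are, by his account, far harder still, which is the point about the months of work and programming they require.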
And the problem of course is that when people try to model such things as capital structure decisions, if you put 10 so-called 'experts' into a room--and I'm talking about academics--they come up with 10 different models. Whereas with the billiard player, or the baseball player, if you put 10 experts into a room--these would be physicists who know geometry and the fact that a baseball is going to follow a parabola--they would come up with pretty much the same model of exactly how the baseball player is going to run, or exactly how the pool player is going to shoot the shot. So I think the 'as if' argument certainly applies in some cases. But in a lot of cases where it's applied, it's just wishful thinking. We solve these complicated problems because it actually is sort of fun to do as researchers, and it gives us some credibility that we are actually able to solve these complicated models; but the claim that these really apply to the real world, and that people actually make decisions in that way, I think doesn't pass the smell test.
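By way of contrast, here is the kind of single model the hypothetical room of physicists would converge on for the fly ball: the idealized no-drag parabola. The launch speed and angle below are made-up numbers, and a real batted ball is slowed considerably by air resistance, but the functional form is the one everyone would write down.

```python
import math

def landing_distance(speed_m_s: float, angle_deg: float, g: float = 9.81) -> float:
    """Horizontal range of a ball launched from ground level, ignoring air
    resistance: R = v**2 * sin(2 * theta) / g."""
    theta = math.radians(angle_deg)
    return speed_m_s ** 2 * math.sin(2 * theta) / g

# Hypothetical fly ball: 30 m/s off the bat at a 45-degree angle.
print(round(landing_distance(30.0, 45.0), 1))  # 91.7 meters in a vacuum
```

Ten physicists would disagree at most about the drag correction; ten economists modeling capital structure, as the guest notes, would produce ten different models.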
48:53 Russ: I agree with you, and I think--I never thought about this before, but if we think about the baseball problem: nobody teaches a child to catch a fly ball by saying, Just take a stab at where it's going to land and head toward it. Because that's certainly not how major league fielders go after a ball. What I'm told is they--people have actually tried to study eye movement and other things--make these subtle small adjustments, just as a football player does the same thing tracking a pass. They make small, subtle adjustments in a very much trial-and-error way to get toward that ball. They have to start in the right direction--obviously. There's a certain intuition or gift. But nobody is solving a differential equation or programming problem or any of these things, the complicated problems we assume they solve under the 'as if' hypothesis. Now, Friedman would answer, I think--he'd say, Well, I agree with you; of course it's true that the CFOs don't really solve the differential equations, and it's true that we don't really understand how they make the decisions, but it's as if they do. And I think the real challenge here is that because we don't observe the full set of variables--because we really can't model everything, including things like what the CFO had for breakfast or the fight that the CFO got into with her husband the night before--because we don't have all the variables and the information--this is a very Hayekian point, obviously--then it's not just that we don't really capture what's going on. We mis-capture it. And especially when those underlying variables change in systematic ways. And if we don't--again, I think the lesson here is clearly, humility. Guest: I completely agree. I think we get into this situation in part because of the following. If I want to model some economic phenomenon, what I have to do is, first of all, describe the world in which the economic agents are acting in. I'm not sure that's the best-constructed sentence. But think about what the economic agents are doing, what they are attempting to do, and what the various forces are that they are facing. In other words, describe the pool table. Describe the baseball field. But of course it's much, much more complicated. Once we've done that, we'd like to think that we're done. And we'd be done if we could assume that the agents who were operating in that environment were optimizing, solving the problem in the same way that a physicist would solve it. Because once we made those assumptions, everything else is just math. Everything is just working out the implications of it. But if we step back and say, well, there's this very complicated environment that we have to describe, but then we know that economic agents can't solve those incredibly difficult problems: they are doing things heuristically; they are doing things by trial and error. But the problem is they don't get all that many trials; they don't get the same number of trials as a little boy or girl playing baseball, or a pool player, gets. So, they are using heuristics and rules of thumb and whatever to solve these problems, and the state of the art, such as it is, is changing; people are trying all kinds of different things. That's a much more difficult world to describe, because not only do I have to describe the environment, I have to describe how agents are trying to muddle their way through it. And there are a huge number of ways to do that.
So it opens up much more in terms of the degrees of freedom in modeling, because I have to describe sort of the satisficing or the heuristics that agents are using. And a lot of us, myself included, wouldn't like to have the responsibility to do that. It's much easier to say, Well, here's the environment; now I'll just assume that agents have rational expectations, infinite processing power, and can optimize. And in some situations that may not be a very bad assumption. But in a lot of situations I think it's highly suspect. And, as I said at one point in the paper--because you would expect in a paper like this that I would say it--we're looking for the keys under the lamppost even though we lost them somewhere else. I'm sure most of your listeners know the reference-- Russ: No doubt. Guest: You look for the keys where the light is, even though you lost them in the darkness.
53:27 Russ: So, let me make a list of humble people. There's you. There's me. Lars Hansen was pretty humble, I'd say, about the limitations of models, the power of models, when he was on EconTalk a little while back. Your colleague and coauthor, Anat Admati, who sent me your paper, I'd say is sympathetic to that. That's 4 of us. But the problem, and you allude to this, is that the incentives are all toward overconfidence. So, two questions. One, what kind of reaction have you gotten to your paper from your fellow economists? And two, what do you think might be done to head us in a more humble direction? Besides encouraging people to listen to EconTalk--which I'm in favor of. Guest: And I would be as well. So, that's established. Let me first of all talk about the reaction to the paper. I was quite frankly surprised, because once it got distributed and people were looking at it, I expected a fair amount of pushback: that I had created a straw man argument, that I had gone overboard in one direction or another, that I'd mischaracterized, that I'd exaggerated. And I can tell you that I haven't heard any of that. Now, people may be thinking that, but they haven't communicated it directly to me. All of the communications--and I don't have to say 'almost all'--all of the communications I've gotten are from people who basically are in agreement with the major propositions that I've put forward, or very sympathetic to the arguments I made. And that really surprised me. Now, of course, I note there's huge selection bias-- Russ: Selectivity bias-- Guest: here, in that those who didn't agree chose to ignore it. Russ: Maybe. Guest: But in some sense--I know that in our profession, among academics in general, if someone disagrees and they've got a good argument to make, they usually step up and make that argument. So, I'm not concluding that there aren't good arguments out there against what I'm saying. But I haven't heard them. And the fact that I haven't heard them gives me a little bit of confidence that I'm probably pretty well justified in making the points that I'm making. But perhaps listeners will listen to this and I'll get a deluge of people telling me why I'm wrong. But the paper has been out there for quite a few months now and I haven't heard anyone make a-- Russ: Let me give you an analogy. The emperor is walking down the street. He doesn't have any clothes on. And we, his subjects, are cheering him wildly and telling him how beautiful his outfit is. We go back home, and I'm mowing my lawn, and I see you over the fence mowing yours, and say, Boy, the emperor looked pretty naked today. And you'd say, Yeah; I think he was totally naked; I don't think he had any clothes on. And the next day we're back on the street waving our banners and cheering him. So, I suspect that privately and intellectually we all agree with what you said, or a lot of people do. It's really hard to act that way, though, because the reward structure doesn't tend in that direction. Guest: I think that is the issue. You mentioned that those who have the confidence--and perhaps don't have the necessary humility here--to say that their model and their calibrations really tell us something are probably going to be given more credit than they deserve. People want answers. And if someone can come up with an answer, especially if it seems scientific, people are going to be willing to listen to it. I also feel for those who are going through Ph.D.
programs in economics in general and maybe finance specifically, because we put a huge premium on the ability to develop fairly complicated models and solve them. And again, I think this is a very important activity that can give us insights. But what is important, I think, among researchers, especially as they become mature researchers, is to step back from that enterprise and be a little bit more forthcoming about the limitations of these models and their inability in many ways to speak to various policy concerns, or about how we have to use them somewhat judiciously when we do. So, there's a lack of humility perhaps among some who do this. But I think what really is the problem is a collective lack of humility--that somehow the economics enterprise is one that doesn't in aggregate quite have the humility that it should, in terms of what we can say about the world and what we can't. But I'm going to put another comma and a 'but' there. I don't see any alternative to the activities that we're engaged in. In other words, trying to formulate models that make sense and are well grounded, and doing empirical work and trying to find natural experiments, to learn as much as we can about the world. So, I don't want to be anti-scientific here at all. I want to be as scientific as we in the profession can be. But obviously we should realize the limitations. And that of course is scientific--to realize the limitations of what you can know and to put the right error bounds on whatever you have. And I think in many cases, in both theory and empirics, we don't quite have the right error bounds on what we are saying. Russ: My guest today has been Paul Pfleiderer. Paul, thanks for being part of EconTalk. Guest: Well, thank you for inviting me. This has been an interesting discussion. I actually learned a lot from some of the things that you brought up here, and I would love to engage with anyone who wants to engage. Russ: Looking forward to that. Thanks so much.