Lars Peter Hansen on Risk, Ambiguity, and Measurement
Jun 30 2014

Lars Peter Hansen of the University of Chicago, Nobel Laureate in economics, talks to EconTalk host Russ Roberts about the power and limits of economic models and quantitative methods. Hansen defends the value of models while recognizing their limitations. The two also discuss quantifying systemic financial risk, how our understanding of financial markets has changed, the nature of risk, and areas of economics that Hansen believes are ripe for further research.

Eugene Fama on Finance
Eugene Fama of the University of Chicago talks with EconTalk host Russ Roberts about the evolution of finance, the efficient market hypothesis, the current crisis, the economics of stimulus, and the role of empirical work in finance and economics.
Noah Smith on Whether Economics is a Science
Noah Smith of Stony Brook University and writer at Bloomberg View talks with EconTalk host Russ Roberts about whether economics is a science in some sense of that word. How reliable are experiments in economics? What about the statistical analysis...


Robert Kennedy
Jun 30 2014 at 11:15am

Maybe it was my mood today, but I didn’t get much out of this episode. The discussion seemed unfocused and Mr. Hansen seemed unwilling or unable to make clear points. My takeaway was that economic models and quantitative methods may or may not be valuable. Maybe.

Sorry to be a grump today.

Eric Falkenstein
Jul 1 2014 at 3:27pm

I liked his point that complicated problems do not necessarily imply complicated solutions. He seems very aware of model limitations. I like how he described himself as a modeler, which is really what top economists are.

I wish Russ would have asked him more about some of Hansen’s seminal contributions, specifically GMM. I think that has been a dead end, as I know of no fact that was documented or cemented via this technique, but many publications make it their centerpiece: you can assume something entirely ad hoc, throw it into GMM, and via this rigor it is presumably ‘real science’ as you test the overidentifying restrictions. Lots of bad articles were published via this device. It was a good-faith effort to be sure, but with hindsight I think it was just part of a ‘scientism bubble’ (something Hansen understands, via Hayek’s Nobel lecture).

Jul 1 2014 at 3:32pm

Maybe I am over-reading this, but I find Russ’ skepticism to be going overboard. You almost get the feeling that he believes all statistical endeavors are misleading rather than informative. I find this attitude strange when he’s even had Michael Lewis as a guest and seemed impressed by the idea of Sabermetrics. I personally believe advancing statistical analysis is overall better than continuing to shoot from the hip.

I suspect Russ is really being critical about how the profession is taught these days. I am guessing here, but I think Russ believes that the desire to make Econ a math based subject is done in order to legitimize it against the hard sciences. Here I disagree as well.

Jul 2 2014 at 6:02am

Great interview of Lars Peter Hansen (thank you, Russ).

@Eric Falkenstein

“He seems very aware of model limitations”
“Seriously, do you know a modeler who is not aware of model limitations?” What amazes me is that some people try to convince others that a smart modeler thinks models are perfect. Of course they are not, but it does not mean that they are not useful at all for anyone.


Agree with you. Some people go too far in dismissing quantitative techniques as a useful tool in the social sciences. I particularly appreciated Lars Peter Hansen’s point of view about Hayek ’74 and the pretense of knowledge.

Robert Wiblin
Jul 2 2014 at 6:10am

I think Russ spent too much time leading the conversation in the direction of his own views here. A Nobel Laureate is plenty capable of saying what they believe and should have new things to add which regular listeners won’t have heard many times already.

Shawn Barnhart
Jul 2 2014 at 7:08am

@Robert Wiblin

I think some guests are better speakers and interview subjects than others. Compared to other guests, Mr. Hansen seemed unwilling or unable to expound on the subjects discussed to any great degree; Russ seemed to have to jump in to keep the interview from dragging to a complete halt.

Mr. Hansen also seemed unwilling to take much of a position on anything discussed, wanting to take the middle ground on most topics. While I can appreciate his ability to see both sides, it doesn’t make for an especially compelling conversation.

Colin Fernandes
Jul 2 2014 at 1:53pm

In my opinion Hansen is not as great a conversationalist as Roberts, which I can easily understand given that he (Hansen) is a modeler. Hansen stalled a few times when Roberts asked for his thoughts on…
Having said that, I do think Roberts is on to something on quant methods and the hubris of modelers to believe they can model anything. I appreciated Hansen’s thoughts – be skeptical, think deeply, and most models are imperfect…

Jul 2 2014 at 3:27pm

“Seriously, do you know a modeler who is not aware of model limitations? What amazes me is that some people try to convince others that a smart modeler thinks models are perfect. Of course they are not, but it does not mean that they are not useful at all for anyone.”

I know modelers who give lip service to limitations, but discount them, are overconfident and seem to forget the limitations when a result falls outside of the model prediction.

Several have told me that their clients want bold, confident statements and/or conclusions so they don’t even touch on limitations when they give results to their clients.

I’ve also had some rather heated debates with modelers over the limitations of their models and many have taken my criticism of their models much too personally.

I had one debate specifically over a VAR model, even before it was cool.

Jul 2 2014 at 10:11pm

I agree with others that this conversation wasn’t as good as expected. Hansen’s work is extensive and extremely powerful. I was hoping, like others, he might try to discuss his work and convey it simply because it really is quite a marvel.

To this point, I don’t entirely blame Hansen – his work is extremely technical and very esoteric, even for economics. I personally have only studied GMM in a heuristic sense and I can tell you it’s pretty terse.

Maybe it’s just not possible. Communicating complex ideas simply is a very difficult skill, but if you simplify them too much, they probably lose meaning altogether. This happened recently when I tried to explain non-stationarity to friends interested in predicting stock movements.

Michael G. Heller
Jul 3 2014 at 12:32am

This was an extremely good podcast. I listened in short instalments over a few days, and each time found it worthwhile. It was fascinating to hear someone at the top of this field cautiously express his uncertainties (or his ambiguity!) about uncertainty or uncertainty modelling in a variety of fields from financial risk, to bail outs, to fiscal policy, stimulus and finally also to climate. Also with nice humour.

There were particularly good forthright comments near the end as Russ and guest picked up on Hansen’s nervousness about the danger of “politicisation” (i.e. protection, favours, discretionary treatment) if Dodd-Frank policies designate finance corporations “systemically important”. Firms behave better when they are fully exposed to the risks of failure through market discipline.

However, obviously, this is a man whose primary response to uncertainty dilemmas is “build better models”. Naturally he would say that; it’s what he does, and I hope he keeps producing wonderful insights that improve prediction potential.

My own interest is in looking at reducing uncertainty from the point of view of people who need to act, interact, respond and transact every day without the time or expertise to use computational models or neuroimaging to predict human intentions and expectations.

So people might like to be reminded that science/modelling is just one among several ‘discovery procedures’ available to ordinary folk or policymakers and politicians — including market competition and institutions. Last week before the Hansen podcast I wrote a couple of short essays about these other dimensions of ‘solutions’ to uncertainty, if anyone is interested in having a look –

Daniel Fullmer
Jul 4 2014 at 3:14pm

Great episode. Since I’m very interested in this area, I wish he could have talked a bit about exactly how he sees us incorporating risk/uncertainty into models. e.g. Bayesianism, robust control, etc. That topic is probably too technical for this format, however.

Trent Whitney
Jul 7 2014 at 11:24am

Enjoyed listening to this podcast, and appreciated the brief talk on how Prof. Hansen views uncertainty and how he struggles to incorporate uncertainty into models.

That reminded me of a previous podcast with Nassim Taleb, where he said that his thinking at that time was focused on how people act in uncertain times…what do we do when we’re not sure what to do/what’s going on around us? And actually I thought that’s where this podcast was going to go when Russ mentioned Taleb, but it went in a different direction at that point.

In short, I think it was another solid discussion with a great economic mind that had the undercurrent of a recurring theme: What exactly is economics? It certainly made me think at multiple points throughout the podcast.

big al
Jul 12 2014 at 10:31am

on a side note, the discussion touched on research showing that the introduction of a Wal-Mart store might lower wages in some situations, and that this tended to occur in larger markets rather than smaller. (in the 26:03 para)

that got my attention, and i had to think it through. assuming the results are true, one hypothesis could be this: when Wal-Mart opens a new store, they bring two things to the local labor market: (1) new demand, and (2) new information.

in a large market, especially one where there is an existing oversupply of labor relative to demand, the demand provided by Wal-Mart might have little effect on the overall supply-demand balance. however, if the existing local wage for comparable work is, eg, $10/hr, and the new Wal-Mart starts hiring at $9/hr based on their analysis of the market, that information could have a powerful effect on price discovery in the rest of the local labor market, driving wages down.

in smaller markets, however, a new Wal-Mart bringing the same absolute labor demand will have a much bigger effect on the supply-demand balance, and thus would be more likely to push wages up.

so in a large market, the information effect of a new Wal-Mart is likely to be more powerful; but in a small market, it would be the demand.
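The demand-side half of this hypothesis can be checked with a toy linear supply-and-demand calculation. Every number below is invented for illustration; nothing here comes from the episode or from actual labor-market data:

```python
# Toy linear labor market, hypothetical numbers throughout.
# Labor supply: s1 * w workers offered at wage w.
# Labor demand: d0 - d1 * w workers sought at wage w.

def equilibrium_wage(d0, d1, s1):
    # Market clears where s1 * w = d0 - d1 * w, so w = d0 / (s1 + d1).
    return d0 / (s1 + d1)

delta = 300  # jobs a new store demands -- the same absolute number everywhere

# Large market: twenty times the small market's demand and supply slopes.
w_large_before = equilibrium_wage(d0=100_000, d1=5_000, s1=5_000)
w_large_after = equilibrium_wage(d0=100_000 + delta, d1=5_000, s1=5_000)

# Small market: same per-worker behavior, one-twentieth the scale.
w_small_before = equilibrium_wage(d0=5_000, d1=250, s1=250)
w_small_after = equilibrium_wage(d0=5_000 + delta, d1=250, s1=250)

print(f"large market wage: {w_large_before:.2f} -> {w_large_after:.2f}")
print(f"small market wage: {w_small_before:.2f} -> {w_small_after:.2f}")
# The identical demand shift barely moves the large market's wage but
# raises the small market's wage noticeably. The "information" channel
# (a visible employer posting a below-market wage) is exactly the part
# a toy model like this does not capture.
```

This only formalizes the demand effect; the price-discovery effect the commenter describes would need a model of wage-setting under incomplete information.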

Ron Crossland
Jul 12 2014 at 12:49pm

A disappointing interview. Too many loaded questions and too many uncertain answers.

Were I to use this podcast with serious economics students, it would serve to show how economists must become better equipped as communicators.



Podcast Episode Highlights
0:33 Intro. [Recording date: June 12, 2014.] Our topic for today is measurement in the face of uncertainty, drawing on a paper you wrote last year on systemic risk and your Nobel Prize lecture. Along the way we'll deal with some issues that have come up on EconTalk, dealing with measurement and the scientific nature of economics, if it is scientific. You open the paper early on with a quote from Sir William Thomson, also known as Lord Kelvin. I'm going to read the entire quote.
I often say that when you can measure something that you are speaking about, express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of the meager and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter might be.
And as I mentioned in a recent episode, and as you mention in your paper, it's carved in stone on the Social Science Research building at the U. of Chicago. It is hard to deny the truth of that quote. What is your perspective? Guest: Oh, I'm very sympathetic to the idea that economics at the end of the day should aim to be quantitative, and that should be our ambition. But we need to be quantitative in a sensible way. By 'quantitative' I certainly mean the fact that we should be able to use economic analysis to build models, and these models should help us guide policy. These are models; but they should be connected to empirical evidence. Now, there's a challenge with all these perspectives, and that's the following. This is true of all models. Models are always wrong. It seems kind of strange to hear that initially, but there is a sense in which models are simplifications; they are abstractions. And they are wrong. It's always a challenge to try to assess whether they are wrong in ways that are essential or inessential and the like. So whenever you have these quantitative ambitions, you have to also recognize the limitations of the model. And often that's the hardest part of the challenge. Russ: And, in the area of systemic risk, which is a term that's been used a lot recently related to the financial sector, the Crisis of 2008, the issue of Too Big to Fail--how are we doing on measuring systemic risk and quantifying it? Guest: Yeah. I think there we are at the very primitive stages. I'm certainly happy--that'd be an example where our knowledge probably is still quite meager. The term 'systemic risk' really was not on people's radar screen prior to the financial crisis. And it only became a topic of conversation among academics and policy makers prominently after the financial crisis. Now, systemic risk, it's had a little bit of a danger of being a buzzword.
I'm reminded--some people poke fun when I say this--of a quote by Justice Potter Stewart about pornography, that you kind of know it when you see it; and he wasn't going to give you a formal, rigorous definition of it. There's been a little bit of that aspect to this term 'systemic risk.' Because of financial regulation like Dodd-Frank, we think it's become the code word or mandate for how we should be looking at the financial sector. But the problem is we are still, from an economics standpoint, learning about it, trying to understand it. There's a variety of different models trying to capture it. In a quantitative fashion I think we've just barely scratched the surface. Russ: What worries me about it is it could be like Bigfoot--rumored to exist but hard to verify. So, it is a buzzword, but it strikes me as a buzzword that was invoked to justify a policy ex post. And we don't really have any idea whether that policy was justified or not. The bailouts, the TARP (Troubled Asset Relief Program), and other policies were invoked, were done because we had to--it was alleged. It was alleged that we had to because without it there would be this massive domino effect. Do you think we have any evidence that that's true? Guest: Yeah. That's a really open question. Somehow we didn't run the counterfactual about what if we didn't do x, didn't take these steps--what would have happened? There was a big fear that very, very bad things would have happened. Do we know that for sure, and the like? I think it's a tremendously important question and it's one that remains open. Let me just take a step back. We look at the Depression in the United States. It really took us a very, very long time to get a firm understanding of the various different explanations and why the recoveries were slow and what triggered it, and the like. And I suspect the same is going to be going on as you try to understand exactly what the important forces were in this financial crisis.
There are interesting conjectures out there, but we don't know for sure. And this concept of systemic risk--if you go and you read some work that comes out of people like Andy Haldane of the Bank of England--I find that Andy Haldane has been the principal figure in terms of looking at financial oversight there. But he's written very, very openly about our lack of knowledge, and how our lack of understanding of consequences makes financial oversight regulation challenging. I remember that, right after the financial crisis, there were many conferences that would bring together academics and people from research departments inside various different central banks to try to talk about financial oversight going forward, and a couple of observations really struck me at that time. And it was that this is a very complicated problem; and therefore it requires a complicated solution. And I kind of thought to myself--I would agree that this could be a very complicated problem and that we are still trying to understand it. But complicated problems in the face of limited knowledge don't obviously lead to complicated solutions. Because the solution itself can add additional uncertainty; it can be counterproductive; it can be overreacting to false knowledge. And that can be harmful.
7:56 Russ: Yeah. It also allows for the possibility of regulatory capture lost in the details, which is the kind of thing I worry about. I'm going to read a lengthy quote--lengthy: it's about a paragraph--in which you talk about this issue, which I thought was extremely apt. This is from the systemic risk paper of 2013.
What is at stake here is more than just a task for statisticians. Even though policy challenges may appear to be complicated, it does not follow that policy design should be complicated. Acknowledging or confronting gaps in modeling has long been conjectured to have important implications for economic policy. As an analogy, I recall Friedman (1960)'s argument for a simplified approach to the design of monetary policy. His policy prescription was premised on the notion of "long and variable lags" in a monetary transmission mechanism that was too poorly understood to exploit formally in the design of policy. His perspective was that the gaps in our knowledge of this mechanism were sufficient that premising activist monetary policy on incomplete models could be harmful.
So, talk about that, again, because I think it's such a crucial insight into good public policy. Guest: Yeah. Yeah, so, there's--as I stated at the outset of this interview, at the end of the day, I'm a model-builder. That's what I kind of do in my academic career. But I want to do this in ways that are very useful. Now, one can do the following: one can go out there and build some mathematical model. You produce some wonderful mathematical equations; you can do some rigorous analysis of a given model. And you can even generate beautiful computer output from it. And that can all-- Russ: You can get a good publication out of it. Guest: You can get publications out of it, as well. The thing is, that, just because you've written it down as a formal mathematical model--and I'm also big on mathematics, because I think it adds clarity between mapping some assumptions to the conclusion. That alone doesn't mean that it's right. And that alone doesn't mean that you should have 100% confidence in it. And you shouldn't just look to this formal apparatus to say, See, it's nice and formal, therefore we should have confidence in it. And that's why--I kind of like think of Friedman as saying, well, people are writing down these models of the monetary transmission mechanism, but I'm not sure how seriously I should take them; and I can think of other models that might have other implications; and until we really have the empirical evidence to discriminate among competing explanations, we have to leave alternative possibilities on the table. And we have to use models in sensible ways that recognize their limits as well as what they can--as well as the clarity he had. And we can recognize that they are wrong, and we have to always have our eyes open to the fact that, are they wrong in crucial ways that affect these policy conclusions? 
So, if you write down this complicated model--it's elaborate, it's got this wonderful mathematical structure; you work out the optimal policy that comes out of it--there's a danger if you take that too seriously and don't acknowledge the fact that the model has limitations; that leads to bad policy analysis. And, I'm sorry, to a bad policy implementation. Russ: Yeah. So that was--you are talking about Friedman in the 1960s. In 1974, Hayek wins the Nobel Prize; and his Nobel Address is "The Pretence of Knowledge", which basically says that attempts to model the economy, the macroeconomy, are 'faux science'--they are scientism. They give the illusion of science. And now we come to 2014, so we are half a century after Friedman's early thoughts on this. We have John Taylor, at Stanford, who talks about the value of rules over discretion. We've had 50 years of data, 50 years of econometric sophistication and improvement. Have we gotten any better? Is there any evidence that Friedman's rules should have been replaced by something more sophisticated? Guest: So. Yes. Russ: Now do you want to give me a 'maybe'? Guest: So, our learning--it's the case in questions like this that our learning sometimes goes quite slowly. It doesn't--the knowledge, our advance of knowledge, is sometimes sluggish. But I think we've learned a lot about the potential alternative sources of the monetary transmission mechanism. I think we've learned a lot about better implementations of monetary policy in kind of normal times. It is the case--and there's lots of people out there that say they've predicted the financial crisis. My question to them is, did they really predict the quantitative impact of the financial crisis? Lots of people said, Well, the housing market might crash and that could have ramifications and the like. But I think what caught lots of people by surprise is the whole magnitude of the response to this, as well. That's new data. That exposed gaps in models that we had.
And it exposed gaps of models they used in monetary policy, because at that point in time, a lot of the macro models had a fairly passive financial sector inside the models. And now we are trying to rethink all the models, to say, maybe this is far too passive; there may be this interplay between what goes on in financial markets, the macroeconomy, needs to be thought about much more carefully. We need to think about when can financial regulation lead to harmful effects, versus when does it seem to be necessary. So I do think that we've learned stuff; and I think we are going to learn more coming out of the financial crisis. But we've got a long ways to go.
13:59 Russ: But it does raise the possibility that new data don't help us improve the model. It's just a different model, right, that we need to be thinking about. So you think about the recovery from the financial crisis, which has been disappointing. I don't think any economists really predicted that magnitude or understood it. Most of the formal predictions were wrong. And I've become somewhat skeptical--okay, I'm being polite. I've become very skeptical of our ability to quantify those things. In fact, I want to suggest--I'm not going to challenge Lord Kelvin. I think the question is whether economics is a scientific enterprise. Would we not be better off treating it more like history? Nobody pretends to quantify the relative importance of the causes of WWI, of which there are 50. We will have many causes of this financial crisis; some of them are more plausible than others. Certainly evidence will matter. But the idea that we could treat the economy with any precision seems unlikely to me. So, I'm going to stick with Hayek, '74. Do you want to disagree? Guest: Do you want me to disagree with you and Hayek? Or-- Russ: Yeah. Or not. You can agree. It would be great. Guest: [?] to talk about Hayek? Russ: Yeah. Guest: Hayek, '74 is fascinating reading. Sometimes I give talks[?] these days, I lift a quote out of his Nobel Address. And it's very interesting in that Nobel Address. He says--he's not against using mathematics, but we really ought to think about economics and even the use of mathematics as providing clarity and leading to more qualitative modeling, and the quantitative part is something that he takes--that he challenges, [?] possible amount. I don't take that extreme of a view. The part of the Hayek essay that's interesting is the fact that [?] confidence, the statement, the potentially harmful effects of overconfidence in quantitative modeling. And I'm completely on board on that.
But I really--to me, we need models to help us understand systematically when evidence is more informative and when it's less informative. We need--and just because there's lots of uncertainty out there doesn't mean that models can't be useful guides. So, I want to incorporate uncertainties inside models, in credible ways. And that's what I view as the productive step forward. I think it's useful, it's important both in terms of how we use evidence to understand better the economic system, and I think it's also potentially valuable as a guide in policy, provided that we use it in sensible ways. So, I guess I will be disagreeing with both of you on that front. Russ: It's fine.
16:40 Russ: Let me take you to a sort of micro area of those ideas. An issue I talk about sometimes--I find myself arguing with friends about--would be: Let's say you're in a financial firm, Goldman Sachs, and it's 2005. You are at Bear Stearns, you're at Lehman Brothers. And your risk people are using a model called Value at Risk, which is an attempt to try to figure out how systemic the risk is within your firm. How likely is it, what's the probability that your portfolio could have a really bad day, and have a catastrophic impact? And that's a very challenging thing to quantify. And there have been a lot of advances in trying to quantify that. And one of them that people use is called Value at Risk. And it strikes me that having that tool in your hands maybe would work fine, because you are very aware of the dangers; you are very skeptical about, as you've said a few times already, about the tendency toward overconfidence. But most human beings seem to struggle with that. And it raises the possibility--and Nassim Taleb has been an advocate of this view--that actually, you are so prone to fooling yourself on this, you are better off not using it at all. Talk about that psychological phenomenon--I'm giving the example of a financial firm, but it's obviously a problem for policy-makers as well. Guest: There is a--so, let me talk about the policymaker side of this first. There is a danger, I think, that--and this is relevant to the [?] economy--that politicians like to embrace economists who express their views with incredible confidence. Russ: Very well said. Guest: That can be problematic. That can be problematic because in a lot of cases that confidence is not real, or shouldn't be real. And it's not premised on solid evidence or necessarily solid analysis. It is opinion. It could be stated with great confidence, but if it's opinion, it's also good to ask, are there other opinions, if other opinions are consistent with the data and what are their ramifications?
Though--so I do think there is a danger in having false confidence. And I think it can be very present and very evident in the policy arena. There are some other wonderful quotes from Milton Friedman on this topic. There are some great quotes in this Hayek essay, to make reference to on this topic, which I'm sympathetic with respect to. The tool, Value at Risk, is interesting in the sense that it's going really in places in which statisticians know are very, very challenging. So, Value at Risk is looking at what people call the tails of distributions. And the tails of distributions are places where the amount of empirical evidence we have is often fairly thin and sparse. And a Value at Risk model can work very, very well through normal times, and then all of a sudden just completely miss. Russ: Which is when you need it. Guest: Because what it does--just because if you have too much confidence about how it's working through normal times, that's the situation in which you can potentially be burned. So when you study these kind of low-frequency, tail events, I agree that you need to do some kind of robustness analysis. You can't just simply embrace a Value at Risk model based on one distribution, and really have full confidence in the outcomes coming out of it. Again, my view of it is I don't want to throw away models of tail risk. But I want to attach with those models the appropriate degree of uncertainty. And also to engage in some form of kind of robustness analysis. Suppose that it wasn't quite this; it was the distribution is off in this particular way: what are the consequences of that? Russ: So my claim is that's hard to do. Guest: Yes. Russ: In certain environments. And as a result--the counterpoint, when I say we shouldn't use models like that, that are so dangerous because it's hard to remember to do the robustness checks and all that--the counterpoint is, well, what are we going to do otherwise? What's the alternative? 
And the alternative is to operate in a world where you know that your knowledge is very poor. And again, people made investments--I think of history--people made assessments about history without quantitative knowledge. They did the best they could. It's possible that if you are in a financial setting where you have to take risks, you are better off not quantifying them because that fools you into thinking that it's safer than it is. Guest: Yeah; I think throwing away quantification completely--to me I think it's far too extreme and leads you to theory, not--almost throwing out useful parameter[?] for decision-making. So on that I guess I'm on a different view. I do believe that, I really do believe that any kind of sensible management of firms or anything, it has some [?] of quantitative analysis. And I think the real challenge is to make sure we can do it better. And to expand our tools and expand our thinking; and not to, like--just because these are mathematical tools that look very nice, that doesn't make them necessarily right all the time. Russ: I agree with that. I certainly wouldn't suggest we throw out all quantitative methods in any setting, whether it's a firm or a financial firm or a regular firm. But it's interesting; I think it highlights the potential--it focuses on the question where do you draw the line? When do you start saying, This advance may not be an advance?
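Hansen's warning about embracing "a Value at Risk model based on one distribution" can be illustrated with a small simulation. This is a sketch with made-up numbers, not any firm's actual risk model: returns are drawn from a mostly-calm distribution contaminated by rare large shocks, and the deep-tail VaR is computed under two different assumptions.

```python
import random
import statistics
from statistics import NormalDist

random.seed(0)

# Hypothetical daily returns: a calm normal regime contaminated with a
# small probability of much larger shocks, so the true tails are fat.
returns = []
for _ in range(5000):
    if random.random() < 0.02:                 # 2% of days: shock regime
        returns.append(random.gauss(0.0, 0.05))
    else:                                      # 98% of days: calm regime
        returns.append(random.gauss(0.0005, 0.01))

# 99.9% Value at Risk computed two ways:
# (1) assume one distribution -- normal with the sample mean and std;
# (2) assume nothing and take the empirical 0.1% quantile.
mu = statistics.fmean(returns)
sigma = statistics.stdev(returns)
var_normal = -NormalDist(mu, sigma).inv_cdf(0.001)
var_empirical = -sorted(returns)[int(0.001 * len(returns))]

print(f"99.9% VaR, normal assumption:   {var_normal:.4f}")
print(f"99.9% VaR, empirical quantile:  {var_empirical:.4f}")
# In this contaminated sample the two answers diverge sharply: the
# normal model, fit mostly to calm days, understates the deep tail.
```

The gap between the two numbers is the kind of robustness check Hansen describes: ask how the conclusion moves when the assumed distribution is off, rather than trusting one specification.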
22:39 Russ: Let's back up for a sec. Let's talk about risk versus uncertainty, or, as you phrase it, as it's often phrased these days, risk versus ambiguity. Because we are really talking about both those concepts in this conversation and I think it would be useful to highlight the difference. Guest: Yeah. So, I like to think about there being three different components to the concept of uncertainty. I guess the initial distinction between two of these components goes all the way back to Frank Knight, the University of Chicago economist who was prominent decades ago. And Keynes was also wrestling with these issues to some extent [?]. So, let me just try to draw the following distinction. Suppose I write down some model and the model has what economists would call shocks, distributions attached to these shocks and the like. That model, when fully specified, will tell you probabilities of all the future events, what's in the domain of the model. You've got a full--there's uncertainty out there, but it's certainty[?] under which you've got a model that just tells you all the probabilities of everything. And so once you've got the model, it's done. So, I like to think of that as risk. Like, if I fully embrace this model, there's the risk component. Russ: There's a random element to life that we might be able to quantify-- Guest: Right. Russ: But we don't know what's going to happen tomorrow. Guest: Yeah. You don't know what's going to happen tomorrow, but with this model I can tell you the probabilities of what's going to happen tomorrow. Russ: Right. Raining, not raining. Guest: Right. So I want to think about that as risk. Now, in fact, in every discipline--and it's certainly very prominent in economics; we could talk about things like climate change as well, but let's talk about economics--there are different models out there. Even a given model, I might not know all the details of it--the so-called parameters of the model.
There may be multiple models out there, and the like. So now, for me to assign probabilities to the future, I have to start saying: well, how much weight do I want to put on this model versus that model? Each distinct model, and the like. So there's this issue of how I want to weight the models--how much confidence I put in the different models out there. Once I take a specification of that confidence and [?] continuous assign probabilities to things--that process of assigning probabilities across models--I think of that as a potential source of ambiguity. I'm not really sure how to do that, and how do I confront that component of uncertainty? There's a third component that I think is probably the hardest part, but maybe in many respects the most important part: all the models are in some sense wrong. How do I use models in sensible ways--in ways that are in some sense robust to different forms of misspecification? I acknowledge that they are wrong; but if I knew exactly how they were wrong, I'd just fix them. So I have to somehow confront that form of uncertainty as well. So, those are the different pieces I think about when I think about uncertainty. Russ: Yeah, and we've really been talking about all three of them, I think, so far. Guest: Yeah.
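Hansen's distinction between the first two components can be made concrete with a small numerical sketch. This is not from the episode; the two "models," their probabilities, and the weights below are all invented for illustration. Under "risk," a single fully specified model pins down an exact probability; under "ambiguity," the answer also depends on how much weight you place on each competing model, and the weights themselves are hard to justify.

```python
# Illustrative sketch of "risk" vs. "ambiguity" (all numbers invented).

# Risk: each fully specified model assigns a definite probability
# to tomorrow's event (say, rain). Within a model, uncertainty is
# fully quantified.
def prob_rain_model_a():
    return 0.30  # model A's stated probability of rain

def prob_rain_model_b():
    return 0.60  # model B, a competing specification

# Ambiguity: with several candidate models, the analyst must also
# choose how much confidence (weight) to place on each one. The
# blended probability moves with that choice.
def prob_rain_averaged(weight_a):
    """Probability of rain after weighting the two models."""
    return weight_a * prob_rain_model_a() + (1 - weight_a) * prob_rain_model_b()

if __name__ == "__main__":
    # Different, equally defensible weightings give different answers:
    for w in (0.9, 0.5, 0.1):
        print(f"weight on model A = {w}: P(rain) = {prob_rain_averaged(w):.2f}")
```

The third component Hansen names, misspecification, is what this sketch cannot capture: even the weighted average is wrong if both candidate models omit something important.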
26:03Russ: Let's go back a little bit in history, because you make reference to a work that was an important part of my graduate school education, which was Burns and Mitchell's 1946 work on business cycles. I went to the U. of Chicago in the late 1970s, and Robert Lucas was my macroeconomics professor, and he was extremely interested in that work. What they did was try to give a very thorough picture of how economies move in good times and in bad. You reference a review of that work by Tjalling Koopmans, who won the Nobel Prize, that criticized their work as "measurement without theory." So, talk about this interaction between describing versus understanding--whether it's good to just quantify, or whether you need more than just the numbers themselves. And I think that's the debate between Burns and Mitchell on the one hand, and Koopmans on the other. Guest: Right. I actually think Burns-and-Mitchell-type activities can be very useful. It's just that if you want to do something with them, then you have to put more structure on it. Suppose you want to go out and say, I'm just going to collect a bunch of data; I'm going to just let the data speak. Once you ask the data to speak to specific questions, that requires more structure. So once you want to get beyond the fact that, well, here's some pattern in the data that looks intriguing--Burns and Mitchell were not just doing some massive fishing expedition. They thought it would be useful to try to get characterizations of what a business cycle was, and the like. So they obviously had some good intuitive insights in what they were doing. But to take it to the next stage--to really understand relevant policies, macroeconomic policy analysis, when it comes to how we confront business cycles--that requires more. That requires more formal economic analysis, and it requires more serious thought about the consequences.
So--there's another part of this as well. Burns and Mitchell had fairly ad hoc methods for [?] their evidence. Which are useful, but it was somehow challenging to figure out the quality of the evidence they were producing. Lots of people after Burns and Mitchell tried to say, let's make this a little more formal statistically, so I can assess the quality of the evidence. So, Burns and Mitchell got them going, and I think there was some follow-up work that showed how you could map it into more formal statistical methods, which was also useful. That allows us to make a little bit better assessment of the quality of the data and the quality of the evidence they were looking at. Russ: Let me recount a conversation with an applied economist on this issue of measurement and theory and get your reaction. He had found that when Wal-Marts come to a city, it drives wages down. But he found that only to be true in large cities. And I found that extremely surprising and counterintuitive. And I thought, wrong. And I said, Well, how do you reconcile the fact that in large cities there's more competition presumably, and Wal-Mart, adding to the demand for workers, should push wages up, not down? If at all. Certainly not down, though. And his response was: 'Well, I'm not handicapped by any particular theory of the labor market.' He said, 'You're handicapped by a neoclassical model of labor markets. I just let the data speak.' What do you think of that approach? Guest: I think in this case I'm with you, in the sense that if I've got this empirical evidence and it looks anomalous, I want to go out and understand it. And I want to understand it through the guise of what I think of as an economic model--one that puts in place the incentives that I expect to be out there, and the role of markets and market interactions. And when I do that, I may well need to think about other things going on that would help to explain this evidence.
So, in the first part of my career, I spent a lot of time documenting puzzles in asset markets, trying to connect the trend of the economy to asset markets. And there were dramatic puzzles out there, and they led to saying maybe I had to think about things differently as a consequence. And I guess I would put this in the same category. I'm not going to throw out economics. I'm going to say, this is a bit of a disconnect, and I have to think hard about how to close that disconnect. Because I don't have a good, coherent theoretical explanation for things; and I just don't know how to use the evidence. Russ: His view is that not only do you not have that theory; it's wrong. It's better to not have any theory, because then the data just speak for themselves. Guest: Well, if the theory is wrong, let's figure out the ways it's wrong that are also convincing and consistent with other evidence. Russ: Yeah, I think that's the right question. Of course the other point is that you can spend a lot of time fishing and find a lot of things that may or may not be fish, and be fooled into thinking they are fish. You have a lot of choices when you do those kinds of empirical analyses about what to include and what not to include. You said that every model is wrong. Certainly every conjecture about a fact is also often wrong without other clarifying evidence or understanding.
31:50Russ: So, coming back to the Kelvin quote, you have a remarkable bit of economic history that I wasn't aware of--Kelvin arguing that it's important to quantify our knowledge. Talk about Knight's and Viner's perspective--two University of Chicago economists from the first half of the 20th century, the 1920s and 1930s. How did they respond to Lord Kelvin's insight? Guest: Well, apparently there's this fascinating detective work that was done by my colleague, Stephen Stigler, along with some other people. That quote, when it went on the Social Science Building, had some controversy attached to it. And Viner, whom I quote in my essay--I don't have the exact words now--basically said that even if we have expressed something in numbers, qualitatively our knowledge may still be meager. Kind of like: just because they show me some numbers, that doesn't mean that our knowledge has suddenly become firm, and the like. I think it was more of a reminder that economic evidence is sometimes challenging to make fully persuasive, and we have to accept the fact that we may be learning slowly, or that our knowledge base is relatively limited when we are looking at direct evidence--behavior. Russ: I have the quote here, actually. Knight said in response:
Knight: If you cannot measure a thing, go ahead and measure it anyway.
Viner: ... and even when we can measure a thing, our knowledge will be meager and unsatisfactory.
So they were a little bit skeptical. Guest: Yeah. So Knight was kind of poking fun at the measurement enterprise, I guess, in some sense. I see Knight as more flippant. Russ: Yep, I agree. Guest: And the Viner one is reminding us--you know, don't overplay this process, in the sense that we should acknowledge the fact that there's a [?] character of things; just making things quantitative doesn't address that fully. Or even partially, I suppose. Russ: And then you have a rather remarkable insight about Lord Kelvin and the age of the sun. Talk about that. Extraordinary. Guest: So, Lord Kelvin was in this debate with Charles Darwin. He was challenging Darwin's theory of evolution, based on his own calculations. And it turned out he was missing theory. He produced all these models, and he basically announced that Darwin's calculations couldn't be right, just through the guise of his own models. It turns out a key energy source was missing from his models; his models actually had a mistake, or a flaw, or a gap in them that was critical for trying to assess the insights of Darwin. [?] order of magnitude. Russ: Well, his calculation of the age of the sun was 20 million years, I think is what you wrote? Is that right? Guest: That could be. I don't have it right here, but yeah. Russ: I think the upper bound--the upper bound I remember. The upper bound was 100 million. And there was a worry that that wouldn't give enough time for evolution to take place. Which, it's slightly off, that 100 million, it appears. Guest: Yes. Way off. Yes. In this case, Kelvin's own calculations were way off. This is a case where perhaps he had too much confidence in his own models. Russ: But I think the difference, potentially, is that we can have more confidence that the sun is billions of years old than Kelvin could have had in its being tens of millions. I think that's true. I mean, there's no way of really knowing, right?
There's no way of really verifying whether we have a more accurate age of the sun. But given current scientific knowledge, it appears that the sun is billions of years old, not tens of millions. I worry that economics isn't like that. Can you list some areas where you feel we've improved our precision in anything remotely like that? Finance might be one of them, by the way. Guest: So, Finance has had its successes--but the challenge in Finance is to connect the debates to economics. Accepting the principle of no arbitrage is something--but by itself it doesn't have a huge amount of content. The challenge in Finance, for me anyway, is to understand the actual patterns in the data through a fully specified economic model. We can do it through more general [?]--the question is: what makes things like financial markets fluctuate over time? What makes markets look more risk averse in some periods than in others? And the like. And so we have, I think, a huge amount of systematic evidence on the patterns out there. Some of them are flimsy, but some of them are fairly robust. The kind of wizard at doing high-quality empirical analyses in Finance is Gene Fama. When Gene reports stuff, it seems to be reliable and robust. So I think we have a fair amount of knowledge out there. The challenge for me, anyway, right now, is exactly how we build more fully specified models consistent with that evidence. But I think putting that evidence out there in ways that are interpretable, that have potential links to models, has been very valuable. And I think it's going to help us think better about modeling going forward. Russ: Do you think we're going to get better and better at Finance as well as at macro generally? Or do you think the likelihood of improvements is better in one field versus the other? Guest: Uh, I'm [?] going to argue that the evidence we have coming out of financial markets is richer.
But the evidence that's really pertinent to understanding how the economy works is not--I mean, financial markets are forward-looking, so they are telling us--there is some evidence on how people think. But that also interplays with how they confront risk and uncertainty and the like, so the forward-looking nature makes them intriguing. On the other hand, to actually fully process that requires some other components of the model that we're still trying to sort out. There are very, very rich data sets coming out of Finance. The question is how much of that richness is directly usable for understanding the underlying basis of, say, the macroeconomy. And there it becomes much more limited. But I do think we know, today, much more about which models explain what type of evidence and which ones don't, and where the challenges are. I do think we've made a dent there. Do we have a full, complete understanding of the empirical evidence? Absolutely not.
39:10Russ: And yet a lot of people claim that by the early 1990s, say, fiscal policy was widely understood to be unimportant, irrelevant--a thin reed you couldn't really lean on, and not important for guiding the economy. All of a sudden it's back--I think for not very scientific reasons. It makes one worry that there are fads in economics unrelated to scientific progress, say, akin to the age of the sun or the universe. What do you think of that? Guest: There are two parts to this: what I think about fiscal policy, and what I think about fads. Let me address the issue of fads. There are research fads, in the sense of here's-the-hot-topic syndrome. Sometimes my own discipline gets [?] of these citation counts. That, I'm often skeptical about--even though I've had some papers that have been highly cited. Because-- Russ: Those are the good ones. Guest: A lot of it is hot-topic chasing. So if you write a paper on the topic du jour, then everyone jumps in and they all cite each other. But then you have to ask: Where is this literature 5, 6, 7, or 8 years later? How much staying power and durability does it have? And I think that's the more interesting question. But there are indeed research fads. People are all the time looking for topics to work on. There are graduate students writing dissertations, young scholars working to get their careers off the ground, asking: how do I make a big splash? So there's hot-topic chasing taking place. Some hot topics end up becoming important topics, and many fade. So I do think there's an element of fads in economic research, and it takes time for the fads to die out. Fiscal policy--yeah. Fiscal policy, as it's been conceived of or implemented--there are no obviously big gains to be had there.
And I personally think that, going forward, long-term fiscal challenges are contributing to uncertainty in ways which we ought to be addressing. There is always this notion that, well, let's go and do a bunch of fiscal stimulus now and worry about the long-term budgetary consequences later. And there's this other component to it--I think this is a quote of Milton Friedman's, or somebody: 'There's nothing more durable than a short-term increase in a government program.' Once you start these things, they are hard to stop. To me, a more sensible approach to thinking about fiscal stimulus would have been the following. Suppose the government has in place a set of infrastructure projects on which they've done cost-benefit analysis and that they think are really important to be done eventually. There's some flexibility in the timing and the like. Well, maybe a time of economic downturn is a good time to be making these infrastructure investments--maybe labor costs are down and the like--and then do sensible things. Now, this is premised on the government actually having in place so-called 'shovel-ready' projects that you can document were of great importance to finish at the time. That's a different kind of perspective from people pushing for standard fiscal stimulus--let's just get the economy going by putting money out there. That lever, I agree, can be quite flimsy. Russ: I think their view is that in times when interest rates are close to zero and the economy is struggling, all infrastructure projects have a positive cost-benefit analysis--the benefits outweigh the costs. Guest: Yeah. Russ: I've heard Stiglitz say it; I've heard Krugman say it. Guest: Yes. I agree. That's their view. So that part of fiscal stimulus would not be my view. Absolutely. Russ: And, how would we know? They're smart. You're smart.
Both sides are smarter than I am. How would we educate ourselves? Guest: It's very interesting. One of the more sensible discussions of this was by a person who held a prominent policy role at some point in time. But he [?] got up there and said, 'We really don't know the impact of fiscal stimulus. It could be there, or it's not there. But if we do it, the costs of doing it are not all that high; and the benefits may be there, so we should just go ahead and do it.' He was the one person pushing for fiscal stimulus who did so while acknowledging that there was at least some serious uncertainty out there. Now, we can debate whether the costs of doing it were really that low or not, and what the likely benefits might be. But I thought it was at least framed in a more sensible way. Unfortunately, framing the discussion that way is not the way you influence policymakers. Russ: Yeah. And the other side, of course, also says that the future costs are very small. So--maybe the costs are bigger than the benefits, but the costs are so small that it probably doesn't matter; and those future expectations about debt and taxes are not important. That's their selling point. Guest: Yeah. I'm always nervous when people say, 'Let's not worry about budgetary consequences now; we can always fix them later.' It's always later. Russ: It's dangerous. That's slightly dangerous. Yeah. Guest: Yes.
45:05Russ: Going back--I want to tell a story; it's interesting that you mentioned Arthur Burns. My favorite story about systemic risk involves Arthur Burns. George Shultz tells the story--he was head of OMB (Office of Management and Budget) in the 1970s, in the Nixon Administration, and the Penn Central Railroad was going broke. And Arthur Burns--the man we mentioned earlier in the Burns and Mitchell business cycle work--according to Shultz, was saying: we have to bail out Penn Central, because there's systemic risk. They're going to fail and that will bring down other institutions. And Shultz personally--again, these are his words and his memory--says: I felt really bad disagreeing with Arthur Burns, because Arthur Burns is really smart and savvy. And if Arthur Burns says this has systemic risk, it probably does. And they were about to do something when an adviser ran into the room and said: Penn Central has just hired Nixon's old law firm. So, for political reasons, we can't touch this with a 10-foot pole; we're going to have to let them sink or swim on their own. So, they sunk. They went bankrupt. Nothing happened. The systemic risk wasn't there. Comment on the political economy of that--the downside risk for politicians or policymakers, Ben Bernanke being an obvious example. The urge to do something seems to be very large when you are going to be blamed for the downside if you don't do anything. Guest: Yeah. No, I agree; there is this concern that you're in the hot seat right now, and it's often difficult for them to have all the right incentives for [?] long-term consideration. You're going to be sitting in the seat and watching things go bad, and so if you can push off the possibility of bad things into the future, there's a temptation to do that. But let me pick up on another part of your story that I think is quite important as well.
Part of what's coming out of Dodd-Frank is that we are now going to be in the business of designating systemically important financial institutions. And I'm very concerned that this is going to become politicized. As soon as these financial firms get designated as being systemically important, associated with that is some type of government guarantee to not let bad things happen to them. And the incentive effects of that look to me to be quite problematic. And I know some of the firms that have been declared systemically important so far would have much preferred not to have that status, even though there may be some benefits attached to it. And when there's a suggestion that firms like Fidelity are going to be designated systemically important, and the like, I'm very, very nervous that the designation is going to be applied in a very, very broad way. Part of the way to get enterprises to behave better is to at least let them think about the risk of failure. That can be a very important market-disciplining device. So, the real challenge is: how can we let these financial firms fail without having so much fear attached to it? And if we make this systemically-important-financial-institution designation politicized, I'm really concerned about that having bad consequences. Russ: It seems to me part of the whole notion of systemic risk is trying to map the externality model onto the decisions of private financial firms. The more we politicize those decisions, the more systemic risk there is. Because we've said to them: there is something of a free lunch for you. You will be rescued if you get in trouble. Which certainly encourages them to become more interlinked with other firms like themselves, to make the probability of being rescued higher. It seems like a very toxic combination. What would you recommend instead?
Do you have any thoughts on what might be a better way to reduce the risk of a 2008 in the future? Guest: I do think this idea--which I'm no expert in whatsoever--of working out very, very fast and efficient resolutions of financial institutions, in ways that we don't have to fear the consequences--the counterpart to bankruptcy, but done in a super-quick, fast way--is quite important going forward. Russ: Of course, one of the problems there, again, is that the firms have an incentive to make themselves very complicated, so that those fast resolutions seem unlikely, and therefore they are less likely to be allowed to go bankrupt. Guest: Yeah. Well, anyway, that's an important challenge, and I think it's a critical one going forward.
50:09Russ: Let's go back to economics more generally. You talked about the fad problem--the topic du jour and graduate students hopping on whatever is the latest thing. What are the areas that you think--fads or not--are more enduring? What are the research areas that you find most interesting, as well as ones that aren't going to be for you but that you can recommend to students out there listening, or students of economics generally? Guest: Yeah. So, I really do think understanding better the connections between financial markets and financial market disruptions and the macroeconomy--building better models, understanding better the evidence which we have; new evidence is now becoming available--I think that's a tremendously fascinating area. I also think there's this more general area of the consequences of uncertainty, in terms of policy analysis, in terms of even understanding how the economy works. We've only scratched the surface on that one. In the so-called 'risk model,' lots of these risk effects in economic analyses take on a second-order nature--that's been the past perspective on this. I think thinking through why uncertainty might have much more of a first-order than a second-order impact in economic analysis is really critical. The part of this that I find challenging is how we get this into more discussions about policy analysis. How do we bring uncertainty into that? I think it's critical that it be done, but because of the political incentives and the like it's very challenging to get it done. Russ: Yeah. As you said, if you are not very confident, you don't tend to get a lot of attention. That's reality. Guest: I can be confident in my lack of confidence, but that doesn't go quite as far. Russ: Yeah, that's my strategy. It limits my audience. But we're doing the best we can. You referred to this interaction between the macro sector and the political economy.
I was struck, in the aftermath of the crisis, knowing very little about finance--I know a little about macro but knew very little about finance. And I realized I had to get a little more educated and understand some of those linkages. And I'm not alone. I think a lot of economists realized they'd specialized in one or the other without thinking about the connections. Is there anybody doing work out there now that you think is promising on that interaction? Guest: I think there's lots of interesting work being done on that now. But there's a little bit of a danger of: well, here's the crisis, we've got to rush to quick answers. And there's been a whole variety of different types of modeling approaches to this, all the way from looking formally at network structures and interactions, to looking at more standard macro models--what happens if financing constraints are binding, and the like. Sorting out which of these approaches is really going to be the most empirically relevant and the most sensible one for policy analysis is completely wide open right now. So I think, once you get past the quantitative ambition, it's going to be a while before we have the next models we can place more confidence in. But there are a lot of really smart young people going into this area as well. I've been at conferences and I've watched the job market. And there's a rush to do work in this area--not all of it good. Some of it is very superficial. But some very smart economic scholars have been drawn into this area, so I have some optimism there. Russ: Yeah, there's a Nobel Prize out there, folks. So I assume a lot of people are going to try to grab it. There's going to be a big return if you can tell a good story and model it well. Guest: Yeah.
54:23Russ: Let's close--you mentioned climate change earlier. Have you thought about that empirical challenge? It strikes me as very similar--I've claimed this before--it reminds me a lot of economics, of our understanding of the economy. There's both risk, and there's uncertainty--or I'll say 'ambiguity,' using your word. Have you looked at that? Do you have an opinion on the quality of that work? Guest: There's a really big challenge here. I've actually been involved in a research venture on campus here. The idea was to bring together climate scientists and economists and try to think about these issues--confronting uncertainty and the like. There's a whole class of climate models out there that are very non-linear models, of a different type than economists are used to, because they don't really have models of these kinds of shocks hitting systems; it's all kind of unknown initial conditions. They are very, very complicated models to solve; when solved, they give very rich output. But they are not models for which it's really easy for me to think about how to quantify the uncertainty coming out of them. And I think that's critical if we want to think about designing economic policy to confront climate change. It may well be, for instance, that even with meager knowledge you want to act now, rather than wait until our knowledge base increases, because the costs are lower now. Or maybe you want to wait and learn. That's a critical tradeoff, and it would be great to have ways to quantify it. So, I got drawn into this with that type of ambition. Of course, as with most research, it's turned out to be remarkably more difficult than hoped, right? There's interesting stuff to be done there. Russ: It reminds me of fiscal policy, right? It's the same issue. We have to act now; we have people who are hungry, people who are out of work; there may be some costs, but they are in the future and they are probably small. So, better safe than sorry.
It's a question where you really would want to measure the size of those costs to know whether it's a good policy or not. We don't seem to have much evidence on it. Guest: So, here I think the best statement is: there could be very big consequences. So maybe it's sensible to start doing things now, just because of the possibility that there could be very dramatic consequences. I find that personally of some appeal; but would I love to quantify that, or make it more of a systematic, formal statement? The answer is yes. But I don't know how to do it yet. Russ: Let's close with a personal note, if you want to share it. Winning a Nobel Prize is kind of a life-changing event. No doubt. Going forward, what do you see as your research agenda? And how much of that has been affected by winning the Nobel Prize? Guest: Yeah. So, I've actually been wrestling with this question, when I've had my limited free time. Russ: It's a big distraction. Guest: The thing is that because of the Nobel Prize, you are suddenly given more attention. I tell people, I'm the same person I was 8 months ago. But somehow you're treated differently, as if your IQ just jumped by 40 points or something. So that kind of experience is interesting along some dimensions. I go to these public events and now people want to talk to me. Whereas before, they were happy to ignore me. And that's like, okay. But I think there's an opportunity out there. I've been very, very lucky over the years, having some great graduate students. Just in advance of my going to Stockholm, they threw a conference in my honor. I've had about 60 students I was a formal adviser of, and well over 100 whose committees I've been on. And just seeing them out there doing a whole variety of important research really is very gratifying.
What I'd like to do with whatever attention I have going forward is to try to encourage and nurture young scholars in fields that are of great interest to me. So if I can not only continue to make my own advances but also become a more effective advocate for lines of research, then I think that's probably the best I can do. Russ: I suspect some of them are listening.