Intro. [Recording date: June 17th, 2020.]
Russ Roberts: Today is June 17th, 2020, and I have two guests today, economist and author Mervyn King, who served as Governor of the Bank of England from 2003 to 2013, and economist and author John Kay. Together, they are the authors of Radical Uncertainty: Decision-Making Beyond the Numbers. Mervyn and John, welcome to EconTalk.
Mervyn King: Good to be with you.
John Kay: Thank you.
Russ Roberts: I want to remind listeners we're recording this during the pandemic without our usual audio equipment, so please bear with us if we're not up to our usual standards of audio quality. We also hope to record the video of this episode, which you can find at YouTube. Please go there, search EconTalk, and subscribe.
Russ Roberts: I have to confess that I read this book, Radical Uncertainty, with some trepidation. As listeners may have heard, I am writing a book on decision-making and data. And I could have used this title. Now it's gone--although titles aren't copyrighted, at least in America, so I could just steal it. I won't, though, gentlemen.
I am relieved that the book you have written and the one I'm in the middle of writing are not quite the same. This is a very interesting look at decision-making under uncertainty, as well as an indictment of much of the way economics is practiced. I take a different tack in my book, but I found this book beautifully written, and it's a very thorough survey of how we have come to think about decision-making, much of which you, as the authors, think is incorrect.
Let's start with the title itself, Radical Uncertainty. Mervyn, what does that title mean to you? Why did you pick that title?
Mervyn King: We picked it because, in the world of economics, people are desperate to quantify uncertainty. And our view of radical uncertainty is that it's uncertainty that you cannot easily quantify.
I mean, the best example, I think is what we're going through now, COVID-19, in which we knew, well before it happened, that there could be things called pandemics. And, indeed, we say in the book that it was likely that we should expect to be hit by an epidemic of an infectious disease resulting from a virus that doesn't yet exist.
But, the whole point of that was not to pretend that we, in any sense, could predict when it would happen, but the opposite. To say that: the fact that you knew that pandemics could occur did not mean that you could say there was a probability of 20% or 50% or any other number that there would be a virus coming out of Wuhan in China in December 2019. Most uncertainty is of that kind. It's things where you know something, but not enough and certainly not enough to pretend that you can quantify the probability that the event will occur.
Russ Roberts: Do you feel that way, John? Do you feel that way about the Imperial College forecast that, in America, 2.2 million people would, or could, die of the pandemic? A number that long-time listeners know I would find amusing because of the decimal point. It didn't say, 'A little more than two million,' or 'A whole lot.' 2.2 million.
John Kay: I think that's absolutely right, Russ. The same model was applied in the United Kingdom, where the figure was 550,000. It's interesting that it's exactly a quarter of your number; I suspect there's something in common there.

And you're absolutely right: the precision of these numbers is what makes them suspicious. Why 550,000, not 500,000?
Russ Roberts: Yeah. I think there's a human urge for certainty. We don't like uncertainty and you talk in the book quite a bit about that.
I think that urge to quantify--in some of the estimates, it would have been 550,384. Actually, it would have been 550,383.7, but they'd round up to 384 in the last digit.

On the surface, you could argue--and some do--that, 'Well, this is just part of science. It is imprecise, but we're getting better.' Mervyn, what's the danger of claiming that kind of precision on the grounds that it's just doing the best we can?
Mervyn King: It's not doing the best we can. It's pretending that we know more than, in fact, we do.
And, there is a real danger in that. If you pretend to know a great deal more than, in fact, you do know, what you're going to be doing is making judgments and decisions based on the false assumption that this is what would happen if you take one particular action rather than another.
And I think that distracts from what we in the book describe as the most important thing to do when confronted with a decision under uncertainty, which is to ask the very simple question: What is going on here? Because that is a way of actually getting to the bottom of what is happening.
Most big decisions that we take are unique. They're one-offs. And that means you really do need to think through yourself what is happening now.
And the great danger of this precision is that either politicians as decision-makers can defer or deflect responsibility onto so-called experts who provide them with a number, or the experts pretend that they know more than they do in order to have a bigger say and expand their own influence. And I think this is extremely dangerous. It means you can often miss big things.
Cost-benefit analysis is a good example, where people will come up with a precise number for the value of a particular road-building or railway-building project. But very often that's built on very strong assumptions. It ignores important aspects of the decision; you might actually do better to ask yourself, 'So, what is really going on here? What are the big issues we should think about?'--not just rely on some so-called expert coming up with spurious precision in the numbers.
John Kay: If you look at the way this model has been used in the United Kingdom--and I suspect it's true in other countries as well--I think there are two problems. One is, politicians say, 'We're relying on the science to make decisions.' And, of course, the scientists say, 'We're giving advice, but it's ministers who make decisions.' So both are in effect deflecting responsibility, and it's not actually clear that anyone is really responsible.

But the second is the way you should use models in this kind of environment--and both of us are keen on using models. If you develop these epidemiological models, you immediately discover that the critical parameter is the reproduction rate: how many people are infected by one infected person? If you're going to make sensible policy, you need a quick estimate of how many people are actually infected and how many people are infected by each infected person.
In the United Kingdom, we had almost no data when we started off this pandemic. I think you're probably still in that situation in the United States. I'm glad to say that, in Britain, our Office for National Statistics is now doing some random testing of the population, so that we can get a good handle on how many people are actually infected and try to infer from that how many people are infected by each infected person. If we'd had that kind of data three months ago, we could have made much more sensible decisions.
Russ Roberts: Yeah. I think there's a lot we've failed to execute in real time. And, unfortunately, tragically, there are also a bunch of lessons I don't think we've quite learned yet. One is the role of random testing or extensive testing. Obviously, tracing people is potentially a useful way to get a better handle on the virus, though there are sometimes civil-liberties issues there that wouldn't work so well in some countries compared to others.
I want to take one more example from the pandemic to let you talk about models generally. You're critical of the way models are often used--though, as you say, you always have to have a model, even if it's only what you call a narrative, a way of organizing your thinking. But models in economics, I think, are often used as cudgels, as clubs to beat up people who disagree with you.
There was a study done here--I think it was in the United States, certainly about the United States--claiming that if we had only locked down one week earlier, 36,000 lives would have been saved. And it was carried out to the last digit--36,456--prompting someone to point out that if only they had locked down a fraction of a second earlier, the last person who got it would have been saved. Which is, of course, the worst kind of black humor.
But that kind of model was put forward--that estimate was put forward--I think as a marker to alert policymakers that they had, quote, "made a mistake, a devastating mistake," and that, the next time this comes around, we should act differently. What would your reaction be to that kind of point estimate of a single quantity like that? Mervyn?
Mervyn King: Well, I would put very little weight on it myself. And I think what's instructive is that the models being used now were really developed by two people, Roy Anderson of Imperial College and Bob May, initially at Princeton and then at Oxford--two brilliant scientists who wrote the mathematical textbook on epidemiological models.
And, earlier this year, Roy Anderson said, 'These models,' he said, 'They're incredibly useful for teaching purposes, but they're not helpful in making quantitative predictions.' And, the reason is that the parameters that go into these models are things which we know very little about.
Now, the models themselves--quite complex mathematical models--were very helpful because they demonstrated that a typical pandemic takes the form of a very slow start, when you don't really know what's going on and it's very hard to judge how serious the pandemic is. Then suddenly it accelerates away from you. It reaches a peak and then gradually comes down again; and there may be a second or a third peak. Knowing that general shape is extremely helpful in asking the question, 'What's going on here?' because you do worry about whether the health services can cope with a peak.
Russ Roberts: Sure.
Mervyn King: But what the models are not good at doing is telling you the precise numbers, because they depend on parameters which generally reflect human behavior. What will the impact of a lockdown be on the rate at which the disease spreads? And, when you come out of the lockdown, what will the impact be, both on the virus and on the economy?
The honest answer is: We don't know. And we're trying to navigate our way between two very major costs: on the one hand, the cost of lives lost and illness through COVID-19; and, on the other hand, the massive loss to GDP [Gross Domestic Product], and indeed also lives of people suffering from non-COVID-19 diseases.
There is no easy answer here. We are going to be really feeling our way, navigating our way through it. These precise numbers--'If only you had locked down one week earlier, we'd have saved so many lives'--remind me of nothing more than people who, on monetary policy, say, 'If only interest rates had been 50 basis points lower for six months, unemployment would have been 32,000 lower.'
The fact is: We don't know. And it brings discredit on the models when they pretend to know things that they can't know. The models are very helpful in getting us to think clearly about the problem. But what they're often very bad at doing is making quantitative predictions about the future.
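King's description of the typical epidemic shape--a long quiet start, sudden acceleration, a peak, then decline--can be sketched with the simplest textbook SIR model. This is an illustrative toy, not the model discussed in the episode: the transmission and recovery parameters below are invented, and that is exactly King's point about why such models inform your thinking without licensing precise forecasts.

```python
# A minimal discrete-time SIR epidemic model -- an illustrative toy, not
# the Anderson-May or Imperial College model. beta (transmission) and
# gamma (recovery) are invented numbers; in reality they reflect human
# behavior and are exactly the parameters we know little about.

def sir(beta=0.3, gamma=0.1, i0=1e-4, days=365):
    """Return the infected fraction of the population, day by day."""
    s, i, r = 1.0 - i0, i0, 0.0
    infected = []
    for _ in range(days):
        new_cases = beta * s * i        # infections slow as susceptibles shrink
        new_recoveries = gamma * i
        s -= new_cases
        i += new_cases - new_recoveries
        r += new_recoveries
        infected.append(i)
    return infected

curve = sir()
peak = max(curve)
peak_day = curve.index(peak)
```

With these made-up parameters the curve shows the general shape King describes, but nudging beta even slightly moves the peak's timing and height substantially--which is why the shape is informative while point forecasts are not.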
Russ Roberts: You're very critical of cost-benefit analysis, and I am as well, although, of course, it depends on the particular implementation. But one of the things that happened in the aftermath of the George Floyd tragedy is that a number of people started protesting, out of anger and a sense of injustice. And many people justified that who had told us, a week before, that we couldn't get together in groups of more than 10 people--say, for a funeral.
Implicit in that statement about a funeral was some sort of cost-benefit analysis. Or, say, going to the beach if you were a college student on spring break--if you did that, you risked infecting your grandmother.
So, there's an argument to be made there, obviously. But somehow when this tragedy with George Floyd happened, people suddenly made a different mental calculus, at least for their own choice. And they stormed to the streets, many of them not wearing masks. They got close to one another. And we're now doing a large experiment about what the consequences of that will be.
But what I found remarkable is that there are actually economists who tried to justify that difference in advice from the medical community--you know, you weren't allowed to go on spring break; you weren't allowed to go to church or synagogue or your mosque; you weren't allowed to go to a funeral; but a protest is different.

And, of course, you could make the case that it's different, but they weren't content to do that. They wanted to cast it in the form of science. So, they did a formal measure.
And, as you point out, consider what goes into that measure. All these attempts to measure the effect of lockdown can't measure how many people are wearing masks or not wearing masks, how close they stand to each other, how old they are--all the data that we now know matter in whether people are infected or not.
So, John, what do you think of this use of cost-benefit analysis, which is often justified as, 'Well, it's better to quantify it, because at least that way we'll have some idea of the magnitudes. If it's close, we'll say, okay, it depends; but if it's a huge difference, obviously we need to make a decision accordingly'? Do you think there's a use for cost-benefit analysis?
John Kay: I think there is. Cost-benefit analysis is a way of helping you organize your thoughts, identifying the costs and benefits of any particular strategy. Just writing down what they are is quite helpful. And then you can almost certainly attach orders of magnitude to them.
So, take the classic questions like: what price should we put on the destruction of a Norman church? Well, it shouldn't be nothing, and it shouldn't be a billion pounds. Being able to frame those kinds of bounds is probably helpful. But where we feel cost-benefit analysis gets lost is when you construct the spreadsheet that has all the considerations you would in principle like to build into your analysis. You then discover you know hardly any of the numbers that should go into the cells of that spreadsheet, so you make them all up.

The answers that are derived from that are meaningless. Firstly, you don't know these numbers and you've no basis for them; and secondly, since you have no basis, you can make them up to give whatever answer it is you want. I'm afraid that's what happens a lot.
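Kay's suggestion--write down the costs and benefits and attach orders of magnitude rather than false point estimates--can be sketched as simple interval arithmetic. The project, line items, and figures below are all invented for illustration.

```python
# Net-benefit bounds instead of a single made-up number. Items and
# figures are invented for illustration; the point is that an honest
# range often spans zero, which tells you a judgment is still needed.

def net_benefit_bounds(benefits, costs):
    """benefits, costs: dicts mapping item -> (low, high) estimates."""
    low = sum(lo for lo, hi in benefits.values()) - sum(hi for lo, hi in costs.values())
    high = sum(hi for lo, hi in benefits.values()) - sum(lo for lo, hi in costs.values())
    return low, high

# A hypothetical road project: construction cost fairly well known,
# benefits known only to within an order of magnitude.
bounds = net_benefit_bounds(
    benefits={"travel time saved": (50, 400)},
    costs={"construction": (90, 110)},
)
```

Because the honest range here spans zero, the spreadsheet cannot make the decision for you--which is the position King describes as more honest than a single net-benefit number.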
Russ Roberts: Mervyn, do you want to add anything to that?
Mervyn King: No. I think we've seen that, whether it's transport projects or decisions to expand airports, in all kinds of cost-benefit analyses experts are recruited to go away and come back with a single number for the net benefit of the project. And that throws away important information: we perhaps know quite a lot about the cost of constructing a road, but we know far less about the potential benefits.
It would be much more honest to do what John said, which is to say, 'Well, in some areas of this cost-benefit nexus, we know quite a lot, and in other areas we know really very little.' And we have to make a judgment based on that.
But the pretense that we know everything to a precise decimal point is what undermines, in the end, the value of these attempts to construct a net benefit of a project.
Russ Roberts: Let's turn to value at risk [VaR], which you write about a little bit in the book. We've talked about this a lot in the many episodes we did on the Financial Crisis, with a wide range of guests. And I want to tell a story I've told before, but I want to get your reaction.
Value at risk is a flawed measure. It's an attempt to assess the riskiness of an investment portfolio, and its practitioners certainly recognize that it's imperfect. What I suggested, based on my understanding of human nature influenced by Nassim Nicholas Taleb, is that when you have a quantitative measure of how risky your portfolio is--rather than 'It's pretty risky' or 'It's extremely risky,' you've put a number on it--and day after day goes by and nothing bad happens, you start to think that that number actually represents something. And then it falls apart, because it's not a reliable number.
And when I said this to a friend who is involved in investment decisions, he said, 'But, that's the best thing we have. What's the alternative? That's our best measure.' How do you react to that, John?
John Kay: It may be your best measure, but it's not your best course of action.
We spell out, as I'm sure you have before, the thing that most fundamentally went wrong with value-at-risk modeling, which was deriving the data that went into it from a historic data set that did not have--and could not have--the extreme events which gave rise to the Crisis in it.
And, that's particularly bad in the case of a model like that because it's protection against extreme events that is exactly what you are constructing these kind of models for.
So, I don't know what level of capital banks should have. I don't believe anyone can sensibly quantify it. I know that actually if they're engaging in the kind of transactions they were engaging in before 2008, the answer is: A lot.
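Kay's point can be illustrated with a toy historical-VaR calculation. Everything here is invented: a window of calm trading days produces a reassuringly small VaR figure precisely because the history contains none of the extreme events the measure is supposed to guard against.

```python
# Toy historical value-at-risk: the estimate is only as informative as
# the history it is computed from. All returns here are invented.

def historical_var(returns, confidence=0.99):
    """VaR as the loss at the (1 - confidence) quantile of past returns."""
    ordered = sorted(returns)
    cut = int((1 - confidence) * len(ordered))
    return -ordered[cut]

# 1,000 calm days: daily moves of plus or minus 1%.
calm_history = [0.01 if day % 2 else -0.01 for day in range(1000)]

var_99 = historical_var(calm_history)   # reassuring: about 1% of the portfolio

# A crisis-scale one-day loss that never appears in the window.
crisis_loss = 0.30
```

However the confidence level is tuned, a measure fed only calm history will report that a 30% one-day loss is essentially impossible--Kay's 'did not have, and could not have, the extreme events' problem.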
Russ Roberts: Yeah. Mervyn, do you want to comment?
Mervyn King: No. That's absolutely right, and I think that the attempts to calibrate capital risk weights--the amount of capital the banks should issue, the amount of liquid assets that they should hold--all of these things have been examined endlessly in various international fora. I chaired some of them.
But what comes out in the end is a set of arbitrary numbers. And then the danger is that, as long as the banks meet those arbitrary numbers, whatever the underlying circumstances, they're deemed fine, and people stop worrying about it.
And instead of actually asking the question again--you know, 'What is going on here?'--I think people tend to rely on these arbitrary figures for regulation, which can go very badly wrong.
I think, under COVID-19, what is interesting is that with the shutdown of the economy in many countries, many businesses will fail and they won't be able to repay their loans. So, there is going to be a significant hit to bank capital as the banks realize losses bigger than they had anticipated. It's very hard to put any number on that, but the right response is to make sure the banks do not do anything at all--pay out dividends or buy back shares--in such a way as to make them less capable of absorbing losses. That's the essence of it.
And to say that we've done a stress test in the past and banks have passed it, I think, rather misses the point. You don't really want to ask, 'Can banks pass this specific stress test?' because you don't really know what the stresses in the future will be like.
You really want to try a number of different stresses of very different kinds and see whether some of the banks are particularly vulnerable to some stresses but not others, or whether they are broadly equally susceptible to all the stresses. Just get a feel for what's going on, rather than the pretense that you can predict the future and know exactly what stress it is that you have to protect the banks against.
Russ Roberts: Mervyn, I want to follow up with you on this. I've argued--I don't know if you're sympathetic to this or not; we don't have to get into the details, because this is not a conversation about the Crisis of 2008--that the bailouts of the past encouraged the imprudence of the 2008 period: that any incentive people had to be careful, to create a fund for a rainy day, or to reduce leverage was greatly reduced--not eliminated, but greatly reduced--by the prospect of a government bailout.
And, certainly, here in the United States, in the COVID-19 time, we have airlines who are in that situation you are talking about, getting money from the government. Why wouldn't they, the next time they have some money lying around, use it to pay out a dividend? They have no incentive to be prudent and to take the kind of caution that they should take for that rainy day.
Mervyn King: So, it depends on the circumstances in which the situation arises. I think with COVID-19, one can't really argue that this was the result of imprudent behavior by--
Russ Roberts: For sure--
Mervyn King: companies of various kinds. And in which case, enabling those companies to stay in business until we can come through the other end of the disease seems to me a sensible action to take.
But, in the case of the banks--where we knew that bank crises could occur--we should have made the system a good deal more resilient. And we didn't. And I remember that before the 2008 Crisis, we carried out in the Bank of England some surveys. We would go to people in the financial community and we'd say, 'Where do you see the risks? What kind of risks do you see ahead?' And, they'd describe risks that they could see ahead. And then, our guys would ask the question, 'But, what do you see as a really big risk?' And they would say, 'Oh, we don't worry about that, because if it was a really big risk, that will be your problem, not ours.'
Russ Roberts: Yeah. Yeah. Yeah, I think there's some of that. I guess history will--historians will argue about it going forward. But I definitely think that's part of the problem.
Russ Roberts: Let's turn to rationality, which--it took me a long time, but I've started to realize--is what Rodney Brooks taught me to call a 'suitcase' word. I think he was quoting someone else; we'll add it in the notes. It's a word you can stuff anything into, and people often do. For a while it was a parlor game in academic circles to show that the models economists have of rationality are wrong: people are not rational; they make systematic mistakes, particularly in the face of uncertainty.
John, what's wrong with the way we think about rationality in economics--and in Behavioral Economics as well? You're critical, really, of both of them.
John Kay: We are. And that's because rationality became defined by economists as conformity to certain axioms. These axioms may be appropriate for dealing with what we describe as 'small worlds,' where you actually know a great deal about the circumstances you face. But it's much less clear that they're appropriate for 'large worlds,' in which you have to confront the kind of radical uncertainty we describe.
And we argue in the book that a lot of the things that are called biases by people who are wedded to the economists' concept of rationality--and we're talking here about Behavioral Economists as well as traditional economists--are actually evolutionary adaptations to an uncertain world.
Russ Roberts: I had Mary Hirschfeld on the program, and she talked about how resistant people are to the economists' idea of treating sunk costs as sunk. You know, we smugly say, 'Oh, people are so stupid. They're so irrational. They don't realize that sunk costs are sunk.' But she pointed out there are often good reasons for treating sunk costs as not sunk, and paying attention to them rather than ignoring them.
Shouldn't it give economists pause that we have to debate people to convince them of something that we claim is in their interest? It should be a wake-up call. Mervyn, talk about that and talk about it more generally, this issue of rationality.
Mervyn King: So, I think both John and I have found that when we talk about radical uncertainty to groups of economists, they say, 'Well, of course we understand that there is something called radical uncertainty. But if we're going to produce our models, and if our Ph.D. students are going to get their theses, then we have to have something to optimize. Expected utility is what we have to maximize; then you can write down a mathematical model and get an answer out.'
And, we describe these really as in the genre of puzzles rather than mysteries.
Now, a puzzle can be very helpful because it may give you an insight into something.
We give examples of models where the assumption of optimizing behavior gives real insights. We give the example of David Ricardo on trade between England and Portugal, which produces the counterintuitive conclusion that even if you are more efficient than any other country in the world, it still pays to trade with them.
And there's the example that George Akerlof produced, of a market in which price wasn't determined by supply and demand because the price was conveying information to one side of the market but not the other. That model, in which people have differential information that undermines the ability of the market to reach an efficient outcome, is a very helpful insight.
But what it isn't is a description of the world.
And I think the great danger of the use of models by economists is that they believe this applies to the whole panoply of human life.
As John said, it applies only to small worlds--a very narrow range of situations in which the axioms of choice under uncertainty, which von Neumann and Morgenstern popularized in the 1940s, hold. In that sort of world, the models can give you useful insights.
But the vast range of decisions that people actually take, whether in government, or in business or, indeed, in their personal lives are not decisions which are characterized by a small world. They're in the larger world in which we simply don't know enough to pretend that maximizing expected utility is the right way to make a decision.
And indeed, whenever economists are appointed to serious public policy positions, they do not go around saying, 'I maximized expected utility this morning using a model and this is the result.' They actually approach it in a very different way. And I think we need to recognize that sensible people do that in the world, and we have to broaden our horizons, realizing that the rather narrow models that economics has produced can be extremely useful in generating some insights, but they're only part of human life--and not the greater part, either.
Russ Roberts: Well, you tell the apocryphal story--you say it's probably apocryphal; I've heard it as well, and it's one of my favorites--of the decision theorist who had to make some large decision for himself about, say, taking a job offer or moving, maybe an investment decision, and was asked by a colleague, 'Which of your models are you going to use for this decision?' And the reply was, 'Come on! This is serious! I'm not going to use those models. That's for publication.'
But I wonder--I share your attitudes, so I'm going to push back against them a little bit. You could denigrate and dismiss most of what is called Behavioral Economics--that is, the work of Kahneman, Tversky, Thaler, and others who have done a lot of experiments giving people choices about whether to take this bet or that bet, or assessing whether Linda is more likely to be a bank teller or a feminist bank teller. I don't even remember the details, because I just find it mildly bizarre. But those examples are very unrealistic--they're not the kind of decision-making that people do in real life. There's very little riding on them. If something were riding on it, presumably people would spend more time, ask for advice, or figure it out more carefully.
Do you dismiss that literature on those grounds, John, or do you think there is something there that we need to pay attention to?
John Kay: No, we don't dismiss it. I mean, some of these experiments are plain silly. There was one that rather amused us, which we discuss in the book: a Kahneman experiment asking people whether more words in English have 'K' as the third letter or as the first letter--
Russ Roberts: That's huge--
John Kay: And they describe it as the availability heuristic, because most people say there are more words with 'K' as the first letter. And they said, 'They're wrong. This shows how stupid people are.' Actually, they didn't have access to the computers that we now have, and it was Kahneman and Tversky who were wrong: there are more English words that have 'K' as the first letter.
And then, you ask: Why on earth did you want to know the answer to this question in the first place?
The right answer to that kind of question is, 'I don't know the answer, but if for some reason the answer matters, I will try to find out.'
And, that's the answer to a whole variety of these problems. These problems are devised to show that people given rather silly questions make silly mistakes.
I don't know what we learn from that. That's not to disparage the idea that we can learn from observing the choices people make. But we shouldn't assume that they ought to make them in accordance with some kind of model that we prescribe.
And it's interesting that Behavioral Economics began in the 1950s and 1960s with Maurice Allais and Daniel Ellsberg, and it began as a critique of the economist's concept of rationality. In the 1970s, with Kahneman, Tversky, and the Behavioral Economists who followed them, it got turned into a critique not of the model but of the people, for not conforming to the model. That's true of quite a lot of economics: if the world isn't like the model, it's the world's fault and not the model's.
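Kay's word-count claim is at least checkable in a way the original survey answers were not. A sketch of the check, assuming you have a machine-readable word list; the tiny sample below is invented, so its tallies illustrate the mechanics only, not the true counts for English.

```python
# Counting words with 'k' first versus third -- the kind of check Kay
# says you should run if the answer actually matters. The sample list is
# a tiny invented stand-in for a real dictionary file.

def k_position_counts(words):
    first = sum(1 for w in words if w[:1] == "k")
    third = sum(1 for w in words if w[2:3] == "k")
    return first, third

sample = ["kitchen", "keep", "kind", "ask", "like", "make", "acknowledge"]
first, third = k_position_counts(sample)
```

With a real word list (for example, a system dictionary file), the same two counts settle the question directly--which is Kay's deeper point: find out rather than guess.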
Russ Roberts: Yeah. I have found--I mean, it's not my area, so I have the disadvantage of being ignorant, but I also have the advantage of being ignorant.
And so, the idea that people would maximize expected utility is such a strange concept--as if I'd be indifferent to variance, indifferent to the implications of ruin that Taleb and others have pointed out. It's such a bizarro, narrow way of collapsing uncertainty down to one factor. It's like saying, 'Once I know the mean of a distribution, I'm good.' Which, of course, is foolish in many, many situations in real life.
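Russ's complaint--that a single expected value collapses a whole distribution into one number--can be made concrete with two invented gambles that have identical expected value but wildly different ruin risk.

```python
# Two gambles with the same expected value but very different tail risk.
# All payoffs and probabilities are invented for illustration.

def expected_value(lottery):
    """lottery: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

safe = [(1.0, 100.0)]                     # a certain 100
risky = [(0.99, 102.0), (0.01, -98.0)]    # almost always 102, rarely -98

ev_safe = expected_value(safe)            # 100.0
ev_risky = expected_value(risky)          # also 100.0

# Played 100 times, the chance of hitting the ruinous outcome at least once:
ruin_chance = 1 - 0.99 ** 100
```

An agent ranking gambles purely by expected value is indifferent between these two; anyone who cares about variance or ruin--as Russ suggests real people sensibly do--is not.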
Mervyn, do you want to say anything about rationality--about Behavioral Economics and the small worlds of these experiments, and whether we learn things from them or should ignore them? Your thoughts?
Mervyn King: I think, as John said, what was good about Behavioral Economics was that it asked the question: How do people behave in practice? But what we need to do far more, I think, is ask this outside the artificial setting of experiments, which are themselves of rather little interest, and look more widely.
And, as John said, the problem with it is that anything that deviates from expected utility maximization is deemed to be a bias. We've now identified hundreds of biases, apparently. And you have to ask the question: if the human race is subject to hundreds of biases in its behavior, how come we're the dominant species on earth? We're obviously doing something right.
And, I think that we therefore need to go back and ask deeper questions about: what is the sensible way to react to a situation of great uncertainty?
The reason, I think, why expected utility maximization became the norm was that it appeared to be a rather natural extension of behavior under certainty. Maximizing utility seemed sensible: people had preference orderings. None of this seemed silly. And it produced very interesting results in analyzing a world of certainty.
But it was when the extension was made to the world of uncertainty that I think we got into serious trouble. Because, that is a non-trivial departure from the world of certainty. And people, instead of really thinking deeply about the fact that what we call 'radical uncertainty' really changes the nature of the problem you're facing, there was this great wish to say, 'But, all these wonderful tools that we've developed for analyzing a world of certainty, by very clever relabeling of commodities or constraints, we can apply the same methods to a world of uncertainty.'
And, that's why we look back to Frank Knight and John Maynard Keynes who, in their different areas--one, entrepreneurship; the other, the behavior of the economy as a whole--recognized that you couldn't really understand what was going on without invoking the idea of radical uncertainty: things whose uncertainty you couldn't quantify. It was the whole basis of innovation and entrepreneurship that there are people doing things that no one imagined could be done before. And Keynes could see this. He got very frustrated by the fact that economists in the 1930s wanted to interpret the macroeconomic problem in terms of a framework which was very much the same framework as a world of certainty. And he said, 'No, no, no. It's not a difficult concept, this uncertainty; but it's vital, because it totally changes the way you think about what drives spending and output in the economy.'
Russ Roberts: Well, again, it starts with an idea that is quite deep and that is at the root of how economists do think about uncertainty, which is that: Expectations matter. Right?
Mervyn King: Yep.
Russ Roberts: That's a wonderful insight that is not obvious, the idea that a stock price today could embody an expectation of what might happen tomorrow is profound, and true. The fact that, if I plant a fruit tree, it would be foolish--but then people make this mistake--if I plant a fruit tree, people assume it has no value until it produces fruit. That is not true. It has potential value because people anticipate correctly that fruit will come. There's uncertainty around it, but it's not worth zero.
And so, that natural thought that economists have, which is, as I suggest, quite deep, got distorted by--this is your insight, which I love--'Oh, okay. So, it's like everything else. I just have to call it T1 instead of T0.' So T0 is today, when I make my decisions; T1 is tomorrow; and I just make choices across both of them.
You know, I was trained at the University of Chicago. We're very big on thinking about expectations--we were, at least--in a quote, "rational" way. I think your insight is that that's--I will call it a fool's game. That's how I would describe it. You want to react to any of that, either of you?
John Kay: Yeah. So, there isn't a market in airline tickets from London to New York in 2030. It's pretty obvious why there isn't. So, the contingent [inaudible 00:40:07]--and, of course, if there was, then airlines would be able to plan their aircraft fuel purchases. Oil companies would be able to make explorations and so on, on the basis of that. But, that market doesn't exist, couldn't possibly exist.
And the attempt to construct a general equilibrium model that assumes that all these markets that could conceivably be imagined actually exist is quite an amusing thought-exercise, but that's as far as it goes.
Russ Roberts: And, human beings, without economists, created insurance. And, hedging is something that every farmer has some understanding of. Risk diversification, portfolio diversification--it's kind of an obvious thing that you don't have to have a Ph.D. to think about. Mervyn, do you want to comment on that at all, on the previous point?
Mervyn King: I think the great danger was the idea that, if only markets could be complete in the sense in which John defined it--namely, that for every conceivable event in the future, you can define the outcome: you know, 'I want to take a holiday in Barbados in 15 years' time, provided the temperature is above a certain level'--then you could buy and sell these particular contingent outcomes in the future. That idea was very seductive because it enabled people to investigate.
And Arrow-Debreu, who pioneered this way of thinking, used it to say, 'Well, let's ask the question under what circumstances would a market economy be efficient?' And, the answer was: only a limited set of circumstances.
That doesn't mean to say governments can do better, because they're also subject to the same informational constraints. But nevertheless, the idea that, this view that, 'If only, if only the world had a complete set of markets, we'd be fine, because the market economy could cope with it and lead to an efficient outcome,' led people, I think, down the path of believing that if only we could create more and more financial instruments, then we would eventually be completing the set of Arrow-Debreu markets.
Russ Roberts: Oh, yeah.
Mervyn King: And, this is an astonishing delusion, really, because so many things that could happen in the future, there are clearly no markets for them at all and never will be.
And, just narrowing down, increasing more and more the range of derivative financial instruments does not actually complete the set of markets that you would need in order to avoid the problems that Keynes was talking about, in terms of the macroeconomy, or Frank Knight was talking about in terms of entrepreneurship and innovation.
Russ Roberts: Yeah. I actually think that the Arrow-Debreu effort was insidious. I think it led most economists to forget that little caveat you added about government being subject to the same problems. Instead, most people forgot about that and assumed that all the "imperfections"--and I put that in air quotes for those not watching the video--all those imperfections--asymmetric information, imperfect information, imperfect competition--meant that all the claims of markets working well were wrong, because the assumptions of the model--the Arrow-Debreu Model and those of others--didn't hold.
And this unleashed an enormous industry of showing how imperfect markets are, forgetting, as you point out in the book, that those perfect markets exist in the theory, not in the real world. Of course, real-world markets don't conform to any of those perfect assumptions.
Russ Roberts: I want to turn to that next. I want to turn to modeling and methodology and Milton Friedman's as-if hypothesis. But before I do, I want to read a quote from the book that serves as a nice capstone to our conversation about rationality. It's a long-ish quote, but it's so good. Here we go.
If we do not act in accordance with axiomatic rationality and maximise our subjective expected utility, it is not because we are stupid, but because we are smart. And it is because we are smart that humans have become the dominant species on Earth. Our intelligence is designed for large worlds, not small. Human intelligence is effective at understanding complex problems within an imperfectly defined context, and at finding courses of action which are good enough to get us through the remains of the day and the rest of our lives. The idea that our intelligence is defective because we're inferior to computers in solving certain kinds of routine mathematical puzzles fails to recognize that few real problems have the character of mathematical puzzles. The assertion that our cognition is defective by virtue of systematic 'biases' or 'natural stupidity' is implausible in the light of the evolutionary origins of that cognitive ability. [quotation marks in original--Econlib Ed.]
Now, before we move on, someone could respond to that quote by saying, 'Well, yes. We evolved on the savanna when the biggest threat was a predator of large length and fierce claw. But now is different and we're stuck with these lousy biases.'
Do you want to say anything about that, John, before we move on?
John Kay: I'll just add a footnote to what you were saying there, Russ, which is: both Mervyn and I, I think, discovered when we went outside universities and talked to real people who ran real businesses or operated in the financial world--we discovered they weren't optimizing. They weren't maximizing anything. And, that wasn't because they were stupid and irrational, although to be fair, quite a lot of them were stupid and irrational--
Russ Roberts: Yeah. We all are. Yeah.
John Kay: but because they couldn't possibly have the information needed to make these kind of optimization calculations. And that meant they were doing what you described in the quote: They were finding strategies that were good enough.
And, we were very struck when we looked at the work people like Gary Klein have done on practical decision-making--what firefighters and paramedics, people who are very good at these jobs, actually do. And he discovered, they didn't compare options in the way a rational economist would tell them to. They looked for something that was good enough. And if it didn't work, they went for something else. They made comparisons sequentially, not simultaneously.
Russ Roberts: And, you reference the work of Gerd Gigerenzer, who has been a guest on the program and who has taken a similar approach: the power of rules-of-thumb, decision-making he calls fast and frugal--because in crisis environments, you often have to act quickly. You don't have time to sit around and figure out the separating hyperplane that leads you to the optimum.
But, I think you make a point I hadn't thought of before, which is that in our real lives, we never have all the data. Ever. Sometimes, we have none of the data, because it's about a decision in the future that cannot be quantified. Yet, in these laboratory settings or in models, it is presumed, to make them tractable, that everything is known.
And I think it's a great point that, when a human being who has spent their entire life having to deal with radical uncertainty is suddenly told, 'Okay. Now, here's a case where you do know everything. Just pretend that these are the goods--and you know all of them, each one of them--and you know what every price is, and there's no variation,' etc., etc., etc., it should not be surprising that we don't act in what the researcher thinks of as a wise way in that setting. Mervyn, you want to comment on any of that?
Mervyn King: I think that the argument that computers are superior to humans in making decisions, and that therefore we should give as many decisions as possible to computers, is a dangerous one, because it applies only to these very small worlds--the one that you just described, where we do know all the relevant information. And there is no doubt that computers are very good at rapid solutions to mathematical problems which are well-defined.
But, the great secret of the human race is not that we are as fast as computers. We're obviously not. What we are very good at is making leaps of imagination, dealing with problems that are ill-defined, which don't necessarily have a unique answer or indeed any answer, but about which we have to make a decision to cope with the challenge facing us today. And it's that ability to adapt that I think is the great success.
So, if you look at the responses in different countries to the COVID-19 disease, you see a variety of different responses; but what you see is people struggling to ask, 'What is going on? What is this virus? When we first saw it, did it seem to be rather like influenza, or was it something very different?' These are the questions that you have to sort of puzzle through. And you don't have the information to make a clear determination. You have to make judgments.
The whole nature of decision-making in a situation like this, as in most decisions in life that really matter, is a question of struggling to cope with a problem that's ill-defined, where you don't really know what information you've got and what you're missing. You have to ask yourself, 'Well, what should I try to find out about now in order to come to a better decision?' It's this adaptability, this coping with an ill-defined problem, that makes human beings so successful.
Russ Roberts: You mention in the book that you almost called your book Through a Glass Darkly. I think you made the right choice, calling it Radical Uncertainty. But, in that discussion, I wanted--
John Kay: It was the publisher.
Russ Roberts: Yeah. I know. They're picky. But in that discussion, you talk about the power of thinking of decisions or reality, I think, as opaque.
The analogy I use is the familiar one of the person who has had too much to drink coming home from the party, having lost their keys, looking under the lamp post or the street light. Asked, 'Well, is this where you lost them?' they answer, 'No, but the light's better here.' And, I think my version of the opaque insight is that most of life is in the shadows. There's some light. It's not dark, but it's not bright, and we're doing the best we can.
It's also interesting to think about the reality that we, poor, flawed human beings, create computers. And we also have to create a way to cope with the fact that life is a mystery and not a puzzle. You make the contrast--you alluded to it earlier--the difference between puzzles and mysteries. Puzzles typically have solutions. Mysteries don't. Computers are really good at puzzles. Some human beings are. But most of life is in the mystery area. And, I would just add--and, part of what I'm thinking about in the book I'm struggling with is: Coping with that mystery is a huge part of the human condition.
So, we'd love to not have mysteries. We have a natural inclination to think of mysteries as puzzles. Unfortunately, they're not. We have to live with that radical uncertainty. We have to live with regret. We have to live with the fear of regret. And, I think it's a part of the human enterprise--it's a hard part about being human. John, you want to comment on that?
John Kay: Yeah. I'm not sure you're right when you say that we'd like to be rid of uncertainty. And we use a striking example in the book--
Russ Roberts: You do--
John Kay: of Bill Murray's Groundhog Day, where he lives the same day over and over again. And, it's not pleasant. It's hell, actually.
Russ Roberts: It's a good point.
John Kay: And, uncertainty--it's not just that Knight was right when he said that uncertainty was what made entrepreneurship and profit possible--although it is. But uncertainty is what makes life interesting: going to new places, meeting new people, eating new food. All the kinds of things that we enjoy. That's why we conclude the book by saying that what we need to do is manage risk, and then embrace uncertainty in the context of having managed risk. And, that's why it's so important to restore that distinction between risk and uncertainty, which Keynes and Knight made and economists managed to elide in the century that followed.
Russ Roberts: Mervyn?
Mervyn King: I think this is absolutely right. I've always been struck that, when I speak to graduating classes of students, quite often, they'll say, 'I face a very uncertain future.' I ask them what they mean by that. Sometimes, they can identify things that John has just called risk--that they might not get a job or they might get a job and then lose it fairly quickly. So, those are things you need to manage.
But, then I say to them, 'You know, if there was no uncertainty, I could give you a list today of the six people who might be your life partners and the probability that each of them would be your life partner. I could give you a list of four particular careers that you may follow and the probability that you will follow each of them. A list of the six towns you might live in, and the probability attached to each of those.' And, they would be unbelievably depressed if they knew that's all the future had to offer.
The fact that there is uncertainty at the age of 21, 22 is what makes their life exciting and worth living. They don't know what the future holds. Some of that is risk, which they need to manage, but much of it is the uncertainty which John and I argue that we should embrace and enjoy.
Russ Roberts: Yeah--
John Kay: And with that information, they wouldn't be very likely to find a life partner or a job. One of the lines in the book I'm quite proud of, actually, is the one that says, 'Rational economic man dies out because nobody would want to mate with him.'
Russ Roberts: Heh, heh, heh, heh, heh, yeh. I enjoyed that as well. And I will just not comment on it, but I will just say I enjoyed it.
You know, this pandemic, people are struggling with the emotional challenge of the fact that we don't know when it's going to end. I think that's very hard. Back in April, we thought, 'Well, by June, we'll know when we can travel.' Or, 'By August, we'll certainly have the knowledge about testing.' Or whatever it is. And it's just, the future keeps moving ahead. The 'When we'll know' keeps moving ahead at a constant rate, it seems.
And I'm struck by the fact that there are many tragic consequences, of course, of this virus. The deaths are the obvious one. The not-so-obvious ones are the losses of dignity and pride and the other consequences of the lockdown and the response to it, which have often been forgotten because they haven't been quantified.
But, having said all that, this is nothing like what it would have been like to live through the Blitz of 1940 in Britain, when not only did you not know when it was going to end, you didn't know which side was going to win. And every night, there were bombs falling on your house. People were dying in greater numbers, ultimately in the tens of millions around the world. So, we're very blessed, I think, if this is the worst thing in our life right now, as hard as it is for some. It's not particularly hard for me, but the emotional part of it--'Oh, I just wish it were over'--is something I hear increasingly from friends. And this is after three months. This is nothing compared to what human beings have had to cope with. So when young people say, 'I don't know whether I'm going to have a good job'--boy, we live in good times if that's one of our bigger problems.
Russ Roberts: I want to turn to--and you can comment on that if you want, before I get to the next topic--but the next topic I want to turn to as we try to finish up is the way we think about human behavior generally in economics.
And, Milton Friedman was one of my teachers--I have enormous respect for him. And his paper, I think it was 1953, on methodology ["The Methodology of Positive Economics"--Econlib Ed.]--I don't even remember the formal title--makes the as-if argument: Of course, people don't literally maximize. They don't literally sit down and make calculations, either about how to improve their satisfaction in the present or how to deal with the uncertainty of the future. But they act as if they do. And our models, when we assume that they do, yield usable and testable predictions. And in many ways that is the methodological essence of the Chicago School.
I have since become greatly unenamored of that argument, but I know you are as well. I have a slightly different argument, but I want you to each make your own. Mervyn, what's wrong with that as-if idea?
Mervyn King: It's because--the problem is that any model, making a prediction, contains many assumptions. It's not just that people behave as if they were maximizing something. There is a structure of the model, too, which you are allegedly testing.
And, so, you can never easily identify which bit of the prediction is false, why the prediction is falsified. Is it because the as-if assumption is wrong or is it because part of the model itself is wrong?
But, the fact is that whenever economics has tried to make forecasts about the future, the results have been pretty woeful. I mean, we're not good at making macroeconomic forecasts except when nothing really happens.
When anything significant happens, we don't predict it.
And, of course, what economists tend to do is to say, 'But, of course: We shouldn't be able to predict it.' Because it is part of an unexplained shock.
So, you end up with this other odd position in which macroeconomic models comprise two parts. One is a model which you believe in utterly and totally without any doubt. And the other is a series of stochastic shocks about which you know absolutely nothing.
Well, this is not very helpful when it comes to making predictions.
So, when Milton Friedman says, you know, 'The test of all this is the ability to make predictions,' the fact is that, at least in terms of making predictions about the future of the economy as a whole, we've done a very bad job of it.
I don't think that's very surprising because we live in a world of radical uncertainty where the laws that are governing behavior are not stationary. They are changing all the time. So we shouldn't expect to be able to predict.
But, in the Friedman view of the world, we should be able to predict. And I think the test of it is that we don't.
Russ Roberts: Yeah, I think the--some of that I think is pretending that we're doing something like physics. Just as physics has equations and data, we have equations and data. One of my favorite parts of your book is Max Planck saying that he was going to study economics, but it was too hard, so he turned to physics. A lot of people take that as a joke about, 'Oh, well, you know, in economics, the electrons have a will of their own.'
I think that's the wrong way to think about it. I think the real issue is the complexity issue. There are too many aspects of the reality that can't be measured, can't be observed, unlike, say, planetary motion. So, I think the impulse to quantify that has not been as successful.
But I think the model that Friedman had in mind was, 'Well, sure: once we thought the orbits of the planets were circles. Then, we got more data and we realized they were ellipses.' But, that sequence of improvement of both prediction and actual knowledge doesn't have an analog in economics, I think, for the reason you're talking about, Mervyn. John, you want to comment?
John Kay: I think that's right, Russ. And, actually, that Friedman article--I know one philosopher of science who said it is the only article on methodology that most economists have ever read.
Russ Roberts: For sure.
John Kay: That was true of me for a long time.
But then I read a bit more about the philosophy of science, and discovered that that article was written in a rather brief phase of the philosophy of science in which that kind of Popperian argument was popular.
And, philosophers of science quickly discovered that it was erroneous, for the reasons that Mervyn has described. To make a prediction about the real world, we have to make a whole lot of what I call auxiliary hypotheses--other assumptions about the world. And then you don't know whether the failure of your model is due to the failure of the underlying hypothesis or the failure of the auxiliary hypotheses. So, you don't have the opportunity to make the clear predictions that Friedman was asserting.
And, that takes us back, in a way, to the observation you made about through a glass darkly: That, the thing is we know something about the world, but never quite enough. And that means we can't make do with macroeconomic models in which either we know everything or we know nothing. That's the structure that much of our macroeconomics has.
Russ Roberts: The only part I thought you missed in the book, or that I wish you'd talked about--it's a subtle point; I don't mean to suggest by that that you're not subtle, gentlemen. It's a very subtle book. There's a lot of very good, nuanced thinking. But this is just my pet dislike of the as-if hypothesis, as I turn on the hand that fed me and bite it. Paul Pfleiderer, an EconTalk guest, has talked about this. His critique is that after you keep using this as-if argument long enough, you forget it's an 'as if': you start to believe that people actually do maximize, that they do follow the model, because the model has been 'confirmed,' say, and you start to take it as the reality.
And I think that's the same Value at Risk fallacy: because the firm hasn't collapsed, you start to assume you've measured risk accurately. And here, in the case of human behavior, you start to believe that people actually do make these kinds of calculations, because you've taught it to your students for 30 years and you've read it over and over again in papers.
And then you crazily--this is unimaginable, but we do it--you then leap to policy conclusions about social welfare, because you claim to have understood how people get pleasure and satisfaction from aspects of life just because your model seems to work in some settings. That's nuts.
Mervyn King: It's very common in economic research. And I think you're absolutely right. Even just within one course, the first class tends to say, 'Well, if we adopt these axioms, then we can assume that people will maximize expected utility.' And you rush through that. There's rarely a really deep analysis of whether this actually makes sense as a way of describing how people behave and how the world works.
But, once you've got through the axioms, then you can get onto the really interesting stuff, which is maximizing expected utility in different settings. There are puzzles. Some of the puzzles are more difficult than others. And if you can solve a really difficult puzzle, you get a high grade in the class. And if you do a really, really difficult puzzle, you get a Nobel Prize. And, this is the way in which the subject sort of advances.
And you're quite right that the difficulty with it is that people do then say, 'Well, we've done a lot of work on these models. We've now investigated this empirically. We can add some numbers to it.' And you come out with what looks like a cost-benefit analysis where you make quantitative statements; and that leads directly to policy recommendations.
And, in some cases, that may be helpful. In other cases, it makes absolutely no sense because people haven't gone back and said, 'What's really going on here? What is it, the problem that people are struggling with? What's the problem that governments are struggling with?'
Russ Roberts: John?
John Kay: And, to go back to the insights from learning about business in a practical way: that came from realizing that business people weren't maximizing profits. So, being a trained economist, I then thought that they must be maximizing something else. Then, suddenly, it occurred to me: Perhaps they're not maximizing anything. Perhaps they're just making the best they can of a radically uncertain world. And that's how I've felt ever since. It was as though the scales had dropped from my eyes when I got out of that narrow way of thinking that economics unfortunately too often teaches you.
Russ Roberts: Yeah. I like to say--I think I stole this from somebody--Life is not an optimization problem. And, we'd like to think it is sometimes. Certainly when we're doing our exams and our blackboard work. But, it's not the way we behave. And we often take account of what other people think of us. And we try to work that into the model because, oh, that's part of the thing that gives us satisfaction.
And Gary Becker did that with great artistry. And most of the rest of us can't. So, I think it's often best thought of as a flawed way of looking at human behavior, and not one that we should either assume or embrace.
Russ Roberts: Let's close with--I want to close with a quote from Robert Skidelsky. I had a chance to interview him in the aftermath of the Keynes-Hayek Rap Videos that I did with John Papola. Skidelsky says something that--he said this without humor. He said this with a straight face. He said, "Economics is not a progressive science." And he didn't mean progressive in the American sense of left of center. He meant progressive in the sense of making progress, meaning: we're not really like physicists, narrowing what we don't know down to a few smaller things to deal with. We kind of start from scratch to some extent. We don't have enough data. We only have had one Great Depression. It's a small sample. Now we have the Great Recession we can add to it. If we get 50 or 60 more--I doubt it, but maybe--we'll have a better way of making macroeconomic forecasts.
But, you know, one way to think about our conversation is: here are three older economists who've lost faith. And I would add Skidelsky to that--he doesn't pretend to be an economist; he's an historian. A lot of our listeners are young Ph.D. students. They're listening and thinking, 'Oh, these dinosaurs, these old folks who think economics doesn't work so well or is imperfect.'
So, let's close--and I have my own defense to that; I'm not going to voice it. But, I'm curious how you would defend yourself against that charge that you're just taking cheap shots at an incredibly powerful social science that has helped us a great deal. You can throw in what other economists have said in reaction to your book if you feel like. John, why don't you go first?
John Kay: Well, I find it really helpful to go back to Charles Sanders Peirce, the American pragmatist philosopher of a century ago, who distinguished deductive, inductive, and abductive reasoning. Deductive is: you set up some premises and you draw logical conclusions from them. Physics is mostly about deduction. Inductive is what, say, successful medics do: you've got a lot of observations and data, you formulate hypotheses from them, and you test them in subsequent cases. And abductive reasoning--'inference to the best explanation' is what it's called--is perhaps best illustrated by what lawyers do: you're faced with a unique set of facts and you try to find an explanation that makes sense of them.
Now, economics in my view needs all of these. That's actually what was in Planck's mind when he said economics was too difficult--and that was certainly Keynes's comment on what Planck said. Economics has been much too focused on the deductive side of things. There's quite a lot of good work, especially in macroeconomics, going on in the inductive sphere. There's very little in the abductive, apart from a few economic historians. We need to do a lot more thinking in that way, and we need to understand that to solve economic problems and give advice on issues, we need all these styles of reasoning.
Russ Roberts: Mervyn?
Mervyn King: So, I'd echo that and say that economics is actually a very powerful social science. It's contributed a lot: we understand a lot more about some things today than we did 50 years ago. But, if you want to think of economics as having something to say about the world as opposed to being just a closed, self-referential discipline, then it's vital that it has the abductive part of the reasoning, too; because if you simply use the deductive approach thrown in with a bit of inductive, you will not actually be able to understand the problems that policy-makers or businesses or, indeed, households confront when making their own economic decisions.
And, so, people are confronted with decisions in a world of radical uncertainty, in which they don't have all the information they would want to have--but they have some information. The deductive part can be very mathematical. We're very clear in the book that this is not an attack on the mathematics used in economics: some of the models that are most useful in giving insights are some of the most difficult mathematically. The inductive part is a way of using empirical data to try to learn something about relationships. But, on their own, neither deductive nor inductive reasoning will help us in giving advice to policy-makers or businesses or families confronted with decisions about how much to save for retirement. There has to be an element, a big element, of the abductive. And recognizing that, in the world as opposed to in academia, decisions do involve a significant degree of abductive reasoning is fundamental to what our argument is all about.
It's not about throwing the baby out with the bathwater. It's saying that the house is much bigger than just the bathroom, and we need a much broader set of ways to think about decision-making under uncertainty than you would get from a standard economics course.
Russ Roberts: I'll close with your quote. You say, "People who know only economics do not know much about economics." I would just add that people who don't know any economics often do not know much about economics either. So, it is a challenge, the real world.
My guests today have been John Kay and Mervyn King. Their book is Radical Uncertainty. Gentlemen, thanks for being part of EconTalk.
John Kay: Pleasure, Russ.
Mervyn King: Thank you.