Jim Manzi on Knowledge, Policy, and Uncontrolled
Jun 18 2012

Jim Manzi, author of Uncontrolled, talks with EconTalk host Russ Roberts about the reliability of science and the ideas in his book. Manzi argues that unlike the physical sciences, which can produce useful results via controlled experiments, social science typically involves complex systems where system-wide experiments are rare and statistical tools are limited in their ability to isolate causal relations. Because of the complexity of social environments, even narrow experiments are unlikely to have the wide application that can be found in the laws uncovered by experiments in the physical sciences. Manzi advocates a trial-and-error approach that uses randomized field trials to verify the usefulness of many policy proposals. And he argues for humility and lowered expectations when it comes to understanding causal effects in social settings related to public policy.

RELATED EPISODE
Russ Roberts on Wealth, Growth, and Economics as a Science
EconTalk host Russ Roberts talks with reporter Robert Pollie about the basics of wealth and growth. What happens when the stock market goes down, or the price of housing falls? When wealth goes down, where does the wealth go? How do...
RELATED EPISODE
Noah Smith on Whether Economics is a Science
Noah Smith of Stony Brook University and writer at Bloomberg View talks with EconTalk host Russ Roberts about whether economics is a science in some sense of that word. How reliable are experiments in economics? What about the statistical analysis...
Explore the audio transcript and vigorous conversations in the form of our comments section below.

READER COMMENTS

BZ
Jun 18 2012 at 3:12pm

Great stuff.

Every Monday for the last four years I’ve come into work, put on my earphones, and become both amused and a little more enlightened for an hour or so. Great for me. For my employer, not so much. 🙂

Mike M
Jun 18 2012 at 9:32pm

I really enjoyed this podcast. I listened to it while running. I ordered the book as soon as I finished my run.

I look forward to reading about more of the examples mentioned during the podcast – especially ones like the jam-choice experiment and some of its flaws.

Ralph Buchanan
Jun 18 2012 at 10:33pm

This is a list of things that popped into my mind while listening to the podcast:

Standard deviations and outliers

“exceptions that prove the rule”

“Good enough for government work”

Map of the world, actual size (Steven Wright): there is no perfect model

Bruce Bueno de Mesquita’s utility-maximization statistical models, in which subjective preferences are ranked numerically and the model generates outcomes.

Nuclear scientists disagreed on the outcome of uncontained nuclear fission until the atomic bomb was tested – they weren’t sure whether it would ignite the entire atmosphere or not. Gravity works on subatomic particles also.

Risky Business: “Sometimes you just gotta say “What the …”

As always, great podcast, Russ. Thanks.

Donald Plugge
Jun 19 2012 at 8:20am

Another great interview, thanks. Interestingly, another listener was reminded of a previous interview you did with Bruce Bueno de Mesquita. That interview popped into my mind as I was listening to Jim. It would appear the two are on opposite sides of the spectrum, with Bruce claiming that complex modeling can achieve non-intuitive solutions. Any thoughts?

dgp

David B. Collum
Jun 19 2012 at 9:39am

All sorts of great stuff. I thought you guys put the hammer on a deserving group of over-confident folks. I usually jot down notes, but decided to just finish by noting that I got into a debate with one of your Hoover Institution colleagues about the role of WWII in ending the Great Depression. He asserted that it did usher in the recovery, as indicated by GDP numbers. I asserted that the GDP numbers are misleading in that they were accompanied by huge debt, and that WWII represented the final Mellon-esque purge of the rot (setting up the boom to come).

Jim
Jun 19 2012 at 10:55am

I happened last night to catch a fawning PBS interview with some guy named Paul Krugman, who apparently is quite the expert on all things macroeconomic, according to the interviewer (can’t force myself to write ‘journalist,’ as that might suggest a certain modicum of objectivity that was lacking).

I mention this because of the awe-inspiring confidence the fellow has in his own abilities and knowledge, as compared to the humility required by this podcast’s subject. There’s simply no need for further discussion; his take on things is borne out by history.

Turns out this Krugman fellow believes now’s the time to be spending as much as is humanly possible (er, as much as governmentally possible) because money is so cheap right now (and if you’re going to spend, shouldn’t you do it when money is cheap?). Also, all you naysayers are wrong about his arrogance; he only cops that attitude because his wife told him that everyone on the “other side” is a bully and he needs to stand up for himself, else they’ll continue to ride roughshod over him and his opinions will never be heard.

I missed any mention of Enron (Enron? What’s that?), but there were single-line quotes from three GMU types (one whose surname matches our host’s); oddly enough, that was the extent of any mention of any alternative view. I think there must not be one …

jeremiah
Jun 19 2012 at 11:07am

Russ,
You have this ongoing theme of misrepresenting what government finances actually look like.

With the government as issuer of a fiat currency, with powers to tax and untax, etc., the issue is not MONEY but goods and services versus idle capacity.

Money/currency is merely a means of exchange. Therefore, as long as the goods and services produced were what people wanted and there was idle capacity, the stimulus money was NOT wasted, and idle capacity was gainfully put to work by government spending.

Russ Roberts
Jun 19 2012 at 12:44pm

David B. Collum,

The bigger problem with using GDP during the war is that the increase in government spending on tanks, planes, etc., is a big part, or maybe all and then some, of the increase. The prices paid for these items are not market prices. If the government decided to pay 10x the amount per plane, GDP would go up even more. That’s why Higgs looks at the value of private consumption. It falls during the war, by his calculation…

John Berg
Jun 19 2012 at 2:30pm

Examples of “natural” experiments which could be studied:

Learned from TV that by the 2nd grade one could not tell which children had attended Head Start.

The “Arab Spring” was proof in several Muslim countries of the inherent cry for freedom and democracy.

The US National elections of 2008 and 2010 were proof of the inherent cry for freedom and democracy.

The US National election of 2012 will prove that those citizens and non-citizens that vote will want capitalism.

John Berg

Mike G
Jun 20 2012 at 4:51pm

This discussion covered a variety of fascinating issues, but I especially liked the philosophical discussion about truth and the purpose of science at the beginning of the podcast.

The idea of truth seems to imply that for each distinct, properly defined phenomenon that composes reality, there exists one correct explanation for its existence and one correct description of its qualities.

If this is the definition of truth, then it follows that in generalizing we inevitably obscure truth, because we lump together distinct phenomena under the same heading. But as humans we have to generalize in order to make sense of our world, for it would be too complex for us to comprehend otherwise.

Thus, another way of stating the purpose of science could be the constructing of explanations and descriptions that come as close as possible to the true ones without obscuring the significance of the phenomenon under study and the way in which it relates to the rest of reality.

I would be interested to hear other people’s thoughts about the purpose of science.

Sam Field
Jun 21 2012 at 11:19pm

It seems like science is a lot of fun when its activities are oriented towards the production of evidence that would change the mind of an honest skeptic. Making sense of the world is also nice, but it is also nice to occasionally be surprised by the world. Most of the social science research that I am involved in hardly seems to surprise anyone, largely because the evidence produced is hardly persuasive.

I love the emphasis on replication. It seems to me that replication should be the core activity of any science aimed at changing the minds of honest skeptics.

R Chopara
Jun 23 2012 at 4:23am

Excellent podcast, as always.

But after the assertion that the impact of the stimulus can never be estimated because of all the other things that happen in a complex modern economy, I was somewhat bemused by the further assertion that welfare spending has improved as a result of experiments at the level of states. How did the states manage to isolate the factors that worked? It feels very contradictory to me.

Jeff Burrow
Jun 25 2012 at 5:54pm

I am halfway through my PhD. And this podcast, along with the ones featuring Ed Yong, Ed Leamer, and even Jonah Lehrer, makes me want to become a qualitative researcher. Thanks a lot :)

Steve T
Jul 7 2012 at 10:34pm

Every EconTalk is both entertaining and either confirms my world view (biases) or teaches me something (or both). Thanks, Russ!

jeremiah: Your view is one way of looking at government spending — as I understand it, it is basically what Keynes and his followers argue. There are other reasonable theories, including those espoused by Russ and most of his guests (and, no doubt, most of his listeners). The counterargument, essentially, is that money/currency is NOT “merely a means of exchange” but also a store of value; when the amount of money grows faster than the real amount of goods and services, prices TEND to rise.

Mike G: For the most part, I agree exactly with what you wrote. My only major addendum would be that while humans must (and, I would argue, should) “generalize in order to make sense of our world,” it behooves us to recognize that doing so carries dangers and we should therefore be humble about the predictions we make about policy prescriptions arising from the outcomes of our generalizations (one of Mr Manzi’s main points, too, it seems 🙂 ).

Allan
Jul 9 2012 at 4:06pm

It’s unfortunate that the transcript is incomplete, since the guest is interesting but suffers half the time from fairly severe fishbowl gargling. I think I understand his meaning from decoding 80% of his words, but I’m not sure the details of my comprehension would survive replication.





AUDIO TRANSCRIPT

 

0:36 Intro. [Recording date: June 13, 2012.] Russ: Your book is a really extraordinary overview of the history of science, the current state of economics, of social science generally--what we know and what we don't know--and what we might do about all that. And I want to start with a thumbnail sketch of the history of science, which you devote a couple of chapters to. How did our knowledge begin to grow so dramatically? Why don't you start with Francis Bacon, as you do in the book. Guest: Sure. I think he is, obviously, a seminal figure in the development of modern science, who is often referred to today but not read as much as he ought to be. And one of the things I discovered as I really went back to some of his books, which I hadn't seen since high school, is how incredibly prophetic he was and how much he laid the philosophical foundations for modern empirical science. And I think the most foundational transformation that occurred because of his thinking was what he called the transition from 'where from' to 'where by'. And what he meant by that was abandoning the Aristotelian attempt to understand things like final or ultimate causes and instead simply thinking of the world as particles plus the rules of their interaction. He was very clear that the purpose of this and the purpose of science was not to attain philosophical truth, but ultimately, in his words or a translation of his words, to increase the limits of the power and greatness of man. In other words, for Francis Bacon, the ultimate purpose of science is not truth. It's improved engineering. Russ: It's: Is it useful? Guest: Exactly. And of course it's hard going back to him--the language is archaic and translation is difficult--but once you do, you realize how modern he is. And he talks about the concept of experiments of fruit versus experiments of light. By experiments of fruit he means experiments oriented around solving immediate practical problems. And by experiments of light he means research and analysis--not rigorously distinguishing between experiments and observations, as we know how to do today, but research and analysis to determine general principles. And he was clear that the ultimate goal was improved engineering; and in fact it made a lot of sense to focus on experiments of light for a long period of time, and not only on experiments of fruit--roughly what we now call basic and applied research--and he emphasized the need to do basic research, and for scientists themselves to be motivated by a belief that they were seeking truth. Whereas he saw the ultimate goal of the overall scientific endeavor as being practical progress. Russ: And, as you mention: We've been really good at that, at the practical side--a few obvious examples: MRI, airplanes, your cellphone. We've done an extraordinary job of mastering the physical world to produce useful tools. But we'd like more than that. And there, our understanding is more limited. Guest: You mean applied to social and economic policy. Russ: And the part that Bacon was going to put to the side, about the truth. I think a lot of people have a romance about science, and social science, that it seeks truth and it finds truth, and it illuminates truth; but experiments, and scientific knowledge generally--one of the themes I got from your book--are much better at highlighting what works most of the time. They're not so good at discovering what works all the time. Which is what truth would require. Guest: That's right. 
And there are several points embedded in your question. I think you have to get that foundation right to really think rigorously about social sciences. What he emphasizes and what I think is true--if you look at the actual modern practice of science--is that science, like markets (and I argue they are analogous institutions; I'm not the first to make that argument), creates progress as an emergent phenomenon. Individual scientists typically feel like in some way they are participating in a process that is finding truth. One of the things I say in the book is that the idea that scientific findings are literally only a predictive tool and unrelated to the actual "really true" structure of the universe is called instrumentalism. And there are zero, literally zero, successful scientists of my acquaintance who are themselves instrumentalists. However, the process of science is brutal about treating theories as predictive tools that are discarded when better predictive tools come along, and about evaluating the truth--in the scientific sense of that term--of a statement as the ability to make reliable, nonobvious predictions. And therefore I think this idea, as you say, of romanticizing science is a category error. The goal of science is not actually, in the strictest sense, finding truth in the classic philosophical definition of the correspondence of a statement with reality. The purpose of science is to build reliable, nonobvious predictive rules that allow us to master the physical environment better than we could without those rules. And I think clarity on that point is crucial when we start to consider social science and economics and so on, because distinctions that can seem hypertechnical when dealing with pure physical science start to become very significant when we deal with social reality, and many of the implicit and therefore unstated rules of thumb and heuristics used by the scientific method, which are a tolerable approximation in something like classical physics, start to come unglued when those methods are applied to more complicated phenomena like social structures.
7:39 Russ: And of course, the gold standard of scientific progress, which is the replicable experiment, is much tougher in the social sciences, both in terms of the ability to replicate as well as its generality. You start off spending a reasonable amount of time on what seems like philosophical theorizing that's not really important: David Hume's problem of induction. Which seems like a nitpick in certain applications, but is kind of central when you think about the social sciences. Guest: That's right. I think that at the level of philosophy of science, Hume's problem of induction is essentially that if I conclude that a relationship is causal by observing that x is always followed by y, I cannot know for a fact that that relationship will hold in the future. So, to take a seemingly nitpicking or kind of crazy example: the fact that every time I've let go of a coin it's fallen doesn't mean that I know that if I let go of the quarter I'm now holding in my hand, it will fall. It might just sit still in the air. And Hume in fact made fun of himself--he jumped ahead of the reader, knowing people were going to make fun of this--but actually I think it's important. And of course it's crucially important at a philosophical level, even for physical science. But it becomes extremely important and practical when we deal with social sciences, because the complexity of the phenomena under study makes it much less practically certain that when we observe a relationship and induce that it is a cause-and-effect relationship, we can reliably generalize that to future instances and take action based upon it. Russ: So, let's talk about an example you mention in the book, which I think about all the time, and write about way too much; and I think I've spoken about it a number of times on this program: the stimulus package of 2009. Which turned out to be about $820 billion after the fact. And there are a large number of economists who "know" that it created a certain large number--millions--of jobs. And I guess the relevant question would be: How do you know? And your answer-- Guest: My answer is you don't know. The only people I'm extremely skeptical of are people who insist they know the answer to that question. You know, one of the reasons I started writing this book is I had started a software company at the very end of the 1990s, and anyone who has ever done a startup knows you go down into a very deep tunnel when you do that, and you are focused only on your business. And I sold a portion of the company and kind of reemerged into the light just before the massive financial crisis occurred in 2008. I hadn't watched TV in a long time, and I remember seeing, for the first time, a baritone-voiced, very serious-looking economist saying: We know what caused this crisis, and by the way, here's the proposal that will have a positive effect. And of course, you know, ten minutes later you could see another very serious-sounding, baritone-voiced economist saying exactly the opposite. And watching this, I said: You know, I've just spent ten years trying to figure out how many Snickers bars are going to go on a shelf in a convenience store and what the effect of adding more or fewer Snickers bars would be. And that's really difficult--difficult because human beings are complicated. 
And it's extremely difficult to make predictions that are nonobvious, reliable, and useful about the effect of our interventions. And I start from a position of extreme skepticism that you can really know something like that. I really spent several years diving into it. And as I think you've mentioned in at least one Wall Street Journal column that I've read--even at the time, and I started writing about it in early 2009, while the debate was happening--you could see Paul Krugman and Joseph Stiglitz, people with Nobel Prizes in economics, arguing that we need a stimulus and in fact it ought to be much bigger than $820 billion; and at the exact same time you could see James Buchanan, Edward Prescott, Vernon Smith, Gary Becker, and other Nobel Prize winners in economics saying this is a really bad idea, this is not a good use of money at all. And what I wrote in early 2009 is: you know, I have an opinion on it--I don't pretend otherwise--but I don't believe any of the folks making these confident assertions really know what the effect will be. And the only prediction I'll make is this: I'll predict that famous economist X will have said: unemployment will be, say, 10% without the bill and 8% with the bill. When it gets to be early 2011, if unemployment is 10%, here's what that professor is going to say: You know, conditions were worse than we thought they were; so without the bill unemployment would have been 12%, not 10%. Now unemployment is 10%. See, I was right all along; it lowered it by 2 points. And that's exactly what happened, of course. That's exactly what the economists said. And it has nothing to do with Democrats versus Republicans, by the way. If John McCain had been President, it would have been Republican advisors, too. And what I said is you cannot know the counterfactual reliably.
13:39 Russ: Now, the other side of that, the counterpoint--I'm on your side, but the counterpoint would be: when you see two groups of economists claiming either that the stimulus needs to be twice as big or that it needs to be zero, one possibility is that one of the groups is right and the other is just wrong. It is strange to me that people argue with such vehemence about the certainty of their position, given that there are intelligent people on the other side--credible people, people with similar credentials. And the reason I say that is because if it were the case that one side was obviously right and it was a scientific question, as opposed to an ideological and philosophical question, well then they could just show them the evidence. But of course, there is no evidence that's decisive. There's only cherry-picking. So, it's easy for each side to cherry-pick. And I think the fundamental question on this topic is: Do we make progress? Are we getting closer? And my view is: I don't see any sign we're making progress. To me, it takes an immense amount of hubris to be confident about your position in this story. One more point, because I want you to introduce this other concept, which I thought was so useful: you introduce the phrase high causal density. And there are few things with higher causal density--meaning lots of simultaneous changes that affect behavior and actions and outcomes--than the economy as a whole; and to then pretend that you can isolate the effect of one of those changes and ignore all the others--and you do this many times in the book, but I did it yesterday on my blog, inspired by you--which is to list the things that have happened since 2009, when the stimulus was passed. Just to start with--it's easy to make a fairly long list--you have an enormous change in monetary policy, you have enormous changes in housing prices, you have huge policy interventions in health care and in financial-sector regulation, you have animal spirits and consumer confidence bouncing all around, doing all kinds of unexpected and unknown things, you have international changes--oh, and you also have a recovery that starts at one point, in the output market at least. And so you are trying to measure the impact of one of those changes--the stimulus spending--on, say, employment. And you can't quantify five of the things I just mentioned. A couple of them you can. But you are going to pretend that you've therefore isolated the impact of the one you really care about? To me, it's so intellectually dishonest. And I'm an economist. Guest: Exactly, right, right. So, you know, I guess there are obviously multiple great parts to this question. So when you say two groups are arguing and there's the potential that one's right and one's wrong, and so on--one is right and one is wrong about what the effect of the stimulus was, at least the direction. I just don't know which one. It's like asking me the question: Is the number of stars in our galaxy odd or even? Well, there's a real answer to that question. If you have a bunch of people yelling odd and a bunch of people yelling even, one of those two groups is right. But unless one of them has access to knowledge that I don't think we have as a species right now, we don't know. And that doesn't mean it is a theoretically unanswerable question--how many stars are in the galaxy--but we don't have the knowledge right now, and we don't have the capacity to get that knowledge right now. 
And that's the way I feel about that debate. Russ: And if you did--for example, if we had a debate about a baseball team on defense--how many players are in the field, is it odd or even?--we know how to settle it. I say it's 9, and 9 is odd. You say: no, no, no, it's an even number. And we count them. And you would have to go: Oh, I guess I was wrong. There's nothing analogous to that, nothing remotely analogous to that, in the case of economic policy intervention. Guest: I think that's certainly right for macroeconomics. This idea of causal density is an important one. So, in the book I try to break apart this idea of general phenomenological complexity into components, one of which I call causal density. If you think about using an experiment--the analog to your 'let's just count the number of players on the field' metaphor--think about the classic story (probably that's not the way it happened) of Galileo dropping unequally weighted cannon balls off the Leaning Tower of Pisa to determine whether or not Aristotle was correct--the theory that heavier objects should fall to the earth faster than lighter objects. And of course famously he falsified the theory, because he dropped these two unequally weighted balls and they hit the ground at the same time. Imagine, when he did that--if you think about even classical mechanics--various bodies are interacting with these balls as they drop. There are the cannon balls themselves and there's the earth, and we can model the rate of descent, to an excellent engineering approximation, as being only the gravitational interaction of the balls and the earth. But of course there are gravitational interactions between those balls and the sun and the moon and each of the planets. Russ: Wind resistance that isn't the same for each ball, because atmospheres vary. Guest: Exactly. Imagine if, instead of gravity attenuating with distance by 1/r^2, gravity didn't attenuate with distance. Then, as these balls moved in relation to each of those particles all over the universe, they would start slinging around in crazy directions and not move very steadily and rapidly towards the earth. What you'd see, instead of balls dropping, is two balls moving around all over the place, and it would be extremely difficult to measure the effects of any of these interactions such that you could falsify or confirm the theory. And that's really, I think, what goes on in the economy. And so whenever I hear not just economists but social scientists describe some very plausible idea about a causal relationship--when we execute some program or take some action; look, people care a lot about financial incentives, so when we change the following price, this behavior will change--almost always the causal effect they are describing is sensible, and almost certainly there's at least one human being for whom exactly the story they told is a cause that will create some action like that. The problem is, there are millions and millions of other causes acting on the people who are being subjected to the program. So, while it sounds so compelling in a narrative sense, and while trying to use regression analysis to show that I've really held everything else constant and measured the isolated effect of this cause sounds so compelling to us, in fact it is always more complicated than our non-experimental methods can handle. And the argument in the book is that we kid ourselves when we think we have isolated cause and effect.
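A concrete way to see this point about causal density: below is a minimal simulation sketch (an illustration with made-up numbers--a true policy effect of +1.0 against unobserved causes with a combined standard deviation of 10--not an example from the book or the conversation). It contrasts a single system-wide before/after comparison with a randomized field trial of the kind Manzi advocates.

# Causal-density sketch: assumed effect sizes, purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
TRUE_EFFECT = 1.0   # assumed effect of the policy on the outcome
NOISE_SD = 10.0     # combined pull of the many unobserved causes

# 1) System-wide "experiment": one economy, outcome before vs. after the policy.
#    The unobserved causes shift between the two periods as well.
def before_after_estimate():
    before = rng.normal(scale=NOISE_SD)
    after = TRUE_EFFECT + rng.normal(scale=NOISE_SD)
    return after - before

draws = np.array([before_after_estimate() for _ in range(10_000)])
print(f"before/after: mean {draws.mean():+.2f}, sd {draws.std():.2f}")

# 2) Randomized field trial: many comparable units, half treated at random,
#    each unit buffeted by its own unobserved causes.
n = 10_000
treated = rng.integers(0, 2, size=n).astype(bool)
outcome = TRUE_EFFECT * treated + rng.normal(scale=NOISE_SD, size=n)
rct = outcome[treated].mean() - outcome[~treated].mean()
print(f"randomized trial estimate: {rct:+.2f}")

The before/after estimator is unbiased on average, but any single economy yields a number dominated by roughly +/-14 of noise, so the +1.0 effect is unknowable from one system-wide episode; the randomized comparison recovers approximately +1.0 because the other causes average out across treated and untreated units.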
21:50 Russ: So, let me play someone on the other side--which I would say is most economists, who gladly use regression analysis without shame. They would say: Look, of course we can't control everything. Just like in the case of the falling cannon balls, I'm not going to take account of the distance of the earth from the sun that day and the fact that one of the balls is closer to the sun than the other. You're right, but those effects are small. And what I've done in my regression analysis is isolate the significant, important factors. You are nitpicking; technically you are right--wind resistance matters--but when it comes to dropping the cannon balls, they land at the same time. I don't need to take into account wind resistance or distance from the sun. And similarly, when I'm doing regression analysis and isolating the impact--regression analysis being a multivariate technique for separating multiple causal vectors, where I can isolate the impact of one while holding the others constant--when I'm doing that, it's close enough. Close enough for practical purposes. What's your response to that? Guest: Well, first of all, I've built thousands of regression models in my life, and they are not useless; they are useful for certain purposes. What I argue is that they are not capable of determining reliable, useful, and nonobvious effects of interventions. I'll give you several pieces of evidence, which are extended at much greater length in the book, for why. First of all, when you take celebrated regression analyses--as I try to do in the book--you can show over and over again that the same problems recur and are significant; and in semi-technical terms, omitted variable bias is not a kind of nitpick. Omitted variable bias, when it comes to human systems, is massive. And you can show over and over again that this is true. Russ: Explain what omitted variable bias is. What do you mean by that? Guest: What I mean is this: suppose I have a regression analysis that tries to predict a variable--say, I want to predict what unemployment will be as a function of, hypothetically, the size of the population, the economic growth rate in a prior period, the education level of the population, and so on--and I say: I am trying to use this to measure the effect of changing education levels on unemployment. All the variables other than education level are meant to be controls, to hold constant these other effects we described. If I neglected to include a variable in my model for, let's say, the amount of immigration into the society, and that variable turns out to be causally important, then by failing to include it I distort or create instability in all the parameter estimates, including the estimate of the variable I care about. And therefore, if I've left out any significant variables from my equation, the estimate of the impact of the variable I care about is called into question. And my argument--and in the book I take one model that I built myself and take it apart in detail to show how this is true--is that all models like this are subject to omitted variable bias, because we can't get data on all the potential causes. The complexity of these phenomena outstrips our ability to build terms, interaction terms, and so on; such models are always subject to significant omitted variable bias, such that we cannot rely on their results. 
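To make omitted variable bias concrete, here is a minimal simulation sketch (made-up coefficients, not an example from the book): the outcome y truly depends on x with coefficient 2.0 and on a correlated variable z with coefficient 3.0; dropping z from the regression distorts the estimated effect of x.

# Omitted-variable-bias sketch: illustrative coefficients only.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)             # causally important variable we will omit
x = 0.8 * z + rng.normal(size=n)   # x is correlated with z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)   # true effect of x on y is 2.0

def coef_on_x(columns):
    # Ordinary least squares with an intercept; return the coefficient on x.
    X = np.column_stack([np.ones(n)] + columns)
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print(f"x coefficient, z included: {coef_on_x([x, z]):.2f}")  # ~2.0
print(f"x coefficient, z omitted:  {coef_on_x([x]):.2f}")     # ~3.5, badly biased

The short regression silently attributes part of z's effect to x (the classic bias term--z's coefficient times cov(x, z)/var(x), here about +1.5), and nothing in the regression output warns you; with real social data the omitted z's are unknown and unmeasured, which is the point Manzi is making.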
Russ: And the response, I think, of the typical applied economist is: Well, yeah, but I'm doing the best I can. That's the best I can do. Guest: Yeah. Sometimes the best you can do isn't good enough. In other words, if I'm being practical--I want advice as a person making decisions about programs, whether to do the program or not, and so on--the relevant standard is not: Is this the best analysis compared to other analyses? It's: Does this analysis add value versus other analyses, or versus expert judgment? And part of my argument is that in many instances the analysis doesn't clear that hurdle of practicality. It doesn't actually create useful information I don't already have in the absence of the analysis.
26:46 Russ: Yeah, I agree with you. And the reason I agree with you--again, this is really embarrassing as a professional economist--is that I've come to believe that there may be no examples, there may be zero cases, where a sophisticated multivariate econometric analysis--which is what we are talking about, multivariate regression--in a high-causal-density case, where important policy issues are at stake, has led to a consensus. Where somebody says: Well, I guess I was wrong. Where somebody on the other side of the issue says: Your analysis--you've got a significant coefficient there--I'm wrong. No. They always say: You left this out, you left that out. And they're right, of course. And then they can redo the analysis and show that in fact--and so what that means is that the tools, instead of leading to certainty and improved knowledge about the usefulness of policy interventions, are merely window dressing for the pre-existing ideological biases of the researchers. Which is a very harsh statement, and I'm not going to try to defend it here. But I want you to use the example you use in the book, which I thought was fantastic, of the physicist and the historian and later the economist advising the President, which to me highlights the gradations of science--I almost said degradations--that are present, and how much we know and what we don't know. Guest: Well, it's very difficult to prove a negative. So, 'is there no example of that ever happening' is a very tough statement to defend. However, one of the things I point out in the book is: Greg Mankiw, a very eminent economist who has written one of the most widely used--if not the most widely used--economics textbooks in America, has a chapter in his textbook where he asks: Is economics a science or not? And he lists propositions to which economists agree--I think it was 14 when I used it in my book; I think it's expanded to 20 or so. And my reaction when I go through that list is kind of: Where's the beef? You take these 14 assertions, and 7 of them, first of all, are completely non-falsifiable value judgments. I'm quoting from memory, so this won't be exactly right: A country should not impose tariffs. Once you see the word "should" or "would," you know you are dealing with a normative statement, not really a predictive statement, so how can you possibly test that? Even those which are theoretically falsifiable aren't in practice, because the statement will be so general, like: Stimulative spending in an economy in the following conditions will create some gain in employment. I suspect, but don't know, that many of the people opposed to the stimulus program would agree that: look, I can't measure it, but I think as a practical matter it is very likely that U.S. GDP was at least $1 higher in one quarter because we spent $820 billion. But the point is I need a parameter estimate that allows me to say: Is this a good use of investment resources or not? And at that point, you don't have alignment. And, by the way, all these 14 statements are generally agreed to by between 75% and 90% of economists. Russ: Not 100%. Guest: Yeah. You know, I bet I could find a physicist somewhere--a tenured professor of physics somewhere on earth--who would disagree that Newton's Laws of Motion provide an excellent engineering approximation for the motion of bodies at non-relativistic speeds and above quantum size. But I could probably only find one. Russ: You can't find 20%. Guest: The other 99.99% agree. 
So, I think there's good evidence that--first you have to clear the hurdle of: there is good agreement among economists. Then you have to clear the hurdle of: it's agreement and they are right. You don't even clear the hurdle of agreement, I think. And I don't think that's because they are dumb or not working hard. I think that's because they are studying very complicated phenomena. And the example that you cite is the little kind of parable that I open that section of the book with, which basically says: Look, imagine you are the President of the United States and you are considering an Iranian nuclear weapons program and what to do about it. And into the room walks your science advisor, and she says: Look, if the Iranians take the following amount of fissile material and combine it at this size and using this method, it will create an explosion big enough to blow up a city. And next into the room comes an historian. And the historian says: Well, you know, as for any attempt to subvert the Iranian nuclear weapons program, my reading of the history of Iran is that the people want this enough that they will continue to replace, one way or another, the government until this happens. So it really is not a good idea to try to stop it. And what I say is: no, even if this happens to be President Carter, who trained as a nuclear engineer--even if you know nuclear physics--for the President to sit there and begin debating the empirically validated laws of physics with his physics advisor is kind of foolish. On the other hand, not debating the historian--not bringing in different historians with different points of view, not talking to people who have lived in Iran, not using personal introspection about human motivations--would be equally foolish. And so really you ought to treat the prediction made by the physicist very differently from the one made by the historian. Both are very valuable. I would never advise taking action without listening to both of them. People make lots of use of historians' expertise, and non-historians make lots of useful predictions about situations like this. And then imagine, third, your economic advisor walks into the room. And she says: Well, you know, the CIA has a program to counterfeit currency in Iran. And this amount of currency will create this amount of inflation and unemployment. The question I pose is: Should you as the President treat the economist's prediction more like the historian's prediction or more like the physicist's prediction? And what I say is: a lot more like the historian's prediction.
33:04 Russ: Yeah. It's interesting, because I've come to believe--one of the things I really like about your book is that it parallels my thinking, so I have to be careful, being subject to my own confirmation bias. But in recent months--I used to say economics isn't like physics, it's more like biology. But I'm starting to really think it's more like history, in the cases that we care about and the applications that we care about. And I want to say, for all those listening, that of course I think understanding economics--understanding tradeoffs and understanding emergent order, the parts of economics that I think are glorious and important--is very useful in helping you organize your thinking and understand the world. What I think it's not good at is predicting the impact of government spending on unemployment, say. And it's ironic that you pick for your example one of the few areas where I think we do know a little bit. Not a lot, but a little bit. So, when I talk about empirical work that has changed people's minds in the profession: Friedman and Schwartz's Monetary History of the United States, which came out in 1963, actually did have an impact, I think, on how economists of differing ideologies and methodological views came to see the impact of the money supply on inflation. Now, I don't think we can quantify it. We can't quantify the impact on unemployment. But I'd be confident that if the U.S. government increased the money supply in Iran through counterfeit money that was accepted, it would raise the price level--and maybe continuously, if we continued to do it. But that's about it for me. There aren't any other economy-wide experiments I'm comfortable with. And I say that--the little certainty that I do have--because I think there have been a lot of natural experiments where not much else changed except the money supply. And we know a little bit about that. Those are not multivariate regression analyses; they are not complicated econometrics. [More to come, 35:12]