Joshua Angrist on Econometrics and Causation
Dec 22 2014

Joshua Angrist of the Massachusetts Institute of Technology talks to EconTalk host Russ Roberts about the craft of econometrics--how to use economic thinking and statistical methods to make sense of data and uncover causation. Angrist argues that improvements in research design along with various econometric techniques have improved the credibility of measurement in a complex world. Roberts pushes back and the conversation concludes with a discussion of how to assess the reliability of findings in controversial public policy areas.

Ed Leamer on the State of Econometrics
Ed Leamer of UCLA talks with EconTalk host Russ Roberts about the state of econometrics. He discusses his 1983 article, "Let's Take the 'Con' Out of Econometrics" and the recent interest in natural experiments as a way to improve empirical...
Adam Ozimek on the Power of Econometrics and Data
Adam Ozimek of Moody's Analytics and blogger at Forbes talks with EconTalk host Russ Roberts about why economists change their minds or don't. Ozimek argues that economists make erratic but steady progress using econometrics and other forms of evidence to...


Dec 22 2014 at 9:48am

Great discussion; I like to hear a guy really defend his position, and he did.


But the evidence is relevant and worth attending to, and it tends to fail to find large dis-employment effects, and anybody who discusses the minimum wage has to contend with that.

I wish he had not used the word "large." I think there are non-monetary benefits to employment in the above-board economy, so I am concerned even about small dis-employment effects of minimum wages.

Russ Roberts
Dec 22 2014 at 11:15am


I too worry about non-monetary effects. But it is not obvious to me that even the monetary effects are small. Part of the challenge of any study of the minimum wage is that it affects so few workers that it is difficult to measure its impact accurately. A new study using an innovative research design of the kind Angrist mentions finds large negative effects on employment and earnings of low-wage workers from recent increases in the minimum wage (though "large" may always be in the eye of the beholder). It will be interesting to see if this study dents the conclusion that Angrist points to for the current state of the minimum wage literature.

Dec 22 2014 at 8:00am

First of all, I LOVE EconTalk. It easily has the most interesting podcasts on the Internet. Great job, Russ.

Now that I’ve gotten that out of the way . . . I listen to this podcast as a complete layman–I have a Bachelor’s degree in Business, that’s it–and I think: Jonathan Gruber isn’t the only MIT professor who has a healthy dose of arrogance–how many times was Prof. Angrist going to say he doesn’t care about what this person thinks, or that person thinks? And when prodded he says he doesn’t want to pick out any particular individual.

Granted, I can understand, at least to a point, that many times the best thing you can do in life is ignore your critics. They often have their own biases in mind, whether it concerns economics, entertainment, or relationships.

But the concern I have is that Prof. Angrist seems too biased in the opposite direction, in that he wanted his work to be important to those people who make policy decisions. And we know who makes those decisions: government.

This, to me, is a lethal mix. We’ve seen it in climate change where scientists continue to this day to tweak their work supporting climate change so as to keep themselves relevant to policy makers. Why? Because policy makers don’t want to hear anything about keeping their hands OFF the economy–they want to hear everything about putting their hands ALL OVER the economy. Thus, climate scientists give the politicians what they want to hear because, otherwise, these scientists have to go back to obscurity.

Having mentioned Jonathan Gruber above, I think it’s obvious he biased his own studies so as to make them palatable to policy makers at the State and Federal level. He made a lot of money showing how government involvement in healthcare could create a better system. Whereas, no health economist out there gets any grants from any government to show how keeping politicians out of healthcare will make medicine cheaper, better, and more efficient.

I think we should worry about any economist or scientist who gets gratification from having powerbrokers like their work. Prof. Angrist presented himself as this type.

Dec 22 2014 at 1:17pm

Thanks Russ for another great discussion!

This piece reminded me a bit of your interview with Nassim Taleb, where you were discussing whether it is better to have a poor map or no map at all when you don't know where you are going.

Dr. Angrist seemed to essentially be arguing that some map is better than no map at all, and after all, there are a lot of robust findings out there to use.

I share your skepticism, Russ, especially given that even in medicine a large majority of peer-reviewed studies fail to replicate.

Dec 22 2014 at 6:13pm

What a great talk. In the beginning I was with Russ, because that's the way I lean; in the middle I was with Joshua Angrist, as Russ was coming across as overly skeptical; then toward the end I was back with Russ. Both made important and valid points.

Policymakers, scholars, and the general public tend to overplay scientific findings, and awareness of the limits of any given study should be emphasized by all. The default position on MOST questions seems like it should be "We don't yet know definitively; this study found this, and we'll see what further work shows." I'm not sure suspending belief is something we're very good at, though; worse than clinging to an unsubstantiated truth is clinging to an "I don't really know." Joshua Angrist seems to be saying that yes, science is messy and dirty, but we are better off with it, and that is also true. We want to keep doing theoretical and empirical work, and hopefully, with time and after many minds have looked at a problem, we get closer to the truth, even if that only means discovering what the wrong answers are.

Is science a worthwhile exercise in areas like macroeconomics that are extremely complex? I think the answer is yes: striving for truth in a systematic way, evaluated over the long term, is better than throwing your hands up in the air. Is it possible that in the case of macroeconomics economists are engaged in a futile task? Yes; that would be extreme skepticism, but it could be true. Maybe there's something about complex systems we don't really understand, and if we had a deeper understanding of the degree of complexity we would see the impossibility of pinpointing the effect a particular variable has on an incredibly complex process. Time will tell whether the frontier of macroeconomics can be expanded, or whether its expansion will end in the realization that certain questions are not answerable, logically or practically.

Dec 23 2014 at 9:49am

Definitely a top ten for 2014!

Russ, your skepticism is often warranted, but the larger issue that Angrist raises only at the end is where does skepticism cross the line into nihilism? And where does confidence cross the line into arrogance?

He seemed even-handed, strongly supporting the work of Friedman and Schwartz, and strongly arguing that insurance does not significantly affect health outcomes.

It would have been nice for him to address directly some of the meta-issues of problem-design. But what really struck me was when he said that what he really cared about was the respect of his peers. All I could think at that time was, “Be loved, and be lovely.”

Again, this was clearly a keeper! Great job, Russ!

Dec 23 2014 at 1:05pm

I don't think Angrist gave much evidence for the usefulness of econometrics, usefulness defined as providing non-obvious, replicable findings about the world. Not defined, as he'd apparently prefer, as having your work influence peers and policy makers.

We trust the scientific method using controlled experiments in some fields because we can see the useful outcomes. Drugs sometimes get developed that improve health. Faster ways to transmit information are discovered.

Social science is obviously a different animal. Probably we shouldn't associate it with actual science. Even (especially?) economists don't generally trust findings from the "experiments." Can anyone name an econometric "experiment" that produced findings that were a) widely accepted as true by economists, and b) robustly replicated?

If a medical researcher were challenged to show that his craft's methods tended to produce non-obvious, useful results, he'd quickly list a bunch of relatively new treatments that no one would reasonably contend weren't generally helpful. Angrist's response to the challenge was essentially "my findings are important if they further the discussion." Pretty weak from someone claiming to do science.

Looks like, for now, we’re stuck with Hayek’s curious task of economics.

Dec 23 2014 at 1:08pm

Interesting discussion. I was confused by the reference to Paul Krugman. It started with:

“If you mention health insurance, for example, Americans are not very healthy compared to other OECD (…) countries. … The evidence overwhelmingly suggests that it has nothing to do with health insurance. And we see that in two randomized trials, extremely well done, very convincing.”

There was a discussion of who was convinced by the evidence and the guest went on to say “I don’t really care if I convince, say, Paul Krugman” to which Russ replied, “No, I understand. There are people with an axe to grind, there are partisans…”

Being from the UK, I’m not familiar enough with the US health debate to know what evidence is being referred to, why there is such apparent animosity to PK and how PK’s position on health insurance would differ.

Can anyone enlighten me?

Dan Hanson
Dec 23 2014 at 1:22pm

I like that you touched on the complexity argument during the interview, but I would sure like to hear a more detailed discussion of those issues.

For example, if the economy is a complex adaptive system, that suggests that outcomes are very sensitive to small changes in initial conditions. What does it say about empirical research in economics when the system you are measuring is complex and chaotic?

Let's say a study looks at the response to an economic shock in 1990, and uses the evidence of how the economy responded as data for modeling how the economy would respond to such a shock today. That seems valid if you treat an economy as a machine with inputs and a predictable transfer function (the model) which creates outputs.

But that’s not how complex systems behave. They aren’t deterministic like that. If you could get into a time machine and re-run the shock, you could get a completely different response. And the response would be different every time you ran the test.

Making things worse is the adaptive nature of the system. People learn, and the higher-order system evolves. It learned from the last shock, and that information will be incorporated into its response to the next one, in unforeseen ways. A government stimulus might work the first time if people mistake the stimulus as a permanent change to their well-being. But after they see the effects wear off and the ensuing debt hangover, the next stimulus might provoke a completely different behavior.

I think this is the real crux of the problem with econometrics – you’re measuring a moving target, and one which mutates and behaves in seemingly random and chaotic ways.

Then there are the rapid technological and social changes that cause economic responses to change over time. Today’s economy has high-speed trading and social media: How does that change the way we respond to economic or regulatory change? Does that completely invalidate pre-internet economic data? Or is it irrelevant? How would we know? What about the millions of other ways in which the economy has changed? Which of those invalidate previous empirical data?

Dan Hanson
Dec 23 2014 at 1:36pm

Regarding minimum wages – if regulation is a lagging phenomenon that simply brings the legal minimum wage up to the current market-dictated minimum wage, you would expect to see no negative effect on current employment. However, you also wouldn’t see a change to the effective minimum wage.

Do the studies that show no negative effects on employment also show that the real average wage at the bottom increased significantly? Or even better, that the cohort that actually saw wage increases did not also see negative employment effects?

Assuming that the law just brought the legal minimum wage up to the real minimum wage, you could claim no harm, no foul. It’s harmless legislation. Except that what’s really happened is that you’ve created a huge problem if productivity falls due to changing economic conditions such as a recession. Now employers are forbidden from adjusting wages downwards, and the only other option would be layoffs.

Have there been studies which look at the change in employment during recessions among workers at the minimum wage as compared to workers who were slightly above it and therefore their employers had some room to lower their wages instead of firing them? I know wage stickiness is a problem even without the minimum wage, but a price floor for labor should make that problem worse.

Greg McIsaac
Dec 23 2014 at 4:49pm

Dr. Angrist considers Friedman and Schwartz “…the largest macroeconomic victory for economic policy relevant to empirical work.”

Russ started to make a point about lower than expected rates of inflation in recent times, but Dr. Angrist interrupted Russ by saying “I don’t react to short term current events.” I wonder what Russ was going to say about the people “on his side” who expected higher rates of inflation than occurred in recent years.

Maybe the more pertinent questions on this topic are: Are the relatively low rates of inflation in the US in recent years a failure of Friedman and Schwartz? Or did those who expected higher rates of inflation incorrectly interpret the work of Friedman and Schwartz? Or has there not been enough time for inflation to become manifest, and was Russ therefore premature in admitting he was wrong to have expected high rates of inflation in recent years?

I realize this topic was only raised as an example to get at the question of why people aren't convinced of one thing or another. But to answer that question, I think you have to get into the details of the specific topic with the specific person who isn't convinced. Sure, people engage in motivated reasoning, as discussed in the episode with Jonathan Haidt. It is good to be aware of the ways we engage in motivated reasoning, but I think resolving any specific question about what people find convincing (or not) involves delving into the specifics of that topic.

Perhaps a future EconTalk episode can focus on recent experience with monetary policy and inflation in the US and whether it conforms with the work of Friedman and Schwartz. I would be interested in such an episode.

I thought this episode was interesting, but I thought the questions about why people aren't convinced by Angrist's methods or results were misdirected at him. Those who are not convinced need to speak for themselves.

Russ Roberts
Dec 23 2014 at 5:03pm


When Angrist invoked Paul Krugman, I think he meant him as being representative of someone who is in the public eye and who often writes on policy issues, including health care, for a general audience. I think Angrist was saying that even though Krugman has a Nobel Prize and has a strong professional reputation, he might write things in his column for the New York Times that don’t take into account the latest findings from the frontiers of econometrics related to health care. Krugman, for example, has been very supportive of trying to increase the proportion of Americans with health insurance. Angrist was saying that Krugman’s support of increasing health insurance coverage isn’t necessarily useful for figuring out whether econometric evidence is persuasive.

Dec 23 2014 at 6:28pm

Russ, thanks very much for the personal reply. I'm still rather confused.

If I understand correctly, PK wants to increase health insurance coverage. The implication of what was said is that he wants this *because* he thinks it will improve the health of Americans (relative to other OECD countries and presumably in absolute terms too). And consequently that he is being partisan or has an axe to grind because he maintains this view despite evidence that Americans’ relative bad health has nothing to do with health insurance.

I don’t know PK or his writing well enough to know whether he has said that – whether it is an accurate assumption. He could equally want fuller insurance coverage because he thinks that is what a developed country should offer; or because he understands what living without insurance cover entails or doubtless for many other reasons.

So even if PK accepts the overwhelming evidence that Americans’ relative health has nothing to do with health insurance, he could still believe that increasing health insurance is a good and necessary goal.

I know PK is unpopular with the right of US politics (and I don’t have a dog in that race, so it matters not). Was the discussion a reflection of that?

Dec 23 2014 at 10:33pm

It would be one thing for Professor Angrist to brush aside the critique of someone like me, as I have no economic training outside of EconTalk, Cafe Hayek, MRU, etc. But in professional circles with highly respected people on the other side, "I'm right, they're wrong" doesn't sound like a convincing argument.

Dec 23 2014 at 10:45pm

If you are suspicious of empirical work, see this:

Dec 24 2014 at 10:28pm

Great episode!

Have you ever tried to bring in Christopher Sims? I have seen him give talks about how science works in economics... a favorite topic of yours. Here's one example:

In addition, now that this episode has occurred, he could articulate his dissent from Angrist.

Dec 25 2014 at 6:39pm

Paul –

I’ve listened to the conversation twice, the second time while reading the transcript. And my interpretation is that Prof. Angrist first brought up Paul Krugman as an example of someone who has a well-known position on health insurance, then started to say that he (Angrist) does not feel a need to win him over.

Russ sort of inserted the characterization of “partisan” and “axe to grind” on top of that. Prof. Angrist never suggested that Krugman’s views are due to partisanship or an axe to grind (or the “latest findings”)…just that it’s not necessary to move that metaphorical mountain in order to do good work.

Maybe the guests need to be told beforehand that any mention of Paul Krugman will be turned into a partisan jab at him, because that’s what happens whenever his name comes up on this podcast. In this case, the guest did his best to sidestep it.

Russ Roberts
Dec 25 2014 at 11:09pm


I'd like to see you document that claim about Krugman. I feel like I almost never bring his name up, and when guests do, I try to ignore it. Perhaps I misremember. He is always welcome to appear on EconTalk if he ever wants to, and I actually learn things from his books. I'm not sure exactly what Angrist meant by mentioning him. He did not respond to my mention of partisanship and axe-grinding, but maybe he just didn't feel like bothering.

Kevin Driscoll
Dec 26 2014 at 5:10am

Russ, thanks for doing this episode and putting yourself (and by extension many of us) in the line of intellectual fire. It's less comfortable but much more helpful than preaching to the choir.

I really really wanted to enjoy this episode, learn something, and feel enlightened, but I came away feeling unfulfilled. I don’t mean to disparage Dr. Angrist, his book, or his scholarly work, but in this podcast he seemed to not really be saying anything. There was basically no nitty-gritty discussion of the methods and why they’re more reliable than previous designs (then again, Russ didn’t really ask super specific methods questions, perhaps to avoid too much inside baseball). I didn’t hear about any major successes where an effect was predicted, measured, and replicated. Dr. Angrist seems totally unconcerned with those who disagree with him. I was under the impression that some authors had published a response to Angrist, but when Russ mentioned it Angrist didn’t want to get personal. The Paul Krugman thing I get, but this seemed like an actual scholarly disagreement which he refused to address.

All in all, I just can’t specify any information or arguments made past the introduction. I have no idea why someone who disagrees with Angrist should change his view. Personally, I guess I’m agnostic because I just don’t know enough about this, and I didn’t hear anything to change that. Perhaps I just don’t ‘get’ what he was trying to say.

Dec 27 2014 at 5:34am

[[I’d like to see you document that claim about Krugman. I feel like I almost never bring his name up and when guests do, I try to ignore it. Perhaps I misremember.]]

For you Russ, a Christmas gift:

According to the search function on this site, Paul Krugman’s name shows up on 49 different episodes of EconTalk (and a 50th result, which is a “Continuing Conversation”). It’s possible that there are more mentions (I’ll explain later), but for now I’ll assume that these 49 mentions represent all of them. Other than Adam Smith, I could not find another economist that is discussed more often than Paul Krugman. But that’s a different search.

Because of the way the episodes are transcribed, there are only nineteen episodes where I can easily find (in the transcription) who brought up the topic of Paul Krugman (this is why it’s possible that there are actually more older episodes that include Paul Krugman as a topic).

Of those 19, one episode seemed to include Paul Krugman’s views as an explicit topic of conversation (Don Boudreaux, 3/26/12). That one seems like it would be impossible to avoid.

Of the remaining 18, three episodes seemed to mention Krugman tangentially. For example, two of the episodes mentioned that Krugman (and Greg Mankiw) wrote a textbook, but they did not discuss any aspect of that textbook or his economics views. In that sense, Krugman was not really a topic of conversation. Just a name that turned up in a search. In all of those examples, Paul Krugman was brought up by the guest.

Of the remaining 15, the guest brought up Krugman’s views 11 times. Russ brought up Krugman unprompted in the other 4.

Of these same 15 episodes, Russ steered the conversation away from Krugman 4 times. He encouraged immediate further discussion about Krugman 2 times. 7 times Russ returned to Paul Krugman later in the episode (including this Joshua Angrist episode). The guest was the one to steer conversation away from Krugman 2 times (in both of those cases, Russ was the one to bring up Krugman).


That’s the data I’ve assembled. There are times when Russ moves the conversation away from specifically referencing Paul Krugman. But I would not say that he almost never brings his name up or ignores it when guests do.

I feel like this exercise has made me understand Joshua Angrist’s perspective a little better.

Dec 27 2014 at 4:10pm

But in professional circles with highly respected people on the other side, "I'm right, they're wrong" doesn't sound like a convincing argument.

Agreed. Worse, it ruins the spirit of the podcast. The episodes I tend to get the most out of are the ones where Russ and his guest civilly debate opposing viewpoints. I often realize that the differences between seemingly polarized views are actually smaller and more nuanced than they first appear, and it’s through open-minded discussion that we start to understand and weigh the spectrum of costs and unintended consequences that our minds are blind to when we form our initial views.

Russ, given Angrist's aversion to any sort of dialectic, I think you should try this again with another guest who can put forth a compelling argument in favor of models/econometrics. Taleb paints such a convincing picture, and his writing is so damn addictive, that I am afraid we may start to throw the baby out with the bathwater. As we're able to collect and process increasing amounts of data, we should really seek to understand the extent to which we should rely on them.

Dec 28 2014 at 10:47am

This was a helpful addition to EconTalk. For me EconTalk is at its best when my bias is represented by Mr Roberts and gets effectively challenged by a guest. Thanks.


“I feel like this exercise has made me understand Joshua Angrist’s perspective a little better.”

Wanted Jerm to know that I didn’t need to undertake a vast study of EconTalk comments to figure out that the above is the best sentence ever printed here. I literally snorted when I got to it.

Dec 28 2014 at 9:13pm

Isn't it intellectually dishonest to stress a 'lack' of 'proof' that minimum wage laws reduce available jobs when we never have a chance to test it in the real world?

From the tone of the author I detect an extreme disdain for common sense, otherwise known as the 'Austrian School.' 🙂 No disrespect to any deserving person, but folks like this guest are the reason I now tell young people that unless they want to be a doctor, lawyer, or engineer, university 'schooling' is an extremely poor use of resources. Education, on the other hand, can be obtained without school by anyone motivated to learn just about anything, for free for the most part, IMHO.

Dec 29 2014 at 6:58pm

Russ, it’s sad that you’ve cultivated this nihilism among your comments section. You’re skeptical about empirical results…except for the one (Clemens and Wither) that confirms your prior? That’s rather inconsistent.

An example is Jeff above, who writes "Isn't it intellectually dishonest to stress a 'lack' of 'proof' that minimum wage laws reduce available jobs when we never have a chance to test it in the real world?", somehow (quite skillfully!) ignoring the entire discussion, with its many studies that fail to find an impact. And as for the "common sense" that comes from Austrian "scholars," Jeff, the point of formal schooling is to give you a guide so that you can distinguish real scholarship from, say, the guy trying to put forth Rothbard as someone with meaningful contributions.

Russ Roberts
Dec 30 2014 at 2:20am


You missed my point. I am skeptical of the Clemens and Wither study that I mentioned above. My point in mentioning it was simply that innovative research design does not always lead to clean results. The Clemens and Wither study uses the kind of natural experiment that Angrist champions and that has been used to show minimal effects of the minimum wage. If this study holds up to scrutiny, now what? Do you think the supporters of increasing the minimum wage will change their minds? Or will they decide that there just hasn't been enough or the right kind of scrutiny?

There are dozens and dozens of older studies that show large negative effects of the minimum wage. I am happy to concede that they are not decisive either. (As I said above, it’s hard to measure the impact of the minimum wage when such a small proportion of the work force is affected by an increase. See this review by John Kennan for a more general discussion of the problem.) When almost all economists accepted those results, there was a consensus that these were the best studies. Now a new consensus has emerged. I don’t think the quality of the empirical work is the cause of the new consensus.

Being overly skeptical to the point of denying all empirical evidence is indeed an unattractive trait. So is overconfidence. I prefer to do no harm, so if I have to err in one direction or the other, I prefer skepticism to overconfidence.

The other challenge we all face (and this includes the free-marketers like myself who want to believe the Clemens and Wither study as well as all the older results) is the temptation to cherry-pick the studies we want to believe and ignore the others, because of course, the others are flawed. We all have that tendency and I think it is a very healthy thing to be reminded of it often.

Earl Rodd
Dec 30 2014 at 5:14am

I can't think of an analog in economic studies to a common flaw in psychology studies, and I wonder if there is an equivalent concept. The problem I refer to in psychological studies is that the subjects are very rarely controlled for differences in personality (and the corresponding differential abilities). This is amazing, since psychologists are the ones who know that personality differs so much from person to person (although they usually over-simplify, in my opinion). Selection methods run a high risk that a set of subjects shares common personality factors. This ends up making the studies impossible to replicate.

Dec 30 2014 at 12:07pm


Do you believe that study designs can be improved?

I think the reference to the "older studies" is interesting. According to Angrist, the older studies shouldn't be evaluated on their results, but rather their methods.

A trivial example: very old research often didn't seem to understand that correlation itself isn't causation. We'd regress crime on prisons and find that states with more prisons had more crime. Of course, now even undergrads know that causation may be running the other way: states with more crime have to build more prisons. The conclusions of such a study are irrelevant to the causal question, because it's a bad study.
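That reverse-causation trap is easy to see in a simulation. Here is a minimal, hypothetical sketch (made-up numbers, not from the episode or any study): prison construction responds to crime, yet a naive regression of crime on prisons still produces a positive slope, which a careless reader could mistake for "prisons cause crime."

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 500

# Underlying crime rates, and prison capacity that RESPONDS to crime.
crime = rng.normal(100, 15, n_states)
prisons = 0.5 * crime + rng.normal(0, 5, n_states)

# Naive OLS of crime on prisons: np.polyfit returns [slope, intercept].
slope = np.polyfit(prisons, crime, 1)[0]
print(f"naive slope of crime on prisons: {slope:.2f}")  # positive
```

The slope is positive purely because causation runs from crime to prisons; the regression cannot tell the difference.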

We can know that old minimum wage studies are bad studies without disproving their results. The results themselves aren’t informative. Do you agree? You didn’t really try to flesh out what a “natural experiment” is and how it might help us address causal questions (which is Angrist’s main point). It might have been useful to compare and contrast the old methods with the new methods.

I didn’t know the Clemens and Wither study, but it doesn’t at all seem to be a smoking gun versus Angrist’s position. Rather it seems to be a nuanced and novel approach in support of Angrist’s position. With new methods, we can continue to learn and refine our knowledge.

I am not a labor economist, but I am always struck with how nuanced and qualified the new consensus is. They are always careful to add caveats like “large disemployment effects” and “at historical changes in the minimum wage.” The Clemens Wither study finds that after a 30% change in the minimum wage, the Employment to Population ratio was lowered .7 percent. Maybe that will move some priors, but is it so earth shattering?

Lastly, there is something strange about Russ’s insistence that for econometrics to be useful a new study has to rapidly change everyone’s mind. That doesn’t happen in any field. Evidence accumulates and knowledge is gained slowly. New studies change the debate a little at a time for good reasons. Asking, “what’s the one study that changed everyone’s mind?” isn’t the right question.

Russ Roberts
Dec 30 2014 at 6:41pm


I don't think the challenge of causation and correlation is an old problem. It's a constant problem. The old minimum wage studies were standard regression analyses that assumed that all the relevant variables that might affect employment, other than the minimum wage, had been controlled for. That is, of course, not always the case. And maybe never the case.

The same problem arises with new research designs that purport to be natural experiments. In Clemens and Wither for example, the idea (and it is clever) is that when the Federal minimum wage was recently increased, there were some states where state-level minimum wages were relatively high. That meant the increase in the effective minimum wage was much smaller in those states. That allows Clemens and Wither to look at the differential impact the larger increase might or might not cause on employment and income.
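The logic of that comparison can be sketched with simulated numbers (a hypothetical setup with made-up figures, not the Clemens and Wither data): "bound" states, where the federal increase raised the effective minimum, are the treatment group; "unbound" states, whose state minimums already exceeded the new federal floor, are the control.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000  # simulated workers per group per period

# Simulated average monthly incomes; the -$100 treatment effect is an
# assumption for illustration, echoing the magnitude discussed below.
bound_before   = rng.normal(1200, 50, n)
bound_after    = rng.normal(1200 - 100, 50, n)  # treated: big effective increase
unbound_before = rng.normal(1250, 50, n)
unbound_after  = rng.normal(1250, 50, n)        # control: little effective change

# Difference-in-differences:
# (treated change over time) minus (control change over time)
did = (bound_after.mean() - bound_before.mean()) - (
       unbound_after.mean() - unbound_before.mean())
print(f"difference-in-differences estimate: {did:.0f}")
```

The subtraction removes time trends common to both groups, but the estimate is only causal if bound and unbound states would otherwise have moved in parallel, which is exactly the comparability-of-states assumption at issue.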

Their findings don’t exactly refine our knowledge, as you word it. They challenge it, at least as described by Angrist. You managed to pick the result of the Clemens and Wither study that looks small. It’s not that small–it’s actually a .7 change in percentage points, which explains 14% of the decline in the national employment ratio.

But the effect on the lives of low-skill workers is actually not small at all. Clemens and Wither find:

Relative to low-skilled workers in unbound states,
targeted workers’ average [monthly] incomes fell by $100 over the first year and by an additional
$50 over the following 2 years.

In other words, the minimum wage makes low-skilled workers poorer, not richer as its defenders claim. Worse, the trajectory of earnings is affected:

We find significant declines in economic mobility, in particular for transitions into lower middle class earnings. For the full sample with average baseline wages less than $7.50, the difference-in differences estimate implies that binding minimum wage increases reduced the probability of reaching earnings above $1500 by 4.9 percentage points. This represents a 24 percent reduction relative to the control group’s medium-run probability of attaining such earnings.

That’s very depressing if it’s true. Is it? Hard to say. As someone who doesn’t like the minimum wage, I’d love to believe these results. But I know that implicit in Clemens and Wither’s analysis is the assumption that the states with high state minimum wages are otherwise the same as the states with low state minimum wages. We know that’s not literally true, because if it were, they would have the same level of state minimum wages. But are the underlying differences across states important? Hard to know.

Clemens and Wither understand this is always a problem, so they do spend some time checking at least one such variable: the effect of the housing crisis across states during the time period they study. They find that the differences in the housing crisis across states bias their results toward zero. So if anything, the effects of the minimum wage they try to measure are even larger.

Convinced? If you like the minimum wage, probably not. If you dislike it, you’re likely to point out that the effects are even bigger than the ones that are found. But of course, it is impossible to control for everything that is relevant. The claim that is usually made, then, is that “enough” of the variables are controlled for.

But the bottom line for me is that in this case (not all empirical questions but in this one), my opposition to the minimum wage comes from other types of evidence (including what might be called common sense). My opposition comes from the ease with which most employers substitute technology for low-skilled workers or lower priced workers for higher priced workers. It is those basic economic effects that make me think that raising the minimum wage will hurt the poorest and least-skilled among us.

Dec 31 2014 at 12:43pm

I thought this was a very good episode and an interesting debate. I’ve gone back and read Leamer’s paper, and despite Angrist’s claim I still come away feeling that even if these are improved methods, they still have their flaws. At one point he seemed to defend data mining, which leaves a lot of room for the kind of spurious correlations that Taleb discusses in Antifragile (page 417).

Angrist himself conceded as much in the episode: “I have mixed feelings about it, because I don’t do a lot of randomized trials and I think the idea of precommitment becomes very difficult in some of the research designs that I use where you really need to see the data before you can decide how to analyze them.”

I also came across this analysis that looks at teen jobs in California where the minimum wage is above the Federal standard.

“[In October] the number of employed teens between the ages of 16 and 19 in the U.S. suddenly increased by 266,000. Employers, predominantly in food and drinking service-related businesses, had finally responded to the increased demand they were seeing by adding U.S. teens in large numbers to their payrolls for the first time in years.”

But California did not see anywhere near a proportional gain.

Of course, the reason why California lagged in fast food jobs for teen jobs might be explained by their healthy living. We’ll need some more data to confirm it.

Dec 31 2014 at 5:14pm


You completely missed my point, and that makes me wonder whether you are also missing Angrist’s point.

You said, “I don’t think the challenge of causation and correlation is an old problem. It’s a constant problem.”

That is precisely the argument Angrist and I are making. That is what Angrist’s two books, Mostly Harmless Econometrics and Mastering ’Metrics, are all about. The difference is that now these concerns are front and center in every applied economics seminar and paper. We no longer run naive regressions and interpret the results as causal; doing so would get you laughed out of a seminar.

Here are two more examples: Regressing Crime on Cops and concluding cops don’t prevent (or even cause) crime. Regressing wages on education without considering that higher ability people get more education.

These are the regressions of past generations, and they continue to be standard in many fields. Appealing to “old studies” as if they were equally good, just different, is awfully misleading. The results of the above studies are completely immaterial. We know they are flawed. They are so fundamentally flawed as to literally be textbook examples of flawed studies.

All of the new ’metrics is about addressing these problems and trying to overcome them. I agree with you that objections can still be raised; in fact, in every seminar I sit in, they are raised (sometimes convincingly and sometimes unconvincingly). But that is the very evidence of the progress we’re talking about.

To the minimum wage debate:

You said, “They challenge it, at least as described by Angrist. You managed to pick the result of the Clemens and Wither study that looks small. It’s not that small–it’s actually a .7 change in percentage points, which explains 14% of the decline in the national employment ratio.”

I do appreciate the subtle accusation of cherry-picking, but do note that I picked the only number presented in the abstract, and I think it’s quite fair to conclude the authors consider it the study’s main finding.

Can you please back up this statement: “They challenge it, at least as described by Angrist”? What do you see as the point estimate from the leading research? And can you point to a citation?

I am not a labor economist, but it isn’t clear to me this is the case. Here’s an example of a post by Casey Mulligan with a quick and dirty back of the envelope:

“So let’s say that 2 million workers were affected by the 11 percent hike on July 24, 2009. With a labor demand elasticity of -3 (that’s what Cobb-Douglas would predict), the textbook theory says that a half a million part-time workers would lose their jobs (or fail to be hired) due to the July 24, 2009 hike (525,177 = [1- (6.55/7.25)^3]*2,000,000).”

So he put the impact of the July 2009 change alone, an 11% hike in the minimum wage, at 500,000 jobs. The Clemens and Wither paper studies the entire hike of 30% from 2007 to 2009 and finds a decrease of roughly 1.1 million jobs (0.7% of 154 million). Is the Clemens and Wither estimate so different?

If you plug $5.15 into Casey’s formula, you’ll see he’d actually have overshot to 1.3 million. Perhaps he should read the Clemens and Wither study and conclude that the minimum wage doesn’t actually have as large an impact as he thought. So is CW really so radical? What do you see as the consensus point estimate? Please show your work.
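For what it’s worth, the arithmetic in this exchange is easy to check. Here is a quick Python sketch of Mulligan’s constant-elasticity back-of-the-envelope; the elasticity of -3, the 2 million affected workers, and the 154 million employment base are all numbers taken from the comments above, not independently verified:

```python
# Constant-elasticity labor demand: jobs lost when the wage floor
# rises from w_old to w_new, for a given number of affected workers.
def jobs_lost(w_old, w_new, affected, elasticity=3):
    return (1 - (w_old / w_new) ** elasticity) * affected

# July 2009 hike alone: $6.55 -> $7.25 (about an 11% increase).
print(round(jobs_lost(6.55, 7.25, 2_000_000)))   # 525177, Mulligan's figure

# Full 2007-2009 hike: $5.15 -> $7.25.
print(round(jobs_lost(5.15, 7.25, 2_000_000)))   # roughly 1.3 million

# Clemens and Wither's implied figure: 0.7% of ~154 million employed.
print(round(0.007 * 154_000_000))                # roughly 1.1 million
```

On these numbers the textbook prediction and the Clemens and Wither estimate land in the same ballpark, which is the commenter’s point.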

Russ Roberts
Jan 2 2015 at 4:07pm


I don’t think I misunderstood your point. Maybe you misunderstood mine.

The idea that correlation is not causation is pretty old. What Angrist is claiming is that new research designs have made claims of causation more credible. I disagree. That’s all. Looking at the effect of the minimum wage on New Jersey and Pennsylvania when only one state changes its minimum wage, for example, doesn’t change the fact that there are many things that are not held constant. Are those other things important? That’s the question. A clever research design doesn’t solve that problem.

The Clemens and Wither paper finds that the recent increase in the federal minimum wage makes low-skilled workers worse off. I don’t think that’s the consensus among those who continue to support increasing the minimum wage. So to my mind, that challenges the consensus Angrist sees as arguing that large disemployment effects are hard to find. “Large” will always be in the eye of the beholder.

Jan 2 2015 at 11:21pm

“I don’t think that’s the consensus among those who continue to support increasing the minimum wage.”

It’s the second time you’ve used that phrase, and I find it troubling. We are trying to discuss the consensus estimate of a field of research, and you are conflating it with advocacy for a policy. Why not just ask Angrist about quantitative point estimates? Why not ask him what he thinks a 10% or 30% wage increase would do to unemployment? Why not try to establish quantitatively the differences between different researchers?

I think you are putting people into policy camps and ignoring that their work is separate from that. It’s quite possible two researchers completely agree on the point estimate and still disagree whether raising the minimum wage is the right policy.

What’s more, you didn’t even ask Angrist whether he thinks the minimum wage hurts poor people, or whether he advocates for the policy. I have no idea why you’d bring it up now.

Let’s try to dig deeper into point estimates and bounds of disagreement in future podcasts, rather than very vague statements.

“So to my mind, that challenges the consensus Angrist sees as arguing that large disemployment effects are hard to find. “Large” will always be in the eye of the beholder.”

This doesn’t have to be a big secret, when Angrist says “not large” just ask him to quantify that. All the studies he’s appealing to have point estimates. Ask him if it’s conditional on the economy and the size of the current minimum wage. Ask him what effect he’d expect the 2007-2009 minimum wage hike to have. Yes, maybe “large” is in the eye of the beholder, but let’s stop having to guess what guests mean when they use vague words. If one person’s “small” is another person’s “large” ask for the numbers.

I’m assuming that since you didn’t cite a point estimate for your statement again and didn’t contest the one I provided of Casey Mulligan, that you don’t really have one in mind. Fine, but all the more reason to get to the bottom of this.

Russ Roberts
Jan 4 2015 at 2:49pm


Thanks for the work. Present appreciated.

Jan 5 2015 at 2:59pm

As always, I love EconTalk, and Russ is exceptional at it.

I always want to ask everyone who is deeply skeptical about econometrics: what should we use instead? We all know empirics without theory is garbage, but how useful is theory without empirics? The fact is, every economic paper today has to have an empirical section to test the validity of its models.

As Russ said, cause and effect is an old problem, but the tools available keep advancing so we have more ways to find robustness.

Ron Crossland
Jan 8 2015 at 10:52am

I have no formal economics expertise, just two years of adult curiosity, which has led me to enjoy this podcast series and read a number of books and articles from a variety of economists.

But I do have some skill at considering large scale social dynamics and the limits of empirical testing against social phenomena. What strikes me the most, at this moment, is that far too much of macroeconomics relies upon microeconomics thinking. It seems the wrong tools are used because they are the best tools available.

Yet strong opinions, even ideologies, become rather concrete (which is not surprising), even among those who caution us to remain skeptical of the empirical work. Even in this episode, in which both interviewer and guest remarked on these limits and discussed the best empirical processes, each seems to have pretty solid “mental models” based upon dated evidence (several of the papers or bodies of work mentioned are 20 to 50 years old and continue to be contested).

The macroeconomic conditions of a post-WWII world are not the same as those of the 21st century, yet we argue as if certain fundamentals haven’t changed. I suspect policy, technology, population, and the increase in the global interaction of large scale national economies does modify even some of the fundamentals.

Thanks for a stimulating interview. Much for a novice to consider.

Jan 9 2015 at 4:21am

Minute 50 “The answer is in the standard errors”. Bzzzt.

Say I take a large-enough sample of SAT scores in Greenwich, CT and try to generalise them to Camden, NJ. No smallness of standard error on the former measurement is going to tell me about the latter because these are two different distributions.

No statistical tool can make this go away.

And there can be no mathematical guarantees that I haven’t overlooked a pocket of people with a different distribution on the parameter in question. Like the famous example (due to I don’t know whom) of US CDC survey of number of sexual partners missing a +20σ group at the epicentre of the AIDS vector.

Ditto with Nassim Taleb’s example of missing Bill Gates or Carlos Slim in a wealth survey. A random sample simply will not work to estimate the mean wealth of various populations, CLT be damned.

What’s the meaning of “statistical precision” if not “Will the findings generalise?”. The answer can only be vapour, because at that point you’re making semantic arguments in favour of results that add no value to the problem at hand. “That’s a harder question to address” = hide behind skirts.
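The commenter’s wealth-survey point is easy to demonstrate. Below is a toy simulation with entirely made-up numbers (not any real survey): a population of a million people with modest wealth plus a single extreme outlier. A random sample of 10,000 almost always misses the outlier, so no standard error computed from the sample tells you how far off the sample mean is:

```python
import random

random.seed(0)

# Toy population: a million people with modest wealth plus one
# "Bill Gates" whose wealth dwarfs everyone else's combined.
population = [50_000.0] * 1_000_000 + [100_000_000_000.0]

true_mean = sum(population) / len(population)   # about $150,000

# A random sample of 10,000 people misses the outlier roughly 99% of
# the time, so the sample mean badly underestimates the true mean,
# however tight its computed standard error looks.
sample = random.sample(population, 10_000)
sample_mean = sum(sample) / len(sample)

print(f"true mean:   ${true_mean:,.0f}")
print(f"sample mean: ${sample_mean:,.0f}")
```

The one outlier carries two-thirds of total wealth, so the sample mean is either far too low (outlier missed) or far too high (outlier included); it is almost never close.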

Jan 9 2015 at 4:47am

“Empirical” is not the same thing as “experimental”. Economists don’t seem to do ethnography, I guess because it’s not quantitative and therefore not respectable. But where else are you going to get the ideas for your rigorous theory from besides personal experience, reading about history, or talking to people with the personal experience? From journal articles?

Jan 9 2015 at 5:35am

It’s sad to hear (minute 57) misleading fakey-pop-science characterised as an innocuous “consumption good”. That sidesteps the issue of how what we read affects what we think. (Of course what we think changes what we inflict on ourselves or others.) News and media generally shape people’s views of the world. That’s uncontroversial.

Is 3 million Atlantic readers reading a false result labelled as “science” a benign or neutral outcome? Keep pushing and this kind of thing eventually becomes indefensible.

Hacky Dacky
Jan 14 2015 at 1:35am

Angrist said, “One of the most influential documents in the history of social science is Friedman and Schwartz.” Many commenters are familiar with this document, but I am not. And unfortunately, no work by Friedman and Schwartz is listed in the bibliography at the top of this page.

I would much appreciate it if you could tell us what specific work by Friedman and Schwartz Dr. Angrist was referring to.

[It’s Milton Friedman and Anna Schwartz’s A Monetary History of the United States, 1867-1960. You can find out more in Friedman’s podcast episode, Milton Friedman on Money, and in Friedman’s bio, which is listed above. –Econlib Ed.]

Podcast Episode Highlights
0:33 Intro. [Recording date: December 15, 2014.] Russ: Joshua Angrist is the author, with Steve Pischke, of the book Mastering 'Metrics: The Path from Cause to Effect and also with Pischke, the author of "The Credibility Revolution in Empirical Economics: How Better Research Design is Taking the Con Out of Econometrics," which was published in the Journal of Economic Perspectives (JEP) in the spring of 2010. That article and the book are our topic for today's conversation, and I want to thank David Beckworth and Adam Ozimek for suggesting Professor Angrist. Josh, welcome to EconTalk. Guest: Thanks, Russ. It's a pleasure to be talking to you this morning. Russ: So, the world's a complex place, and the goal of econometrics is usually to try to assess the impact of one variable on another. What are some of the techniques that the field uses to do that? Guest: Economics, or applied economics, is evolving, and there are many different ways to look at the causal relationships, the effect of something on something else. I have my favorites; and those are outlined in the book and the article in the JEP you mentioned, then in our other book, the other book I wrote with Steve, Mostly Harmless Econometrics, which is focused on graduate students. We take as an ideal the kind of randomized trial or field trial that's often used in medicine to determine cause and effect or to gauge cause and effect and that's increasingly popular in empirical work in economics and in other social sciences. An important theme of my work and the book, and the new book in particular, is that even when we can't do a real randomized trial in the sense of going out and dividing people up into comparable treatment and control groups as if by a coin toss, there are methods that we can use, econometric methods that we hope will approximate that. Russ: And how do they do that? Guest: Well, different ways. Different methods, different ways. Different sorts of assumptions. 
Everything of course is built on assumptions, and we're always alert to the foundation of our work and the need to probe it and see whether it's solid and whether it supports the conclusions that we are trying to draw. The simplest empirical strategy--we identify 5 core methods in the new book. The first one is the randomized trial. And that's both a method and an ideal or a model, where people are actually divided on the basis of random assignment, and we have two very important examples where that was done in social science, both related to health care. The first one is the RAND Health Insurance experiment from the 1970s, which is really a landmark in our field. And the second one is a much more recent work by my colleague, Amy Finkelstein and a team of co-authors looking at random assignment of health insurance in Oregon. So that's both showing how it can be done and explaining why it's valuable. The other alternative, that is, the non-experimental approximations of random assignment involve various sorts of strategies. The first of these and the most common is just regression, which I imagine many of your listeners will be familiar with. Just a way to control for things, to try to hold the characteristics of groups that you're trying to compare fixed. So, there's an example in the new book where the question is the economic returns to going to a more selective college or a private college. This is based on empirical work by Alan Krueger and Stacy Dale. And the idea is that we can produce a well-controlled comparison by knowing the schools to which you applied and where you were admitted. And this produces a very striking finding, which is if we compare people who went to, say, private colleges--think about perhaps Boston U.--versus U. Mass. (University of Massachusetts), or even Harvard or MIT (The Massachusetts Institute of Technology) versus U. Mass., naively you'll see that the people who went to the private colleges earn a lot more. 
But conditional on where people were admitted, they do about equally well. There's no advantage. And that suggests that most of the observed difference, perhaps all of the observed difference in earnings between people who went to private and public universities is due to the fact that the people who went to the private universities were destined to do better anyway; they were on average people who were either more ambitious or had higher test scores. But those characteristics are reflected in their application decisions and their admissions results. Conditional on where they applied and where they got in, there doesn't seem to be any earnings advantage. So that's the example we use to illustrate regression. Of course, it isn't really a randomized trial, but we can tell that it looks very controlled because we can see that after appropriate conditioning--and in this case we think that 'appropriate' means holding fixed an individual's own assessment of how qualified they are for different sorts of schools and of course we're also holding the admissions' offices issues constant--how the admissions office gauged the applicants. Conditional on that, it looks like a good experiment in the sense that people who went to different sorts of schools have similar family backgrounds and they have similar measures of ability, like SAT (Scholastic Aptitude Test) scores. Russ: So this is not a finding that you wave around too much in front of your administration, presumably. Guest: In fact, it's a little awkward. I work at a very selective school and I'm friendly with our admissions officer, Head of Admissions here. And we discuss these results fairly often. There may be reasons why you'd like to come to MIT besides the earnings advantage it's likely to give you. Russ: Absolutely. Guest: But certainly on economic grounds alone--I'm not speaking specifically about MIT, but the difference between Penn (U. of Pennsylvania) and Penn State is not apparent in the data. 
So that's a very striking finding, and it shows the power of regression to produce a better, a more well-controlled comparison, if not a slam dunk; and in particular to eliminate some of the obviously sources of selection bias that are likely to be misleading. Let me just say parenthetically, 'selection bias' is an econometric term for differences observed between groups that are not in fact causal effects. So, for example, we observe that people who have health insurance are healthier than people who don't. That's mostly selection bias. The people who can afford or have access to health insurance tend to be healthier people, without regard to the fact that they have the insurance. And we know that, actually, from the results in the RAND study and from Amy Finkelstein's work. Russ: I want to come back to that in second. First I want to say something nice about your book. There's something that is very special about the book. It's a real rarity in economics writing, at least in my experience, which is that it's mainly about the intuition and less about the formal results. The formal results are there; but they are in the Appendix. Usually it's the other way around: we put the formal results in the book and then in a footnote or two we say, 'Oh, by the way, you should take this into account,' or 'that's what this is trying to accomplish.' But what I love about the book is that it's really an extended conversation about the nuance in art and craft of econometrics, which is something I think is extraordinarily missing from both the literature and the instruction. When I taught econometric or statistical analysis to Master's students, I wanted to teach them how to think like an econometrician, not what the formal results are; and it's remarkable how difficult it is to find material to help people to do that. The easy thing, of course, is just to give people tests on various formal results; and they're easy to grade. 
But to teach people and to grade them on craft is really, I think, the gold standard. And your book is really a step in that direction. Guest: Well, it's wonderful that you see the value in that. Steve and I both, of course--we're researchers, but we're also teachers. And we were well aware of the enormous gulf between the way econometrics is taught and the way it's done. And we see our job in this book and also in Mostly Harmless, our earlier book for graduate students, to try to bridge the gap between econometric practice and the econometric syllabus. And I hope that we're successful in that. That's really what we're trying to do.
10:24 Russ: Now, having said that, I have some disagreements with it. So let's turn to some of those. Guest: Okay. I didn't finish all the other--I don't know if you want me to go through those. Russ: Oh, go ahead. Please do. Go ahead. Guest: So, we start with random assignment. We talk about regression next, not because it's the best method but because it's a natural starting place. And I can't imagine seeing an empirical paper about cause and effect which doesn't at least show me the author's best effort at some kind of regression estimates where they control for the observed differences between groups. That may not be the last word, but it ought to be the first word. The other methods are Instrumental Variables (IV), regression discontinuity designs, and differences in differences. Each of these is an attempt to generate some kind of apples-to-apples comparison out of observational data--that is, data that were not generated by some sort of purposeful random assignment on the part of researchers. Instrumental variables is a strategy for leveraging naturally occurring random assignment, or something that looks like naturally occurring random assignment. So, the example we start with there is--well, let me add also that sometimes instrumental variables is a method for leveraging experimental random assignment in complicated experiments, where the treatment itself cannot be manipulated but there is an element of manipulation in the treatment--there's a kind of a partial manipulation. The first example in the instrumental variables chapter is a study of charter schools; and there we're interested in whether kids who go to charter schools--charter schools are essentially publicly funded private schools, an important part of education reform that's growing in many states, including Massachusetts where I live, but also elsewhere, like New Orleans, which is now an all-charter district (the Recovery School District). 
So, there's a big public controversy about the sort of semi-privatization of public schools, at least insofar as their operation goes; and a big debate about whether the charter schools are actually doing better than the public schools that they serve alongside with, or even replace in some cases. So, to answer that question, we use the fact that oversubscribed charter schools pick their students by lottery. That is, when they have more applicants than seats, they use a lottery to allocate the seats. And that creates an instrumental variables situation where we compare kids who are and are not offered seats at a charter school and then we adjust for the difference in the likelihood of attending the charter school that that tool generates, that that manipulation generates. And that's a great, simple example of IV estimation of causal effects. We also have an example from a randomized trial where the intervention is the arrest of suspected batterers in the cases of domestic abuse in the city of Minneapolis. This is a real randomized trial; it's a very famous criminological study from the 1980s. In that study, police officers who were called to the scene in cases where there was a presumption of assault--ordinarily a policeman has to make a decision about how to handle it. In this case the policeman was encouraged by virtue of random assignment to different strategies to either arrest the suspected batterer or to simply to separate the parties or refer them to counseling. And this is an IV situation because you can't actually tell the police what to do. They have to be free to make their own calls both in the interest of their own safety and in the safety, interest of the safety of the victims on the scene. So, there is an element of random assignment, but there's deviation from random assignment. 
It turns out that instrumental variables is the ideal tool to analyze that sort of scenario, which is quite common in field trials that involve people and the messiness of social policy. So, those are two out of three of the IV examples. The next chapter discusses regression discontinuity designs, which are growing in importance. Regression Discontinuity (RD) Designs are research designs, non-experimental research designs, that tend to mimic an experiment by using the rules that determine allocation to treatment states. So, an example there is somewhat--one of the examples there is very much along the lines of the regression study I mentioned. Instead of elite colleges, one of the applications in the RD chapter is to the study of elite high schools. And that's based on some work that my colleagues, Atila Abdulkadiroglu, Parag A. Pathak, and I did on the legendary elite high schools, like the Boston Latin School and New York's Stuyvesant. And we used the fact that those schools admit kids on the basis of a cut-off. So, you have a test score. It isn't exactly a test score; it's a kind of an index; it's based on your GPA (Grade Point Average) and your tests, your admissions tests. And they admit you according to whether you fall above or below a threshold. And the idea there is that very small changes in test scores are arbitrary, so that if I look at kids who have scores just above and just below the cut-off, they are likely to be quite similar in terms of their family background, motivation, and so on. And so that's something like a randomized trial, the question of whether a kid is slightly above the cut-off or slightly below. There's something serendipitous about that. And so we can compare the achievement of kids across that threshold and gauge the value of education in an elite high school. And just as in the analysis of elite colleges, the RD study of elite high schools shows no--in this case, no achievement advantage for kids who go to these more elite schools. 
This is in spite of the fact that their peers are much better. So we are also relating this to the age-old social science question of peer effects: whether there are benefits from studying or working with more productive or more talented colleagues, co-workers, and classmates. RD is particularly interesting because it's relatively new in economics. When I was in graduate school I did not learn about RD and really didn't hear about RD until I had been working as an Assistant Professor for a few years. But now RD is one of our core methods and probably one of our most convincing non-experimental methods. So, Steve and I are especially pleased to kind of bring that into the undergraduate curriculum. It's not commonly found in the mainline textbooks. Russ: That's Regression Discontinuity--RD. Guest: Right. RD is Regression Discontinuity.
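The RD logic Angrist describes can be sketched in a few lines of Python. Everything here is invented for illustration (the cutoff, the bandwidth, and the outcome model are assumptions, not anything from the elite-schools studies); the point is only the comparison across the threshold:

```python
import random

random.seed(1)

# Toy sharp regression discontinuity: admission to a hypothetical
# "elite school" is determined entirely by a score cutoff.
cutoff, true_effect = 70.0, 0.0   # zero effect mirrors the finding above

students = []
for _ in range(5000):
    score = random.uniform(40, 100)
    baseline = score / 10 + random.gauss(0, 1)   # outcome driven by ability
    admitted = score >= cutoff
    outcome = baseline + (true_effect if admitted else 0.0)
    students.append((score, outcome))

# Compare mean outcomes in a narrow window on either side of the cutoff:
# kids just above and just below are nearly identical in everything else.
bandwidth = 2.0
above = [o for s, o in students if cutoff <= s < cutoff + bandwidth]
below = [o for s, o in students if cutoff - bandwidth <= s < cutoff]
rd_estimate = sum(above) / len(above) - sum(below) / len(below)

print(f"RD estimate near the cutoff: {rd_estimate:+.2f}")   # small: no effect
```

Real RD applications fit regressions on each side of the cutoff rather than raw window means, but the window comparison captures the core idea: near the threshold, assignment is as good as random.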
17:31 Russ: So, I want to come to what I think is the heart of the matter, which is what I think is the convincing part. Since I'm kind of a skeptic. And I want to be on the couch; and you can counsel me and give me some cheer. So, when I look at these results, I have two issues. One is a theoretical point, which Leamer and Sims bring up in their response to your 2010 article. So, your article, your title, is playing on the 1983 paper by Ed Leamer, which is "Let's Take the Con Out of Econometrics"-- Guest: Right, wonderful paper. I read it with great pleasure in graduate school. Russ: So, Ed's been a guest on this program before, a number of times; and we've talked specifically about that article. That article was worried about the fact that most of us don't get to go into the kitchen and see the enormous range of possible models that an economist might try. And Leamer claims that, as a result of that, the classical statistical significance tests really go out the window. We are kind of at the mercy of the researcher, because we don't know the range of stuff that was tried and not tried. And I have to mention George Stigler, who once told me that when he was in graduate school, since it took such an immense effort to run a regression, you picked the one or two that you thought were the best ideas. And you ran 'em. And it took a long, long time to make the calculations. Basically they were done by hand, with giant calculators. And then you hoped you found something. And that was it. And of course in today's world, you just hit Return. You can do lots and lots of data mining. And Leamer was worried about that. And one of your points, before we get to this issue of convincing specifically, one of your points is that, perhaps ironically, empirical practice has improved since Leamer wrote that article--but not based on his remedy. So, talk about what his remedy was, and why you think that has not been a route that people have taken. 
Guest: Well, I think the question has been whether what Leamer was complaining about was the most important problem that applied econometricians face. Leamer was essentially saying that there's a lot of specification search and there's selective reporting. And-- Russ: And his solution was very radical. Right? His suggestion was an immensely honest sensitivity analysis: so basically saying: if you combine all the possible variations of these variables we have, how big a range do we have for the variable we care about? And the answer is usually: not very much. Guest: He's a fairly committed Bayesian, at least in his writing, if not in person. And he was proposing a fairly conventional, I thought, Bayesian approach, where you would state your priors and you would then show how those priors map into the results. And he also had the idea that we should show many variations. Let me say at the outset that Leamer had a huge impact on me, and I think on empirical work. All to the good. His complaint about the kind of arbitrariness of what researchers report filtered into empirical practice in the form of robustness checks, in the sense that researchers today are expected to report plausible variations on what they've done. A great example of that is from my own work. This is in the new book, in the chapter on Differences in Differences, where you compare changes instead of levels; it's essentially a panel data method. The idea is that treatment and control groups move in parallel in the absence of treatment. And that's a testable hypothesis. And a very simple check on that is to allow some departure from parallelism in your models. And the easiest way to do that--if it's a state-based panel--is to introduce some kind of state-specific trend. And many panels do not survive that, in the sense that the treatment effect of interest either just disappears or becomes not very well identified, not very precisely estimated, when you do that.
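The differences-in-differences idea described here--comparing changes rather than levels, under a parallel-trends assumption--can be illustrated with a toy two-group panel. The numbers below are invented for the sketch; a real application would use state-by-year data with state and year fixed effects and, as Angrist notes, state-specific trends as a robustness check.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical panel: a treated and a control group observed for 6 years,
# with a policy hitting the treated group at year 3. Both groups share the
# same underlying trend (0.5 per year), so parallel trends holds.
years = np.arange(6)
true_effect = 1.5
control = 2.0 + 0.5 * years + rng.normal(0, 0.1, 6)
treated = 1.0 + 0.5 * years + true_effect * (years >= 3) + rng.normal(0, 0.1, 6)

# Differences-in-differences: compare changes, not levels. Level
# differences between the groups drop out of the double difference.
did = (treated[3:].mean() - treated[:3].mean()) \
    - (control[3:].mean() - control[:3].mean())
print(f"DiD estimate: {did:.2f}")  # near the true 1.5

# A simple parallel-trends check: the pre-period slopes should match.
pre_slope_gap = np.polyfit(years[:3], treated[:3], 1)[0] \
    - np.polyfit(years[:3], control[:3], 1)[0]
print(f"pre-trend slope gap: {pre_slope_gap:.2f}")  # near 0 here
```

If the pre-period slopes diverged, the double difference would pick up the trend gap as well as the treatment effect--which is exactly why adding a group-specific trend can make a fragile estimate disappear.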
And Mostly Harmless had an example of that, and the new book has an example from my own work where we are trying to use compulsory attendance laws at the beginning of the 20th century, by state and year of birth. And that's the source of variation in schooling we want to exploit. And when you put in a state-specific trend, it disappears. So that idea--that you owe it to your readers to both understand and explain and probe the fundamental assumptions that drive your results--is well taken. And I think we have to credit Leamer's article for highlighting that and bringing that into modern empirical practice. An extreme version of that, which is also emerging among my contemporaries, is that when I do a randomized trial I might actually precommit to the analyses. And that's also a good development. Russ: Yeah. Shout it out. Guest: That's a sign of maturity, that we're willing to do that. I have mixed feelings about it, because I don't do a lot of randomized trials, and I think the idea of precommitment becomes very difficult in some of the research designs that I use, where you really need to see the data before you can decide how to analyze them. You're not sure what's going to work. That said, when you can precommit, that's a wonderful thing, and it produces especially convincing findings. The idea that I should show the world a mapping of all possible models, and that that's the key to all good empirical work: I disagreed with that at the time and I still do. And that's reflected in the article with Steve in the JEP. The reason that most empirical work was not convincing in the age of, say, Stigler, and until more recently, was not that there was inadequate specification testing, but that the research designs were lousy. The example that Steve and I gave is from work by Isaac Ehrlich--very influential papers on the effects of capital punishment. Russ: Yep. Part of my youth. Guest: Yeah.
That's a great question and I don't want to single Ehrlich out for doing a particularly sloppy job or anything like that. But, I'm not too interested in how sensitive his findings are to the sort of variation Leamer is describing because I didn't find any of it convincing. He really did not lay out a clear case for his research design. A core concept in my work, in my writing with Steve and in the research methods I think are most effective is the notion of design. The notion of design, in an experiment, of course, is how you set up the experiment; who got allocated; what you were conditioning on; what the strata are; and so on. In an observational study, design is about how you are mimicking that trial. So when I talk about RD and I'm using RD, regression discontinuity, methods to estimate the effects of going to an exam school, you know the design there is that I'm comparing people above and below the test score cutoff. And if that design is convincing, it'll satisfy certain criteria, which I then owe my reader. But I certainly don't owe my readers an account of all possible strategies. I really do build it from my proposed design.
25:45Russ: Let me react to that. So, I remember very vividly when the Ehrlich study came out. And at the time, I was a proponent of the death penalty. I couldn't exactly tell you why: that would be an unanswerable question. But when it came out--I was very naive and very young--I thought, well, see, it's proved. Of course, it wasn't. And of course, I think if you were not a proponent--and we don't need to go into your personal views on this, because I think it's a general issue--you'd say: 'Oh, yeah, it was a terrible study; it didn't control for this; it didn't control for that.' People who were more sympathetic to the outcome, the findings of the study, I think were more likely to believe that it was a good study. And if he had been more thorough, I suspect those of us who were biased toward the finding might have been a little more embarrassed to wave it around. I wasn't in any position to wave it around, so that isn't exactly my point. Guest: Well, Ehrlich's problem is not thoroughness. That's what I'm saying. Ehrlich's problem was the lack of a design. And, I mean, it's probably not that important--you know, Ehrlich's work was based on small samples and pre-dates most of the methods, except for basic regression methods-- Russ: Yeah, that's true-- Guest: that were highlighted in the book. At a minimum, if we'd like to study capital punishment today, we would use a state panel, for example. And we'd take out state effects--that is, basically we would use the Differences in Differences method. And that's been done, and there are references in the article that Steve and I wrote. You know, Ehrlich's work is important because it was intellectually important at the time. It's not of any empirical significance. I don't think any social scientist of my generation would look at Ehrlich's regressions and say they are worth reacting to. Russ: No, of course not; I understand.
Guest: But there are other papers in the article about capital punishment; if you want I can look at it quickly, though I don't think it's to our-- Russ: No. I want to stick-- Guest: [?] much better job. Russ: No. I want to stick with the more general [?] Guest: Yeah. But you know, somebody, for example who proposes to study capital punishment, because, you know, the state of New York decides not to use it or outlaws it, you know, that person potentially has a good design. And I can tell that person, that researcher, exactly what he needs to do to convince me of that finding. And it won't be what Leamer suggested, which is a sort of all-hands-on-deck, all-specifications-are-created-equal specification search. Sorry--specification sensitivity analysis. But rather, I know what Differences in Differences depends on; and again, this is a theme of both of my books with Steve. We know what that method turns on. It turns on parallel trends. We always say that. It lives or dies with parallel trends. And to some extent, not 100%, but to a large extent, that kind of assumption can be tested. And the evidence that emerges from that test may or may not be very strong. But if it is strong, and if it's strongly favorable, then I have to be prepared to accept the results from that person's work. Russ: So, that's my question-- Guest: Somebody who is interested in the evidence.
29:05Russ: Yeah, that's my question. So, let's go--I'll take a micro, a couple of micro issues, one of which you've mentioned; and I'll throw in a couple more that you referred to in your book or article--or that you don't, but they are prominent examples. And then I'll go up to macro. So, I'm going to go micro to macro. On micro, I'm going to mention the effect of the minimum wage on employment; the effect of class size on educational attainment; the effect of health insurance on health outcomes. Those are three incredibly contentious policy issues in microeconomics. At the macro level, I'll pick the Stimulus Package of 2009. So here are four issues that we as economists are expected--whether we actually can speak to them is a different question--but we are expected to speak to these issues. And so we roll out tremendous econometric artillery, along the lines that you've mentioned. And you talk about these, some of them, most of them, in your books and your article. Guest: Yeah, I wouldn't describe it as 'tremendous econometric artillery'. The methods in my book are simple and accessible to any reasonably quantitatively sophisticated undergraduate. Russ: But they take a lot of time and effort to do correctly with the data, and to do the kind of careful research design-- Guest: As any work worth doing does. Russ: Right. And my question is-- Guest: I don't see that we're sort of over the top here in how hard the econometric work is. Russ: No, okay. That's fine. But the question, then, is: What have we learned in those four areas that you think stands the test of time and that is replicable? There have been some fine studies. There have been some--and I'll throw in the effect of immigration on wages, because you refer to the classic Mariel Boatlift study of David Card-- Guest: Yeah. Russ: How-- Guest: So, some of the evidence in these areas is stronger, some weaker. But there is a lot of interesting evidence here that's worth discussing. That's my standard.
Russ: Has anybody been convinced-- Guest: [?] would be-- Russ: Has anybody on the other side-- Guest: I've been convinced about many things. If you mention health insurance, for example, Americans are not very healthy compared to other OECD (Organisation for Economic Co-operation and Development) countries. Russ: Correct. Guest: The evidence overwhelmingly suggests that it has nothing to do with health insurance. And we see that in two randomized trials, extremely well done, very convincing. Russ: Well, convincing to you. Guest: That's an area [?] where the evidence is very strong. Russ: Convincing to you. Most--I happen to agree with you. I don't think it's a convincing case that health insurance-- Guest: I'm not too interested in taking a poll. The evidence is clear. I'm not sure who is not convinced. Russ: How about the people-- Guest: But anybody who believes otherwise has to explain away the RAND and the OECD findings. Russ: They can. Can't they? Guest: I haven't heard a convincing explanation; I don't know what it is. Russ: Well, not to you. I mean, I don't want to take a poll either, except to make the point that economists are typically unconvinced by so-called 'scientific experiments' using first-rate research design. It's very easy for them to say, 'Oh, the RAND study didn't look over a long enough horizon; the Oregon study didn't have enough power. They didn't have a big enough sample. There were problems of selectivity.' Guest: Well, all I can say is the RAND study followed people for up to 5 years, and in the Oregon study the standard errors are certainly small enough. I mean, you know, there's informed critiques and there's uninformed critiques. There are people who have a position. I'm not sure what your standard is, Russ. I don't really care if I convince, say, Paul Krugman. Russ: No, I understand. There are people with an axe to grind, there are partisans, there's--let's move to the-- Guest: Yeah.
I think that the people who work on health insurance in the scholarly community have been enormously influenced by those findings. And, you know, the people who wrote those papers probably did not expect to find what they found. So, I don't think they are representing the work dishonestly. Russ: I agree with that. Guest: And it has to be taken seriously. Now, I'm not sure what the standard is. There are certainly people who have an axe to grind. So I don't--you know, we can say the same thing about charter schools, which is something I work on. There are people who are very hostile to charter schools; and there are people who love charter schools. Russ: Yep. Guest: Okay. And you know, there are people who believe in market-based solutions and there are people who don't-- Russ: who are skeptical-- Guest: who are hostile to market-based solutions. And many of the people who comment on that sort of thing are very committed. I doubt that my work moves them. I think, for example, of Diane Ravitch--I know that she's aware of what I do, what our group does--we have something called the School Effectiveness and Inequality Initiative. I don't know what I need to do about that. I don't really see that as my problem. People who study schools, and people in my academic community, pay attention to what we do. Now, you might say, 'Who cares about that?' Russ: No, no; I care. Guest: When it comes time to make policy, there are people who skip over the advocates. And they do look at what the academics say. When our governor, for example, was thinking--and in Massachusetts, the number of charter schools is capped--I don't have a position on that. I don't care, personally, deeply, what Massachusetts does as far as its charter school policy. I just want my work to be noticed when that issue is debated. And when that issue was debated in 2010, our work was noticed, and I was gratified by that.
The work was noticed, not just because economists were saying, 'This is worth attending to,' but because people found the design convincing. We were able to present it in a way that was convincing to policy makers as well as to other scholars. And more so, I think, than a lot of the work that had gone before. Russ: I want to come back to your example of Paul Krugman. He does have a Nobel Prize in economics. But I'll take your point-- Guest: Yeah, I don't want to discuss individuals in any--I used him as an example-- Russ: I understand-- Guest: of somebody who is identified with a set of positions. Russ: Agreed. Guest: And what he says is not the measure of my success. Russ: Of course. Guest: The measure of my success is what my peers think. But somewhat indirectly, I think, what my peers think matters. And when policy-makers--we're lucky to live in the United States, where social science does actually matter for policy; and better social science probably matters more. Russ: Well, I'm agnostic on that. I think we like to believe that. I think we also perhaps read that evidence a little more cheerily than it perhaps deserves to be read. I think we're sometimes used by politicians rather than changing their opinions. But let's put that to the side. And I understand your point about Diane Ravitch; certainly partisans--and I'm not talking about political partisanship, I'm talking about people who have a staked-out position on a policy issue--are going to be hard to move; it's hard to change their minds.
37:08Russ: Let's just stick, then, with two issues for now, which are: the health insurance case and the minimum wage. Do you think the majority of health economists oppose universal health insurance, based on the empirical evidence that it's not related to health outcomes and it's just a waste of money? Guest: I don't think that's relevant. Again, I'm not taking a poll. I think that many economists--again, there's people who follow this and care about it--I think there's an understanding that if you want to improve public health, which of course many of us do, insurance is not the key. There may be other good reasons to support insurance, and I'm not really interested in debating that. Russ: Yeah, I understand. How about the minimum wage? Do you think we have any scientific understanding of the impact of an increase of the minimum wage on employment, based on the research designs you describe? Guest: Yeah. Yeah, there's been a lot of good work on the minimum wage. Of course, it's not as good as the work in health insurance, in the sense that there isn't a randomized trial of the minimum wage. But I would say that the burden of proof has shifted towards people who think that the minimum wage has large dis-employment effects. Because it's been hard to find those. I'm not saying it's been impossible. But, you know, I'm a labor economist by trade. I do econometrics as kind of a hobby. And a lot of my teaching is in labor. And it's clear that the scholarly work on the minimum wage today is in a very different place than it was before Card and Krueger. Russ: Oh, I agree. Guest: I'm not saying everybody is convinced. Russ: That's true. Guest: But the evidence is relevant and worth attending to, and it tends to fail to find large dis-employment effects, and anybody who discusses the minimum wage has to contend with that. And I would say here there's a difference between this and what, say, Ehrlich did, which I don't think is remembered for its findings, for the most part--and again, I'm not picking on him.
I don't think Stigler is remembered for his empirical work, either. You mentioned him early in our discussion. There are studies that are remembered for their findings. You may disagree with the findings, or you may have reasons to discount the findings. But the findings are worth discussing and thinking about, and they have to be confronted. Okay, that's my standard. Russ: Absolutely. Guest: You may disagree with my results on charter schools, but they are worth worrying about. Russ: Totally agree. What I find depressing is a couple of things--although I agree with you that sometimes people are surprised by the results they discover in their empirical work when they do a research design along the lines you are talking about, very often they will just dig harder. Other times they will not publish those results. And unfortunately sometimes when those results do get published, they don't hold up. So the biggest problem I have, really, is--there is a theoretical argument, which is-- Guest: Well, science is done by human beings. I think if you come at it with a very idealistic view, you are bound to be disappointed. People make mistakes. I'm not sure economists--we were having this discussion; I was at a conference last week at Stanford about causal inference in business school fields, and one of the speakers, John Rust, gave an interesting talk and he highlighted all the mistakes that economists have made in their empirical work--well-known examples of mistaken analyses; I guess the most recent one is the Reinhart and Rogoff thing. Well, we all make mistakes. Science is a human endeavor and I'm not sure that we're worse than other fields-- Russ: I'm not talking about-- Guest: One of the [?] at this conference was talking about that. Russ: But I'm not talking about a spreadsheet error, or Excel got the wrong number put in and they overstated some effect.
And no one suggests that-- Guest: There's a spectrum of mistakes; some of it has to do with specification searches and that sort of thing. I agree. But you know, don't let the perfect be the enemy of the good. Are we always right? Are the findings always clear? Do the politicians always listen? I'm sure the answer to every one of those questions is 'No.' Are things generally improving? Are they better in the United States than elsewhere? Can you point to a situation or a period in time where the quality of social science and the impact that it has on public policy has been better than it is now? I'm not aware of a strong case for that. Russ: I don't find that necessarily a good thing. I mean, it's good for us. I'm not sure it's good for public policy. The question is whether the precision and accuracy of what we've discovered with the kind of techniques you are talking about have improved public policy or not--they've certainly given it a more scientific gloss. But the question is whether we have gotten better. Certainly we have more data; we have different kinds of data. But it's not obvious to me that we've gotten better at distinguishing causal impacts from correlations that may not be causal. And yet, you are right--we are the high priests of public policy; we get listened to a lot. I look at [?]--my own bias, which is skepticism. So, I'm willing to concede that I may be overly skeptical. But when I look at the single most important macroeconomic event of our lifetime and I see the lack of precision--not just imprecision, but really smart people saying that the effects are not just of different sizes but of different signs--it makes me wonder whether we are helping the debate or not. And I don't see those differences being narrowed over time. Do you think I'm wrong on macro? Guest: Well, you know, I'm a microeconomist, so I tend to pay less attention to macro. Steve and I wrote about this: I wish that macro were more empirical.
And that macroeconomists were more like me in the sense that they look for good experiments and try to produce good designs. I think that's coming. It's been a long time coming. And Steve and I wrote about some of the younger scholars who seem to be bringing that message. There's certainly been resistance in macro. Here I'm talking about the intellectual side: there seems to be a preference for models and theory among people who are trained in macro and see macro as their field. I can't really explain that. I think we'll get better evidence. But, if you draw back and say, where is social science in macro--again, by what standard? One of the most influential documents in the history of social science is Friedman and Schwartz. And it's hard to point to another field, at least in social science, where anything has been so influential. Russ: I agree with that. I've talked about it many times in here; and it's not a sophisticated statistical analysis. It's just a before-and-after kind of look--what they call a natural experiment. It's very clever. Guest: Well, it's an effort to get at the causes of the Depression. I think that [?]-- Russ: And inflation, generally. Guest: [?] Friedman and Schwartz. And inflation. We can do better than Friedman and Schwartz with the kind of tools that are around today, but Friedman and Schwartz is a benchmark, and a worthy benchmark, and something to the credit of our discipline. Russ: But I have to mention that in 1945 there was a remarkable natural experiment: WWII ended, and many macroeconomists said that it would create a horrible downturn. It did not. It didn't change 'em. I've gone back and read the AER (American Economic Review) and JPE (Journal of Political Economy) from those times; they had an explanation for why it didn't conform to their expectations, and then they didn't really need to revise those expectations so much.
I think it's very hard in a complicated world--and macro is one of the more complicated parts of it--for people to concede that their pet theory is wrong--and this is on both sides; I'll pick on my own views, which are very Friedman-and-Schwartz influenced. Certainly many people of my ilk said that we'd have massive inflation by now because of the activities of the Fed increasing its balance sheet. And I acted accordingly; I bought Treasury Inflation-Protected Securities. And they did okay, actually. But I was wrong. And a lot of people on my side-- Guest: I don't react to short term current events. When I was growing up--at least, I try not to in my work, in thinking about econometrics--when I was growing up, at least in my intellectual youth when I was in college, inflation was a central, was the central macroeconomic problem. And that problem in developed countries seems to have been solved. Well before the Great Depression. So that's certainly-- Russ: You mean well before the Great Recession? Guest: Right. I see that as a feather in the cap of applied macro. Russ: Oh, I totally agree. I think that's one of the few things that economists can point to, where they have, through empirical work, improved our understanding of something that wouldn't have otherwise been obvious to the general public or to policy makers. Showing that class size has an impact on education, I wouldn't put in the same category. And I'm worried that we're making a mistake when we conclude that minimum wage increases don't affect employment very much in the current range. Guest: Well, you know--I don't know--class size, I would say, is part of a larger literature on human capital. And again, I would credit economists with the prominence of human capital in policy discourse today. And certainly the credit here has to go to Gary Becker. His contribution was not fundamentally empirical. But also to Jacob Mincer; his contributions were fundamentally empirical.
And that work began in the 1960s and 1970s and produced a stream of compelling empirical studies that really cemented the foundations that Becker and Mincer laid. So if you asked me for the largest victory for policy-relevant empirical work: on the macro side I would say it's Friedman and Schwartz, and inflation especially; on the micro side I would say the general importance of human capital as a causal determinant of earnings--and also something that the government can potentially influence. At the same time, labor economists have been good at showing that other things might not matter very much: training programs that the government puts a lot of stock in don't seem to help people very much. Some do, but most don't.
49:14Russ: Let me ask you a question, though, about randomized trials. We had Brian Nosek, the psychologist, on the program-- Guest: Yeah, I know Brian and his center. Russ: So, they are part of a larger agenda to worry about the replicability and credibility of experimental results in psychology. There's been a huge interest in the last 10 years in similar randomized trials in poor countries, trying to find out what works and doesn't work. And again I worry that they appear to have a scientific basis akin to a controlled medical trial, a "real" experiment. But we do have the problem of limited sample size. And there's a serious question of whether the findings scale: whether they are specific to particular experiments rather than general lessons about behavior. Am I right? Guest: [?] let me react to that. First of all, no, I don't think you are right. The first problem is, limited sample size--if that's all you are telling me, the answer to that is in the statistics; in other words, the machinery of statistics tells you whether your sample size is large enough. The answer to that question is in the standard errors. If you think your sample size is too small or too big in a sort of moral sense, I can't help you. But if you want to know-- Russ: No, I'm not talking about that. Guest: whether the results are statistically precise, I have a precise answer for that. If you want to know whether the findings generalize, that's a harder question to address. There are certainly strategies for that, and we don't have to invent them. When somebody produces an important finding in medicine, other people try to replicate it. So, you are seeing that happen now, for example, in microfinance. There is enormous enthusiasm for microfinance in developing countries as a tool to lift people out of poverty. And certainly a priori it's not crazy to think that that might be useful. And we're getting a lot of evidence that it's probably not that effective.
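Angrist's point that "the answer is in the standard errors" is concrete: for a randomized trial, the standard error of the treatment-control difference tells you directly whether the sample was large enough to detect the effect you found. A minimal sketch, with entirely made-up data and a hypothetical true effect of 0.5:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical randomized trial: 500 treated, 500 control, outcome
# standard deviation of 2.0, true treatment effect of 0.5.
n = 500
treat = rng.normal(10.5, 2.0, n)  # treated outcomes
ctrl = rng.normal(10.0, 2.0, n)   # control outcomes

# The estimated effect and its standard error. If the t-ratio is large,
# the sample was big enough for this effect; if not, the standard error
# says so directly--no "moral sense" of sample size required.
effect = treat.mean() - ctrl.mean()
se = np.sqrt(treat.var(ddof=1) / n + ctrl.var(ddof=1) / n)
print(f"effect = {effect:.2f}, se = {se:.2f}, t = {effect / se:.1f}")
```

Halving the noise or quadrupling the sample halves the standard error, which is the precise sense in which "is the sample big enough?" is a statistical question rather than a matter of taste.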
And not just from one study. So there's a body of work building up. And that's the J-PAL (Abdul Latif Jameel Poverty Action Lab) Agenda, the Poverty Action Lab folks. Some of whom were my colleagues; and Esther Duflo who is one of the co-leaders of that effort was my student. She's not answering all questions all the time and she's not providing the most general answer at any one time. But she's promoting the idea that we can, through a series of experiments, learn a lot that's useful. And in particular we can come up with evidence that helps us direct resources in directions that are most likely to be useful. One of the things that's important to remember--this came up at the conference I was at last week--is: one of the big roles of a social scientist is to point out what's not likely to work. Russ: Very valuable. Guest: And particularly in the world that you are describing, which is full of interested parties and advocates. In some cases it's ideological, but often it's commercial or it's based on some sort of faith in particular strategies. So, in the education world there's no end of approaches to schools that people are strongly committed to, not based on the evidence but based on a belief about how students learn or perhaps they even have a product to sell--we see that in the case of computer-aided instruction. In the developing country world, you have many actors, philanthropists, governments, non-governmental agencies, who have an idea to sell. Maybe it's smaller family size; maybe it's a particular kind of social organization. Maybe it's a particular technology. And it's very useful for an outside party to come in and say, 'let's take a look at this.' A great example recently is the surge in enthusiasm for computers in early education in developing countries. 
Many, many people became convinced--and I'm talking about politicians and policy makers and scientists--that it would be extraordinarily beneficial to put laptops or iPads in the hands of young kids in, say, Peru or Thailand or someplace like that. And others came and looked at that. In some cases, the idea that we should look at it was resisted. But we have good experimental evidence that that's probably not going to improve outcomes in those settings.
54:23Russ: But in so many cases--this is tragedy, this is to be warned[?], not celebrated--a particular experiment which has statistical significance--when I say--my worry about sample size, it's not a moral issue. It's the question of whether you've sufficiently randomized across the unobservable variables that you can't control for; and therefore it's always possible that what you have measured is not really there. A lot of times, those studies don't replicate when they go try to find the results again. Now, agreed, it's nice to open a question and it's nice to look at it. But I find it fascinating how often those results don't replicate. And that's a problem of development in randomized trials in poor countries. It's an enormous problem in epidemiology, where they often have enormous samples but they still have results that cannot be replicated on different samples or across different types of people or different cultures. And yet, the results that were established initially get waved around. An example was recently written about in the New Republic: the enthusiasm for deworming in Africa, which, based on a followup study--and maybe it's not a good study--perhaps doesn't hold up; the followup suggests that many of those findings do not get repeated, that there are no benefits from deworming for student performance in education. So, that's--I'm not suggesting we shouldn't do empirical work. I'm suggesting that we should be much more humble about its reliability. Guest: I'm all for humble. I think it's important not to throw the baby out with the bathwater. The idea that findings can be misleading--you know, I'm the first to say that. And I'm known for being a harsh critic of other people's empirical work, and I try to apply the same standards to my own work. I don't agree with the sort of nihilistic proposition that nothing is ever learned, that it's all for naught. Russ: It's depressing, isn't it? Guest: No. I'm not depressed. Russ: No, it would be, if it were true. If it were true.
Guest: I think there are a lot of people who are sort of retreating into that. I'm not sure why. Again, don't let the perfect be the enemy of the good. And try to keep some perspective. I was at a conference that the Center for Open Science sponsored. And most of the studies that seemed to generate the majority of the handwringing that we saw at that conference came from psychology, where there would be a small sample and there would be kind of a quirky finding. And I would have said, why did you pay any attention to that, anyway? And you know, you are probably right that the Atlantic likes that sort of thing-- Russ: Yup; New York Times. They make the front page-- Guest: Somebody does a little study about men and women doing this or that--women are actually more competitive than men-- Russ: Better investors, whatever it is-- Guest: Under the right circumstances, men will eat their children. Or some wacky psychological thing. It doesn't concern me too much. I'm not sure that there's any policy that's reacting to that. I think in some sense that's just kind of a consumption good. It's lots of fun. I like to read it myself. Russ: Find what's wrong with it. Yeah. Point out what's wrong with it. I understand. Guest: I would worry if everything we do turns out to be wrong, perhaps because the researchers are dishonest or manipulating results. That's not my impression, though. Russ: No; I think the bigger worry is that they are honest, and either they are fooling themselves or they are unintentionally fooling others about the reliability of the work. It's a lot more important, I think, to understand what happens when you spend $780 or $800 billion--or whatever it turned out to be--on stimulus, or whether you have helped or hurt the lowest skilled people with an increase in the minimum wage. There's a lot more at stake. Guest: Right. But there are plenty of examples where there's a body of work emerging. 
So, you know, in labor, it's certainly been hard in repeated good efforts to find dis-employment effects of the minimum wage. I'm not saying that's the end of the story. It's been hard in repeated efforts, mostly based on random assignment, to find training programs that are likely to support the lower tail of the income distribution in any substantial way. It's been relatively easy in repeated efforts to find strong evidence that schooling boosts earnings. There are quite a few findings out there that are worth paying attention to and worth taking account of when it comes time to make policy. Russ: Well, I think learning boosts earnings. I don't think we've been very good at proving that schooling does. I think that's a big challenge, especially in poor countries, and Lant Pritchett's work I think is very alarming and probably true. Guest: Well, you need to read Chapter 6 of Mastering 'Metrics. Which is all about the relationship between schooling and earnings. And we trace the history of that question. And we go through the evidence and we explain why the picture that emerges there is reasonably convincing. Russ: Well, sometimes knowledge is correlated with schooling. I don't deny it. Russ: Let's close-- Guest: No, but I'm talking about the effect of schooling on earnings specifically. Measured schooling and earnings. That's what Chapter 6 in Mastering 'Metrics is about. Russ: Right, but a huge part of that-- Guest: And we use that as a question to walk the reader through our application of our Furious Five econometric techniques. And, not every study is equally well done. But there's a body of evidence there that's worth taking a serious look at. Russ: Oh, I totally agree with you. But again, I'm not blaming you for this; the fact that it has led to billions of dollars being spent on schooling in poor countries with no impact is tragic. 
And that's not your fault; it's not the fault of that literature. It's that that literature doesn't apply to certain countries and settings, and the fact that, say, schooling and learning are not always correlated. But I agree with you: when they are, there's no doubt it has an impact. I think people, even without economics degrees, believe it, and believed it before we quantified it.
1:01:04 Russ: Let me close with a philosophical question, because we're out of time. Your paper is a [?] paper. It's a paper of an evangelist. And I have a lot of respect for what you do. The fact that I don't agree with every jot and tittle of it is not relevant; and I'm not your target audience, anyway. But I'm the Diane Ravitch of econometrics to your econometrics writings. But, Leamer and Sims, who wrote critical responses to your paper, are remarkably unconvinced by the credibility revolution. And by the way, we're going to put up links to all these papers. As well as to your book and anything else you want to share with us. But, why do you think you've made so little headway with that audience? Is it their biases? Or is it your flavor that's not appealing to them? They don't find--again, I'm irrelevant. But they are not convinced. Why do you think they are not convinced, and do you think that will change over the next 20, 30 years as the next generation of graduate students comes out? Guest: You know, I don't know why they are not convinced. But I guess, with all due respect to Leamer and Sims, I'm not too concerned with whether they are convinced. Mostly Harmless Econometrics, I think it's fair to say, has had a huge influence on graduate education. It sold about 50,000 copies--this is a graduate textbook in a specialized field. It's a source of discussion; it's widely cited. It's a reference point in scholarly work in Ph.D. programs all over the country. That's the measure of our success: what Ph.D. students are learning and what young faculty are doing. And I think by that standard, we're winning. Russ: Right; but I'm asking a different question, which is: Should you be? You are of course going to be happy that your book is popular; and of course graduate students are going to flock to books that tell them that they are going to change the world and save it and make it better and that they have the tools to do so. But maybe you are not right. 
Maybe you are overly confident. And Leamer and Sims are saying, 'Whoa.' And you are saying they don't matter? Is it just that they are stubborn? They don't get it? Guest: Yeah; I don't really want to personalize it. Obviously we convince some people and others are not convinced. I think there's a lot of good empirical work today, and if we had a lot more time, we could go through it. We've mentioned some of it that's convincing, that's worth taking note of when it comes time to make policy. In my area, on schools, I see two sorts of consensus emerging. One is that a certain type of charter school seems to be extraordinarily effective in urban districts. Another is that teachers matter, both for achievement and for earnings in the longer run. That's convincing work. It matters to scholars and it matters for policy. I think there are more examples like that today than there were when I was in graduate school in the 1980s.