Continuing Conversation... Joshua Angrist on Econometrics and Causation

EconTalk Extra
by Amy Willis

This week, EconTalk host Russ Roberts explored his skepticism about econometrics and causation with MIT's Joshua Angrist. Did Angrist convince Roberts about the value of empirical methods today? Are you convinced?

We want to hear what you think...


Check Your Knowledge:

1. What does Angrist cite as the primary types of empirical analysis available to researchers today?

2. In 1983, Ed Leamer argued that economists should do sensitivity analysis as a way to provide credibility for their findings. Angrist argues that economists have more credibility today but not because of sensitivity analysis. How does Angrist justify that claim?

Going Deeper:

3. Roberts points to three controversial areas in microeconomic research: the effect of class size on student achievement, the employment effects of the minimum wage, and the relationship between health insurance and health outcomes. What has econometrics been able to show about each of these, according to Angrist? Are these areas where knowledge has become more reliable and precise because of empirical study?

Extra Credit:

4. Have you ever had your mind changed about a belief you had previously been strongly committed to? If so, what effected the change? If not, what would it take for you to change your views? Explain. (Related bonus question: What is the role for econometrics in effecting belief change? Are you as optimistic about the possibilities as Angrist, or as skeptical as Roberts? Explain.)

5. Roberts was skeptical that econometrics convinces politicians about which policies they should pursue. What is Roberts's argument? How does Angrist respond? What might settle their differences?


COMMENTS (6 to date)
Greg G writes:

Regarding #5,

I think the issue was a bit misframed in your opening sentence, Amy. I don't think there is any evidence that politicians are more reluctant than anyone else to change their ideology in the face of challenging evidence from econometrics. And I didn't hear Russ single out politicians. Most of the discussion was about economists.

There are two separate issues here. The first is how useful econometric evidence is. The second is how reluctant all people are to change their ideological beliefs in the face of evidence against those beliefs.

The second question is more within the purview of psychologists or behavioral economists. The first was more the focus of this podcast.

As for politicians, they are unlikely to stray far from the consensus view of professional economists in a crisis. George W. Bush passed a stimulus bill with overwhelming Republican support in 2008.

Russ is very skeptical about the value of econometrics both as a useful empirical tool and as a guide to policy. He raises Hayekian concerns about our ability to identify the effects of single causes in a complex economy. He is also skeptical that econometrics changes people's minds. He cites macroeconomists who thought the depression would resume after WWII as one example of this. Then, in an admirable show of balance, he cites the Austrians' expectation of the raging inflation they predicted would result from the Fed expanding the money supply as another example. In neither case did many change their ideologies. Russ is also concerned that econometrics may generate false confidence in policy decisions we should be more skeptical about.

Angrist is much more optimistic about the value of econometrics. He cites the work of Friedman and Schwartz as an iconic example of econometric analysis changing minds and influencing policy. He also cites several studies that changed his own mind. These include studies showing that health insurance has little causal effect on health, that government job training programs are largely ineffective, and that minimum wage legislation has had very small disemployment effects.

There is no reason to think that anything will settle these differences. Both sides agreed that humility is appropriate. It is worth remembering that every policy preference is a prediction and a claim to know something about causation, whether or not you use econometrics. I see no reason to think that one side is more humble than the other. And of course few enterprises could be more self-defeating than bragging about your own humility anyway.

Russ Roberts writes:

Greg G,

The question is related to this point of Angrist's:

When it comes time to make policy, there are people who skip over the advocates. And they do look at what the academics say. When our governor, for example, was thinking--and in Massachusetts, the number of charter schools is capped. I don't have a position on that. I don't care, personally, deeply, what Massachusetts does as far as its charter school policy. I just want my work to be noticed when that issue is debated. And when that issue was debated in 2010, our work was noticed and I was gratified by that. The work was noticed, not just because economists were saying, 'This is worth attending to,' but people found the design convincing. We were able to represent it in a way that was convincing to policy makers as well as to other scholars. And more so, I think, than a lot of the work that had gone before.

Angrist thinks his work informs the policy debate for decisionmakers. I am not sure. I wonder if politicians use the experts as cover for what is in their political interest. But it is not surprising that economists want to believe that they have influence not as propagandists and enablers but as educators of the powerful. Perhaps they are merely fooling themselves.

Dallas Weaver, Ph.D. writes:

Glad to see that there is some progress in econometrics that may make some of the findings a little more robust. But it still seems that you end up with correlational results that only suggest causation and lack all the little pieces that run from real causes to final results. It is a lot like the cancer cluster and environmental health studies that lack a biochemical mechanism: they have seldom been shown to be correct, and in fact most have been shown to be false whenever someone tried to reproduce the results.

It is the old problem of not being able to describe an N-variable problem in N−X variables, often without even knowing all the relevant variables. In many of the questions discussed, the ignored relevant variables are "cultural" and aren't even measured or measurable.
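The commenter's point about ignored variables can be illustrated with a small simulation (all numbers hypothetical): when an unmeasured "cultural" factor drives both an observed input and the outcome, a regression that omits it misattributes the factor's influence to whatever is measured.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unmeasured "cultural" factor that drives both the observed
# input (say, study effort) and the outcome (achievement).
culture = rng.normal(size=n)
effort = 0.8 * culture + rng.normal(size=n)
achievement = 2.0 * culture + 0.5 * effort + rng.normal(size=n)

# Regression that omits the cultural factor: the slope on effort
# absorbs part of culture's effect and is biased upward.
X_short = np.column_stack([np.ones(n), effort])
b_short = np.linalg.lstsq(X_short, achievement, rcond=None)[0]

# Regression that includes the factor recovers the true
# coefficient on effort (0.5).
X_long = np.column_stack([np.ones(n), effort, culture])
b_long = np.linalg.lstsq(X_long, achievement, rcond=None)[0]

print(f"omitting culture:  effort coef = {b_short[1]:.2f}")
print(f"including culture: effort coef = {b_long[1]:.2f}")
```

In this sketch the short regression roughly triples the apparent effect of effort; the econometric methods discussed in the episode (instruments, lotteries, discontinuities) are precisely attempts to break the link between the omitted factor and the observed input.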

As an example of educational performance, consider the Vietnamese who migrated into Southern California after the Vietnam War. They were very poor, couldn't drive very well, couldn't speak English, and moved into very poor areas with failing schools dominated by Spanish speakers. They made up about 10% of the population of Orange County. Within less than a decade, they made up 90% of the valedictorians in the local high schools while moving on to become the majority at the University of California, Irvine. The parents demanded the children learn English and do well in school, and they did; the kids had no options.

These cultural factors were more important than even the failing schools and failing teachers in that area. But they are just lumped into a nebulous notion of "human capital," even when the parents had no definable human capital (no education, no language skills, no mathematics beyond an abacus, great loss and trauma, etc.).

When we get into areas like the minimum wage, I observed that a lot of seriously marginally functional children ended up in marginal jobs with relatives in small businesses. They were employed and paid the minimum wage, whether they were worth it or not, for social and family reasons. The family is going to spend the money anyway, so why not make the kid (sometimes not young) feel good about himself with a "job"? Changing the minimum wage has no impact on this relationship beyond decreasing the non-wage interfamily transfers.

We even had a cousin send us his kid, who had dropped out of high school, for a job. My business was doing poorly at that time, so my cousin paid for his job. Of course, we had him do all the dirty and hard work, and we had all the other workers pointing out that he shouldn't want to do that hard, dirty work the rest of his life. Several of those workers had made poor life choices and spoke with the truth of real regret. He wasn't marginal; he was just a lazy teenager. He is now a successful lawyer.

The real world is the sum of all these individual actions and any lumping of variables is, by mathematical definition, an incomplete representation of reality. Perhaps someone can make the equivalent of Arrow’s impossibility theorem to show that any lumping of variables will always give a partial result, at best.

Vincent L writes:

Dallas Weaver,

I am glad that you bring up an important point at the beginning of your discussion:

Glad to see that there is some progress in econometrics that may make some of the findings a little more robust. But it still seems that you end up with correlational results that only suggest causation and lack all the little pieces that run from real causes to final results.

Indeed, the fundamental question of applied econometrics (and of the talk between Russ and Josh) is how we can use econometrics to distinguish correlation from causation.

To start, let me quote Josh from his talk with Russ, in which he repeatedly said "don't let the perfect be the enemy of the good." Allow me to elaborate.

Josh listed several econometric tools that we can use to identify causality including: regression, instrumental variables, natural experiments, difference-in-differences, and regression discontinuity. Without any doubt, under certain assumptions, these methods can produce unbiased estimates of the true causal effect of an independent variable on a dependent variable. In other words, econometricians have long mathematically proven the unbiasedness of the results obtained using these methods given certain assumptions.

So these methods definitely work, but notice the key caveat I stated above: these methods work given certain assumptions. In the real world, we can never fully validate our assumptions, even in the case of a seemingly perfectly designed random experiment. But again, don't let the perfect be the enemy of the good. Using econometric techniques, we can at least argue that our conditions are most likely satisfied, and applied microeconomists have done a very good job over the past two decades of improving the power and reliability of "causal estimates". In some cases, we have very good data and strong evidence of certain causal mechanisms -- indeed, Josh Angrist's work on charter schools is considered robust and thorough.

Unfortunately, many important economic topics do not have such good data with which we can use econometric techniques to identify causal effects. For many macroeconomic questions, for example, we do not have enough data, nor do we have plausible natural experiments that let us compare "treatment" to "control" groups. Suppose we wanted to figure out the effect of the last financial recession on US GDP growth. It is extremely hard to do this: there is only one United States, and one financial recession that affected the entire country, so how is it possible to extract a causal effect here?

However, these problems that we face in many economic questions should not hinder our progress in areas where we do have good data and good techniques to identify causal effects. For example, in Josh's work on charter schools, over-subscribed schools assign students based on a lottery system. This lottery system is randomized, so there should be no initial differences, on average, between students who were assigned to certain charter schools and those who were not. This "randomized" variation is the basis for the validity of the econometric techniques.

Lastly, I just want to note that you bring up anecdotal evidence in your discussion of econometrics. Of course, everyone has their own personal experience, but it is very tricky to judge econometrics based on personal experiences. Remember that in many econometric studies, the "point-estimate" causal effect is generally expressed as an average treatment effect. So the results may not apply to you, they might not apply to me, and they might not apply to your neighbor, but if done correctly, the results should apply on average. Thus, it is dangerous to judge econometrics by anecdotal evidence, where sample sizes are extremely small.
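The two points in this comment — lottery randomization and average treatment effects — can be sketched together in a small simulation (all numbers hypothetical): even when the individual-level effect varies widely, so that any single anecdote can point the other way, random assignment makes a simple difference in means an unbiased estimate of the average effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Heterogeneous individual treatment effects: the charter "effect"
# varies from student to student (some even lose), averaging 0.3.
tau = rng.normal(loc=0.3, scale=0.5, size=n)

# Baseline achievement depends on unobserved family background.
background = rng.normal(size=n)
y0 = background + rng.normal(size=n)   # outcome if not admitted
y1 = y0 + tau                          # outcome if admitted

# The lottery assigns seats at random, independent of background...
admitted = rng.random(n) < 0.5
observed = np.where(admitted, y1, y0)

# ...so a simple difference in means estimates the average
# treatment effect, despite the individual-level variation.
ate_hat = observed[admitted].mean() - observed[~admitted].mean()
print(f"estimated ATE = {ate_hat:.2f}  (true average effect: 0.30)")
```

Note that a sizable share of the simulated students have a negative individual effect — their anecdotes would "refute" the finding — while the average effect is still positive, which is the commenter's point about judging studies by personal experience.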

M. Delson writes:

I would like to suggest that your discussion include references to significant documents mentioned during the interview. For example, Mr. Angrist said "Friedman and Schwartz" was "one of the most important documents in the history of social science."

But listeners like myself, unschooled in economics, have no idea what the reference is to. Is it "Monetary History of the US"? "Monetary Statistics"? Another work?

Guidance for non-economists who would like to learn more would be much appreciated.


Mercy Vetsel writes:

While I long ago learned to accept the vast economic illiteracy of the press and academia, I've recently noticed that when it comes to the minimum wage (and several other issues), professional economists have adopted a tactic of retreating into a peculiar type of sophistry.

Rather than challenge the Law of Supply and Demand, or debate the unintended consequences of price controls on labor or negative wage-price elasticity, they switch to multiple-variable econometric models, and then, lo and behold, they just can't find that needle in the haystack.

I'll bet that if the Democrat Party platform stated that airborne feathers have no weight, you'd see professional economists doing econometric analyses of birds based on the "natural experiment" that occurs when a bird loses a feather and then dismiss Law of Gravity-based arguments with some pompous phrase about how the economic literature shows that it's very difficult to find evidence that airborne feathers actually have weight.

Multiple regression models are inherently vulnerable to manipulation, especially political manipulation, and therefore that's where left-leaning economists like to hide. Trying to tease out the long-term effect of a small increase in a minimum wage that affects less than 2% of the workforce, amidst a host of other factors known and unknown, is absurd.

Don't let them do it. The right question is whether one would need a multiple-regression model and a natural experiment to determine if a $40/hr minimum wage would benefit low wage workers.

That said, overall I found myself agreeing with Angrist. Just because some tools of econometrics lend themselves to abuse doesn't mean that we should abandon the whole endeavour.

I would really like to see Roberts take a politically charged issue like the minimum wage and delve into why Angrist has so much confidence that the Card and Krueger study applies.

