This is a follow-up to the EconTalk episode with Joshua Angrist, who is a prominent champion of sophisticated statistical analysis for understanding and measuring causal relationships between variables.

This was not a debate. In general, I invite guests to EconTalk to hear what they have to say and to give listeners the chance to think about it. I will challenge a guest from time to time, but that doesn’t mean it’s a debate. And just because I don’t challenge something doesn’t mean I agree with it. I pick my spots. We only have an hour.

Rarely, there will be an EconTalk episode where the guest and I understand from the start that it will be more of a debate than an interview. This episode with Robert Frank, for example, came about because Frank suggested that we have a debate on the topic.

In the case of Angrist, he knew coming into the interview (because I told him) that I was sympathetic to the views of Ed Leamer, so he was prepared for a little more disagreement than the typical guest. There was at least one time when I veered into debate territory, and that was at the end, when I asked him for his thoughts on why Leamer and Sims did not agree with him. I was disappointed with his unwillingness to engage with their ideas, so I pushed back much harder than I usually do. Other than that, and maybe one or two other moments, this was not a debate but a chance for Angrist to make his points and to respond to my questions about those points.

What I want to do here is write up some further thoughts on the issues he raised and share where my thinking stands on them. I’m going to push back harder than I did in the interview, hoping that Angrist or some of his fans will respond to these points.

My basic position is that econometrics is often (almost always?) pseudoscience: causation, and the magnitude of that causation, is very difficult to tease out in a complex world of multiple interactions, with feedback loops that are difficult to measure and even difficult to perceive.

When I pressed Angrist on the lack of reliability of empirical results using the kind of techniques he champions, he cited maybe five different areas: the minimum wage, his own work on charter schools, the effect of education on earnings, the effect of class size on education, and the effect of health insurance on health.

Interestingly, a recent study of the minimum wage uses the same “natural experiment” approach of Card and Krueger (and others) and finds sizable effects of the recent increase in the minimum wage. Perhaps Angrist and others will be able to dismiss it as being poorly done. Perhaps it will not survive peer review.
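For readers who want a concrete picture of what the “natural experiment” approach amounts to, here is a minimal sketch (mine, not taken from any of these studies) of the difference-in-differences logic behind the Card and Krueger design: compare the change in employment in a state that raised its minimum wage with the change over the same period in a neighboring state that did not. All of the numbers below are invented purely for illustration.

```python
# A toy illustration (not real data) of the difference-in-differences
# logic behind a Card-and-Krueger-style "natural experiment."
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical employment levels at fast-food restaurants, before and
# after a minimum-wage increase that applies only in the "treated" state.
treated_before = rng.normal(20.0, 4.0, size=200)
treated_after = rng.normal(20.5, 4.0, size=200)
control_before = rng.normal(23.0, 4.0, size=200)
control_after = rng.normal(23.2, 4.0, size=200)

# The change in the control state stands in for what would have happened
# in the treated state without the policy; the gap between the two
# changes is attributed to the minimum-wage increase.
change_treated = treated_after.mean() - treated_before.mean()
change_control = control_after.mean() - control_before.mean()
did_estimate = change_treated - change_control

print(f"difference-in-differences estimate: {did_estimate:+.2f}")
```

Whether the control state really tells us what the treated state would have looked like absent the policy is, of course, precisely the kind of identifying assumption I am skeptical of.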

I was surprised at Angrist’s claims on behalf of charter schools. I had thought the findings were quite mixed. I am looking forward to reading Angrist’s work on the topic. As I made clear in the interview, I think the work showing a relationship between years of schooling, say, and income, the bedrock of the human capital literature, has turned out to be terribly misleading as a guide to helping poor countries, where additional years of schooling can correspond to very few gains in knowledge.

I oppose government’s involvement in health insurance, but I was very surprised at Angrist’s confidence in the implications of the RAND study or the Oregon Medicaid study. When I suggested that they did not look far enough into the future, he mentioned a five-year horizon. Surely this is inadequate for measuring the effect of health insurance on longevity, say. And the Oregon study, even though it came very close to a genuine randomized trial, still had problems with selection bias, as Jim Manzi argued when I interviewed him.