
Is economic analysis ready for the age of Big Data? That’s the underlying question explored in this EconTalk episode. Host Russ Roberts welcomes Stanford’s Susan Athey, and they discuss how machine learning might be used in conjunction with more traditional econometric techniques. They also talk about the ways commercial entities, such as Amazon and Facebook, use machine learning to change their users’ experiences and increase profits.

Are there lessons to be learned from these commercial experiences that can help us analyze policy? Or will counterfactuals, such as what would have happened had NAFTA not passed, forever evade such rigorous analysis?

1. Why is it so difficult to measure the effects of a policy such as minimum wage legislation, and how does Athey cast doubt on the Card-Krueger study?

2. Athey suggests that, contrary to the popular imagination, most innovation at tech companies today is still incremental; as she puts it, “nothing happens at Google without randomized control trials.” What is your reaction to this account of the innovation process at established companies? How often do you think the responses to these experiments benefit users versus profits? Or is it both? What might the answer depend on?

3. What does Athey mean when she says that economists are too focused on looking for “generalizable parallels”? How does she suggest machine learning may be combined with econometric techniques to improve predictability, and to what extent are you convinced of the efficacy of this approach?

4. In a previous episode of EconTalk, Pedro Domingos suggested (around the 17- or 18-minute mark) that machine learning and big data can greatly improve on previously used statistical techniques. Do you agree? Do you think Susan Athey shares Domingos’s particular optimism?