by Alice Temnick

How do p-hacking, type M errors, and the "winner's curse" affect the research findings that make weekly news? Or the research findings published in academic journals? In this week's episode, Stanford University's John Ioannidis and host Russ Roberts discuss the surprising frequency and scale of the problem, as well as some of the causes of false research findings.


1. What concerns you most about the extent of the false research findings surrounding us, and why?

2. Referring to meta-analysis, why does Ioannidis suggest that even when these studies are completely flawed, they might give more information than a single study or observation?

3. Roberts refers to flaws in the research question of many minimum wage studies, which are then compounded by a median power of only 8.5% for detecting an impact on unemployment. Do you find this depressing, or simply revealing of the limitations of research design?
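For readers unfamiliar with statistical power, here is a minimal sketch of how power is computed for a two-sided z-test. The effect size, noise level, and sample size below are hypothetical illustrations (not figures from the episode or from any minimum wage study); they are chosen to show how a small study of a small effect ends up with power in the single digits, comparable to the 8.5% median power mentioned above.

```python
import math
from statistics import NormalDist

def power_two_sided_z(effect: float, sigma: float, n: int,
                      alpha: float = 0.05) -> float:
    """Probability that a two-sided z-test at level `alpha` rejects the
    null hypothesis when the true effect is `effect`, the outcome noise
    is `sigma`, and the sample size is `n`."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)          # critical value, ~1.96 at alpha=0.05
    shift = effect * math.sqrt(n) / sigma       # standardized true effect
    # Power = P(|Z + shift| > z_crit) under the alternative
    return nd.cdf(-z_crit - shift) + 1 - nd.cdf(z_crit - shift)

# A small true effect (0.1 standard deviations) studied with n = 25:
print(round(power_two_sided_z(effect=0.1, sigma=1.0, n=25), 3))  # roughly 0.08
```

In words: with these hypothetical numbers the study would detect a genuinely real effect only about 8% of the time, and the rare "significant" results it does produce will tend to overstate the effect, which is the type M (magnitude) error discussed in the episode.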

4. To what extent might requiring increased transparency of information reduce skepticism and/or improve accuracy of research findings?

5. What advice does Ioannidis offer to young scholars in scientific fields? To what extent would you find this advice valuable?