Why does it seem that pundits’ and politicians’ predictions are always right? How can you assess the accuracy of a probabilistic prediction? This week, EconTalk host Russ Roberts sat down with Superforecasting author Philip Tetlock, and their conversation ranged over these topics and more.
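On that second question: the standard yardstick in Tetlock's forecasting tournaments is the Brier score, the mean squared error between a forecaster's stated probabilities and what actually happened. Here's a minimal illustrative sketch (the function name and example numbers are ours, not Tetlock's):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities (0..1)
    and realized outcomes (1 if the event happened, else 0).
    0.0 is a perfect record; a permanent 50/50 hedge scores 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident, well-calibrated forecaster beats a hedger:
print(brier_score([0.9, 0.8, 0.1], [1, 1, 0]))  # 0.02
print(brier_score([0.5, 0.5, 0.5], [1, 1, 0]))  # 0.25
```

Note the asymmetry this creates: bold forecasts are rewarded when right but punished when wrong, which is exactly why vague punditry is so hard to score.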

1. As Tetlock told Russ about his earliest forecasting tournaments on the Soviet Union, he noted how different liberals' and conservatives' predictions were. Still, he explains, none of them foresaw the rise of Mikhail Gorbachev or the collapse of the USSR, describing an “outcome-irrelevant learning situation.” What does he mean by this? What sorts of outcome-irrelevant learning situations have you found yourself in or witnessed? How might they have turned out differently?


2. Tetlock and his team felt that President Obama was underconfident in his decision to go after Osama bin Laden. To what extent do you agree? How does this example illustrate the dangers of both over- and under-confidence?

3. What is the “cloning problem” in group decision-making settings? Have you ever fallen victim to clones? How can you really know whether this problem has been mitigated?

Bonus: Superforecasting was one of Bryan Caplan’s (of EconLog) favorite books of the year. (See here and here.) Have you read it, and do you agree with Bryan?