Cass Sunstein on Infotopia, Information and Decision-Making
May 14 2007

Cass Sunstein of the University of Chicago talks about the ideas in his latest book, Infotopia: How Many Minds Produce Knowledge. What are the best ways to get the information needed to make wise decisions when that information is spread out among an organization's members or a society's citizens? He argues that prediction markets can help both politicians and business leaders make better decisions and discusses the surprising ways they're already being used today. Deliberation, the standard way we often gather information at various kinds of meetings, has some unpleasant biases that hamper its usefulness relative to surveys and incentive-based alternatives.

RELATED EPISODE
Cass Sunstein on #Republic
Author and legal scholar Cass Sunstein of Harvard University talks with EconTalk host Russ Roberts about his latest book, #Republic. Sunstein argues that the internet has encouraged people to frequent informational echo chambers where their views are reinforced and rarely...
RELATED EPISODE
Richard Thaler on Libertarian Paternalism
Richard Thaler of the University of Chicago Graduate School of Business defends the idea of libertarian paternalism--how government might use the insights of behavioral economics to help citizens make better choices. Host Russ Roberts accepts the premise that individuals make...
Explore audio transcript, further reading that will help you delve deeper into this week’s episode, and vigorous conversations in the form of our comments section below.

READER COMMENTS

Brad Hutchings
May 15 2007 at 3:07am

He had me up until the open source software discussion. Not that I don’t have enough EconTalk inspired reading (two Taleb books came from Amazon today, and Moneyball and The Blind Side were recent excellent reads), but I’m going to have to see how he connects open source with wikis in his book. The definition he gives in the interview is of a particular strain of open source software. There is still a tremendous amount of open source software available under so-called BSD licenses, which don’t require giving back. Some (read “the guy posting this”) would argue that viral licenses like the GPL reflect a view of gift giving akin to a 4th grader’s understanding of Christmas. Great for 4th graders, but horrible in the adult world. Additionally, self-anointed open source pioneer Eric Raymond argued a decade ago that the best open source projects were mainly individual efforts until they reached a certain critical mass of users or importance. This keeps the “too many cooks” problem from getting into the DNA of the project.

In the long run though, to say that either commercial software or open source is “better” is folly on every axis but one: solving the coordination problem. It is well known in software development circles that if you have a staff of, say, 100, and your project is moving too slowly, the best way to make it go faster is to fire half your staff; hire more people and expect it to take even longer. The whole concept of open source seems to turn this reality on its head. Millions of eyes or millions of coders will tend to muck things up. Concrete example: compare progress on Internet Explorer to Firefox. Many would say that Firefox is better, but nobody could reasonably argue that it’s twice as good as IE7. Yet Firefox has all the volunteers, while Microsoft just occasionally staffs up to bring IE up to date.

Many people (and companies) see software as art. I wonder how Sunstein would feel about music or plays or scripts composed via wiki. In other words, there are some things that crowds are very good at when information is dispersed, and others that crowds will undoubtedly suck at, such as when focus and uniformity are desired traits.

Richard Sprague
May 19 2007 at 10:28am

I would love to see a serious economist take a closer look at open source software. I’ve seen studies showing that, contrary to its proponents’ claims, even on the big open source projects (like Linux and Firefox) it’s still a very small number of people doing the real work. Even on the bug-finding and bug-fixing side, there is evidence that commercial software does a more systematic job than open source, which is subject to the free-rider problem.

Steve
May 31 2007 at 4:58pm

I am only halfway through this podcast, so I have not gotten to the part about open source. But this topic is one that has intrigued me for some time. I have worked for a software company for 13 years, so I have a vested interest in the success of copyrighted software produced by a firm. But the success of open source software is indisputable. On my most recently purchased computer, I installed OpenOffice rather than MS Office. My blog is powered by WordPress, an open source application.

I would love to hear more discussion on this topic. Russ, have you thought of interviewing Yochai Benkler, the Yale Law professor and author of the influential paper on this topic, Coase’s Penguin? He coined the term Commons-Based Peer Production to describe the open source production process. In particular, a discussion of the incentives at work would be fascinating. Is open source the counterpoint to Adam Smith’s “It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own self-interest”?

Steve

shawn
Jul 8 2007 at 4:23pm

…I’m intrigued that there was no mention of public choice theory in the midst of this conversation. Perhaps I’m using the wrong term, but essentially, the claim from an earlier podcast (which one it was I can’t recall just now…perhaps with Mike Munger?) that ‘median joe and median jane’ always and in every instance get what they want seems to run a bit counter to this.

How does that factor into this discussion? If, essentially, the vast majority of people DIDN’T think that Apple computers would survive, they wouldn’t have come to a consensus that it would, and it would have continued to be a dream. I thought you were going to go there initially, when the discussion turned to which areas this works well in and which it fails in, but it didn’t seem to come up.

Does the ‘prediction market’ answer that situation, in that then ‘median joe and median jane’ just keep their mouths shut, because they don’t know what’s going on, and would be best to stay out of it?




AUDIO TRANSCRIPT

 

Time
Podcast Episode Highlights
0:38 Intro. Knowledge is dispersed, spread out among many people. How can we aggregate that knowledge across people and time? From Infotopia:
First, groups might use the statistical average of the independent judgments of their members. Second, groups might attempt to improve on those independent judgments, by using deliberation and asking for the reasoned exchange of facts, ideas, and opinions. Perhaps members will vote anonymously or otherwise after deliberation has occurred. Third, groups might use the price system and develop some kind of market through which group members or those outside of the group buy and sell on the basis of their judgments. Fourth, groups might enlist the Internet to obtain the information and perspectives of anyone who cares to participate.
Statistical averages. Surowiecki book, Wisdom of Crowds. The statistical average, as if by magic, seems to "get it right"--often more accurately than the experts. Horse racing example. Galton: random fair attendees guess the weight of an ox. Condorcet jury theorem: take a group of people, each of whom is more likely to get the right answer than the wrong answer, and ask them the question. As the size of the group expands, the likelihood of the majority getting it right approaches 100%. Holds for pluralities. This is why surveys tend to do well. But if the group suffers from a systematic bias, or its members are more likely to be wrong than right, then the likelihood of the group getting it right falls to 0% as the size of the group expands. Sunstein tried the experiment, asking colleagues some questions. The theorem worked for the Kentucky Derby, but not for the number of times the Supreme Court had invalidated state and federal law. Limitation of crowd wisdom: you have to know in advance that the group being sampled isn't predisposed toward getting it wrong.
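A minimal sketch of the Condorcet logic, in Python (not from the book; the competence levels and group sizes are illustrative only). With independent voters who are each right with probability p, the chance that a majority is right is a binomial tail sum, which climbs toward 1 as the group grows when p > 0.5 and collapses toward 0 when p < 0.5:

from math import comb

def majority_correct(p, n):
    """Probability that a strict majority of n independent voters,
    each correct with probability p, picks the right answer."""
    k_needed = n // 2 + 1  # smallest strict majority (odd n avoids ties)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_needed, n + 1))

# Competent crowd (p = 0.6): majority accuracy rises toward 1 as n grows.
# Biased crowd (p = 0.4): majority accuracy falls toward 0 instead.
for p in (0.6, 0.4):
    print(p, [round(majority_correct(p, n), 3) for n in (1, 11, 101, 1001)])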
7:02 Congressional staffers and law professors systematically get wrong what proportion of the American work force earns the minimum wage or less. (A: 3% or under.) Does the crowd-wisdom approach only get you the right answer in relatively unimportant matters like the weight of a horse? Is it just a parlor trick? No; it's the mechanism, not the importance of the question. Scott Page, Michigan: random error in large groups. You just want to know whether or not you have confidence in the people being asked. Statistical collections of experts typically outperform non-experts. To show whether the method works you have to specify measurable, factual questions. In the business domain, the trick is to figure out whether there is or isn't systematic bias.
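A small simulation (mine, not from the episode; the true value, noise level, and bias are arbitrary) of why random error washes out while systematic bias does not: independent, unbiased guesses average close to the truth, but a shared bias survives no matter how large the crowd.

import random

random.seed(0)
TRUE_VALUE = 1200  # e.g., the weight of an ox in pounds (hypothetical number)

def crowd_average(n, bias=0.0, noise=300.0):
    """Average of n noisy guesses; `bias` shifts every guess the same way."""
    guesses = [TRUE_VALUE + bias + random.gauss(0, noise) for _ in range(n)]
    return sum(guesses) / n

print("unbiased crowd:", round(crowd_average(10_000)))            # lands near 1200
print("biased crowd:  ", round(crowd_average(10_000, bias=200)))  # stuck near 1400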
12:39 Prediction markets, a variation on statistical averages. Try to get a bunch of people to make a bet on a question businesses are really interested in (will the product be available? will the office open? will the venture be profitable?). The bet can be virtual (just about reputation and honor), for real money (offshore markets, gambling), or for a chance to enter a lottery (get a shirt as a token reward). Voluntary participation, some incentive to get it right. Microsoft, Hewlett-Packard, Eli Lilly. Google's virtual-market experience is that the prices are probabilities, accurately so. Why this apparently magical result? People choose to participate only if they are more than 50% confident they will get it right. "Economic incentive plus the dynamic quality of the prediction market... provides a kind of built-in safeguard against the risk of a systematic bias or pervasive error." Were the Kentucky Derby and Supreme Court survey answers publicized ex post (as an incentive to get it right)? Did Google compare statistical-average results to prediction-market results? No. How many participants? Even thin markets are pretty accurate. Logically, prediction markets should outperform surveys because of the voluntary aspect and the economic incentive to get it right. No financial cost for a wrong prediction. Google is successfully avoiding gambling laws. TradeSports is high-incentive and Google's virtual market is low-incentive, but the analysis suggests that both work well. Puzzle. Maybe reputation is an incentive. Surveys can do just fine even if there are lots of random guessers.
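The episode doesn't say which market mechanism these companies use; one standard design from the prediction-market literature is Robin Hanson's logarithmic market scoring rule, sketched below in Python purely to show how trades move a price that can be read as a probability. The class name, liquidity parameter, and trade size are all made up for illustration.

import math

class BinaryLMSR:
    """Logarithmic market scoring rule for a yes/no question.
    The instantaneous 'yes' price behaves like the market's probability."""

    def __init__(self, b=100.0):
        self.b = b        # liquidity parameter: larger b means prices move more slowly
        self.q_yes = 0.0  # net 'yes' shares the market maker has sold
        self.q_no = 0.0

    def cost(self, q_yes, q_no):
        return self.b * math.log(math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def price_yes(self):
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy_yes(self, shares):
        """Charge the trader the change in the cost function; return that charge."""
        before = self.cost(self.q_yes, self.q_no)
        self.q_yes += shares
        return self.cost(self.q_yes, self.q_no) - before

market = BinaryLMSR()
print(round(market.price_yes(), 2))  # 0.5 before anyone trades
market.buy_yes(50)                   # an optimistic trader buys 'yes' shares
print(round(market.price_yes(), 2))  # implied probability rises (to about 0.62)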
22:42 Internal deliberation. Business or political applications. A survey of CIA employees about Iraq's possessing weapons of mass destruction might have revealed less confidence than the public statements. The internal deliberation process is crucial. Roosevelt, Bush. The Bush administration considered using prediction markets; Robin Hanson's proposal was distorted. Deliberation is a bit oversold. In the private sector and in government it seems to be the preferred way to make decisions. But after deliberation, people sometimes end up taking more extreme versions of the positions they originally held. Colorado example on affirmative action. Affects risk-taking behavior. Group polarization can lead to error. Additional problem: if people suffer from some sort of confusion, the result is worse than garbage-in/garbage-out, because errors are amplified by deliberation. Availability heuristic. Also, groups often emphasize shared information at the expense of critical uniquely-held information. Bay of Pigs invasion. People sometimes keep their mouths shut in group situations. What experimental evidence is there about these results? In the real world, do organizations relying on deliberation take measures to reduce these negative effects? Ex-post revelations may not tell us about the general run. Federal judicial behavior example. Private-sector boards may be contentious and take steps to avoid the polarization phenomenon or the problem of unique information not getting out. Inculcation of norms of disagreement. Eureka problem, crossword puzzles. The goal should be to figure out how to make the deliberation process better. The Reagan administration was successful in this, but how?
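A toy numerical illustration (mine, not from the book; the members, facts, and counts are hypothetical) of the shared-information problem: when each member holds one unique fact favoring option A while everyone shares two facts favoring option B, pooling everything favors A, but a discussion confined to what everyone already knows favors B.

# Each tuple counts (facts favoring option A, facts favoring option B).
shared_facts = (0, 2)                     # known to every member before the meeting
unique_facts = [(1, 0), (1, 0), (1, 0)]   # one distinct pro-A fact per member

def tally(fact_sets):
    """Pick the option supported by more of the facts actually on the table."""
    a = sum(f[0] for f in fact_sets)
    b = sum(f[1] for f in fact_sets)
    return "A" if a > b else "B"

# Full pooling: the three unique pro-A facts outweigh the two shared pro-B facts.
print("all information surfaced:", tally([shared_facts] + unique_facts))   # -> A

# Typical deliberation: talk centers on the facts everyone already holds.
print("only shared facts discussed:", tally([shared_facts]))               # -> B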
34:24 Why are there these problems with deliberation? Judicial finding: when you are on an all-Democrat or all-Republican panel, you know how the outcome will turn out, so you might as well join in--different incentives. Harmless in some cases, but the stakes still aren't zero, because dissenting opinions matter. Republicans and Democrats behave similarly when sitting with opponents. Power of social influences. Supreme Court deliberation process--we envision that the Justices independently weigh the evidence. But do they deliberate and communicate more than we are aware of? Yes: conference, deliberative process. Rehnquist and Roberts--different approaches to deliberation. After an opinion is circulated there is more give-and-take, both written and oral. No evidence on whether they change each other's minds, though. Should we seclude them? Best to have anonymous points of view stated independently first, before the deliberative process. Diversity and strength of will on the Supreme Court are both safeguards. Creativity.
45:41 Hayek's "Use of Knowledge in Society," used in Infotopia. Information is dispersed; each of us has a bit of it; the price system is a marvel because it gathers that information in one place. The prediction market has a Hayekian feature. Hayek had that right. Criticism: "The same sorts of problems that can make deliberation go wrong can make the price system go wrong, though there are correctives that make the errors in the price system less robust, less enduring." Fads and fashions in stocks and commodities, so things can for a time get out of whack. Shiller, Irrational Exuberance, vindicated. Not so worrisome for prediction markets. Traders correct for errors, though fads and bubbles show that markets do make mistakes. Hayek never claimed prices are right or perfect; he was interested in solving the problem of order. "Those who understand Hayek's arguments to suggest that stock markets are always optimally priced... that there is a great deal of truth in what they say and Hayek provides the mechanism to explain it." But errors can persist, even though that wasn't what Hayek was talking about. The thesis of Infotopia is that prediction markets are the most successful aggregative mechanism we have for compiling dispersed information. Should the public sector be pushed further to use prediction markets? Yes. EPA. Is the use of prediction markets growing in the private sector? Yes, but some companies are nervous about talking about it because of disclosure laws.
55:39 Open source software, Wikipedia. How do they aggregate information? Not proprietary. Hacker culture. Whether they will outperform owned resources remains to be seen, given the lack of monetary incentive. Other incentives. Kevin Kelly podcast. Encyclopedia Britannica vs. Wikipedia comparison of errors. Wikipedia is working better than many expected; motivated contributors exceed the vandals. Wikis used in the publishing industry. The CIA's Intellipedia (not publicly accessible). Builds on a Hayekian process. Simon Winchester's book on the voluntary process that created the Oxford English Dictionary. FCC.