|0:38||Intro. Knowledge is dispersed, spread out among many people. How can we aggregate that knowledge across people and time? From Infotopia:|
First, groups might use the statistical average of the independent judgments of their members. Second, groups might attempt to improve on those independent judgments, by using deliberation and asking for the reasoned exchange of facts, ideas, and opinions. Perhaps members will vote anonymously or otherwise after deliberation has occurred. Third, groups might use the price system and develop some kind of market through which group members or those outside of the group buy and sell on the basis of their judgments. Fourth, groups might enlist the Internet to obtain the information and perspectives of anyone who cares to participate.

Statistical averages. Surowiecki book, The Wisdom of Crowds. The statistical average, as if by magic, seems to "get it right"--often more accurately than the experts. Horse racing example. Galton: random fairgoers guess the weight of an ox. Condorcet jury theorem: take a group of people, each of whom is more likely to get the right answer than the wrong answer, and ask them a question. As the size of the group expands, the likelihood of the majority getting it right approaches 100%. Holds for pluralities as well. This is why surveys tend to do well. But if the group suffers from a systematic bias and its members are more likely to be wrong than right, then the likelihood of the group getting it right falls to 0% as the group expands. Sunstein tried the experiment, asking colleagues some questions. The theorem worked for the Kentucky Derby, but not for the number of times the Supreme Court had invalidated state and federal laws. Limitation of crowd wisdom: you have to know in advance that the group being sampled isn't predisposed toward getting it wrong.
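The two halves of the jury theorem can be checked directly with the binomial distribution. A minimal sketch (not from the episode; the group sizes and competence levels are made up for illustration):

```python
# Condorcet jury theorem: each voter independently answers correctly with
# probability p. We compute the exact probability that a strict majority
# of n voters gets the right answer.
from math import comb

def majority_correct(n, p):
    """Probability that more than half of n independent voters,
    each correct with probability p, reach the right answer."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Competent voters (p > 0.5): majority accuracy rises toward 1 as n grows.
print(majority_correct(11, 0.6))
print(majority_correct(101, 0.6))
# Systematic bias (p < 0.5): majority accuracy falls toward 0 as n grows.
print(majority_correct(101, 0.4))
```

Running this shows both effects at once: with 60%-accurate voters the majority beats any individual and keeps improving with group size, while with 40%-accurate voters a larger group is reliably *more* wrong, which is exactly the failure mode in the Supreme Court question.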
|7:02||Congressional staffers and law professors systematically get wrong what proportion of the American work force earns the minimum wage or less. (A: 3% or under.) Does the crowd-wisdom approach only get you the right answer on relatively unimportant matters like the weight of an ox? Is it just a parlor trick? No: it's the mechanism that matters, not the importance of the question. Scott Page, Michigan: random errors cancel out in large groups. You just want to know whether or not you have confidence in the people being asked. Statistical collections of experts typically outperform non-experts. To show whether the method works you have to specify measurable, factual questions. In the business domain, the trick is to figure out whether there is or isn't a systematic bias.|
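The Scott Page point, that random errors cancel while systematic bias does not, is just the law of large numbers, and a quick simulation makes it concrete. A sketch under assumed numbers (Galton's ox did weigh 1,198 lbs, but the noise and bias parameters here are invented):

```python
# Why averaging works: each guess is the true value plus noise. Independent,
# zero-mean noise cancels in the average; a shared (systematic) bias does not.
import random

random.seed(0)
TRUE_WEIGHT = 1198  # lbs, Galton's ox

def crowd_error(n, bias=0.0, noise=150.0):
    """Absolute error of the mean of n guesses with given bias and noise sd."""
    guesses = [TRUE_WEIGHT + bias + random.gauss(0, noise) for _ in range(n)]
    return abs(sum(guesses) / n - TRUE_WEIGHT)

print(crowd_error(10))                 # small crowd: noticeable error
print(crowd_error(10_000))             # large crowd: random errors cancel
print(crowd_error(10_000, bias=200))   # systematic bias: averaging can't fix it
```

The third line is the minimum-wage question in miniature: no matter how many biased respondents you average, the error stays roughly the size of the bias.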
|12:39||Prediction markets, a variation on statistical averages. Get a group of people to bet on questions businesses are really interested in (will the product be available? will the office open? will the venture be profitable?). The bet can be virtual (just about reputation and honor), for real money (offshore markets, gambling), or for a chance to enter a lottery (a shirt as a token reward). Voluntary participation, some incentive to get it right. Microsoft, Hewlett-Packard, Eli Lilly. Google's virtual-market experience is that the prices are probabilities, and accurately so. Why this apparently magical result? People choose to participate only if they are more than 50% confident they will get it right. "Economic incentive plus the dynamic quality of the prediction market... provides a kind of built-in safeguard against the risk of a systematic bias or pervasive error." Were the Kentucky Derby and Supreme Court survey answers publicized ex post (as an incentive to get it right)? Did Google compare statistical averages to its prediction-market results? No. How many participants? Even thin markets are pretty accurate. Logically, prediction markets should outperform surveys because of the voluntary aspect: there is an economic incentive to get it right. But in Google's market there is no financial cost for a wrong prediction--Google structures it that way to avoid gambling laws. TradeSports is high-incentive; Google's virtual market is low-incentive; yet the analysis suggests both work well. A puzzle. Maybe reputation is the incentive. And surveys can do just fine even if there are lots of random guessers.|
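One concrete way to see how "prices are probabilities" can come about is Robin Hanson's logarithmic market scoring rule (LMSR), a standard automated market maker for prediction markets. The episode doesn't say which mechanism Google used, so this is an illustrative sketch only; the liquidity parameter `b` and the trade sizes are made up:

```python
# LMSR market maker: traders buy shares in outcomes; the implied prices
# always sum to 1, so they can be read as the market's probability estimate.
from math import exp, log

def lmsr_prices(quantities, b=100.0):
    """Implied price of each outcome given outstanding shares per outcome."""
    weights = [exp(q / b) for q in quantities]
    total = sum(weights)
    return [w / total for w in weights]

def lmsr_cost(quantities, b=100.0):
    """Market maker's cost function; a trade costs C(after) - C(before)."""
    return b * log(sum(exp(q / b) for q in quantities))

# Two-outcome market ("will the product ship on time?"): [yes, no] shares.
before = [0.0, 0.0]            # prices start at 50/50
after = [50.0, 0.0]            # a confident trader buys 50 "yes" shares
print(lmsr_prices(after))      # "yes" price moves above 0.5
print(lmsr_cost(after) - lmsr_cost(before))  # what the trade cost the buyer
```

The dynamic, self-correcting quality Sunstein describes shows up here: anyone who thinks the current price is wrong can profit by trading against it, which moves the price back toward their estimate.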
|22:42||Internal deliberation. Business or political applications. A survey of CIA employees about Iraq's possessing weapons of mass destruction might have shown less confidence than the public statements did. The internal deliberation process is crucial. Roosevelt, Bush. The Bush administration considered using prediction markets; Robin Hanson's proposal was distorted. Deliberation is a bit oversold. In the private sector and in government it seems to be the preferred way to make decisions. But after deliberation, people sometimes end up taking more extreme versions of their original positions. Colorado example on affirmative action. Affects risk-taking behavior. Group polarization can lead to error. Additional problem: if people suffer from some sort of confusion, deliberation is worse than garbage-in/garbage-out--the errors are amplified. Availability heuristic. Also, groups often emphasize shared information at the expense of critical, uniquely held information. Bay of Pigs invasion. People sometimes keep their mouths shut in group situations. What experimental evidence is there for these results? In the real world, do organizations that rely on deliberation take measures to reduce these negative effects? Ex post revelations may not tell us about the general run. Federal judicial behavior example. Private-sector boards may be contentious and take steps to avoid the polarization phenomenon or the problem of unique information not getting out. Inculcation of norms of disagreement. Eureka problems, crossword puzzles. The goal should be to figure out how to make the deliberation process better. The Reagan administration was successful at this--but how?|
|34:24||Why do these problems with deliberation arise? Judicial finding: when you are on an all-Democrat or all-Republican panel, you know how the outcome will turn out, so you might as well join in--different incentives. Harmless in some cases, but the stakes still aren't zero because dissenting opinions matter. Republicans and Democrats behave similarly when sitting with opponents. The power of social influences. The Supreme Court's deliberation process--we envision the Justices independently weighing the evidence. But do they deliberate and communicate more than we are aware of? Yes: the Conference, a deliberative process. Rehnquist and Roberts--different approaches to deliberation. After an opinion is circulated there is more give-and-take, both written and oral. No evidence on whether they change each other's minds, though. Should we seclude them? Best to have anonymous points of view stated independently first, before the deliberative process. Diversity and strength of will on the Supreme Court are both safeguards. Creativity.|
|45:41||Hayek's "The Use of Knowledge in Society," used in Infotopia. Information is dispersed; each of us has a bit of it. The price system is a marvel because it gathers that information in one place. Prediction markets have a Hayekian feature; Hayek had that right. Criticism: "The same sorts of problems that can make deliberation go wrong can make the price system go wrong, though there are correctives that make the errors in the price system less robust, less enduring." Fads and fashions in stocks and commodities, so things can for a time get out of whack. Shiller, Irrational Exuberance, vindicated. Not so worrisome for prediction markets: traders correct for errors. Still, fads and bubbles do produce mistakes. Hayek never claimed prices are right or perfect; he was interested in solving the problem of order. "Those who understand Hayek's arguments to suggest that stock markets are always optimally priced... there is a great deal of truth in what they say, and Hayek provides the mechanism to explain it." But errors can persist, even though that wasn't what Hayek was talking about. Thesis of Infotopia: prediction markets are the most successful aggregative mechanism we have for compiling dispersed information. Should the public sector be pushed further to use prediction markets? Yes. EPA. Is use of prediction markets growing in the private sector? Yes, but some companies are nervous about talking about it because of disclosure laws.|
|55:39||Open-source software, Wikipedia. How do they aggregate information? Not proprietary. Hacker culture. Whether they will outperform owned resources remains to be seen, given the lack of monetary incentive; but there are other incentives. Kevin Kelly podcast. Encyclopedia Britannica vs. Wikipedia comparison of errors. Wikipedia is working better than many expected: motivated contributors outnumber the vandals. Wikis used in the publishing industry. CIA's Intellipedia (not publicly accessible). Builds on a Hayekian process. Simon Winchester's book on the voluntary process that created the Oxford English Dictionary. FCC.|