Department of Marketing
University of Pennsylvania
Distilling more wisdom from a crowd
In many situations, from economists predicting unemployment rates to chemists judging fuel safety, individuals have differing opinions or predictions. We consider the crowd-wisdom problem of aggregating the judgments of multiple individuals on a single question, when no outside information about their competence is available. Many standard methods select the most popular answer, after correcting for variations in confidence. Using a formal model, we prove that any such method can fail even if based on perfect Bayesian estimates of individual confidence. Our model suggests a new method for aggregating opinions: select the answer that is more popular than people predict. We conduct empirical tests in which respondents give both their own answer to some question and their prediction about the distribution of answers given by other people, and show that our new method outperforms majority and confidence-weighted voting in a range of domains including geography and trivia questions, laypeople and professionals judging art prices, and dermatologists evaluating skin lesions. We show how to use these ideas to improve machine learning models for aggregating crowd wisdom, and, in the context of a cognitive reflection test, how to apply these ideas when the space of possible answers is not known in advance.
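The "more popular than people predict" rule described in the abstract can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the speaker's implementation: it assumes each respondent supplies both an answer and a predicted distribution over everyone's answers, and it picks the option whose actual vote share most exceeds its average predicted share.

```python
# Minimal sketch of a "surprisingly popular" vote-aggregation rule.
# Assumption (not specified in the abstract): predictions come in as one
# dict per respondent mapping each option to a predicted vote fraction.

from collections import Counter

def surprisingly_popular(answers, predictions):
    """Return the answer whose actual vote share most exceeds its
    predicted vote share.

    answers: list of answers, one per respondent.
    predictions: list of dicts, one per respondent, each mapping an
        answer option to that respondent's predicted fraction of
        people choosing it.
    """
    n = len(answers)
    actual = {a: c / n for a, c in Counter(answers).items()}
    options = set(actual)
    for p in predictions:
        options.update(p)
    # Average the respondents' predicted vote shares per option.
    predicted = {
        opt: sum(p.get(opt, 0.0) for p in predictions) / len(predictions)
        for opt in options
    }
    # Largest (actual - predicted) gap wins, even if it is a minority view.
    return max(options, key=lambda opt: actual.get(opt, 0.0) - predicted[opt])

# Toy example: "Is Philadelphia the capital of Pennsylvania?"
# A majority answers yes, but both groups predict an even larger
# yes-share, so "no" is surprisingly popular and gets selected.
votes = ["yes"] * 6 + ["no"] * 4
preds = [{"yes": 0.8, "no": 0.2}] * 6 + [{"yes": 0.7, "no": 0.3}] * 4
print(surprisingly_popular(votes, preds))  # prints "no"
```

Here simple majority voting would return "yes" (60% of votes), while the gap rule returns "no" because its actual share (0.40) exceeds its average predicted share (0.24).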
A pizza lunch will be served at 11:45am. The seminar will begin at 12:00pm.