The Right Way to Use the Wisdom of Crowds
Management teams are responsible for making sense of complex questions. Maybe it’s estimating how much a market will grow next year, or finding the best strategy to beat a competitor. One popular approach for navigating these questions is turning to the “wisdom of crowds” – asking many people for their opinions and suggestions, and then combining them to form the best overall decision. Evidence suggests that the combination of multiple, independent judgments is often more accurate than even an expert’s individual judgment.
But our research identifies a hidden cost to this approach. When someone has already formed an opinion, they’re far less likely to be receptive to the opinions of others – and this can lead to evaluating other people and their ideas more negatively. Fortunately, our work also suggests a few ways to minimize this cost.
The “wisdom of crowds” refers to the result of a very specific process, where independent judgments are statistically combined (e.g., by taking the mean or the median) to achieve a final judgment with the greatest accuracy. In practice, however, people rarely follow strict statistical guidelines when combining their own estimates with those of other people, and additional factors often lead people to assess some judgments more positively than others. For example, should the boss’s estimate count for more simply because of status? Shouldn’t an expert’s opinion count more than a novice’s?
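The statistical combination described above can be sketched in a few lines. This is a minimal illustration, not taken from the studies; the estimates are made-up numbers standing in for five team members’ independent judgments:

```python
from statistics import mean, median

# Hypothetical independent estimates (in dollars) of the same quantity,
# one per team member. Values are illustrative only.
estimates = [180_000, 210_000, 250_000, 195_000, 600_000]

# The "wisdom of crowds" combines independent judgments statistically.
crowd_mean = mean(estimates)      # pulled upward by the 600,000 outlier
crowd_median = median(estimates)  # robust to a single extreme estimate
```

Note how the median (210,000) shrugs off the one extreme guess, while the mean (287,000) is dragged toward it – one reason the choice of combination rule matters.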
In our research we find another factor that seems to impact how we evaluate other people’s opinions: when someone forms his or her own opinion. As team leaders, we started to notice that a common source of team friction came from members committing to their own ideas before the team as a whole agreed to a course of action. We wondered whether a simple matter of workflow ordering – forming a judgment before evaluating someone else’s judgments – was causing tension.
To test this question, we conducted an experiment where we randomly assigned the order in which individuals formed an estimate of their own versus evaluated the estimate of another. We asked 424 parents in the U.S. to estimate the total cost of raising a child from birth to age 18. They also evaluated another person’s estimate – which we framed as that of “another parent.” In fact, it was the consensus estimate created by financial experts.
Even though the estimate being evaluated was always exactly the same, we found that parents who had made their own estimates first evaluated the other person’s estimate more negatively. Parents who first made their own estimate were 22% less likely to think that the other estimate was at least “moderately likely to be correct” than were parents who evaluated the other estimate before making their own.
We wondered if this effect varied among different types of people. In this study and the others we conducted, we looked at whether men responded differently than women, whether older individuals responded differently than younger individuals, and whether experts responded differently than non-experts. None of these differences mattered. Regardless of their gender, age, or expertise, decision makers who first formed an opinion of their own were more likely to negatively evaluate another’s opinion.
In a second study, we asked 164 U.S. national security experts to assess a hostage-rescue strategy and evaluate what “another national security expert” proposed. Unlike the cost-estimation question of our first study, this question was not quantitative, nor did it have a clear right answer. Despite these differences, and despite the fact that the individuals in this case were experts, the effects of forming an opinion before evaluating someone else’s were the same. Those who first formed their own opinion offered systematically lower evaluations of a peer’s strategy, compared to those who evaluated the peer’s strategy before forming their own opinion.
We also asked participants how intelligent or ethical they perceived the other person to be, based on their recommendation. Even though the actual recommendations were exactly the same across our ordering conditions, those who first formed their own opinion made more negative inferences about the peer than those who formed their opinion later.
Why do people penalize the judgments of others after forming their own opinion? The key factor seemed to be how far someone’s estimate diverged from the other person’s. When we asked participants in these two studies to simply look at someone’s judgment and form an opinion about it, participants’ own estimates were pulled toward the estimate they were considering, a phenomenon often referred to as “anchoring.” By contrast, when participants made their own estimate independently, they were more likely to disagree with the estimate they had to evaluate later, viewing it as too different from their own, and thus less likely to be correct.
While disagreement is not necessarily a bad thing – combining diverse judgments and estimates underpins the wisdom of crowds – in order to be effectively leveraged, it first has to be correctly interpreted. In most cases, disagreement should signal that either or both parties are likely to be wrong. Our data suggest the problem is that people interpret disagreement in a self-serving way, as signaling that their estimate is right and the other party is wrong.
We ran a final study to test this interpretation. We asked 401 U.S. adults to form a judgment before seeing the judgment of another participant selected at random from a prior study. Some participants saw peer judgments that were in close agreement with their own, and others saw estimates that differed dramatically. We then asked them to evaluate the quality of both judgments. We found that, as disagreement increased, people evaluated others’ judgments more harshly – while their evaluations of their own judgments did not budge. Our participants interpreted disagreement to mean that the other person was wrong, but not them.
Across our studies we found that forming opinions before evaluating those offered by others (compared to evaluating first and forming one’s own opinion later) carried social costs – participants thought less of the other person’s estimates and ideas, and, in some cases, thought the other person was less ethical and intelligent.
What should a manager do if she wants to get to better judgments and minimize the costs that arise from people getting enamored with their own opinions? The evidence is strong that to maximize accuracy, team members should form independent opinions before coming together to decide as a group.
But our findings suggest that groups of decision-makers should also pre-commit to a strategy for combining their opinions. The specific strategy will depend on the type of question a team faces. However, committing to an aggregation strategy ahead of time can protect teams from the negative social consequences of evaluating each other’s judgments in light of their own previously formed opinions.
Teams facing quantifiable questions should aim for strategies that, as much as possible, remove human judgment from the aggregation process. A team estimating how much a market will grow faces a quantifiable question; they should pre-determine an algorithm (such as a simple average or median) for combining the opinions of different team members.
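A pre-determined algorithm of this kind can be as simple as a function agreed on before anyone sees the individual numbers. The function name and the growth figures below are hypothetical, meant only to show the idea:

```python
from statistics import median

def precommitted_aggregate(estimates, rule=median):
    """Combine independent estimates using a rule the team committed to
    *before* collecting anyone's numbers (here, the median by default)."""
    return rule(estimates)

# Hypothetical market-growth estimates from four team members.
growth_estimates = [0.04, 0.06, 0.05, 0.12]
final = precommitted_aggregate(growth_estimates)  # median of the four guesses
```

Because the rule is fixed in advance, no one has to judge whose estimate deserves more weight after the fact – which is exactly the evaluation step our studies show becomes biased once people have formed their own opinion.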
Teams facing non-quantifiable questions will have to rely on human aggregation in some form. For these questions, teams should prevent the person responsible for the final judgment from forming an opinion of her own before seeing the opinions of others. This is not always easy. By the time managers evaluate their subordinates’ ideas, they often have already formed their own opinion.
This highlights an important point: committing to an aggregation strategy is as much a structural matter as an in-the-moment decision. Unbiased aggregation requires structuring workflows so that those responsible for combining opinions do not first form their own, or at least work to not let that opinion undermine the decision-making process.
At the individual level, team members should reframe how they think about disagreement. Our studies suggest that many people interpret disagreement to mean that someone else is incorrect. With a concerted effort toward intellectual humility, however, this does not have to be the case. For teams, disagreement should be thought of as valuable information. Thinking of it as signaling value, rather than as a reason to derogate, may be the single best way to offset the costs of turning to the crowd to answer complex questions.
Brad DeWees is a doctoral candidate at Harvard University and an active-duty military officer. His research focuses on how social processes affect judgment and decision-making, and as an officer he serves as a Tactical Air Control Party (TACP) in the Air Force. Follow him on LinkedIn or Twitter.
Julia A. Minson is an assistant professor of public policy at the Harvard Kennedy School of Government. She is a social psychologist with research interests in group judgment and decision-making, negotiations, and social influence.