In 1906, Francis Galton visited a country fair in Plymouth, England, where he encountered a peculiar contest. Nearly 800 fairgoers had paid sixpence each to guess the weight of an ox. Galton, a statistician obsessed with measurement and human capability, collected the tickets after the contest ended. He expected the guesses to reveal human ignorance—proof that ordinary people knew nothing about livestock. Instead, he discovered something that would puzzle researchers for the next century: when he calculated the mean of the 787 legible guesses, it came to 1,197 pounds. The ox, after butchering, weighed 1,198 pounds.
The crowd had been nearly perfect. Most individuals had been wrong, some wildly so, but together they had beaten the cattle experts.
The Mathematics of Diverse Error
The reason Galton's crowd succeeded wasn't magic or collective consciousness. It was mathematics. When people make independent guesses about something measurable, their errors tend to cancel out. Some guess too high, others too low. If you have enough diversity in those guesses—different perspectives, different mental models, different biases—the average drifts toward accuracy.
This only works under specific conditions. The guesses must be truly independent. People need different information or approaches to the problem. And you need a way to aggregate all those opinions into a single answer. Break any of these rules and the wisdom evaporates.
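A minimal simulation makes the cancellation concrete. The sketch below assumes a hypothetical crowd whose members each carry a personal bias plus random noise; the weight, the error sizes, and the crowd size are illustrative choices, not Galton's data.

```python
import random

random.seed(42)

TRUE_WEIGHT = 1198          # illustrative "ox weight", not Galton's record
N_GUESSERS = 800            # illustrative crowd size

def biased_guess(true_value):
    """One guesser: a personal bias plus random noise.
    Biases differ across people, so errors point in different directions."""
    personal_bias = random.gauss(0, 60)    # some people run high, some low
    noise = random.gauss(0, 40)            # ordinary estimation error
    return true_value + personal_bias + noise

guesses = [biased_guess(TRUE_WEIGHT) for _ in range(N_GUESSERS)]

crowd_mean = sum(guesses) / len(guesses)
worst_individual_error = max(abs(g - TRUE_WEIGHT) for g in guesses)

print(f"crowd mean:            {crowd_mean:.1f}")
print(f"crowd error:           {abs(crowd_mean - TRUE_WEIGHT):.1f}")
print(f"worst individual error: {worst_individual_error:.1f}")
```

If every guesser instead shared the same bias, say because everyone heard the same confident number shouted from the stage, the average would inherit that bias no matter how large the crowd grew. That failure mode is where the rest of this piece keeps returning.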
Philip Tetlock spent two decades testing this idea on a larger scale. His Good Judgment Project, funded by US intelligence agencies, pitted thousands of ordinary people against professional analysts. The question: who could better predict geopolitical events? The amateurs, organized into teams and given basic training in probabilistic thinking, beat the intelligence analysts by 30%. The analysts had security clearances and access to classified information. The amateurs had Google and each other.
When Experts Still Win
This doesn't mean experts are useless. The type of problem matters enormously.
For problems with clear rules and narrow outcomes—diagnosing a rare disease, solving a complex engineering challenge, playing chess—experts dominate. Their years of training have built mental libraries of patterns. They can eliminate bad options faster and recognize solutions others miss.
But for problems with wide ranges of possible outcomes and no simple rules—election results, economic trends, whether a new product will succeed—crowds consistently win. These problems don't reward deep expertise in one domain. They reward the ability to synthesize different kinds of information, which diverse groups do naturally.
Tetlock found another pattern in his research. He divided experts into "hedgehogs" and "foxes," borrowing from the Greek poet Archilochus. Hedgehogs know one big thing. They have a grand theory and apply it to everything. Foxes know many small things. They stitch together insights from different domains. The foxes predicted better than the hedgehogs, even though the hedgehogs were often more famous and more confident. A single person thinking like a crowd—drawing on diverse mental models—beats a person wedded to one framework.
The Iowa Electronic Markets Paradox
Since 1988, the University of Iowa has run prediction markets where people bet small amounts on election outcomes. In head-to-head comparisons with 964 professional polls, the market's forecast was closer to the final result 74% of the time, with an average error of just 1.13 percentage points in presidential elections.
The traders aren't political scientists. Many aren't even particularly political. But they have something polls lack: they're constantly updating based on new information, and they're putting their money where their mouths are. Every trade reflects someone's genuine belief about probability. The price aggregates thousands of these beliefs, weighted by conviction.
This creates a paradox. Professional pollsters spend millions on methodology. They worry about sample sizes, demographic weighting, question phrasing. The Iowa markets just let people trade. Yet the markets win.
The difference isn't that traders are smarter. It's that the market captures something polls miss: the collective processing of all available information, including the polls themselves. When a new poll drops, traders adjust. When a candidate makes a gaffe, traders adjust. The market becomes a meta-analysis that runs in real time.
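The Iowa markets let traders buy and sell contracts from one another, but the aggregation idea is easiest to see in a simpler automated market maker, Hanson's logarithmic market scoring rule. The sketch below is that simpler mechanism, not the IEM's actual one: each trade pushes the posted probability toward the trader's belief, and a larger bet, which puts more money at risk, pushes it further.

```python
import math

class LMSRMarket:
    """Toy automated market maker (Hanson's LMSR) for a yes/no event.
    An illustration of prices aggregating beliefs weighted by money at
    risk; the Iowa markets use a different trading mechanism."""

    def __init__(self, liquidity=100.0):
        self.b = liquidity          # higher b = prices move less per trade
        self.q_yes = 0.0            # outstanding YES shares
        self.q_no = 0.0             # outstanding NO shares

    def price_yes(self):
        """Current implied probability of YES."""
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def _cost(self):
        return self.b * math.log(math.exp(self.q_yes / self.b) +
                                 math.exp(self.q_no / self.b))

    def buy_yes(self, shares):
        """Buy YES shares; returns the cost, and the price rises."""
        before = self._cost()
        self.q_yes += shares
        return self._cost() - before

market = LMSRMarket()
print(f"start:            {market.price_yes():.2f}")
market.buy_yes(20)      # a modest bet nudges the price
print(f"after small bet:  {market.price_yes():.2f}")
market.buy_yes(80)      # a conviction-sized bet moves it much further
print(f"after big bet:    {market.price_yes():.2f}")
```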
The Expert Squeeze
Michael Mauboussin, an investment strategist, calls this phenomenon "the expert squeeze." As networks improve at harnessing collective intelligence and computers get better at pattern recognition, the zone where human experts add unique value keeps shrinking.
Google discovered this in hiring. For years, they relied on expert interviews to select candidates. Then they analyzed the data. Academic credentials barely correlated with job performance. Interview assessments were inconsistent. So they built algorithms based on 300-question surveys answered by existing employees. The crowd of current workers predicted new hire success better than expert interviewers.
This doesn't eliminate the need for experts. It changes what we need them for. Experts remain essential for three things: designing the systems that might replace them, making strategic decisions that require novel combinations of ideas, and handling the psychological dimensions of decision-making. You still want an expert leading your organization. You just might not want them making predictions alone.
Why Independence Dies
The wisdom of crowds collapses the moment people start paying too much attention to each other. This is the mechanism behind bubbles, panics, and collective delusions.
In a 2024 study of 260 general practitioners, researchers found that independent diagnoses aggregated together significantly improved accuracy. But when doctors could see each other's opinions first, accuracy dropped. The second doctor anchored to the first doctor's assessment. The diversity of thought—the engine of crowd wisdom—disappeared.
This is why asking for estimates in a meeting rarely works. The first person to speak becomes an anchor. Everyone else adjusts from that number rather than generating truly independent judgments. You need to collect opinions separately, then aggregate them.
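In practice that can be as simple as a private ballot followed by a robust average. The helper below is a hypothetical sketch of that two-step procedure; the trimming fraction and the example numbers are illustrative, not drawn from any study cited here.

```python
import statistics

def aggregate_estimates(private_estimates, trim_fraction=0.1):
    """Combine estimates that were collected separately (written down
    before anyone spoke). The median shrugs off wild outliers; a trimmed
    mean is a common alternative. trim_fraction is an illustrative default."""
    ordered = sorted(private_estimates)
    k = int(len(ordered) * trim_fraction)
    trimmed = ordered[k:len(ordered) - k] if k else ordered
    return {
        "median": statistics.median(ordered),
        "trimmed_mean": statistics.fmean(trimmed),
    }

# e.g. project-cost estimates gathered by private ballot before discussion
print(aggregate_estimates([120, 95, 140, 110, 400, 105, 130, 98]))
```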
The same breakdown happens in prediction markets when everyone starts trading on the same information or following the same gurus. The market stops being a crowd of independent thinkers and becomes an echo chamber. Prices swing wildly. Bubbles form. The collective intelligence evaporates.
The Superforecaster Exception
Tetlock's research revealed one more twist. Not all crowd members contribute equally. Some people are consistently better at probabilistic prediction. These "superforecasters" share certain traits: they update their beliefs frequently based on new evidence, they think in terms of probabilities rather than certainties, and they're comfortable with nuance and contradiction.
When you can identify these people through track records, weighting their opinions more heavily improves the crowd's accuracy even further. A select crowd of proven forecasters beats a random crowd, which beats individual experts.
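One simple way to do that weighting, sketched below under assumed numbers rather than Tetlock's actual procedure, is to score each forecaster's past probability calls with the Brier score (the mean squared error of the probabilities) and weight current opinions by the inverse of that score.

```python
def brier_score(forecasts):
    """Mean squared error of probability forecasts.
    Each item is (predicted_probability, outcome) with outcome 0 or 1.
    Lower is better; constant 50/50 guessing earns 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

def weighted_crowd_forecast(current_predictions, track_records):
    """Pool current probability estimates, weighting each forecaster by
    the inverse of their historical Brier score (one simple scheme;
    the names and numbers here are illustrative)."""
    weights = {name: 1.0 / max(brier_score(hist), 1e-6)
               for name, hist in track_records.items()}
    total = sum(weights[name] for name in current_predictions)
    return sum(weights[name] * p
               for name, p in current_predictions.items()) / total

track_records = {
    "ana":   [(0.9, 1), (0.2, 0), (0.7, 1)],   # well calibrated
    "boris": [(0.5, 1), (0.5, 0), (0.5, 1)],   # hedges everything at 50/50
}
current = {"ana": 0.8, "boris": 0.5}
print(f"weighted forecast: {weighted_crowd_forecast(current, track_records):.2f}")
```

The pooled number lands much closer to the forecaster with the better track record, which is the point: the crowd still speaks, but proven judgment speaks louder.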
This suggests the future isn't crowds replacing experts. It's crowds of experts—people with proven judgment, thinking independently, having their diverse views aggregated systematically. The wisdom isn't in eliminating expertise. It's in preventing any single expert's biases from dominating, and in harnessing the error-canceling power of diverse perspectives.
Galton's ox weighed 1,198 pounds. The crowd said 1,197. They were closer than the butchers, closer than the farmers, closer than Galton himself. Not because any individual was brilliant, but because their errors pointed in different directions. That's not a replacement for expertise. It's a mathematical fact about how independent errors behave when you add them up. The question isn't whether crowds are smarter than experts. It's why we ever thought a single expert could beat the math.