“Prediction is the essence of intelligence.” But monitoring and anticipating the severity of infectious diseases can be a daunting task, especially when data is sparse, untimely or unfamiliar, as was the case with COVID-19.
To reinforce its predictive arsenal and tackle uncertainty, the Johns Hopkins Center for Health Security partnered with Hypermind to experiment with a novel method that had previously been used by the US intelligence community to predict geopolitical events: crowd forecasting.
This approach involves asking precise questions with verifiable answers to a large group of people, gathering their probability forecasts and optimally combining them into a crowd forecast that often outperforms even the most accurate individual contributors.
Crowd forecasting draws its power from diversity and (weighted) averaging: bias and noise in human judgment are inevitable, but combining many flawed judgments lets objective knowledge add up while subjective noise cancels out.
To enhance the accuracy of combined forecasts, Hypermind follows a four-step aggregation algorithm: weighting, culling, averaging and extremizing.
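The four steps can be sketched in code. This is an illustrative reconstruction under stated assumptions, not Hypermind's actual algorithm: the culling fraction, the extremizing exponent, and the source of the per-forecaster weights are all hypothetical choices made here for demonstration.

```python
import numpy as np

def aggregate(forecasts, weights, keep_fraction=0.7, exponent=2.5):
    """Illustrative crowd aggregation: weight, cull, average, extremize.

    forecasts: individual probability estimates in (0, 1).
    weights:   hypothetical per-forecaster skill weights (assumed given).
    keep_fraction and exponent are illustrative parameters only.
    """
    forecasts = np.asarray(forecasts, dtype=float)
    weights = np.asarray(weights, dtype=float)

    # 1. Weighting: trust better forecasters more (weights assumed given).
    # 2. Culling: keep only the highest-weighted forecasters.
    order = np.argsort(weights)[::-1]
    kept = order[: max(1, int(len(order) * keep_fraction))]

    # 3. Averaging: weighted mean of the surviving forecasts.
    p = np.average(forecasts[kept], weights=weights[kept])

    # 4. Extremizing: push the average away from 0.5 to counteract
    #    the excess caution typical of averaged probabilities.
    return p**exponent / (p**exponent + (1 - p)**exponent)

print(round(aggregate([0.6, 0.7, 0.65, 0.4], [3.0, 2.0, 1.5, 0.5]), 3))  # 0.808
```

Note how the final step moves the weighted mean of 0.64 out to roughly 0.81: averaging many forecasts tends to pull probabilities toward 0.5, and extremizing compensates for that.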
The Johns Hopkins project recruited a diverse international crowd of 500+ volunteers to forecast over 15 months the severity of outbreaks for 19 infectious diseases. Participants were mostly public health experts and other medical professionals, with some skilled generalist forecasters from Hypermind’s prediction market mixed in.
Following the Diversity Theorem, crowd forecasting works best when it combines diversity (different individuals with different backgrounds) and expertise (with both skilled forecasters and medical professionals).
Crowd forecasting demonstrates how probability estimates can inform public-policy decision-making with the help of the crowd, as long as participants’ expertise is sufficient and they are provided with quality data.
Prediction markets and prediction polls are two vetted crowd-forecasting methods.
A prediction market is a competitive betting game designed to tap the wisdom of crowds to predict future events. It feels like a financial market where event outcomes are traded instead of commodities. The trades are conducted peer-to-peer among the participants, with no bookmaker or other intermediaries. An outcome’s trading price, the point where buyers and sellers agree to disagree, imputes its probability of occurrence.
A prediction poll instead solicits individual predictions, rewarding the most accurate, then aggregates them with sophisticated weighted averages. The aggregation algorithm is designed to make explicit what markets do implicitly: give more weight to the forecasts of more recent, more accurate and more active forecasters.
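One possible form of such a weighting function is sketched below. The text only says that more recent, more accurate and more active forecasters get more weight; the specific decay, scoring, and scaling choices here are assumptions for illustration.

```python
import math

def forecaster_weight(days_since_update, past_brier, n_forecasts,
                      half_life=10.0):
    """Hypothetical per-forecaster weight from recency, accuracy, activity.

    days_since_update: age of the forecaster's latest prediction.
    past_brier:        historical Brier score in [0, 1]; lower is better.
    n_forecasts:       number of questions the forecaster has answered.
    All functional forms below are illustrative assumptions.
    """
    recency = 0.5 ** (days_since_update / half_life)  # exponential decay
    accuracy = 1.0 - past_brier                       # reward low Brier scores
    activity = math.log1p(n_forecasts)                # diminishing returns
    return recency * accuracy * activity
```

Under this scheme, a forecaster who updated yesterday with a strong track record over many questions would outweigh one whose last, mediocre forecast is months old, which is the behavior the poll's aggregation is meant to simulate.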
The combined human intelligence that powers prediction markets and polls shines in situations where AI falls short: when relevant data is not structured neatly into a database’s rows and columns, but is instead dispersed among many minds in a rich, qualitative way. In those situations, purely statistical methods fail because they have little to process, or worse, because the available databases have become irrelevant. Crowd forecasting is therefore most useful on new problems, where past data could be misleading.
Formulated by the American economist Scott Page, the diversity prediction theorem states that collective error equals average individual error minus the diversity of the estimates.
The obvious half of this equation states that smaller individual errors will reduce the collective error. This is the traditional call for subject matter experts. But the second half is less intuitive: the more diverse the individual estimates are, the more accurate the collective estimate will be.
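The theorem can be verified numerically. In the standard formulation, all three terms are squared deviations: the crowd's error is the squared distance of the average estimate from the truth, and diversity is the variance of the individual estimates around that average. The case-count scenario below is invented for illustration.

```python
import statistics

def diversity_decomposition(estimates, truth):
    """Numerically verify Scott Page's diversity prediction theorem:
    crowd error = average individual error - diversity.
    All terms are squared deviations."""
    crowd = statistics.fmean(estimates)
    crowd_error = (crowd - truth) ** 2
    avg_individual_error = statistics.fmean((e - truth) ** 2 for e in estimates)
    diversity = statistics.fmean((e - crowd) ** 2 for e in estimates)
    return crowd_error, avg_individual_error, diversity

# Three forecasters guess a case count whose true value is 100.
ce, aie, div = diversity_decomposition([80, 110, 125], truth=100)
print(ce, aie, div)  # 25.0 375.0 350.0  (and 375 - 350 = 25)
```

Here every individual is off by at least 10, yet because their errors point in different directions, the crowd average of 105 misses the truth by only 5; the diversity term absorbs most of the individual error.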
Diversity and expertise are therefore complementary and, to a degree, interchangeable: when few experts are available, you can bring in more and different minds to obtain more reliable forecasts.