*This article originally appeared in the Wall Street Journal on December 4, 2015.*
SIX YEARS AFTER THE SECOND WORLD WAR, the probability of a third seemed high and rising. The Soviets had the bomb; Europe was riddled with tripwires. Yugoslavia was a major concern. Its Communist government had broken with the Soviet bloc three years earlier, and relations were worsening. Would the Soviets invade? In March 1951, information generated by American intelligence agencies was gathered and distilled. The result was National Intelligence Estimate 29-51, “Probability of an Invasion of Yugoslavia in 1951,” which concluded that a Soviet assault was “a serious possibility.”
The report was read in the State Department. Policy planners got busy.
But one day, Sherman Kent, a legendary CIA intelligence analyst, had a casual chat with an official from the State Department. Say, the official asked Kent, what kind of odds did you have in mind when you wrote “serious possibility”?
Kent said he was pessimistic. He thought there was about a 65% chance of an invasion. The official was jolted. He had assumed “a serious possibility” meant a much lower probability.
Kent was jolted, too. He asked his colleagues what they thought “serious possibility” meant in numerical terms. The answers were scattered across the probability scale: the highest put the likelihood of invasion at 80%, the lowest at 20%. Kent was stunned. But if you think a misunderstanding like that is freakishly unusual, think again.
The only thing unusual about the fog of confusion surrounding NIE 29-51 is that it was identified.
Linguistic fog isn’t the only kind that bedevils forecasting. There’s also fog surrounding judgments about the accuracy of forecasting; even judgments about single forecasts may be muddled. When an online prediction market forecast a 75% probability that the Supreme Court would strike down Obamacare, and the court upheld the legislation, the New York Times’ Pulitzer Prize-winning journalist David Leonhardt declared the forecast “wrong.” But that is itself wrong. After all, the forecast had said there was a 25% chance the law would be upheld. A single probabilistic forecast simply cannot be judged this way. Accuracy only reveals itself across many forecasts: events called at 75% should come to pass roughly 75% of the time, and no one outcome can settle whether they do.
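To make the point concrete, here is a minimal sketch of how a set of probabilistic forecasts can be scored together. It uses the Brier score, a standard scoring rule (not anything the prediction market itself used); the forecasts and outcomes are invented purely for illustration.

```python
# A minimal sketch: scoring probabilistic forecasts as a set with the
# Brier score. All forecasts and outcomes below are invented for
# illustration; nothing here reproduces the prediction market's data.

def brier_score(forecasts, outcomes):
    """Mean squared difference between stated probabilities and outcomes
    (1 = the event happened, 0 = it didn't). Lower is better: 0.0 is
    perfect, and always saying 50% earns exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Ten hypothetical 75% forecasts. A well-calibrated forecaster should be
# "wrong" on roughly a quarter of them, just as the Obamacare call was.
probabilities = [0.75] * 10
outcomes = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]  # 7 of 10 events occurred

print(brier_score(probabilities, outcomes))  # 0.2125 -- judged across the set
```

Scored this way, a single 75% call that misses isn’t an error; a long run of 75% calls that come true only half the time would be.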
As an essential first step to sweeping away the fog, Sherman Kent proposed that intelligence analysts use a chart that connected words and phrases with numerical probabilities. Kent was revered in the CIA—the agency’s school for intelligence analysis is named after him—but his proposal went nowhere. Institutional inertia and bad incentives protected the hazy status quo.
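For the curious, here is a minimal sketch, in Python, of the kind of chart Kent had in mind: estimative phrases pinned to probability ranges. The ranges follow commonly reproduced versions of Kent’s scale, so treat the exact numbers as illustrative rather than official.

```python
# An illustrative rendering of the kind of chart Kent proposed:
# estimative phrases pinned to numeric probability ranges. The ranges
# follow commonly reproduced versions of Kent's scale; they are
# illustrative, not an official intelligence-community standard.

KENT_SCALE = {
    "certain":              (1.00, 1.00),
    "almost certain":       (0.87, 0.99),
    "probable":             (0.63, 0.87),
    "chances about even":   (0.40, 0.60),
    "probably not":         (0.20, 0.40),
    "almost certainly not": (0.02, 0.12),
    "impossible":           (0.00, 0.00),
}

def phrase_for(probability):
    """Return the estimative phrase whose range covers the probability,
    or flag it as uncharted -- the trap "serious possibility" fell into."""
    for phrase, (low, high) in KENT_SCALE.items():
        if low <= probability <= high:
            return phrase
    return "uncharted phrase -- pick a number instead"

print(phrase_for(0.65))  # "probable": Kent's own estimate for NIE 29-51
```

Kent’s own 65% lands squarely in “probable”; the trouble with “serious possibility” was that it sat on no chart at all.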
Still, Sherman Kent was right. He was also ahead of his time.
The intelligence community has finally taken Sherman Kent’s advice and replaced vague words with precise numbers, delivering clarity of meaning. We hope other organizations will learn from what the intelligence community is doing. But more than that, we forecast a greater than 90% probability that they will. In CIA terms, we’re almost certain.
A rising wind is finally blowing away the fog. And it’s about time. Clarity will be transformational. Clear forecasts can be measured for accuracy, and those measurements will increasingly reveal what improves forecasting and what doesn’t. How good could forecasting get in the future? We don’t know. But we do know that in other fields that transformed themselves this way, the improvements were spectacular.
Think of medicine’s leap from leeches to antibiotics. There are fundamental limits we can never overcome, but we expect forecasting to make similar leaps this century. Forecasting informs all the important decisions we make, from investing to preparing for invasions. Its transformation should have begun in the era of Sherman Kent, but as Kent himself would say—he was as practical as he was perceptive—better late than never.
Philip Tetlock is a psychologist and professor of management at the University of Pennsylvania’s Wharton School. The journalist Dan Gardner is his co-author.