Philip Tetlock has spent decades studying the art—and failure—of forecasting. He gathered a vast archive of predictions, both published and private, tracked which ones came true, and sorted the results by the personalities of the forecasters and the methods they used. One of his more unsettling findings is that, across fields, the very worst predictions are often produced by experts. Ask a random passerby, and you may well get a more reliable forecast than you would from someone who has devoted a lifetime to the subject (with one important exception: when that passerby is merely echoing expert opinion picked up from the press).

A large part of the explanation is straightforward. Experts, precisely because they know a given case in great detail, are prone to seeing it as unique. They are immersed in a thicket of seemingly important details and technical parameters that appear nowhere else. The layman, by contrast, ignores such particulars—he does not know them, and so he cannot be misled by them—and instead asks simple, surface-level questions: What happens to a marriage when one partner is an alcoholic? What happens when a country fights a war against a stronger power without outside support? How does an honest candidate fare in an election without money for a campaign? More often than not, it is the layman’s answer that proves closer to the truth.

The deeper problem, however, is more general. Knowledge of facts is useless—indeed, sometimes counterproductive—if you do not know how to use it. In forecasting, that means three things.

First, you must be clear about the line of reasoning that can lead you, in this particular case, to a prediction of reasonable reliability.
Second, you must know which facts and pieces of information are actually relevant to that reasoning.
Only then, in the third step, do you plug your knowledge into the framework. It is at this stage—and only at this stage—that expertise becomes an advantage.

In practice, the process often runs in reverse. At first, everything seems clear. But as complications accumulate, we find ourselves pausing: “Wait—what line of reasoning am I even supposed to be applying here? And why that one?”

Without that discipline, we fall into a familiar trap. A single fact triggers an emotional response; the response narrows our perception; that selective perception reinforces our prior belief; and the cycle repeats, each turn tightening the loop. It is a failure that spares no one—not the amateur, and not the expert.
