At last week's Jackson Hole gathering of central bankers, economists and other global power brokers, Andrew Haldane
and Vasileios Madouros
of the Bank of England presented a paper titled "The Dog and the Frisbee" (here in PDF
). In the paper they make a case for the use of simple, transparent heuristics as the basis for financial regulation, rather than complex and impenetrable regulatory regimes.
In making their argument, they draw upon the literature on sport, and specifically several studies on the use of heuristics to predict the outcomes of Wimbledon matches. They write:
Too great a focus on information gathered from the past may retard effective decision-making about the future. Knowing too much can clog up the cognitive inbox, overload the neurological hard disk. One of the main purposes of sleep – doing less – is to unclog the cognitive inbox (Wang et al (2011)). That is why, when making a big decision, we often “sleep on it”.
“Sleeping on it” has a direct parallel in statistical theory. In econometrics, a model seeking to infer behaviour from the past, based on too short a sample, may lead to “over-fitting”. Noise is then mistaken as signal, blips parameterised as trends. A model which is “over-fitted” sways with the smallest statistical breeze. For that reason, it may yield rather fragile predictions about the future.
Experimental evidence bears this out. Take sports prediction. In principle, this should draw on a complex array of historical data, optimally weighted. That is why complex, data-hungry algorithms are used to generate rankings for sports events, such as the FIFA world rankings for football teams or the ATP world rankings for tennis players. These complex algorithms are designed to fit performance data from the past.
Yet, when it comes to out-of-sample prediction, these complex rules perform miserably. In fact, they are often inferior to simple alternatives. One such alternative would be the “recognition heuristic” – picking a winning team or player purely on the basis of name-recognition. This simple rule out-performs the ATP or FIFA rankings (Serwe and Frings (2006), Scheibehenne and Broder (2007)). One good reason beats many.
While the overall point being made is a sound one, the two papers that they cite are not the best examples from the literature on prediction, and actually do not even support the claims being made. Let me explain.
Both papers cited by Haldane and Madouros used name recognition as the basis for creating predictions of the Wimbledon tennis tournament. Both papers looked at recognition among amateur tennis players as well as laypeople. Scheibehenne and Broder (2007, here in PDF
) came up with the following results (and Serwe and Frings 2006
had very similar results):
Predictions based on recognition rankings aggregated over all participants correctly predicted 70% of all matches. These recognition predictions were equal to or better than predictions based on official ATP rankings and the seedings of Wimbledon experts, while online betting odds led to more accurate forecasts. When applicable, individual amateurs and laypeople made accurate predictions by relying on individual name recognition. However, for cases in which individuals did not recognize either of the two players, their average prediction accuracy across all matches was low. The study shows that simple heuristics that rely on a few valid cues can lead to highly accurate forecasts.
The authors explain:
The systematic relationship between recognition and player success is presumably mediated by mass media coverage. Thus, by relying on their partial ignorance, non-experts are able to make accurate predictions by exploiting an environmental structure that contains relevant information and that is available at almost no cost.
So, "name recognition" is in effect simply a proxy for the ATP ranking, which is reflected in the Wimbledon seedings. It should thus be no surprise that the analysis finds the name-recognition heuristic performing on par with those rankings. The results showed that for amateur tennis players the recognition heuristic performed identically to the rankings and seedings, while for laypeople the recognition heuristic underperformed the rankings and seedings.
What this shows -- to use language frequently mentioned on this blog -- is that neither group showed skill
in prediction, where skill is a demonstrable improvement over a naive baseline, and the laypeople actually showed negative skill. (For a discussion of skill in Olympic medals predictions, see this recent post
.) This makes sense logically: amateur tennis players are far more likely than the average layperson to recognize the names of professionals. Hence, the recognition heuristic more accurately reflects the ATP rankings and seedings for those paying attention than for those who are not.
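The mechanism the two papers studied can be sketched in a few lines. This is an illustrative toy, not code from either study; the player names and recognition sets are hypothetical, and the fallback behavior (no prediction when the heuristic is silent) is one plausible choice:

```python
# Illustrative sketch of the recognition heuristic (hypothetical data).
# A forecaster picks whichever player they recognize; if they recognize
# both or neither, the heuristic gives no guidance.

def recognition_pick(player_a, player_b, recognized):
    """Return the predicted winner, or None if the heuristic is silent."""
    a_known = player_a in recognized
    b_known = player_b in recognized
    if a_known and not b_known:
        return player_a
    if b_known and not a_known:
        return player_b
    return None  # both or neither recognized: heuristic does not apply

# A hypothetical amateur recognizes more professionals than a layperson.
amateur = {"Federer", "Nadal", "Roddick", "Hewitt"}
layperson = {"Federer"}

match = ("Federer", "Hewitt")
print(recognition_pick(*match, amateur))    # None: amateur knows both players
print(recognition_pick(*match, layperson))  # Federer: the only name recognized
```

Note that the amateur's broader recognition makes the heuristic silent more often but, when it does speak, more likely to echo the media coverage that tracks the rankings.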
Thus, when Haldane and Madouros claim that "One such alternative would be the 'recognition heuristic' – picking a winning team or player purely on the basis of name-recognition. This simple rule out-performs the ATP or FIFA rankings," they are incorrect, at least as far as the literature they cite is concerned: the two papers on Wimbledon do not actually support the claim, and they cite no studies of FIFA rankings at all. The ATP rankings and Wimbledon seedings are naive baseline predictions: they are readily available and provide a first-order expectation of tournament outcomes. To say that a recognition heuristic performs only as well as the naive baselines is to say that it adds no value, though it may show that media coverage accurately conveys the rankings.
Haldane and Madouros are nonetheless on the right track. The definitive paper on prediction lessons from sport research has yet to be written, but given the apparent importance of such lessons to central bankers managing the global economy, it should be written soon.