In today's New York Times, economist Justin Wolfers has a front-page article on ASU's "Curtain of Distraction," a feature of its raucous student section at home basketball games. Before diving into this post, let me say up front that I'm a fan of Wolfers' research and follow him on Twitter, where he has a great presence. He has a lot of smart things to say on a lot of important subjects, including sports, where he has done some significant work. In this post I'm going to critique his NYT article, which is just not up to Wolfers' usual standards. It also offers a cautionary tale for would-be "data journalists."
The ASU Curtain of Distraction is a makeshift stage that the student section uses to try to distract the opposing team when it is attempting free throws. As the opposing player stands at the free throw line, the curtain opens to reveal all sorts of eye-catching displays, from a guy in a Speedo to kissing unicorns (see above for an example). Wolfers argues that the Curtain of Distraction provides ASU with a quantifiable benefit because it leads the team's opponents to miss free throws.
It's a fun news story that combines what is best about college kids supporting their home team and the ideal of data journalism to provide insights behind the story. It's likely that combination of narrative and numbers that landed this article on page one of the NY Times. Unfortunately, like a lot of what passes for "data journalism," when you take a closer look, the story and the numbers don't actually fit together. There is a deeper lesson here about the power of narrative over the substance of data. Let me explain.
Wolfers' article is about causality, and specifically about detecting the impact of the Curtain of Distraction in a time series of data on free throws in games that ASU has played at home in Tempe, AZ. There are of course many ways to look for signals in data using statistical methods. As signal-detection problems go, this one is not methodologically challenging, because the data are good and the phenomena to be explained are well observed. Even so, the data do not lend themselves to unique or conclusive claims of causality.
But Wolfers thinks that they do. He explains what he did:
A statistical analysis by The Upshot — with an assist from Nick Wan, who runs the True Brain blog, and from Jan Zilinsky — suggests that the Curtain really works.
It appears to give Arizona State an additional one to two point advantage per home game, beyond the normal home court advantage. The Curtain may even have played the pivotal role in the Sun Devils’ recent upset of their state rivals, the Arizona Wildcats.
The easiest way to see the effect is to compare visitors’ free throw shooting percentage before and after the Curtain’s 2013 introduction. In each of the three seasons from 2010-11 to 2012-13, visitors missed 28 to 32 percent of their free throws. Last season, the Curtain’s first, the rate at which visitors missed free throws rose sharply, to 40 percent.
In the first 14 home games of this season — when the Curtain and its surprises have continued to appear — visitors have missed 36 percent of all free throws. If you didn’t know better, you might suspect that the size of the hoop had gotten smaller.
Given the timing, and the fact that players are powerless to defend a free throw, it seems reasonable to attribute this sharp change to the student high jinks, rather than any change of players or strategy. In fact, the statistics largely rule out competing explanations.
As I'll show, the last sentence is completely wrong, and the conclusion that a strong signal from the Curtain of Distraction has been detected - an effect of 1 or 2 points a game - is just not supported by the data, despite the appeal of the narrative.
One mistake that Wolfers makes is to look at all free throws. The Curtain of Distraction (CoD) appears only on one side of the court, the student section, which ASU opponents face only in the second half of each game. So to look for a signal of the CoD we should look for its effects in second half free throws.
What do we see when we break down ASU opponents' free throws by half?
In the first half opponents shoot 66.7% when not facing the CoD. In the second half that drops to 61.1%.
Aha!! ... we might say, the effect is 5.6 percentage points. But as any student of causality knows, detecting a difference is not the same as accounting for that difference. So let's take a look at individual games to see if we might account for that difference.
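One quick way to see why a 5.6-point gap proves little on its own is to ask whether a difference that size could easily arise by chance. The sketch below runs a standard two-proportion z-test; since the raw attempt totals aren't reported here, the made/attempted counts are hypothetical numbers chosen only to match the quoted percentages:

```python
import math

# Hypothetical made/attempted counts chosen to match the quoted
# percentages; the actual season totals are not reported in the post.
made_1st, att_1st = 80, 120    # ~66.7% in the first half (no Curtain)
made_2nd, att_2nd = 77, 126    # ~61.1% in the second half (facing the Curtain)

p1 = made_1st / att_1st
p2 = made_2nd / att_2nd

# Pooled two-proportion z-test for the difference in shooting percentage
p_pool = (made_1st + made_2nd) / (att_1st + att_2nd)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / att_1st + 1 / att_2nd))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"difference: {100 * (p1 - p2):.1f} percentage points")
print(f"z = {z:.2f}, two-sided p = {p_value:.2f}")
```

With these invented counts the gap comes out far from conventional statistical significance (z is well under 1.96), which is exactly the point: a raw percentage difference is not, by itself, evidence of an effect.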
It turns out that 8 of the 14 ASU opponents (including 7 of its last 9) actually improved their free throw percentages in the second half. Thus, there is very little basis in these 8 games to suggest that the CoD caused worse free throws. In fact, one could make a stronger case that the CoD improved second-half free throw shooting in the majority of ASU games. Of course, given any data, lots of plausible theories could be proposed.
Looking game-by-game we can quickly see that, through ASU's 14 home games this season so far, opponents' lower free throw percentage in the second half can be entirely attributed to ASU's first two games of the season, against Chicago St. and Bethune-Cookman. Chicago St. is ranked 333 of the 351 teams tracked by ESPN, and Bethune-Cookman is ranked 344. These are two of the worst teams in the country. One could plausibly come up with a large number of possible explanations for their poor free throw shooting in the second half - including the CoD - but also more prosaic reasons, such as fatigue, early-season road jitters, the pressure of facing a much better team, just chance, or something else.
Since those two games, in the subsequent 12 home games opposing teams at ASU have made 62.3% of their free throws in the first half and 65.0% in the second half. When facing the CoD teams have on average improved their free throw shooting! So even if we were to postulate a CoD effect in those first two games, it clearly wore off pretty quickly (and maybe even reversed;-).
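The way two early outlier games can dominate a season aggregate is easy to reproduce. The per-game numbers below are invented for illustration (the post reports only aggregate percentages), but they show how two lopsided games can drag a 14-game second-half percentage well below what the other 12 games suggest:

```python
# Hypothetical per-game (made, attempted) second-half free throws:
# two poor early games followed by twelve ordinary ones. The real
# per-game totals are not given in the post.
early_games = [(5, 14), (6, 15)]     # two outlier games (~37.9% combined)
later_games = [(13, 20)] * 12        # 65.0% in each of the later games

def pct(games):
    made = sum(m for m, _ in games)
    attempted = sum(a for _, a in games)
    return 100 * made / attempted

print(f"later 12 games alone: {pct(later_games):.1f}%")
print(f"all 14 games:         {pct(early_games + later_games):.1f}%")
```

Under these assumed numbers, the two outliers pull the 14-game aggregate about three percentage points below the later-games rate, which is why looking only at the season totals can be so misleading.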
Wolfers takes the causality argument even further when he writes that the CoD "may even have played the pivotal role in the Sun Devils’ recent upset of their state rivals, the Arizona Wildcats." Again, a wonderful story, even movie-script stuff, but the data just does not cooperate. Arizona was 0 for 2 from the line in the first half and 8 for 12 in the second half (66.7%). Arizona's season-long free-throw percentage is 69.2%.
The bottom line here is that the evidence provides little support, and certainly not strong or unique support, for a claim that the ASU Curtain of Distraction affects opponents' free throws in any quantifiable way. (For other analyses see the Harvard Sports Collective and True Brain.)
Sure, one can spin a compelling narrative and find some numbers that seem to support it. As the saying goes, numbers that are sufficiently tortured using statistical methods will ultimately confess. As a corollary, I might add that pretty much any narrative one cares to spin can be supported by some plausible or plausible-sounding data. But I'm also pretty sure that is not how "data journalism" is supposed to work.