Monday, March 18, 2013

Does Nate Silver Have Predictive Skill in the NCAA Tourney?

Nate Silver has come out with his predictions for the 2013 NCAA tournament. Apparently he has been making predictions since 1992; however, only a few are available online (oddly, I can't even track down his 2012 bracket). Here I conduct an initial evaluation of Silver's track record in NCAA predictions, based on his 2011 and (partial) 2012 prognostications.

Silver employs a highly complicated statistical model to generate winning percentages for each team in the bracket, and those percentages allow him to populate a bracket. Last year, Silver asked:
So is it worth going through all this trouble?
Well, it sure is a lot of fun. But if prediction is your goal, then based on Silver's performance to date (again, judging from the info I have available), the answer to his question is a decided No.

Let's look back at his performance. Evaluating predictive skill, as readers here will know, requires the adoption of a naive baseline. Skill refers to the ability to outperform such a naive baseline, and the degree of out-performance provides a quantitative metric of the amount of skill.
Here I'll simply use the NCAA seeds as the naive baseline, under the assumption that a higher seed will beat a lower seed. Skill in prediction thus correlates with the forecaster's ability to pick upsets. I am evaluating skill based on games picked correctly -- for those of you in pools with weighted points systems and other fun rules, games picked may not be your preferred unit of analysis. But let's start there.
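The evaluation described above can be sketched in a few lines of code: count the games each set of picks gets right, then take the difference from the naive seed baseline as the skill score. The team names and picks below are purely illustrative, not anyone's actual bracket.

```python
# Sketch of a game-level skill score: correct picks by a forecast minus
# correct picks by the naive baseline (higher seed always wins).
# All names and picks here are made up for illustration.

def correct_picks(picks, results):
    """Number of games where the pick matches the actual winner."""
    return sum(p == r for p, r in zip(picks, results))

results   = ["Duke", "Butler", "UConn", "VCU"]     # actual winners
seed_base = ["Duke", "Pitt",   "UConn", "Kansas"]  # higher seed always wins
forecast  = ["Duke", "Butler", "UConn", "Kansas"]  # a forecaster's picks

skill = correct_picks(forecast, results) - correct_picks(seed_base, results)
print(skill)  # 1 -> the forecast beats the naive baseline by one game
```

A positive score means the forecaster added value over simply following the seeds; zero is a push; negative means the naive baseline would have done better.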

How did Silver do in 2011?

Of the 63 games in the tournament, Silver's model correctly predicted 29 of the winners before play began. The NCAA seedings, by contrast, picked 37 of the winners. That was good for only 18th place out of 28 NYT staffers, and better than only 32.8% of all participants in the NCAA open contest. In short, not good and not skillful.
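Expressed as a skill score, the 2011 numbers above look like this (the figures are from the post; the variable names are mine):

```python
# 2011 figures from the post: Silver's model picked 29 of 63 winners,
# while the naive seed baseline picked 37.
silver_correct, baseline_correct, games = 29, 37, 63

skill = silver_correct - baseline_correct
print(skill)                    # -8 games relative to the baseline
print(round(skill / games, 3))  # roughly -0.127 of all games
```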

For 2012 I can find only Silver's top 10 ranked teams. Of the Elite Eight, Silver's model picked 5, but each of these 5 was also expected to win based on seeding. So a push means zero skill.

It is a small sample, but I am going to go out on a limb and predict a >50% chance that Silver's 2013 predictions fail to show skill as well. Stay tuned!

PS. A note to Nate Silver: I am happy to perform this analysis with a longer time series, I just need the data (i.e., your actual pre-tourney picks).

1 comment:

  1. In an environment where the higher seed almost always wins... it must be next to impossible to have a "skillful forecast".

    As a side note -- I think you would have to define "skill" more thoroughly. Couldn't there be a scenario under which Silver's model correctly predicts one of the cinderella teams? There are certain circumstances under which this would be exceedingly skillful.