The most interesting subplot of 2012 US election coverage was the battle between the poll-oriented quants — most famously Nate Silver — and sundry television know-nothing pundits who insisted based on their guts that Mitt Romney was going to pull it out.
The 2014 horserace has attracted much less public interest but has produced a proliferation of quantitative models based on poll aggregation. It even features its own entertaining feud, between the famous and media-savvy Nate Silver and the lesser-known Princeton University scientist Sam Wang.
Except there's a huge problem — we're never going to know which model is correct.
The New York Times and the Washington Post are offering pretty dramatically different forecasts here, but they're both saying that Republicans will probably win. If the GOP does win, that will shed no light on the merits of the underlying models. If Democrats hold the Senate, that will of course make the Post's uber-confident forecast look silly. But it's hard to give bragging rights to an alternative model that "got it wrong" simply on the grounds that it was less confident in its wrong forecast.
Conversely, imagine a world where next week every single model shifts two percentage points in the Democrats' direction. That's a very modest change. But it should shift Wang's model from saying the GOP has a knife's edge advantage to saying the Democrats do. That would set us up for a nice cozy media narrative in which a Democratic Senate means a win for Wang while a GOP Senate vindicates Silver. Alternatively, if we stick with the status quo where all models give Republicans the edge and then the GOP doesn't win, expect a huge comeback from Silver's old detractors who'll say the entire enterprise of poll-based forecasting has been debunked.
To anyone who understands probabilities, of course, this is nonsense. If you flip a quarter twice, it is unlikely to come up heads twice in a row. But at a 25 percent chance, it's not all that unlikely. If you sit down at the blackjack table and play for a while, you will probably lose money. But you might not. Even the Washington Post's current forecast that the GOP has a 95 percent chance of obtaining a Senate majority won't genuinely be debunked by a Democratic hold. Five percent is unlikely, but unlikely things happen.
The underlying metaphysics of statements about probability are surprisingly hard to sort out. But in an epistemological sense, the way we check probabilistic statements is to run the experiment over and over again. Flipping a coin twice doesn't really prove anything. But if you flip it ten or twenty or a thousand times you'll see that "it comes up heads half the time" is a good forecasting principle.
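The coin-flip point is easy to see in a quick simulation. This is a toy sketch, not anyone's forecasting model; the flip counts are arbitrary:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def heads_frequency(n_flips: int) -> float:
    """Flip a fair coin n_flips times and return the observed fraction of heads."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

# Two flips tell you almost nothing; thousands of flips pin the
# frequency down close to the true value of 0.5.
for n in (2, 10, 100, 10_000):
    print(f"{n:>6} flips: {heads_frequency(n):.3f}")
```

With two flips you can easily see 0.0 or 1.0; by 10,000 flips the observed frequency reliably lands near 0.5. Checking a probabilistic claim requires many repetitions of the experiment.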
If we got to look at 100 or 1,000 midterm elections, we'd end up with a pretty good sense of whether Silver's forecast or Wang's forecast is the more accurate one. But elections simply don't happen that frequently. And as time passes, the underlying technology of polling will shift and modelers will want to recalibrate their approach. We'll never get a clear sense of whether Silver's 2014 model was really superior to broadly similar alternatives that all involve aggregating polls in one way or another.
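The sample-size problem can also be simulated. The sketch below scores two hypothetical forecasters with the Brier score (the squared error of a probabilistic forecast, a standard accuracy measure, though not one the article names); the probabilities, seed, and trial counts are all illustrative assumptions:

```python
import random

random.seed(0)  # illustrative numbers only

TRUE_PROB = 0.74   # hypothetical "true" chance the favorite wins a race
FORECAST_A = 0.74  # a well-calibrated model
FORECAST_B = 0.60  # a more cautious, miscalibrated model

def brier(forecast: float, outcome: int) -> float:
    """Squared error of a probabilistic forecast (lower is better)."""
    return (forecast - outcome) ** 2

def better_model_wins(n_elections: int, n_trials: int = 2000) -> float:
    """Fraction of simulated histories in which the calibrated model
    scores strictly better after n_elections independent races."""
    wins = 0
    for _ in range(n_trials):
        outcomes = [int(random.random() < TRUE_PROB) for _ in range(n_elections)]
        score_a = sum(brier(FORECAST_A, o) for o in outcomes)
        score_b = sum(brier(FORECAST_B, o) for o in outcomes)
        if score_a < score_b:
            wins += 1
    return wins / n_trials

for n in (1, 10, 100, 1000):
    print(f"{n:>4} elections: calibrated model wins {better_model_wins(n):.0%} of histories")
```

After one election, or even ten, the genuinely better model loses the scoring contest a substantial fraction of the time; only with hundreds of independent races does it win almost always. Real midterms arrive once every four years, so that sample will never exist.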
Which isn't to say we should go back to the bullshitting-on-television approach to understanding elections. Rather, we should try to remember the high-level points that all these models have in common, namely "the candidate leading in the polls usually wins" and "aggregating polls is more accurate than looking at particular ones."
Unlike fussing over the details of 51 percent vs 57 percent, these are actionable insights. I remember sitting in the American Prospect office in November 2004 and being the only one to correctly forecast John Kerry's defeat. Not because I had any secret insights, but simply because I knew that Kerry had consistently trailed in the polls.
Smart people are really good at devising sophisticated explanations for why their preferred outcome is also the likely one — Romney supporters did it in 2012 — but the basic fact is that pollsters are pretty good at what they do. The aggregation insight, meanwhile, has already changed journalism somewhat and should change it further. Traditionally, media organizations have tended to over-hype their own proprietary polls rather than put them in the context of the overall landscape of polling. The business incentives for this are pretty obvious, but it's terrible social scientific practice. More information is better.
Those are valuable lessons, and journalists and members of the public who ignore them do so at their own peril. But while it would be nice to know whether the odds of Mitch McConnell becoming majority leader are 66 percent (NYT) or 67 percent (DailyKos), we're just never going to get the kind of sample sizes that would let us tell whose method of calculation is best.