Over at The Atlantic, Jason Blakely asks, "Is political science this year's election casualty?" There's a lot to unpack in both the headline and the article, but it's worth discussing, since quite a few expert predictions were upended last week.
First of all, it should be noted that Blakely's takedown of "political science" names no actual political scientists. He's mainly noting the failures of polling aggregator sites like those developed and run by Samuel Wang, Nate Cohn, and Nate Silver. Those models all predicted a 3- to 4-point popular-vote lead for Hillary Clinton going into the election. They had different estimates of uncertainty, but ultimately they all relied on the same data: the surveys run and published by polling firms across the country.
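To make the mechanics concrete, here's a minimal sketch of what such aggregators do at their core: average recent poll margins, then simulate outcomes around that average to produce a win probability. The poll numbers and the 3-point error assumption below are invented for illustration; this is not any aggregator's actual model.

```python
import random

# Hypothetical national poll margins (Clinton minus Trump, in points).
# Illustrative numbers only, not any real firm's results.
poll_margins = [3.5, 4.0, 2.5, 5.0, 3.0]

avg_margin = sum(poll_margins) / len(poll_margins)  # 3.6 points

# Simulate elections by adding normally distributed polling error.
# The 3-point standard deviation is an assumption; the aggregators
# estimated this uncertainty differently, which is why their win
# probabilities diverged even on the same underlying polls.
trials = 100_000
clinton_wins = sum(random.gauss(avg_margin, 3.0) > 0 for _ in range(trials))

print(f"Average margin: Clinton +{avg_margin:.1f}")
print(f"Simulated Clinton win probability: {clinton_wins / trials:.0%}")
```

The disagreement among the aggregators was mostly in that error term, not in the average itself, which is how the same polls yielded very different win probabilities.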
Those models, obviously, were off, although only by a few points. (Blakely suggests that the USC Dornsife/LA Times poll, which used very different methods and data, "did better," but that's not obviously true. It predicted Trump winning the popular vote by more than 3 points; so far, he's losing it.) We might be more forgiving of these forecasts if they'd correctly picked the Electoral College winner, but missing the popular vote by a couple of points really isn't unusual.
But to the extent that this describes political science at all, it describes only a narrow sliver of the discipline. How did political science actually do? Keep in mind that the subfield that focuses on forecasting American presidential elections is very small, even if somewhat more visible than most. Our journals and books are largely devoted to explanation, description, and the testing of hypotheses, rather than the prediction of future events.
But if you want to see what political science election forecasters came up with, look here. The predictions cover a pretty wide range of outcomes, with some seeing Trump winning and others expecting a Clinton win. But average them and you get Trump winning 49.9 percent of the two-party vote. The vote so far has him at 49.6 percent. That's pretty impressive, and far closer than polling-based models came.
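For readers unfamiliar with the metric: "two-party vote share" divides a candidate's votes by the combined major-party total, ignoring third parties, and the 49.9 percent figure is a simple average across the published forecasts. Here's a quick sketch with invented forecast numbers and round vote totals (not the actual forecasts or the certified count):

```python
# Hypothetical forecasts of Trump's share of the two-party vote, in
# percent. These stand in for the real published forecasts, which
# ranged from a clear Clinton win to a clear Trump win.
forecasts = [47.5, 52.0, 48.8, 51.5, 49.7]
avg_forecast = sum(forecasts) / len(forecasts)  # 49.9

# Two-party share from raw totals: illustrative figures in millions,
# not the certified 2016 count.
trump_votes, clinton_votes = 62.0, 63.0
actual_share = 100 * trump_votes / (trump_votes + clinton_votes)  # 49.6

print(f"Average forecast: Trump at {avg_forecast:.1f}% of the two-party vote")
print(f"Count so far:     Trump at {actual_share:.1f}%")
```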
Blakely's critique seems more aimed at those who use quantitative measures to predict political outcomes. As he says, "The problem comes with forecasting — or the attempt to report predictions as supposedly scientific or quasi-scientific findings akin to work that happens in the natural sciences."
First, I'd note that only parts of the natural sciences lend themselves well to precise quantitative forecasts. Want to know when a comet will intersect Earth's orbit or what will happen when you smash two particles into each other in an accelerator? You can forecast those pretty well. Want to predict how tall humans will be in 100,000 years or how the Earth will respond to a doubling of the carbon dioxide content of the atmosphere? Well, those are complex systems, and we can certainly use history and good quantitative measures to make educated forecasts, but there will be substantial error terms associated with them.
Besides, I'm not aware of any pollster or political scientist who would claim that their forecasts are akin to those in the natural sciences. While I'd be skeptical of models that claim 95 percent certainty about any outcome in a close election, it doesn't seem to me that there's a misapplication of scientific methods here. Human behavior is undoubtedly complex, and thus the study of it carries large error terms. But that doesn't mean that quantitative analysis is useless in the study of humans.
Such analysis can be extremely helpful. It can tell us, for example, that Clinton's vote share was roughly where we'd expect it to be given the state of the economy, and that party discipline held among voters even in a very unusual election year. We should embrace more qualitative studies, as well — Kathy Cramer's The Politics of Resentment is vital for understanding political sentiment in the rural Upper Midwest and proved rather prescient this year — but there's no reason to dismiss quantitative methods just because humans are complicated.