It feels at times like we’re bombarded with poll coverage, but a little-noticed reality of the 2016 campaign is that we actually have many fewer polls out in the field than we are accustomed to.
From month before election to 9 days before it, we had 80 live interview polls in 2012 in 10 states closest to national vote. In 2016? 36.

— (((Harry Enten))) (@ForecasterEnten) November 1, 2016
In the 2016 cycle this has been especially noticeable in a clutch of states that are somewhat bluer than the national average, notably Colorado and Wisconsin and, to a lesser extent, the other Great Lakes states. Since these states voted for Obama twice and Clinton has led in almost every national poll, it's easy to mentally pencil them into the Democratic column. But in a close national race these should be competitive states, and you would want to see actual data on them. We just haven't had much.
One reason is that polls are getting harder to conduct. Fewer people pick up the phone and talk to pollsters, so you need to make more calls to get a decent sample. More calls cost money, so the tendency is to do fewer polls.
But the other problem is that the trend toward aggregating polls — whether through statistical modeling, as at FiveThirtyEight and The Upshot, or simple averaging, as at RealClearPolitics and Huffington Post Pollster — has created a serious tragedy of the commons.
In the dark ages of, say, 2004, the convention was that New York Times articles would exclusively mention New York Times polls while the Washington Post would exclusively mention Washington Post polls. To spread costs, normally a print outlet would pair with a broadcast outlet (NBC/WSJ, NYT/CBS), but still the idea was to obtain proprietary poll information. Then local papers and TV stations would do the same thing for state polling. And public opinion research companies would also do some polls as loss leaders to advertise their commercial services.
Smart people realized that you could actually get a much more accurate picture of the race by aggregating all the polls together so that idiosyncratic methodologies or sampling error would wash out. Combine this with the way the internet broke down barriers between different media outlets and we got the golden age of poll aggregation in 2008 and 2012. The problem, as Henry Farrell foresaw four years ago, is that once the prestige and attention shifted to the aggregators, it undercut the economic rationale for doing the polls. Why go to the expense of conducting a poll to figure out whether Clinton is up by 6 in Wisconsin or only up by 2 if you're just going to become another data point in someone else's well-trafficked model?
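The statistical intuition behind aggregation can be sketched in a few lines of Python. The poll numbers below are hypothetical, and the margin-of-error figure is the standard 95 percent approximation for one candidate's vote share in a roughly even race; the point is just that pooling several independent samples shrinks sampling error the way one big sample would.

```python
import math

# Hypothetical example: five state polls of Clinton's margin,
# each surveying about 800 respondents.
polls = [6, 2, 4, 5, 3]  # made-up Clinton leads, in points
n_per_poll = 800

# A simple aggregate: the unweighted mean of the margins.
average_margin = sum(polls) / len(polls)

# Approximate 95% margin of error on one candidate's share near 50-50
# is about 0.98 / sqrt(n). Pooling five 800-person samples behaves
# roughly like one 4,000-person sample, cutting the error by sqrt(5).
moe_single = 0.98 / math.sqrt(n_per_poll) * 100
moe_pooled = 0.98 / math.sqrt(n_per_poll * len(polls)) * 100

print(f"average margin: Clinton +{average_margin:.1f}")
print(f"MOE, one poll:  about {moe_single:.1f} points")
print(f"MOE, pooled:    about {moe_pooled:.1f} points")
```

One 800-person poll carries roughly a 3.5-point margin of error, while the five-poll average behaves more like a 1.5-point one, which is why a single outlier poll matters much less than the average.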
This is particularly important because the same declining response rates that make polls more expensive also raise methodological questions about whether pollsters should be finding better ways to conduct surveys. Ideally, in troubled times you'd want to see increased investment in polling so that new methods can be tried. Instead, we're seeing a general retreat from the field that seems likely only to intensify, meaning there are decent odds that one of these days the polls will blow an election call in a serious way.