
How pollsters are responding to problems that came up in 2016

There are no easy fixes.

Voters head to the polls as early voting starts in Florida. Joe Raedle/Getty Images
Li Zhou is a politics reporter at Vox, where she covers Congress and elections. Previously, she was a tech policy reporter at Politico and an editorial fellow at the Atlantic.

A creeping fear is coming for Democrats this year: What if, like in 2016, the polls turn out to be wrong in crucial ways that mean they aren’t actually on track for the House Democratic majority they’ve been working toward?

It’s true that Democrats have a lot of factors in their favor — enthusiasm seems to be on their side, President Trump remains historically unpopular even if his approval rating has been inching upward of late, and midterms usually cut against the party in power.

But even with all that going for them, they still need everything to break their way in the final days before the election.

The polls were already wrong at least once this year.

During the Democratic primary for the Florida gubernatorial race, polls had progressive candidate and eventual winner Andrew Gillum in fourth place, trailing then-frontrunner Gwen Graham by double digits. At the time, experts argued that polls likely missed the surge in younger and African-American voters, groups that both turned out in higher numbers than in previous years to back Gillum.

As poll after poll signals a Democratic edge, many wonder if they can be trusted. After all, they were so spectacularly wrong in 2016, when all the electoral models seemed to predict a President Hillary Clinton.

What went wrong in 2016: last-minute undecided voters broke for Trump

We now know, of course, that Clinton didn’t win. To find out what went wrong, a panel of experts led by Pew Research Center’s Courtney Kennedy put together an autopsy for the American Association of Public Opinion Research, findings they later detailed in a paper for the Public Opinion Quarterly.

Chief among their findings was that last-minute voters broke for Trump.

Simply due to the timing of when many surveys were conducted, polls missed a massive shift in voter behavior: Data showed that 13 percent of voters in the pivotal swing states of Wisconsin, Florida, and Pennsylvania landed on their presidential pick in the final week. In all three states, those voters broadly backed Trump.

If pollsters want to capture late-breaking voters, there’s a relatively straightforward logistical solution, says Kennedy: keep conducting surveys in real time, as close to Election Day as possible.

“There is little that a pollster can do about this except for conducting a poll as close as possible to Election Day and alerting consumers of their polls that public opinion can change,” she says. “Polls really are just a snapshot in time.”

Doing polling that captures such down-to-the-wire changes in the electorate has become an imperative, says Celinda Lake, a Democratic pollster who runs Lake Research Partners. “We have pushed for tracking later in the process,” she says. “It’s not just an October surprise anymore; it’s an October 31 surprise.”

Despite such efforts, however, there are no guarantees that polls will ever fully be able to spot spur-of-the-moment choices that voters — especially undecided ones — make on Election Day.

“There’s late movement in campaigns that isn’t even evident in the final polls,” says SurveyMonkey’s head of election polling, Mark Blumenthal, who also helped put together the autopsy report.

What went wrong in 2016: educated voters needed to be weighted differently

The second key problem that skewed 2016 polls was how pollsters treated voters’ education levels. Historically, more educated voters have participated in polls, which means that they’re overrepresented in the results. While this hasn’t mattered as much in the past, higher education was heavily associated with support for Clinton in 2016.

Because of this, a number of polls — whose samples skewed toward more educated voters — overstated Clinton’s actual support.

“Many polls — especially at the state level — did not adjust their weights to correct for the over-representation of college graduates in their surveys, and the result was over-estimation of support for Clinton,” the autopsy report says.

Pollsters have reacted to this issue differently, says Kennedy. Some now argue that it’s vital to adjust for education in their polls. Doing so means that polls would “weight” responses from participants based on their education level to ensure that a poll’s results reflect a broader population and not an exaggerated subset.

The best way to explain weighting: If seven out of 10 people who respond to a poll have a college education, but only five out of 10 people in the larger electorate do, pollsters adjust their data so that the final outcome more closely reflects the actual electorate. By doing so, they can ensure that the result more accurately mirrors how a larger electorate might respond. But often, they are making educated guesses about what groups are over- or underrepresented in their samples.
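To make that arithmetic concrete, here is a minimal sketch of education weighting in Python. The respondents, the 70/50 percent split, and the support figures are all illustrative, not drawn from any real poll:

```python
# A toy example of weighting poll responses by education: 70% of respondents
# hold a college degree, but only 50% of the target electorate does.

# Hypothetical raw poll: (has_college_degree, supports_candidate_A)
responses = [
    (True, True), (True, True), (True, True), (True, True),
    (True, False), (True, False), (True, True),
    (False, False), (False, False), (False, True),
]

# Share of each education group in the electorate we want the sample to mirror.
target_share = {True: 0.5, False: 0.5}

# Share of each education group actually present in the sample.
n = len(responses)
sample_share = {
    grp: sum(1 for edu, _ in responses if edu == grp) / n
    for grp in (True, False)
}

# Each respondent's weight is (target share) / (sample share) for their group.
weights = [target_share[edu] / sample_share[edu] for edu, _ in responses]

# Compare raw support with weighted support for candidate A.
unweighted = sum(1 for _, a in responses if a) / n
weighted = sum(w for (_, a), w in zip(responses, weights) if a) / sum(weights)

print(f"Unweighted support: {unweighted:.0%}")  # reflects the college-heavy sample
print(f"Weighted support:   {weighted:.0%}")    # reflects the target electorate
```

In this toy example, down-weighting the overrepresented college graduates pulls the candidate’s apparent support from 60 percent to about 52 percent — the same direction of correction the autopsy report describes for 2016 polls that skewed toward Clinton.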

Kennedy notes that pollsters haven’t been unified in how they’re addressing this concern. “Several pollsters have said that they have started adjusting how they weight their surveys to account for this, others have decided not to adjust for it, and a third group of pollsters have been making the adjustment for years and continue to do so,” she says.

As the New York Times’s Nate Cohn wrote last fall, most private pollsters who work with campaigns had begun adjusting on education in the wake of 2016, while fewer public pollsters had appeared to take this step. A key challenge pollsters have faced in trying to factor in education is that the information is not always readily available about the people taking the surveys, Cohn writes. Files on previous voters, a resource that pollsters often use to figure out whom to reach out to for surveys, might not include this data.

As chronicled in the autopsy report, adjusting for education can make a difference — though there’s still disagreement about exactly how much. For example, 2016 polls in New Hampshire and Michigan both more closely reflected the final outcome of the election after the survey results had been adjusted for education.

Chart: New Hampshire and Michigan polls show a smaller gap between poll results and the actual outcome once responses are weighted by education. Sources: University of New Hampshire Poll; Michigan State University Poll; Public Opinion Quarterly

“I think that this is a kind of a no-brainer change for pollsters,” says Blumenthal.

Even if education is increasingly adjusted for in polls, however, there’s no guarantee another variable won’t emerge with the potential to skew survey results.

The best that pollsters can do is to continually monitor the characteristics that might be associated with leaning Democratic or Republican and update their analysis accordingly, Blumenthal notes.

“Are there demographic differences that make you a Democrat or Republican?” he says. “Gender, age, race, and education. Region and geography matter. Income matters. All of those things, if off compared to what the electorate looks like, could introduce the error.”

What went wrong in 2016: pollsters can’t always tell who’s going to turn out

The third thing that caught many pollsters off guard in the presidential election: Turnout shifted between 2012 and 2016 in ways that favored Trump — a dynamic that resurfaced earlier this year in Gillum’s Florida primary.

When pollsters conduct surveys, many try to determine what the group of “likely voters” will look like and present results that reflect their preferences. As the reasoning goes, “likely voters” are the people most expected to show up at the ballot box based on past voting behavior and a slate of other characteristics, and polls of this group would more closely illustrate how the electorate will act on Election Day than polls of the population at large.

In a perfect setup, the group of “likely voters” that pollsters pinpoint will match up with the actual people who show up at the polls when it comes to demographic attributes like age and gender, as well as voting behavior. However, if they don’t, a poll that highlights the preferences of “likely voters” would miss the impact of others who end up turning out.

That’s exactly what happened in 2016, when many “likely voter” models did not match up with the composition of voters who ultimately showed up.

“Likely voter” models constructed using 2012 turnout data, for example, could have overstated the presence of Clinton supporters while overlooking that of Trump supporters. Voters expected to back Clinton — including African Americans — ended up turning out in much lower numbers than anticipated in 2016 compared to 2012, while those who ended up supporting Trump turned out in higher numbers, the autopsy researchers write.

“The turnout patterns were different in 2016 than 2012. That may have mattered in places where pollsters relied on models that relied on 2012 turnout,” says Blumenthal.

The challenge of accurately projecting what Election Day turnout is actually going to look like is an ongoing one with no simple solution, says Joshua Clinton, a political science professor at Vanderbilt University who helped conduct the autopsy analysis. “Is this the year that young people and women are going to vote in different rates? We don’t know,” he says.

If more women turned out this year than in 2016, for example, a poll would show a different result than if women’s share of the electorate stayed the same. “Probably the most defensible thing is to make sure you consider different slices and report out what could happen,” says Clinton.

It’s an approach that both Lake and Blumenthal say they’ve been using in practice. And it’s one that has gotten growing support — including in the New York Times’s take on polling — following questions that cropped up after 2016.

“One of the toughest things is to figure out what the turnout is going to be. I personally don’t think there’s one overall solution. The way to do it is to look at multiple models,” says Lake.

Blumenthal notes that when polling last year’s special election for the Alabama Senate seat, SurveyMonkey presented a range of possible results based on different turnout forecasts. Those models showed that, depending on who turned out to vote, the outcome could very much favor either Republican Roy Moore or Democrat Doug Jones. “Data collected over the past week, with different models applied, show everything between an 8 percentage point margin favoring Jones and a 9 percentage point margin favoring Moore,” Blumenthal wrote at the time.
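Mechanically, that kind of exercise amounts to re-weighting the same survey under several turnout assumptions and reporting the spread rather than a single number. Here is a rough sketch of the idea; the support figures, age groups, and turnout shares are entirely hypothetical:

```python
# Apply competing turnout models to the same hypothetical survey data
# and report the range of outcomes they imply.

# Hypothetical Democratic support by age group from a raw survey.
support_dem_by_group = {"18-29": 0.62, "30-44": 0.54, "45-64": 0.47, "65+": 0.42}

# Competing assumptions about each group's share of the electorate.
turnout_models = {
    "2012-like turnout":  {"18-29": 0.15, "30-44": 0.25, "45-64": 0.35, "65+": 0.25},
    "low youth turnout":  {"18-29": 0.10, "30-44": 0.24, "45-64": 0.38, "65+": 0.28},
    "high youth turnout": {"18-29": 0.22, "30-44": 0.26, "45-64": 0.32, "65+": 0.20},
}

for name, shares in turnout_models.items():
    # Weighted average of group-level support, using this model's turnout shares.
    dem_share = sum(shares[g] * support_dem_by_group[g] for g in shares)
    margin = dem_share - (1 - dem_share)  # Democratic margin in a two-way race
    print(f"{name:20s}  D {dem_share:.1%}  margin {margin:+.1%}")
```

The point isn’t any single line of output but the spread across the models — the same raw interviews can imply anything from a comfortable lead to a deficit, depending on who is assumed to show up.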

There is an “inherent difficulty of projecting an electorate,” he says. “The degree of how much younger voters turn out is an unknown [for example]. I don’t know that that’s a fixable challenge.”

Polling only tells you so much

The important thing is for people who are tracking these polls to be aware of their flaws, experts say. After all, 2016 is far from the only time they’ve missed the mark — and it’s likely they’ll continue to do so.

“I think it’s good to be skeptical. The best consumer of polling data is someone who understands its limitations,” says Blumenthal. Clinton’s advice is blunter: “Take whatever the margin of error is and double it.”
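The margin of error pollsters report typically covers only sampling variability; Clinton’s rule of thumb is a rough allowance for everything else — weighting choices, turnout models, late shifts. A quick sketch of what that implies, using a hypothetical sample size and support level:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Standard 95% margin of error for an estimated proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = 800    # hypothetical sample size
p = 0.52   # hypothetical candidate support

moe = margin_of_error(p, n)
print(f"Reported margin of error:        +/- {moe:.1%}")   # roughly +/- 3.5 points
print(f"Doubled, per the rule of thumb:  +/- {2 * moe:.1%}")
```

On those numbers, a 4-point lead that looks safe against the reported margin of error sits well inside the doubled band — which is the skepticism Clinton is urging.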

One of the big outstanding questions centers on whether polls affect how voters behave, something that Clinton is especially concerned about.

“The danger is that people look at polls and they think that this race is over,” he says. “I would not want people to look at the poll and say, ‘I’m not going to vote because my side is going to win,’ because that’s really bad for democracy.”

Ultimately, both Clinton and Lake say that polling is best used when it isn’t simply a horse-race tracker. They argue that it plays an even more important role in identifying the key issues that people are focused on and offering voters, more broadly, an opportunity to pinpoint what matters to them.

“Lots of emphasis is put on who wins and who loses. The most important question is the road map to victory — what message we should be targeting,” Lake notes.