
Why election forecasters disagree about who will win the Senate

(Mark Wilson/Getty)

With the 2014 midterms now just two months away, there's been a recent proliferation of models attempting to forecast the battle for the Senate — models that are showing different results:

(Chart: Senate forecasts)

Currently, the Upshot and FiveThirtyEight both view Republicans as narrow favorites to retake the chamber. The Washington Post and Daily Kos view the race as essentially tied. HuffPost Pollster gives the Democrats a slight edge, and a model by Princeton neuroscientist Sam Wang views Democrats as clear favorites. Now, these differences shouldn't be overstated — in a probabilistic forecast, 60-40 looks a lot like an ordinary coin flip, as Amanda Cox of the Upshot explained at a recent panel.
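To make Cox's point concrete, here's a quick simulation (a hypothetical illustration, not drawn from any of the models): a forecaster who gives the favorite 60 percent odds in each of ten competitive races should expect the favorite to lose about four of them, which over a single cycle is hard to tell apart from a coin flip.

```python
import random

random.seed(2014)

# Hypothetical illustration: a forecaster gives the favorite a 60 percent
# chance in each of 10 competitive races. Simulate many election cycles
# and see how often the favorites actually win.
def favorites_won(num_races=10, favorite_prob=0.60):
    return sum(random.random() < favorite_prob for _ in range(num_races))

cycles = [favorites_won() for _ in range(100_000)]
print(sum(cycles) / len(cycles))  # about 6.0 of 10 races
# A well-calibrated 60-40 forecast is still "wrong" about 4 times in 10,
# which over one election cycle looks a lot like a coin flip.
```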

Yet it's worth understanding why the models somewhat disagree. Very broadly, the most important differences relate to "how uncertain the models are about forecasts for every different race," says Eric McGhee, co-creator of the Washington Post's model. In other words, compared to what the raw polling data currently shows, how does each model try to adjust and correct for where things might go wrong? "These models are measuring uncertainty in different ways," says Nate Silver of FiveThirtyEight. "There's a range of legitimate disagreement, and there's also models that are just not done well."

1) The big difference: Polls-plus fundamentals vs. just polls

(Chart: AR-SEN and LA-SEN forecasts)

Currently, the three models giving Democrats the best chances — Wang's, Daily Kos, and HuffPost Pollster — all rely entirely on polls of individual races. In contrast, the three models slightly favoring Republicans — the Post, the Upshot, and FiveThirtyEight — balance out these polls with a set of other factors generally called "fundamentals." As shown above, FiveThirtyEight and the Upshot view Democrats Mark Pryor of Arkansas and Mary Landrieu of Louisiana as underdogs, while the polls-only models generally view those races as toss-ups.

The specific fundamentals used in each model differ somewhat, and can include both state-level factors like candidate fundraising and national factors like the generic ballot. Right now, metrics that try to account for the underlying political leanings of particular states — like Obama's statewide vote share in 2012 — seem to do the biggest damage to Democrats' chances compared to the poll-only models.
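As a rough sketch of how such a blend might work (the weighting scheme and the state-lean numbers below are invented for illustration; each outlet's actual formula differs and isn't fully public):

```python
def blended_forecast(poll_margin, state_lean, poll_weight=0.7):
    """Blend a polling average with a 'fundamentals' prior.

    poll_margin: Democratic margin in the polling average, in points
    state_lean:  the state's underlying partisan lean, e.g. derived
                 from Obama's 2012 vote share there, in points
    poll_weight: trust placed in polls vs. fundamentals (hypothetical)
    """
    return poll_weight * poll_margin + (1 - poll_weight) * state_lean

# A Democrat polling even in a state Obama lost by 17 points in 2012
# ends up an underdog once the fundamentals are mixed in:
print(round(blended_forecast(poll_margin=0.0, state_lean=-17.0), 1))  # -5.1
```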

These fundamentals are included to try and balance out uncertainty, particularly in states where the polling averages might not seem so trustworthy — perhaps because polling has been sparse, or has come from firms with poor track records. "Usually, if you don't have that much data and you have to pack too much in, your model doesn't work as well in the real world as it does on the computer," Silver says.

But some argue that including fundamentals actually introduces more uncertainty. "Because polls are a direct measure of opinion, and fundamentals are indirect, it's generally a risky move to add fundamentals to a measurement," says Sam Wang. He argues that his purely poll-based metric correctly called two Senate races won by Democrats in 2012 — North Dakota and Montana — that Silver's model missed. "He whiffed on those two Senate races," Wang says, suggesting that Silver's inclusion of fundamentals threw him off.

The modelers who use fundamentals say they've extensively analyzed whether polling-plus-fundamentals models outperform polling-only models at various stages of the campaign — and that historically, at this point, they clearly do. While Silver hasn't yet published all the factors that go into his new Senate model, he does say that while his team was making it, "We discovered that we were putting too much weight on the fundamentals before."

But these models aren't static. Even the modelers who do use fundamentals are starting to give polls much more weight, and they'll continue to do so as election day draws closer. The Post's model, which used to be entirely fundamentals, is now "mainly polls," says McGhee. This is one reason why, back in July, the Post predicted an 86 percent likelihood of Republicans taking the Senate, but it now gives them only a 53 percent chance.
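One simple way to implement that shift is to let the weight on polls grow as election day approaches. The schedule below is purely illustrative, not any model's published parameter:

```python
def poll_weight(days_until_election, halfway=90):
    """Illustrative schedule: the weight on the polling average grows
    toward 1.0 as election day nears; the remainder goes to the
    fundamentals. The 90-day halfway point is an assumption, not a
    published value from any of these models."""
    return halfway / (halfway + days_until_election)

for days in (180, 90, 60, 0):
    print(days, round(poll_weight(days), 2))  # 0.33, 0.5, 0.6, 1.0
```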

2) The future's uncertain, but how uncertain?

(Chart: NC-SEN forecasts)

Sen. Kay Hagan (D-NC) has a small lead in most recent North Carolina polls. But the models disagree sharply on how good her chances actually are. Should she be viewed as a clear favorite or a toss-up?

The most optimistic results for Hagan are from Wang's model, which gives her an 84 percent chance of victory, and the Washington Post's, which gives her a 91 percent chance. The polls have shown "a small margin but an extremely consistent margin" in Hagan's favor, McGhee says, so the Post's model reflects that consistency. "We looked at the polls in past election years to see when we felt like we could really trust the polls, and we concluded that right about now was when you could start trusting them."

There's another, more technical reason why Wang's model may stand out: he alone relies on the median of polls. Specifically, he finds the median of the most recent poll from each pollster, estimates a standard error, and calculates a win probability based on that. "My use of median-based statistics is a way to get rid of outliers," he has said. "If the numbers say Obama +1, Obama +2, and Romney +9, the right answer is probably Obama +1, the middle value (median), not the average." In contrast, HuffPostPollster and Daily Kos look at every poll conducted in a particular race and fit a trendline to them.

Unlike Wang and the Washington Post, the other four modelers view Hagan's race as a toss-up, matching the views of most political observers. This is partly because they build more uncertainty into their models overall, though they do so in different ways. "Once we have an estimate of where we think public opinion is today, we add random error to account for where the polls could be wrong and how public opinion could change between today and election day," says statistician Drew Linzer, who co-created the Daily Kos model.
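A stripped-down version of that idea might look like the following. The two error sizes are assumptions for illustration; the Daily Kos model estimates them from historical data:

```python
import random

random.seed(42)

def win_probability(today_margin, polling_error_sd=3.0,
                    drift_sd=4.0, trials=100_000):
    """Monte Carlo sketch: start from today's estimated margin, add random
    error for (a) the polls being collectively wrong and (b) opinion moving
    between now and election day, then count the winning simulations.
    Both standard deviations are assumptions, not the model's values."""
    wins = 0
    for _ in range(trials):
        simulated = (today_margin
                     + random.gauss(0, polling_error_sd)
                     + random.gauss(0, drift_sd))
        wins += simulated > 0
    return wins / trials

# A steady 2-point lead is far from safe once both error sources are added:
print(round(win_probability(2.0), 2))  # roughly 0.65
```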

Silver's model also builds in substantial uncertainty, as he described in a recent post. "Whether a 4-point lead translates into a 90 percent chance of victory or a 60 percent chance of victory is something we've spent a lot of time looking at," he says. "Some people that have crazily high odds in one state or another, I really want to kinda wager them. But it's good for the science of this for people to have a lot of different approaches."

3) Bias in the data?

Of the 10 recent polls of North Carolina's Senate race, seven are from avowedly partisan pollsters. Many of them use robo-calls, and one uses an online panel. Some of these firms have done well in past elections and others haven't. Some poll registered voters, while others poll likely voters, and they all differ by sample size. How should modelers adjust for all this?

Linzer believes that, for the most part, they shouldn't. "Correcting for systematic biases feels more intuitive, but how do you know your correction is the right correction?" he says. After a broad analysis of partisan polls, Linzer found that on average they give "about a 1.5 percent boost to candidates that they represent." So his model corrects for that but, he says, "that is the only correction that we do, and even that is relatively minor. We don't believe that we know what the corrections that would help are."
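In code, that single correction might look like this (the 1.5-point figure is Linzer's estimate quoted above; the function itself is a hypothetical sketch):

```python
def adjust_partisan_poll(margin, sponsor):
    """Subtract the average partisan-sponsor boost from a poll's margin.

    margin:  Democratic-minus-Republican margin, in points
    sponsor: 'D', 'R', or None for a nonpartisan poll
    """
    PARTISAN_BOOST = 1.5  # Linzer's estimated average sponsor effect
    if sponsor == 'D':
        return margin - PARTISAN_BOOST
    if sponsor == 'R':
        return margin + PARTISAN_BOOST
    return margin

print(adjust_partisan_poll(3.0, 'D'))   # 1.5
print(adjust_partisan_poll(-2.0, 'R'))  # -0.5
```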

Other models make more extensive adjustments. It's common to adjust for "house effects" of polling firms in some form. Silver's models have long rated the accuracy of particular pollsters based on past performance, and his current model does as well. And the Huffington Post's model gives greater emphasis to certain polls. "We're using all polls, even the partisan polls, to help determine the trend, but we're calibrating the level at which our trend is set based on the nonpartisan polls that have had a reasonable record of relatively little house effect in 2012," says Mark Blumenthal, who runs the HuffPost Pollster model with Natalie Jackson.
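A toy version of a house-effect adjustment compares each firm's polls to the all-poll average and subtracts the difference. Real models estimate these effects jointly across many races and over time; the firms and numbers below are invented:

```python
from statistics import mean

# Invented example: each firm's margins in the same race, compared with
# the overall average across every poll.
polls = {
    "Firm A": [4, 5, 3],
    "Firm B": [-1, 0, 1],
    "Firm C": [2, 2, 3],
}
overall = mean(m for margins in polls.values() for m in margins)

# A firm's house effect is how far its average runs from the consensus.
house_effects = {firm: round(mean(margins) - overall, 2)
                 for firm, margins in polls.items()}
print(house_effects)  # e.g. Firm A leans ~1.9 points more Democratic

# Subtracting each firm's effect puts its polls on a common scale before
# they enter the average.
```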

Early in the year, pollsters often measure opinion among all registered voters. But as the election draws nearer, they instead start looking at likely voters — the people they think will actually turn out on election day. "Almost always in midterm years, likely voter polls are more favorable to Republicans. We'll soon see a switch to more pollsters using likely voter data, and our model already accounts for that," says Silver. Models that don't do so may move in a pro-Republican direction as many more likely voter polls begin coming out. "A lot of people will see that and say, 'There's a big shift toward Republicans.' They'll misinterpret that evidence," Silver adds.
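A model can account for that switch by putting registered-voter polls on a likely-voter basis before averaging. The 2-point shift below is an assumption for illustration, not FiveThirtyEight's actual figure:

```python
def to_likely_voter_basis(margin, population, midterm_lv_shift=-2.0):
    """Convert a registered-voter margin onto a likely-voter basis.

    margin:     Democratic margin, in points
    population: 'LV' (likely voters) or 'RV' (registered voters)
    midterm_lv_shift: typical pro-Republican shift when moving from RV
        to LV samples in a midterm year (hypothetical value)
    """
    if population == 'RV':
        return margin + midterm_lv_shift
    return margin

# Two polls of the same race can then be compared directly:
print(to_likely_voter_basis(3.0, 'RV'))  # 1.0
print(to_likely_voter_basis(1.0, 'LV'))  # 1.0
```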

What to keep in mind going forward

(Photo: Harry Reid and Mitch McConnell. Saul Loeb/AFP/Getty)

All of these models will likely converge as more polling comes out. "By Election Day, everyone's going to have what's basically a poll averaging model," says Linzer. "In 2012, everyone looked at the forecasts people made the day before the election, saw they were right, and said, 'The nerds won!' Just because the night before the election we averaged a bunch of polls together." Linzer hopes that the post-election retrospectives take into account not just the night before, but what the models said months in advance. "These are forecasting models, they should have some utility now, around Labor Day."

Still, unless the new polls move sharply in one direction or the other, the modelers attempting to call the Senate outcomes face a very difficult task this year. "There's a lot of competitive races, and those competitive races absolutely matter for control of the Senate. The target we're trying to hit is very small," says McGhee. Silver concurred in a recent post, writing, "This is the sort of year in which there are likely to be several missed calls — it would be a minor miracle if any of the models, certainly including ours, manage to go 35-for-36 or 36-for-36."
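The arithmetic behind that "minor miracle" line is simple: even a well-calibrated model should expect a few misses when many races are close. With, say, 75 percent confidence in ten close races and 95 percent in the rest (illustrative numbers, not any model's actual output), a perfect record is very unlikely:

```python
# Probability of calling every race right, assuming independent races and
# a well-calibrated model. The per-race confidences are hypothetical.
confidences = [0.75] * 10 + [0.95] * 26  # 10 close races, 26 safer ones

perfect = 1.0
for p in confidences:
    perfect *= p
print(f"Chance of going 36-for-36: {perfect:.1%}")  # about 1.5%
```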

And even though models like these have performed well in recent years, they're still all vulnerable to the possibility of a broad-based polling failure. "The volume of polling is way lower than it was 2 and 4 years ago, and the quality of polling is problematic," Silver says. "The response rates get lower and lower every year. Pollsters have still been managing to get decent results, but sooner or later, something's gonna break." One particular problem Silver mentions is that "pollsters tend to herd, or copy off each other. Then, instead of having random variation around some mean, you can get weird patterns where you can be right for several elections in a row, and then you might have fat-tailed errors."

"That does make me really nervous," Silver adds. "Maybe it won't be this year, but sooner or later you're going to have a year when things were way off."

Update: Added more information to the section on fundamentals.
