

The GOP has either a 42% or 77% chance of retaking the Senate

Andrew Prokop is a senior politics correspondent at Vox, covering the White House, elections, and political scandals and investigations. He’s worked at Vox since the site’s launch in 2014, and before that, he worked as a research assistant at the New Yorker’s Washington, DC, bureau.

How likely is it that the Republicans will take over the Senate this fall? Here are the views of the New York Times' Upshot and the Washington Post's Election Lab on the subject:

[Chart: Senate forecasts from the Upshot and the Post's Election Lab]

This is quite a big discrepancy. Here's why the Times and Post have come up with such different results.

The key difference: use of polls

If you've been following elections in recent years, it's probably been drilled into you that averages of polls are usually quite accurate near the end of the campaign. But the election is still six months away, and polling is infrequent in many races — which presents a problem for anyone trying to forecast election results now. To address this, election forecasters incorporate broader political factors called "fundamentals" into their models.

The Upshot has created a model that's mostly based on polling, with fundamentals playing a supplementary role. "The mixture of polling vs. background varies from race to race," Josh Katz of the New York Times tells me, "but as of today, the background model can account for anywhere from about one-third of the forecast to less than 5 percent." Accordingly, the Upshot's results much better reflect recent polling putting Democrats ahead in several close races.
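To make the idea concrete, here is a minimal sketch of how a forecast might blend a poll-based win probability with a fundamentals-based one. This is an illustration of the general weighting approach, not the Upshot's actual model; the function name, inputs, and numbers are all hypothetical.

```python
def blend_forecast(poll_prob, fundamentals_prob, fundamentals_weight):
    """Weighted average of two win probabilities.

    fundamentals_weight is the share of the forecast driven by the
    background model -- per Katz, roughly 5 percent to one-third,
    varying race by race.
    """
    if not 0.0 <= fundamentals_weight <= 1.0:
        raise ValueError("fundamentals_weight must be between 0 and 1")
    return (fundamentals_weight * fundamentals_prob
            + (1 - fundamentals_weight) * poll_prob)


# Hypothetical race: polls favor the Democrat (60%), fundamentals favor
# the Republican (30% for the Democrat). With fundamentals at one-third
# of the mix, the blended forecast stays mostly poll-driven:
print(blend_forecast(poll_prob=0.60, fundamentals_prob=0.30,
                     fundamentals_weight=0.33))  # -> 0.501
```

The design choice the two outlets disagree on, in these terms, is simply how large `fundamentals_weight` should be this far from Election Day: near zero for the Upshot, effectively 1.0 for the Post's current model.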

The Post, by contrast, doesn't currently use Senate polls at all — their model is entirely based on fundamentals. They lay out the broader conditions for each state's race, and make their projections based on those. They plan to add polls eventually, but they wanted to emphasize the fundamentals first. "We wanted to take a more cautious approach and incorporate polls gradually," says Eric McGhee of the Monkey Cage, one of the model's co-creators.

The reason to use polls is obvious. They take the temperature of the electorate, and provide up-to-date, specific information about individual races that broader models can't. Plus, at FiveThirtyEight, Harry Enten found that polling averages from January to June of election years in recent Senate races actually do "a pretty good job of forecasting the final poll margin." Enten writes that the average error for these early polls was 6.4 points — and that polls taken in the final month before the election weren't much better, with an average error of 4.8 points.
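The "average error" Enten measures is the mean absolute gap between poll margins and final results. A short sketch of that calculation, using made-up margins purely for illustration:

```python
def mean_absolute_error(poll_margins, actual_margins):
    """Average absolute gap between poll margins and final margins,
    both expressed in percentage points (positive = Democrat ahead)."""
    pairs = list(zip(poll_margins, actual_margins))
    return sum(abs(poll - actual) for poll, actual in pairs) / len(pairs)


# Illustrative (invented) numbers, not Enten's data:
early_polls = [5.0, -2.0, 10.0]   # January-June poll margins
final_results = [1.0, 3.0, 7.0]   # eventual final margins
print(mean_absolute_error(early_polls, final_results))  # -> 4.0
```

By this measure, Enten's finding is that early-year Senate polls (average error of 6.4 points) are only modestly noisier than final-month polls (4.8 points).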

A thermometer or a forecast?

The case against using polls is a bit more counterintuitive. But it gets to the crux of one tradeoff faced by all election modelers — should they focus on predicting what would happen if the election were held today, or what will happen when the election is eventually held months down the road? "We have in our sights the philosophy of pegging the actual outcome," McGhee told me. He said that, though they'll keep improving and updating their model as the campaign continues, "The anchors of our model are relatively unchanging."

Basically, the Post's modelers are theorizing that the current fundamentals will better predict how things will actually end up than today's polls. How might this work? Right now, the voters might not be perfectly tuned in to these fundamentals, but during the election season, the dueling campaigns and the press can communicate this information to them. "The campaign brings the fundamentals of the election to the voters," political scientists Robert Erikson and Christopher Wlezien wrote in a book on presidential elections.

McGhee points out that there's greater variation among individual Senate races than presidential general elections, where the candidates are often evenly matched in resources. But he says Senate elections are "kind of pulled in that direction, toward the fundamentals. Assuming a real contest, I think they can only diverge so much."

What's a fundamental?

Beyond their differing treatment of polls, the Upshot and the Post define the background conditions known as fundamentals in somewhat different ways.

[Table: the fundamentals used by the Upshot and Election Lab models]

One difference here is that the Post looks at President Obama's approval nationally, while the Upshot looks instead at the approval of the actual incumbent senators in each state. This may seem like a small matter, but it gets at a major question this cycle — whether red state Democrats will be dragged to defeat by Obama's unpopularity, or whether they can effectively differentiate themselves from the president.

The most disputed races

In six competitive races, the two forecasts particularly diverge — and the Post gives Democrats worse odds in five of those.

[Chart: the six races where the two forecasts diverge most]

The wildly different forecasts for the Republican-held open seat in Georgia are particularly striking. Democrat Michelle Nunn has been leading many recent polls there, and the Upshot balances those polls with the built-in Democratic disadvantages in the state to give her a 46 percent chance of winning. The Post's poll-less model, however, gives her an amazingly low 1 percent chance. Meanwhile, must-win Democratic open seats in Iowa and Michigan are considered by the Post to be toss-ups, but the Upshot considers Democrats clear favorites in both.

Conclusion

Over the next few months, we'll see whether the recent Democratic polling advantage in several races holds up — or whether those candidates are dragged down by the national mood. And if things take a turn for the worse for Democrats, the Post will be able to say that its fundamentals-based model predicted it. But both models will keep being tweaked as the campaign continues. "Where the art comes in, as opposed to the strict science, is in striking the balance between the polls and the model," McGhee says. "At what point do you really start to trust the polls more?"

Further reading

  • Political scientist Alan Abramowitz wrote on Twitter that the Upshot's use of state polls explains the different forecasts.
  • Nate Silver's latest Senate forecast, from back in March, and not yet formalized in a model, is here.
  • Sam Wang wrote about "The war of the Senate models" in Politico Magazine.
  • Josh Kraushaar criticized several Senate forecasting models for National Journal.
  • John Sides wrote about why the forecasting models differ, and Ben Highton ran through some of the Post's specific results here.
Correction: This post has been updated to clarify that the Post's model uses national presidential approval data, not state-level data.