Up until Tuesday morning, it was easy to consider yourself well informed about what was going to happen during the 2016 presidential election: All you had to do was check your choice of polls, almost all of which showed Clinton winning.
If you wanted to get more sophisticated, you could look at a data + politics hub like Nate Silver’s FiveThirtyEight, which provided more nuanced odds, but still favored Clinton.
Almost everyone got this stuff wrong.
But the one that arguably made the biggest error was also the one making the most audacious claim: VoteCastr, a brand-new startup, promised to provide real-time projections on Tuesday, as the votes were being collected, but before vote totals were actually released.
VoteCastr’s data generated lots of attention on Tuesday, and may have even helped move financial markets.
But it turned out to be way, way off: VoteCastr got five of the seven states it predicted wrong.
Update: VoteCastr CEO Ken Smukler argues that his company actually called the majority of the states it watched correctly. You can read a longer version of his case at the bottom of the post.*
The most prominent error regarded Florida, which VoteCastr thought Hillary Clinton would win by more than 300,000 votes; instead, she lost it by more than 100,000 votes.
The misses convinced many people that VoteCastr’s mission was a bad one: It got the calls wrong, and it distributed those incorrect calls while voting was in progress, which could have affected the outcome.
But Slate Editor in Chief Julia Turner, whose site partnered with VoteCastr and published their data, says the idea remains a good one.
The problem, she says, lay in the polls VoteCastr conducted before the election and then synced with the voter turnout numbers it collected on Tuesday.
Like everyone else who polled voters before the vote, VoteCastr got its poll numbers wrong. Which meant the final product would be wrong as well.
Here’s her response, via email, to my questions about why VoteCastr’s numbers were off, and whether she would repeat the process for another election:
The Votecastr project was premised on two ideas: First, that keeping real-time information from voters on election day was anti-journalistic. And second, that campaign methodologies could produce accurate and illuminating estimates of how the race was going on election day.
Obviously, our numbers weren’t right. We’re running a postmortem with the VC team this morning, but I suspect that’s because Votecastr’s methodology is dependent on polling; Votecastr conducted its own large sample polls, but found results fairly in line with the public opinion polling that was also off by a few points in many key states.
I’m disappointed that our numbers were off, but I still believe in the general principle that election day shouldn’t be an information-free zone.
* Votecastr CEO Ken Smukler says we’re wrong, and that his company was right. Or mostly right: The company says it ended up getting most of its calls correct on Election Day.
The longer argument: Smukler says VoteCastr correctly projected final presidential vote counts for six of the eight states it tracked on Tuesday.
But that information was never distributed by Slate or Vice, the media organizations VoteCastr partnered with. Instead, Slate and Vice shared VoteCastr’s estimates throughout the day, which projected real-time vote totals but not a final estimate.
Smukler says the company ultimately made the following projections at 6:30 pm ET, when polls were still open across the country. He points out that the company accurately projected a Clinton win in Nevada and a Clinton loss in Pennsylvania, but concedes that it got Florida and Wisconsin wrong:
Clinton: 46.6% projected, 48% actual
Trump: 43.1% projected, 47% actual

Clinton: 48.6% projected, 48% actual
Trump: 44.7% projected, 49% actual

Clinton: 45.1% projected, 44% actual
Trump: 45.7% projected, 52% actual

Clinton: 45.0% projected, 48% actual
Trump: 47.6% projected, 49% actual

Clinton: 50.0% projected, 47% actual
Trump: 41.8% projected, 48% actual

Clinton: 46.1% projected, 48% actual
Trump: 44.9% projected, 46% actual

Clinton: 44.5% projected, 42% actual
Trump: 46.1% projected, 52% actual

Clinton: 46.0% projected, 47% actual
Trump: 43.0% projected, 44% actual
The idea behind VoteCastr was to replicate the “war rooms” run by campaigns leading up to the election. The company combined its own pre-election polling with turnout numbers collected throughout the day from sample polling places.
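The basic arithmetic of that approach can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the precinct names, support rates, turnout figures, and the fixed time-of-day scaling factor are all invented, and this is not VoteCastr’s actual model.

```python
# Hypothetical war-room-style projection: scale each sampled precinct's
# observed turnout up to a full-day estimate, then allocate those votes
# using pre-election poll support rates. All numbers are made up.

def project_votes(poll_support, observed_turnout, registered):
    """Return projected vote totals per candidate across sampled precincts."""
    totals = {"Clinton": 0.0, "Trump": 0.0}
    for precinct, turnout_so_far in observed_turnout.items():
        # Crude assumption: votes cast so far are 60% of the final turnout.
        # A real model would use historical time-of-day turnout curves.
        projected_turnout = min(turnout_so_far / 0.6, registered[precinct])
        for candidate, share in poll_support[precinct].items():
            totals[candidate] += projected_turnout * share
    return totals

poll_support = {"P1": {"Clinton": 0.48, "Trump": 0.44},
                "P2": {"Clinton": 0.41, "Trump": 0.51}}
observed_turnout = {"P1": 600, "P2": 900}   # ballots cast so far
registered = {"P1": 1500, "P2": 2000}       # registered voters (cap)

print(project_votes(poll_support, observed_turnout, registered))
```

The sketch also shows where the method is fragile: the poll support rates enter the projection unchanged, so if the underlying polls are off by a few points, every downstream estimate inherits that error.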
Smukler says his intent had always been to use that data to ultimately predict the winner of each state. But Slate, his initial distribution partner, didn’t want to run final projections — just estimated totals throughout the day.
So this list, which Slate described as “VoteCastr's final estimated vote totals,” isn’t actually VoteCastr’s projections for the end of Tuesday’s vote — it’s just the last vote total Slate ran from VoteCastr, using data from 5 pm on Tuesday.
Smukler says Slate could have had access to his final vote projections, but didn’t want to use them, which sounds right: The site had been quite clear about the fact that it didn’t want to call the races itself, but just describe the state of the race at given points in time.
But Smukler says he did try to give his projections to Vice, and recorded a segment where he passed along those projections at 6:30 pm ET, with the idea that they would be used in Vice’s HBO show that night. But the clip of Smukler never made it on the broadcast.
Via email, Vice News head Josh Tyrangiel confirmed that his crew taped the segment with Smukler but didn’t air it. “Couldn't get it on air in time,” he wrote.
Slate Editor in Chief Julia Turner has her own post-mortem of what happened on Election Day, which syncs up with Smukler’s story in some cases, but not in others.
This article originally appeared on Recode.net.