Another election is around the corner, which means another round of polling to parse as voters and candidates alike gear up for the midterms. Right now, polls are telling a story that is far rosier than the one Democrats were expecting earlier this year. But headline after headline after headline says that these numbers may not ring true on Election Day.
If the polls are wrong, it won’t be the first time Democrats feel the polls set them up for disappointment. President Joe Biden won the 2020 general election by a slimmer margin than expected, and of course there was 2016: Very few pundits predicted Donald Trump’s victory over Hillary Clinton.
Polling experts will tell you, though, that while some polls really did fail in recent big elections before and after 2016, there’s also a common, fundamental misunderstanding among news and poll consumers about what polls are useful for in the first place. And that misunderstanding was exacerbated in 2012, when the data became more plainly accessible, according to Amy Walter, the publisher and editor-in-chief of The Cook Political Report.
On this week’s episode of The Weeds — Vox’s podcast for politics and policy discussions — Walter gets into the nitty-gritty of what many people hanging their hopes and setting their expectations on the data get wrong about polling and why consumers of public polls and media about them may not have all the data they need to predict political races.
Amy, take us back to 2012. What made that election so different in terms of our understanding of the polls?
2012 was this watershed moment where regular people, regular news consumers could have access to not just the data, because this data had been floating out there. The NBC Wall Street Journal polls had been around for a long time. So had CBS and ABC … but for the first time, we had something in FiveThirtyEight that aggregated all of that data and made it easy to search and understand for your non-political person. Your average news consumer could understand in essence what all of this data was saying. [FiveThirtyEight] took all the polls, they aggregated them, they had something of a formula to be able to average it out so that every day you would click on their website, and every day you would get an updated version of what the polls are saying about the approval rating of X candidate.
They added something else that we as human beings like to see, but we don’t really understand very well — or at least we don’t react to it in the way that we should — which is a percentage chance, a prediction of the chances of that state going in a red direction or a blue direction. A probability.
Humans are not very good at a number of things. One is risk. We think we understand risk, but we’re really, really bad at it. You say, “That seems really dangerous to drive a hundred miles an hour on a really curvy road with ice on it, that’s clearly risky.” You know what else is really risky? Walking across the street looking at your phone and not looking at the fact that there’s a car coming or a bike coming or somebody else coming at you. There’s a bigger likelihood you’re going to get injured or die because you were looking at your phone, not because you were driving a million miles an hour on an icy road.
The other thing we’re bad at besides risk is probability.
So the weather person says there’s an 80 percent chance of rain. What do you do? You expect it’s going to rain, right? You carry an umbrella, and when it doesn’t rain, after you canceled your plans to go outside and have your nice outdoor dinner party, you’re really upset because you were told there was an 80 percent chance. How is it possible that it didn’t rain? Well, the weather person is able to come back and say, “What I really told you is there’s a 20 percent chance it’s not going to rain.”
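The averaging Walter describes can be sketched in a few lines. The poll figures and the weighting scheme below are invented for illustration; FiveThirtyEight’s actual model is far more elaborate, adjusting for pollster quality, recency, and house effects.

```python
from statistics import mean

# Hypothetical poll results (percent support for one candidate).
# These numbers are invented for illustration, not real survey data.
polls = [48.0, 51.5, 49.0, 50.5, 47.5]

# The simplest aggregation: a plain average across recent polls.
simple_average = mean(polls)

# Aggregators typically weight newer polls more heavily. A minimal
# version: linear weights, with the most recent poll weighted highest.
weights = range(1, len(polls) + 1)  # oldest poll gets weight 1
weighted_average = sum(w * p for w, p in zip(weights, polls)) / sum(weights)

print(round(simple_average, 2), round(weighted_average, 2))
```

Even this toy version shows the appeal: one number a reader can check every day, instead of five conflicting polls.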
Are pollsters the weather people of politics?
In some ways, yes. In some ways, no. They are not the four-day weather forecasters. What they would tell you is, “I’m going to tell you what the temperature, the wind speed, the humidity level is today.” Now we can project that out and assume if all of those things stay similar, that that is what it’s gonna look like seven days from now. But if something happens, we have to go back into the field and reassess the weather. And so pollsters will tell you all the time, this is just a moment in time.
Even a 30 percent chance of something happening is not that outlandish. We’re asking a lot more of political polling than it is able to give us. It is not meant to be a precise tool. This is not what NASA would use to put together instruments. It is giving you a snapshot.
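Walter’s point that a 30 percent chance is not outlandish is easy to check with a quick simulation: an outcome given 30 percent odds still lands roughly three times out of ten. The sketch below is a minimal illustration, not anything a pollster actually runs.

```python
import random

random.seed(7)  # fixed seed so the run is reproducible

TRIALS = 100_000
CHANCE = 0.3  # a "30 percent chance" outcome, like an underdog winning

# Count how often the 30-percent event actually occurs across many trials.
hits = sum(1 for _ in range(TRIALS) if random.random() < CHANCE)
rate = hits / TRIALS

# The event occurs close to 3 times in 10: unlikely is not impossible.
print(round(rate, 3))
```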
Then who right now should have faith in the polls? Should the Republicans have faith in the polls? Should the Democrats?
Right now, when you talk to campaigns and the people who are doing the private polling, Democrats and Republicans do see some of the races differently.
So they have different expectations internally because they have access to more information than any of us do. So there is some disagreement out there as to whether the public polls are mirroring what the campaigns are seeing in their own polling.
That, I think, is another important thing to remember: all the polling and the modeling that happens — data analytics, big data — is happening in these campaigns to a significant degree. A significant amount of money and effort goes into these analytics and this polling data that we, you and I and anybody else listening here, will never get a chance to see.
So what we get a chance to see are polls done for public consumption. That’s not to say that these are pollsters who are trying to mislead or are pushing an agenda. I think that — especially if we’re talking about some of these polls that have been around a long, long time, and some of these colleges and universities that are doing polling — they’re doing it as a way to put information out into the public sphere.