
Almost everyone got 2016 wrong. We should try to predict 2020 anyway.

A political scientist makes the case for political forecasting.

Voters line up outside the polling place at Fire Station Number 2 on Election Day, November 6, 2018, in El Paso, Texas. Chip Somodevilla/Getty Images

On the day Beto O’Rourke announced his candidacy for president, I went on Twitter and made a prediction: The former El Paso Congress member would be the next president of the United States, propelled into the Oval Office by his narrative gifts and charisma.

My prophecy took my Twitter followers by surprise and earned me a few sneers, and, for my sins, I was ratioed. Some took it as an endorsement (it wasn’t). It was instead my attempt to elaborate on a working theory about what drives success in today’s politics, and to make a prediction based on it, far enough in advance so it actually means something.

Most scholars and commentators these days are overly cautious about venturing predictions. It’s understandable: After so many got so much wrong in 2016, the natural response is to step back and “get out of the prediction business.”

This is too bad. If anything, we should make more predictions — if so many once-reliable theories crumbled in 2016, there’s all the more need to come up with new ones. And making predictions is the best way to test our theories and assumptions, and therefore an excellent way to learn. Yes, it means sticking your neck out and maybe being wrong. The safest way to always be right is to never make any predictions. But without venturing testable hypotheses about the future, it’s harder to distinguish rival theories.

Like it or not, the demand for predictions exists. Our minds are wired to hate uncertainty and to want to know the next turn in the story. Predictions fill a deep need.

Without scholars and commentators making informed predictions in the accountable spirit of the scientific method, the market is left to those who care less whether they are right — they just want to be interesting. This reduces the quality of predictions — more vague and wild predictions, fewer ones resting on specified theories. The result: We understand less about the world.

Psychologist Philip Tetlock has found that the more specific predictions we make, the better we get at predicting. Like anything else, we learn more from our mistakes when we hold ourselves accountable.
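
One concrete way to hold yourself accountable is to score your probabilistic predictions after the fact. The sketch below uses the Brier score, the accuracy measure used in Tetlock-style forecasting tournaments; the forecasts and outcomes in it are hypothetical, just to show the arithmetic.

```python
# A minimal sketch of scoring probabilistic predictions with the Brier score.
# The forecasts and outcomes below are hypothetical illustrations.

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and what happened (1 or 0).
    Lower is better; always saying 50 percent earns 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Probability assigned to each event happening, and whether it actually did.
forecasts = [0.70, 0.90, 0.20]
outcomes = [1, 0, 0]

print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # 0.313
```

The point is not the particular formula. It is that vague predictions can't be scored at all, while specific ones can.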

How to think about theories and predictions

To live in the world is to develop many predictive models about how everything operates, based on experience. The subway is faster than the bus; if you want to get a table at this restaurant, you have to arrive before 6 pm; if my kids have a late dinner, they will get very cranky.

The same applies to politics. We have lots of predictive models about what we think will happen and why. One common example of a predictive theory is the old political saw, “It’s the economy, stupid!” The slogan oversimplifies a theory that the better the economy is doing, the more likely an incumbent candidate will get reelected. Certainly, this doesn’t capture everything. But as a modest estimation, it works pretty well.

One popular economy-focused presidential election model is Douglas Hibbs’s “bread and peace” model. The more “bread” (i.e., the more economic growth), the more votes the incumbent party is likely to win. And the less “peace” (i.e., if the country is in an unpopular war), the fewer votes the incumbent party is likely to win. Add these together and you can explain a good deal of the variation in presidential election results going back to 1952.
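
To make the structure concrete, here is a toy sketch of what such a model looks like in code. The coefficients and inputs are made-up placeholders for illustration, not Hibbs's actual published estimates.

```python
# A toy sketch of a "bread and peace"-style forecast: incumbent-party vote
# share as a linear function of income growth ("bread") and war fatalities
# ("peace"). All coefficients are assumed placeholders, not Hibbs's estimates.

def predicted_vote_share(income_growth_pct, fatalities_per_million):
    intercept = 46.0   # assumed baseline share with no growth and no war
    bread = 3.5        # assumed points gained per point of real income growth
    peace = -0.05      # assumed points lost per fatality per million population
    return intercept + bread * income_growth_pct + peace * fatalities_per_million

# Hypothetical election-year conditions: 2 percent income growth, no unpopular war.
print(f"Predicted incumbent share: {predicted_vote_share(2.0, 0):.1f}%")  # 53.0%
```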

But the model made a wrong prediction in 2016. It overestimated the share of the two-party vote going to the incumbent (Democratic) party candidate. Hillary Clinton underperformed the model and Donald Trump won the election. Some other factor (or set of factors) outside the model mattered.

Here’s the thing: Even good models and good theories spit out wrong predictions sometimes. Models are not one-to-one maps of the world. They are probabilistic abstractions that attempt to simplify a complex world into a parsimonious hunch.

The more observations, the more accurate the theory can be. But often, you’re working with a limited number of observations, especially when it comes to important historical events. This is a problem with, say, presidential elections, which don’t happen that often. That makes election models difficult to get right.
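
One way to see the problem is to simulate it: even if you knew the true relationship between growth and vote share, estimating it from only about 17 elections leaves a lot of slack. The simulation below is purely illustrative; every number in it is made up.

```python
# A sketch of the small-n problem: even with a known "true" relationship,
# estimates based on ~17 observations bounce around a lot.
# All data here are simulated; the numbers are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
true_slope, n_elections, n_sims = 3.5, 17, 2000

slopes = []
for _ in range(n_sims):
    growth = rng.normal(2.0, 1.5, n_elections)                        # simulated growth
    vote = 46 + true_slope * growth + rng.normal(0, 3, n_elections)   # noisy outcomes
    slopes.append(np.polyfit(growth, vote, 1)[0])                     # OLS slope estimate

print(f"True slope: {true_slope}")
print(f"Estimated slope, 5th to 95th percentile: "
      f"{np.percentile(slopes, 5):.2f} to {np.percentile(slopes, 95):.2f}")
```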

I use the example of a statistical model because it’s easiest to explain and generates the most precision. Though we should strive for as much precision as possible, not all important factors are easily quantifiable. Sometimes, the quest for precision can direct us away from important but difficult-to-quantify factors and lead us to search for our keys under the streetlight. Sometimes we have to settle for proxies or expert judgments instead of clear metrics.

Learning from our mistakes

So what happens when a model makes a wrong prediction? A forecaster might come to a few different conclusions:

  1. No big deal — the model is as good as it can be. No model is perfect, and one wrong prediction doesn’t mean you should abandon a pretty good model. The world is full of random flukes.
  2. The model is still right generally but could be better. The wrong prediction revealed something about the underlying relationships and can allow us to improve the model.
  3. Something is different now. The assumptions about underlying relationships have changed. The model is now junk. We need to build a new model.

Let’s apply this to Hibbs’s model. Maybe the model that got so many presidential elections right but got 2016 wrong needs another variable, like the “time for a change” factor — that after eight years of one party in the White House, the electorate wants another party. If so, that’s a tweak: Add in another variable. But the challenge is that the more variables you add, the less parsimonious your model becomes, and the harder it becomes to test.
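
That trade-off is easy to demonstrate with simulated data: with only about 17 elections, extra variables that have nothing to do with the outcome still soak up noise, and out-of-sample (leave-one-out) prediction error tends to get worse. Everything in this sketch is simulated and purely illustrative.

```python
# A sketch of why piling variables onto a small-n election model is risky:
# leave-one-out prediction error tends to worsen as irrelevant predictors
# are added. All data are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 17                                            # roughly the elections since 1952
growth = rng.normal(2.0, 1.5, n)
vote = 46 + 3.5 * growth + rng.normal(0, 3, n)    # only growth truly matters here

def loo_error(X, y):
    """Mean absolute leave-one-out prediction error for an ordinary least squares fit."""
    errors = []
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        coef, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        errors.append(abs(y[i] - X[i] @ coef))
    return np.mean(errors)

X = np.column_stack([np.ones(n), growth])
for k in range(6):                                # evaluate 0 through 5 extra variables
    print(f"{k} extra variables: leave-one-out error = {loo_error(X, vote):.2f}")
    X = np.column_stack([X, rng.normal(size=n)])
```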

Or maybe something fundamental has changed. Maybe the economy matters less than it once did, because partisanship is so dominant, and even evaluations of the economy are partisan. If so, maybe we need a different model.

One area where a new model might be needed is presidential nominations. Four years ago, I (along with many others) was confident Jeb Bush would be the Republican nominee. Like most scholars and pundits, I believed in the Party Decides theory, which had convincingly explained recent party nominations by showing how party elites coordinated endorsements and money to help favored candidates win. And Bush looked like the favorite of Republican Party elites — hence, the future nominee.

Even as Trump led the field for most of 2015 and into early 2016, I still believed money and endorsements would matter, as they had in the past. But then Trump won. Something was missing from the old theory, at least as applied to the GOP in 2016.

But we still don’t know for sure whether our old theories were just incomplete or whether they are now outdated. To find out, we need to make more predictions.

One big challenge in almost all of social science, political science included, is that it is “observational.” That is, unlike bench science, where you, say, get to inject various mice with different chemical compounds and see how they react while holding everything else constant, observational social science lacks a clear “treatment” and “control.” Instead, we have a bunch of “cases” (like presidential elections) and a bunch of variables (like economic growth) that we might correlate with different outcomes (like the incumbent’s share of the vote).
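
In practice, that kind of observational analysis often boils down to something like the sketch below: a handful of cases, a variable, an outcome, and a correlation, with no ability to rerun the experiment. The figures are hypothetical, not real election data.

```python
# A sketch of observational analysis: cases (elections), a variable (growth),
# an outcome (incumbent vote share), and a correlation we cannot manipulate
# experimentally. The figures below are hypothetical, not real election data.
import numpy as np

growth = np.array([3.1, 0.5, 2.4, 1.0, 4.0])            # hypothetical growth by election
vote_share = np.array([54.0, 48.5, 52.0, 49.8, 56.1])   # hypothetical incumbent share

r = np.corrcoef(growth, vote_share)[0, 1]
print(f"Correlation between growth and vote share: {r:.2f}")
```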

Now, you might say: Why make predictions when we can just wait for the outcome, and then make a post-hoc judgment call? But once we know the outcome, post-hoc rationalization sets in. Hindsight is always 20/20 (and not just in 2020).

If you have a new theory, there’s only one way to know whether it might be any good: make predictions and see how well you do.

It’s okay to make mistakes

Understandably, many in the scholar-pundit industrial complex are hesitant to make predictions for 2020. If we collectively got so much wrong in 2016, what does that say about our abilities? And who wants to put themselves out there and be wrong again?

Yet we learn more from our mistakes than from our successes, in making predictions as in life.

If our old assumptions were wrong, we need some new ones. We need to make sense of a changed political landscape.

But there’s a way to make predictions responsibly. It’s not just forecasting what’s going to happen — it’s about specifying a theory, a rationale, an underlying causal mechanism about how the world works.

Responsible prediction depends on a clearly specified theory, one that is tight enough to be useful and broadly applicable. Irresponsible prediction is wild speculation: making predictions based on random feelings and hunches without clearly articulating what’s behind the hunches. Such predictions are irresponsible because they don’t contribute to broader knowledge and understanding — because they don’t specify a generalizable and therefore testable theory.

Which brings us back to the prediction I tweeted out (again, not an endorsement): Beto O’Rourke will be the Democratic nominee, and he will defeat Trump in November 2020.

Here’s my theory, informed by what we’ve seen from recent elections: The candidate who is best able to get by on “authenticity” and charisma and can keep from getting mired in policy specifics has the best chance of winning. The current era of social media- and internet-driven small-donor financing has made politics more unmediated, more direct-to-voter than ever.

Perhaps others might think another candidate fits this assessment better (some might say Bernie Sanders, others Kamala Harris, others Pete Buttigieg, etc.). I admit I’m making a personal judgment call here, a precursor to a more rigorous model. We should devise ways of measuring these kinds of qualities better, perhaps by analyzing social media mentions and trends to effectively crowdsource voters’ judgments of candidates, looking for key words and sentiments, or treating small-donor fundraising as a proxy. Let a thousand dissertations bloom, if anybody else finds this plausible.
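
As a starting point, here is a deliberately crude sketch of what one such proxy could look like: counting trait keywords in social media mentions of each candidate. The posts, keywords, and scoring rule are all hypothetical; a serious measure would need validated sentiment tools and real data.

```python
# A deliberately crude sketch of a "charisma/authenticity" proxy: the share of
# social media mentions containing positive trait keywords. The posts, keyword
# list, and scoring rule are hypothetical, purely for illustration.

TRAIT_WORDS = {"authentic", "genuine", "inspiring", "charismatic", "real"}

def trait_score(posts):
    """Share of posts containing at least one trait keyword."""
    hits = sum(any(word in post.lower() for word in TRAIT_WORDS) for post in posts)
    return hits / len(posts)

# Hypothetical mentions of two unnamed candidates.
candidate_a = ["so inspiring at the town hall", "seems genuine", "meh on policy"]
candidate_b = ["detailed plan, 47 pages", "wonky but solid", "not very charismatic"]

print(f"Candidate A: {trait_score(candidate_a):.2f}")  # 0.67
print(f"Candidate B: {trait_score(candidate_b):.2f}")  # 0.33
```

Even this toy version shows why the measurement problem is hard: naive keyword matching counts “not very charismatic” as a positive hit, which is exactly the kind of thing a more rigorous model would have to handle.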

I would only apply this theory from 2008 forward, when social media and small-donor fundraising reached their modern scale. Before, when politicians had to work through intermediaries and elites, when there was no online fundraising and no Twitter, traditional gatekeepers mattered more, and gatekeepers cared more about experience. This doesn’t seem to be the case anymore.

My theory is based on Obama’s and Trump’s successes. Among Democrats in 2008 and among Republicans in 2016, respectively, Obama and Trump were charismatic men who deployed social media and leveraged their brands extremely well.

This theory only works when there’s a field in which one candidate exemplifies these traits. In fields that do not have one clear charismatic storyteller, this theory doesn’t predict much.

But I do think this theory will have even more predictive value going forward. Social media will only become more important, and new technologies will enable voters to see candidates up close and personal more and more. Soon, we’ll enter an era in which candidates campaign in virtual reality, and voters can put on a VR headset and see a candidate give a speech right in front of them. Just as television put a premium on looks and charisma, social media and virtual reality will put an even greater premium on those qualities.

For the general election, I have a different theory: Beto will defeat Trump in November 2020 because no president whose approval rating has been underwater the entire first two years of his presidency has ever won reelection. Of course, the only president with a similar pattern of consistently low approval is Gerald Ford, so this is a small sample.

I may be wrong. And that’s okay. If I’m wrong, I’ll learn, I’ll adjust and update, and make new, ideally better, predictions.

I hope more folks will join me in specifying theories enough to make testable predictions. At the very least, they’re fun to debate. And if you’re skittish, you can always couch your predictions in probabilities and say you’re 70 percent sure. I’m 70 percent sure we’ll collectively know more.
