Bloggers notwithstanding, the primary goal of most political scientists is not to provide internet commentary on politics, but to humbly seek answers about why the political world unfolds as it does. Ideally, these answers will have some relevance to the aspects of politics that normal people care about, and that's where the internet bloviating comes in.
This past year stands out for the ways these two objectives — coming up with sound conclusions and addressing questions people care about — blended in new and troubling ways. Unsettling events in the US and the rest of the world, alongside scandal and controversy in our own scholarly community, have left me wondering about the process of generating knowledge about the social world in real time.
Honesty and research transparency
Fraudulent data is one of the worst kinds of academic misconduct. Unfortunately, this summer political science absorbed the revelation that a major 2014 paper, a field experiment about support for same-sex marriage, was based on fraudulent data. You can read several excellent pieces about the implications and fallout here, here, and here. The episode raised plenty of questions, and some pretty fierce debate about data transparency broke out (over initiatives that predate the LaCour revelations).
While I'm broadly in agreement with Jon Ladd (the second link above) that bad-faith errors like falsified data don't open up any really interesting or ambivalent questions about research ethics, a big causal question remains relevant and raw: why did this happen? I subscribe to the belief that no system is foolproof, and that any large group of humans will include a few actors who want to cheat, shirk, or otherwise behave badly. But the media response to the initial study reveals, once again, the hunger for big findings and clear conclusions. Those are often in short supply in any scientific endeavor.
The presidential nomination contest — especially on the Republican side — has dominated our attention, but the Republican Party in Congress is, as I've argued, part of the same story. Some of our political science theories about Congress helped to illustrate how the contest for who would succeed John Boehner as speaker of the House emerged the way it did.
Yet it is probably safe to say that most observers, political scientists included, did not see the situation coming — Boehner's resignation, Kevin McCarthy's withdrawal from the speaker's race, and even the vacuum that emerged until Paul Ryan agreed to take the job. There's nothing shocking about the fact that a stochastic human element is at play in politics. But the speaker saga is a good reminder of all that is unpredictable in political life.
The presidential race has also, to say the least, been a reminder of complexity and contingency. To some extent, nearly all of us thoughtful commentators, scholarly and otherwise, have been caught with our prediction pants down over Trump's staying power. But the theoretical perspective that has taken the most abuse, perhaps unfairly, is the idea that party elites control the nomination process — the theory put forth in The Party Decides.
The point here isn't to go back over the debate that's unfolded over the past two months — there are plenty of blog posts and tweets to illustrate the back and forth among political scientists and journalists. Instead, I think some of this debate illustrates what can be lost in translation when we attempt to bring academic perspective to public debates.
First, even if the predictions in The Party Decides spectacularly fail to provide insight into the 2016 nomination contest, this doesn't mean the theory was wrong or useless, or, most importantly, that we didn't learn anything from it. If this year's race fails to fit the theory's predictions, we'll have a range of possible explanations for why, and it won't necessarily be obvious which one is correct.
But the range of possibilities will be narrower and more structured than if the book had never been written. In other words, as with the initial response to Michael LaCour's research, the tendency for those outside the scholarly community is to look to science for Answers — big, sweeping, airtight Answers. But that's just not how it works. One of my mentors in grad school once said to me, "Science doesn't have to be perfect. It just has to be better" [than the existing state of knowledge]. I don't have to agree with everything in The Party Decides, or consider it a flawless work, to maintain that it has met that standard.
Fear, terrorism, violence
Attacks in Paris and in San Bernardino, California, made for a tragic and disturbing end to 2015. While these events were sudden and unexpected, the political response was incredibly predictable. These acts of mass violence prompted several larger conversations about the fight against ISIS, terrorism in general, and guns. Bethany Albertson and Shana Kushner Gadarian observe that citizens' responses to anxiety follow some predictable patterns.
Although both Republicans and Democrats — mass and elite — worry about terrorism, Democrats also emphasized the gun angle. And although reservations about Syrian refugees transcended party lines among the public, the usual partisan differences were apparent in views about openness toward refugees and about the president's handling of the situation. As grim as the subject matter may be, these patterns represent something of a vindication for political science's predictive capabilities.
So why were political scientists so easily able to draw conclusions about some developments, but stymied and surprised by others? I think we learn a couple of useful things from the challenges of 2015. First, it's important to separate the systematic from the nonsystematic, the idiosyncratic, the contingent. (If you're a political scientist who went to grad school in the past 20 years, you know that from reading this book.)
Not predicting Trump is one thing — the emergence of a well-known and unconventional candidate like Donald Trump is a contingent event. The bigger puzzle regarding the nomination, in my opinion, is why elites haven't coordinated more around the more conventional politicians in the race. Predicting specific developments that rest mainly on decision-making by a few individuals — presidential runs, speaker resignations, even terror attacks — is difficult. Coming up with probabilistic models of how existing belief systems, like partisanship or ideology, will shape beliefs about other issues is comparatively easier.
But the difference isn't just in the nature of the events themselves. It also has to do with how we can study them. Albertson and Gadarian's conclusions about "anxious politics," for example, are based on extensive and careful experimental studies. It's possible to exert this kind of control over the research situation in political psychology — to randomize exposure to political information or other stimuli, to eliminate other explanations for the results you observe, to see if the results remain consistent over multiple trials.
This doesn't work so well for something like a presidential election. People frown on random assignment and replication when it comes to stuff like binding democratic processes and violence. (And, of course, there is the gray area of field experiments.) These practical and ethical considerations will also affect what we know, and thus what we can predict.
What we saw in 2015 pushed at the limits of what we know and understand about politics. And it was another tough and introspective year for political science as a discipline. Yet there are reasons to be optimistic. The complexity and importance of politics in the coming year means that political science has the potential to be highly relevant to some big conversations. However, most of us know that the answers to big questions are really lots of smaller answers to smaller questions.
Based on what I observed in 2015, I anticipate that in 2016 we'll be seriously grappling with the balance between the political and the science. To us, the content is the method. To those outside our field, the content is, well, the content. In an eventful election year, it's on us to explain how those fit together.