
We discovered one of social science's biggest frauds. Here's what we learned.

Earlier this year, we uncovered what became one of social science’s most widely publicized scandals: The data for a celebrated study showing that a face-to-face conversation could radically decrease prejudice toward gays and lesbians seems to have been fabricated.

We didn’t set out to debunk the original study. Quite the opposite, actually. We wanted to replicate and extend it. It was only when we began doing so that we realized the data for the original seemed too good to be true; within days, others determined it was, and the study was retracted.

This episode grabbed headlines not only because of its implications for political advocates, but also because of broader concerns it raised about science. Outlets such as the New York Times argued that a rise in similar retractions should erode public trust in science. "Cheating in scientific and academic papers is a longstanding problem," the Times editorial board wrote, "but it is hard to read recent headlines and not conclude that it has gotten worse."

We disagree, and think this is the wrong conclusion to draw from the rise in scientific retractions. But what are the right lessons? After reflecting on our experience over the past few months, we’ve come up with a few.

1) The rise in retractions shows the state of social science is strong, not weak

A rise in scientific retractions might indicate misconduct is on the rise — but it also might reflect how changing scientific norms have made scientific misconduct easier to detect and expose.

Concluding the opposite is an example of survivorship bias, an error in reasoning best illustrated with a famous story from military history. During World War II, the US Navy asked Columbia University statistician Abraham Wald to analyze patterns of damage from Axis anti-aircraft fire on planes returning from bombing runs. Wald found that returning planes were especially likely to be riddled with bullet holes in certain areas, like the wings. He then recommended that the Navy reinforce the other areas — the ones that consistently came back unscathed.

The Navy was puzzled by Wald’s advice. If the wings were so often damaged in battle, wouldn’t that suggest weaknesses in the wings, not in the consistently pristine windshields? Wald responded wisely. Enemy bullets land everywhere, he explained. But planes that take fire only return to base for inspection when they’re hit in areas that can withstand the damage. On the other hand, if no planes that return to base have bullet holes in the windshields, it doesn’t mean that windshields somehow avoid getting hit — rather, it suggests planes struck in the windshield don’t make it back.

In the same way, we see scientific retractions as indicators of science’s unique and growing penchant for telling the truth. Just like enemy bullets struck Navy aircraft everywhere, fraud and mistakes crop up in all walks of life. The fact that we are especially likely to see and hear about the errors scientists make does not suggest scientists are especially prone to making mistakes. Rather, it shows that scientific errors are increasingly likely to be detected and corrected instead of being swept under the rug.

2) The norms of academic science make it much easier to catch fraud and mistakes

One reason scientists often get caught lying is that they are required to publish in ways that make it easier to catch lies and mistakes, and they publish in an environment that celebrates those who catch them.

We were able to detect irregularities in the study’s data because of these scientific norms. Within our discipline — political science — norms to make data sets and code publicly available were first adopted in the early 1990s, recognizing that the pursuit of scientific truth is a community enterprise and that the sharing of data and code is vital to that pursuit. By comparing one publicly available data set to another, we were able to detect results that seemed too good to be true. Had these norms not existed, these irregularities likely would have gone undetected.

Then, after making our discovery, we could come forward because we knew our colleagues valued telling the truth over eschewing controversy or embarrassment. The article’s senior author and Science’s editors readily retracted the study because they did, too.

3) Research is best when it’s independent, public, and not-for-profit

At a time when privatization is a watchword of political debate, our experience also illustrates the importance of protecting these norms and institutions, which remain unique to academic science.

Research and development is a $400 billion industry, of which academia accounts for only a small share. Proprietary, for-profit fields of inquiry such as market research, pharmaceutical and agricultural research, and consulting are much larger than academic science. Yet we hear about new scientific retractions daily and scandals in for-profit research only rarely. Why?

We suspect the reason academic scientists are responsible for an outsize share of research mea culpas isn’t because academics are especially dishonest but because academia features unique norms and institutions — like blind peer review and access to replication data — that succeed in allowing scientists to discover and disclose difficult truths, including those that concern others’ work.

As Wald observed with the Navy’s planes, the fact that we can see the mistakes that occur in academia shows its strength, while the relative silence of private sector researchers about their own mistakes should prompt questions, not confidence.

Returning to our story, the fraud we discovered could easily have happened in the private sector — but without open data and academic freedom, it might never have been detected or exposed. Scientists’ proclivity for admitting their errors thus doesn’t mean they commit more of them; rather, it showcases the scientific community’s special dedication to telling even embarrassing truths.

4) Working with academics is the best way to learn what really works

Science’s unique norms and institutions also explain why nonprofits, governments, and corporations increasingly partner with academics to evaluate their effectiveness and test new ideas, rather than relying on our for-profit counterparts. The same norms and institutions that encourage us to tell hard truths about other scientists’ work also encourage us to do the same about non-scientists’ efforts. For groups that work with academics, the uncomfortable truth we are in a unique position to tell is that something the group is doing doesn’t work. Academics deliver this news routinely.

It’s also a matter of culture; open criticism is part of everyday academic life. Anyone who has served on a dissertation committee, sat in the audience at an academic seminar, or completed a peer review has had to tell someone that something is wrong with a project he or she has spent months or years working on. It’s core to the job.

That’s not always true in the private sector. Fear of losing sales might keep private sector researchers from delivering the news that their firm’s drugs, advice, or advertising doesn’t work — and their consumers or funders would have a hard time catching the fib.

By contrast, when academics do produce evidence that something works, you can be much more confident we’re not fudging the numbers — we’d almost certainly be found out if we did.

5) Academia can still do better — but only if its unique position in society is protected

For all that science gets right, there remains room for improvement. For example, younger researchers like us should be more clearly encouraged to speak out when they suspect something is amiss in published work. Academia is a close-knit yet highly competitive community, and raising suspicions about a colleague’s work still carries some risk. It needs to be even easier to be a whistleblower.

Only one thing makes us doubt whether science will rise to challenges like this one. At the federal and state levels, politicians are threatening science’s funding and independence more aggressively than ever. (As one of many examples, Congress is trying to cut National Science Foundation funding for the social sciences in half.) As political scientists, we aren’t surprised: Politicians worry about the danger the truths we might tell could pose to their agendas, or to their contributors’ interests. These politically motivated attacks on science are reckless and shortsighted; they endanger the ability of one of humankind’s singular engines for progress to keep telling the uncomfortable truths that make progress possible.

So long as science does continue to receive the public support that ensures its independence, we are confident it will only get better at finding and telling the truth — and, as a consequence, that we’ll keep on hearing about the mistakes scientists make.

David Broockman is an assistant professor of political economy at the Stanford Graduate School of Business. Joshua Kalla is a PhD student in the Charles and Louise Travers Department of Political Science at the University of California, Berkeley.