Peer review is critical to the scientific process. Before publication, journals have manuscripts vetted by scientists familiar with the field to safeguard against bad science. But peer review also often fails. Poor studies still make it into prestigious journals. What’s more, studies of peer review itself find that reviewers’ judgments about whether a paper should be accepted are not much better than random chance.
There’s a big reason why: Peer reviewers are human. Most are overworked scientists, and the work of reviewing is generally unpaid. As Vox’s Julia Belluz explained recently, “A small minority of researchers are shouldering most of the burden of peer review.” Reviewers are also susceptible to the same lazy thinking and cognitive shortcuts that can hinder the critical thinking of anyone who is overtaxed.
Here’s a clear example of that, recently published in JAMA.
There’s evidence that peer reviewers are more lenient toward prestigious researchers
In the JAMA study, a team of orthopedic researchers reached out to the peer reviewers of Clinical Orthopaedics and Related Research, a prominent journal in the field. Over the course of a year, the 119 reviewers who participated in the study randomly received article manuscripts to review. All the manuscripts were the same (and contained a few errors), save for a critical difference.
In some conditions, the reviewers were told the names of the study authors, who were prominent researchers at prestigious institutions. In others, they were not.
This mimics the two main ways peer review is conducted. First, there’s single blinding, meaning reviewers know the names of the authors. This is the most common practice in academic journals. Less common, but more rigorous, is double blinding, wherein neither the author nor the reviewer knows the identity of the other. Single blinding can be problematic because the name of a study author or her institution can subtly influence the reviewer.
And that’s exactly what the JAMA paper found: Prestigious authors’ papers were accepted at a rate of 87 percent when their names were present, but 68 percent when their names were redacted.
What that means is that reviewers are letting an author’s perceived reputation color their assessment. It might not be intentional. The reviewers could just be falling back on a cognitive shortcut that nudges them to believe a prestigious author will write a report worthy of publication.
Even if it’s unconscious, it’s still bias. And it could unfairly make it harder for young, untested researchers to publish their work. Or it could unfairly give leeway to an established name who has started producing shoddy work.
And this all adds to a “Goliath’s shadow” effect in science, in which it can be difficult for young researchers to get ahead when a prominent older investigator is still at work. Last year, a National Bureau of Economic Research working paper found that when a prominent researcher dies, there is a marked increase in published work by newcomers to the field. This suggests younger researchers are either prevented from or afraid of challenging a leading thinker in a field.
Double-blind peer review could shield reviewers from their biases
The JAMA study was conducted at just one orthopedic journal, so it’s possible that these results don’t generalize to other fields or other journals. That said, double-blind peer review is gaining acceptance at major scientific publications, like Nature.
But as my colleagues and I wrote in a recent feature on problems in science, many scientists share the concern that the still-dominant practice of single-blind reviews introduces bias.
"We know that scientists make biased decisions based on unconscious stereotyping," Pacific Northwest National Lab postdoc Timothy Duignan told us. "So rather than judging a paper by the gender, ethnicity, country, or institutional status of an author — which I believe happens a lot at the moment — it should be judged by its quality independent of those things."
It’s possible some researchers are aware of these biases and use them to game the system. Retraction Watch recently reported on an instance in which a journal received a paper that included a prominent name among its group of authors, only to have that name removed later.
The bottom line: The JAMA paper is even more evidence that scientific peer review needs to become, well, more scientific.