If there’s any part of the research enterprise that can elicit rage amongst scientists, it’s peer review.
Academics vet the work of their peers — for free, in their spare time — in a process that is supposed to weed out junk science before it’s published. But researchers say the task is thankless and slows down the publication process. To make matters worse, this cornerstone of the scientific method has surprisingly little evidence for its effectiveness, and many mysteries remain about how it works.
In an effort to better understand peer review, researchers have been trying to study the process itself. One new study may help explain why peer review fails and why it may not ensure quality in science. Its main finding is that a small minority of researchers shoulder most of the burden of peer review.
For the paper, the study’s authors (mostly based at Paris Descartes University) used a mathematical model to estimate the supply and demand for peer review in biomedical research. For the first time, they came up with a range of what the "global burden" of peer review looks like.
In 2015, they found, the supply of peer reviewers exceeded demand by as much as 250 percent. In other words, there were far more academics available to vet research than were needed. And yet they also discovered that only 20 percent of those researchers performed 70 to 95 percent of the reviews. So a small group of the available experts was doing most of the work.
"The disconnect could correspond to millions of potential peer reviewers who are untapped," study co-author Ludovic Trinquart, who is now based at Boston University, told Vox.
For example, according to data from Elsevier and Wiley, two of the world’s largest scientific publishers, the job of peer review disproportionately fell on US-based scientists. By contrast, "Chinese researchers seem to publish twice as many articles as the number they are peer reviewing, despite their willingness to peer review," the researchers wrote.
There’s been a lot of discussion about the occasional failures of the process to catch sloppy science and fraudulent data before they make it into the published record. The fact that researchers aren’t sharing the burden of peer review fairly and equally may help explain why. As Trinquart put it simply: "A minority of peer reviewers are overworked."
The researchers on the new study also quantified all the time that's spent peer reviewing. For most reviewers, it's not very much at all — with the exception of that overworked minority.
"Among researchers actually contributing to peer review, 70 percent dedicated 1 percent or less of their research work-time to peer review while 5 percent dedicated 13 percent or more of it," the researchers wrote. So again, most researchers didn’t dedicate much time to peer review, while a prolific minority spent a relatively large chuck of time going over the research of their peers.
All told, the researchers estimated that 63 million hours were devoted to peer review in 2015, of which nearly 20 percent were performed by just the top 5 percent of contributing reviewers.
"Peer review works on a quid pro quo basis, so one could say that doing fewer reviews than papers is too little," Trinquart said. "The 20/80 imbalance is problematic. The system needs to be improved, and future evaluations of alternative peer review systems should address the peer review effort."
Researchers are calling for more data on peer review
Peer review typically begins after a researcher submits an article for publication at a journal. If the journal editors think the article looks promising, they send it off to academics in the same field for comments, critiques — and even rejection.
But as Richard Smith, the former editor of the BMJ, summed up: "We have little or no evidence that peer review 'works,' but we have lots of evidence of its downside."
As we explained in our feature on fixing science, the level of anonymity varies by journal: some have double-blind reviews, while others have moved to triple-blind review, where authors, editors, and reviewers don’t know one another’s identities. Some journals now use an open, collaborative process; others use "post-publication peer review," where academics can comment on each other's work after it's published.
Because of the paucity of science on peer review itself, researchers have been calling for a better understanding of the process. As evidence-based medicine researcher Drummond Rennie noted in a recent issue of the journal Nature: "We need rigorous studies to tell us the pros and cons of these [peer review] approaches today. Until then any advertised advantages of new arrangements are unsupported assertions."
Rennie’s advice is that scientists concerned about peer review should follow the lead of clinical trials researchers, who have been pushing for higher quality standards and more transparency in their field.
Trinquart agreed, and he wants to pressure the scientific publishing industry to start sharing its data on peer review.
"[Clinical trials] should be considered as a public good — it's the data of patients who consented to participate in studies — but the data from trials are withheld in the majority [of times] by the pharmaceutical industry," he said. And he argues peer review data should be considered a public good. It's the data of scientists who do the reviews, and many of them are funded by governments — and data from publishers should be shared.
"Many questions on peer review can be addressed only with access to ‘aggregate’ [data] on peer review or directly to peer review reports," he noted. For example, the flow of resubmissions of articles to other journals after they’ve been rejected once is a key driver of the system, yet one we know little about. With more research along these lines, this pillar of the scientific process could get stronger.