There are some age-old ways of cheating in the sciences, such as fabricating data or plagiarizing text, but apparently there's a new cheat on the scene. Over the past two years, more than 110 scientific papers have been retracted because their authors rigged the peer-review system, mostly by posing as independent reviewers and then evaluating their own articles.
Cat Ferguson, Adam Marcus, and Ivan Oransky recently delved into the problem in a news feature published in the journal Nature. They describe several cases where researchers gave journals fake email addresses for other scientists and then posed as those scientists to give positive reviews of their own papers. (The news feature also highlights that the software many journals use to manage peer review doesn't ask for any identity verification.)
These cases are part of what appears to be a recent spate of scientific malfeasance. So what's going on here? Is this just a uniquely bad run? Or is this pointing to bigger flaws in the peer-review process itself?
One of the biggest cases of scientific misconduct ever
One of the most astounding cases of this kind of fake-identity technique recently led to the resignation of not just a scientist and a journal editor, but also the education minister of Taiwan. In July, scientific publisher SAGE announced that it was retracting a whopping 60 papers connected to Taiwanese researcher Peter Chen, in what appeared to be an elaborate work of fraud.
This group of retractions is big enough for the history books. The 60 papers, published from 2010 to 2014 in the Journal of Vibration and Control, make this one of the five biggest cases of retraction in science. (The dubious record is thought to be held by anesthesiologist Yoshitaka Fujii, who has 183 papers retracted or pending retraction.)
SAGE's ensuing 14-month investigation showed that Chen had apparently created 130 fake email accounts for "assumed and fabricated identities," forming what SAGE called a "peer review and citation ring." In other words, he seemed to be suggesting his own fake identities to the journal as reviewers of his papers (or sometimes posing as real people). And he may have used fake authors, too.
This isn't the first time that someone has been caught creating fake reviews. Experts can recall at least five other similar instances, including South Korean researcher Hyung-In Moon, who was caught in 2012 making up fake email addresses to review his own papers. He has had dozens of retractions so far.
Chen isn't even listed as an author on some of the papers that were retracted. Why would someone want to create a paper "written" by other people? Ivan Oransky, VP and global editorial director of MedPage Today and a co-author of the Nature feature, initially broke the story at the Retraction Watch blog and has been following cases of scientific misconduct for some time. In July, he said that these papers were likely Chen's attempt to rack up citations of his own work.
Chen has since resigned from his position at the National Pingtung University of Education. The editor of the journal where Chen's work was published has left his job, too. And the scandal also led to the resignation of Taiwan's education minister, who says he had been added as an author to several papers without his knowledge.
Why would a journal let you pick your own reviewers?
"My friend from graduate school will totally give my paper a good review," thought the assistant professor. (Shutterstock)
Ideally, peer review works like this: the journal picks experts to review a submitted paper and keeps those reviewers anonymous so it can solicit their honest opinions on the quality of the work. This is what happens at a place like Science or Nature.
But at some journals, an editor might not know the specialized field very well and will ask the paper's author to suggest reviewers. That's what happened in Chen's case, and it opened the door to serious malfeasance. Still, if the journal had taken a closer look at those email addresses, it might well have spotted what was going on. So allowing suggestions isn't necessarily the problem.
Is scientific misconduct becoming more common?
The number of retracted papers has grown faster than the number of papers published, according to a 2011 Nature news feature by Richard Van Noorden, who notes that roughly half of retractions appear to stem from misconduct rather than honest error. (Richard Van Noorden, "Science publishing: The trouble with retractions," Nature, 2011.)
It's probably safe to say that scientific misconduct has existed as long as science has. Science, like all human endeavors, is not immune to human failings. (There are even some suggestions that Gregor Mendel — the nineteenth-century father of genetics — may have fudged numbers in his pea plant studies.)
But there have been some pretty big scandals recently. On July 2, the journal Nature retracted two high-profile papers on stem cells. Meanwhile, a former Iowa State University HIV researcher just became one of the few people ever to face criminal charges for faking results in research conducted with federal dollars.
Still, the overall number of retractions is quite low. Out of roughly 1.4 million scientific research papers published each year, only about 500 are retracted. That's about 0.04 percent. A good chunk of those retractions are because of misconduct: roughly two-thirds, according to one study of biology and biomedical papers.
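For a rough sense of scale, that percentage follows from simple division (both figures above are approximate, so the result is too):

\[
\frac{500 \text{ retracted papers}}{1{,}400{,}000 \text{ published papers}} \approx 0.00036 \approx 0.04\%
\]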
On the other hand, some bad papers out there never get retracted. "We have editors who stonewall, we have editors who are very stubborn about retracting," Oransky told me in July. "We have scientists who threaten to sue if their paper is retracted. You have all these barriers to retraction." One recent analysis documented several instances of papers with serious flaws (though no evidence of misconduct) that have never been retracted.
It's also hard to tell whether things are getting worse. True, the number of retractions each year has been rising. That could reflect more problems, but it could also be a sign of more thorough policing: plagiarism-detection and image-screening software, for example, have made it easier for journal editors to catch duplication problems. The rise in retractions might also simply reflect the fact that researchers are publishing more and more papers every year.