One of the biggest cases of scientific misconduct in history was uncovered this week.
On July 8, scientific publisher SAGE announced that it was retracting a whopping 60 scientific papers connected to Taiwanese researcher Peter Chen, in what appears to be an elaborate work of fraud.
This case is the latest in what appears to be a recent spate of scientific malfeasance. So what's going on here? Is this just a uniquely bad run? Or does it point to a flaw in the peer-review process? Here's a rundown:
How does one person manage to get 60 papers retracted?
The Chen case is quite astounding. Publisher SAGE announced it was retracting 60 papers from 2010–2014 in the Journal of Vibration and Control, which covers acoustics, all connected to Peter Chen of National Pingtung University of Education, Taiwan.
Those 60 papers make this one of the five biggest cases of retraction in science history. (The dubious record is thought to be held by anesthesiologist Yoshitaka Fujii, who has 183 papers retracted or pending retraction.)
SAGE has been a bit cagey about this particular case, so some of the details are still sketchy.
But from what we can gather, it appears that Chen created up to 130 fake email accounts of "assumed and fabricated identities" that created a "peer review and citation ring." In other words, it appears that he suggested his own fake identities to the journal as reviewers of his papers. And he may have used fake authors, too.
This isn't the first time that someone has been caught creating fake reviews. Experts can recall at least three other similar instances, including South Korean researcher Hyung-In Moon, who was caught in 2012 making up fake email addresses to review his own papers. He has had dozens of retractions so far.
Chen isn't even listed as an author on some of the papers that were retracted. Why would someone want to do that? Ivan Oransky, VP and global editorial director of MedPage Today, who initially broke the story at the Retraction Watch blog, has been following cases of scientific misconduct for some time. He said that these papers were likely Chen's attempt to rack up citations of his own work.
Chen has since resigned from his position at National Pingtung University of Education, and the editor of the journal that published his work has stepped down as well.
Why would a journal let you pick your own reviewers?
Ideally, the peer-review process means that the journal picks some experts to review a submitted paper. And it keeps those reviewers anonymous in order to solicit their honest opinions on the quality of the work. This is what happens at a place like Science or Nature.
But, Oransky told me, at some more specialized journals, the field can be so narrow that journals sometimes ask the author of the paper for suggestions for reviewers. That's what happened in Chen's case — and it opened the door to serious malfeasance. However, if the journal had taken a look at those email addresses, it's quite possible that they could have spotted what was going on. So allowing suggestions isn't necessarily the problem.
Is scientific misconduct becoming more common?
It's probably safe to say that scientific misconduct has existed as long as science has. Science, like all human endeavors, is not immune to human failings. (There are even some suggestions that Gregor Mendel — the nineteenth-century father of genetics — may have fudged numbers in his pea plant studies.)
But there have been some pretty big scandals recently. On July 2, two high-profile papers on stem cells were retracted at the journal Nature. Meanwhile, a former Iowa State University HIV researcher just became one of the few people to ever face criminal charges for faking results from research conducted with federal dollars.
However, overall, the number of papers retracted is generally quite low. Out of about 1.4 million scientific research papers published each year, only about 500 papers get retracted. That's about 0.04 percent. A good chunk of those studies are retracted because of misconduct — roughly two-thirds, according to one study of biology and biomedical papers.
On the other hand, some bad papers out there never get retracted. "We have editors who stonewall, we have editors who are very stubborn about retracting," Oransky told me. "We have scientists who threaten to sue if their paper is retracted. You have all these barriers to retraction." One recent analysis documented several instances of papers with serious flaws (though no evidence of misconduct) that have never been retracted.
It's also hard to tell whether things are getting worse. True, the number of retractions each year has been on the rise. That could be because of more problems. But it could also be a sign of more thorough policing. Plagiarism-detection and image-detection software, for example, have allowed journal editors to more easily screen for duplication problems. The rise in retractions might also be influenced by the fact that people are publishing more and more papers every year.