Splashy science frauds usually spark conversations about the fact that science sometimes fails. But what often gets missed are the decidedly less sexy structural flaws within science — from publication bias (the tendency for studies with positive results to get published, while negative ones go unreported) to the lack of replication (attempts to validate previous findings by reproducing experiments) and transparency.
There's an interesting new series out in Science this week that suggests a few solutions to these and other problems. Written by a group of researchers, journal editors, funders, and other stakeholders, the pieces all have the underlying theme that science needs more avenues and incentives to improve reproducibility and transparency.
The first piece suggests that scientists use new tools, such as open source software that tracks every version of a data set, so that they can share their data more easily and build transparency into their workflow.
The second piece suggests rethinking the incentive structures in science. "Researchers are encouraged to publish novel, positive results and to warehouse any negative findings," the authors write:
We believe that incentives should be changed so that scholars are rewarded for publishing well rather than often. In tenure cases at universities, as in grant submissions, the candidate should be evaluated on the importance of a select set of work, instead of using the number of publications or impact rating of a journal as a surrogate for quality.
The authors also suggest rebranding terms like "conflict of interest" and "retraction" to promote openness, among other things:
Universities should insist that their faculties and students are schooled in the ethics of research, their publications feature neither honorific nor ghost authors, their public information offices avoid hype in publicizing findings, and suspect research is promptly and thoroughly investigated.
These ideas are made actionable in a final set of guidelines for publishing scientific studies. The Center for Open Science's Transparency and Openness Promotion Committee, the group behind the guidelines, came up with eight standards that scientists, research institutes, and journals should adopt.
You can see the guidelines in the chart accompanying the original article:
The eight standards — each with four levels of increasing stringency — are meant to be modular and flexible, and therefore easily adaptable to different research settings, according to lead author Brian Nosek, a University of Virginia psychologist and executive director of the Center for Open Science, and Chris Chambers, a professor of cognitive neuroscience at Cardiff University. They wrote in the Guardian:
Most of all we hope that, in combination with related initiatives, the [Transparency and Openness Promotion] guidelines will cause future generations to look on the term "open science" as a tautology — a throwback from an era before science woke up. "Open science" will simply become known as science, and the closed, secretive practices that define our current culture will seem as primitive to them as alchemy is to us.
While this surely won't be the final word on how to fix the structural flaws in science, it's a thoughtful contribution worth paying attention to.
You can read the whole package over at Science.