A high-profile dispute between researchers over a study on the role of face masks in preventing Covid-19 is revealing the tensions in how science is conducted during a global pandemic. It’s also raising questions about the role of prestigious journals in elevating findings that may not hold up.
In the latest development, the authors of a controversial study on the effectiveness of face masks, published on June 11 in the journal Proceedings of the National Academy of Sciences (PNAS), are pushing back against calls to retract the paper.
On June 24, the authors issued a rebuttal statement to a petition signed by more than 40 scientists who identified “egregious errors” in the original study.
The study examined how Covid-19 spreads through the air and found that “wearing of face masks in public corresponds to the most effective means to prevent interhuman transmission.” While none of the authors are epidemiologists, Nobel Prize-winning atmospheric chemist Mario Molina is among them.
The finding that masks are a good way to slow the pandemic aligns with other research, as well as the guidance from health agencies that now recommend wearing them. But the idea that they’re the “most effective means” to do so, compared with tactics like social distancing, banning large gatherings, and closing businesses, is a controversial claim. And the scientists calling for a retraction say the evidence presented doesn’t back it up; they found serious flaws with the study’s methodology and some of its underlying assumptions.
In their letter to PNAS calling for the retraction, the critics write: “While masks are almost certainly an effective public health measure for preventing and slowing the spread of SARS-CoV-2, the claims presented in this study are dangerously misleading and lack any basis in evidence.”
The call for a retraction follows two other recent high-profile cases in major scientific journals, where papers were withdrawn at the request of their own authors after problems surfaced in the underlying data. But it’s highly unusual for one group of scientists to publicly rebuke a piece of research by another, as with the recent PNAS study.
It also highlights the friction between the urgency of the pandemic and the ordinarily plodding pace of science.
More than 10 million people have been infected around the world to date. Hundreds of thousands have already died. So people are desperate to prevent, treat, cure, and vaccinate against the virus, and research like this can have real-world consequences.
That raises the question of how to bring more researchers to the table and speed the release of valuable information about Covid-19 without sacrificing the integrity of the process.
Wearing face masks prevented thousands of new Covid-19 infections, according to the study
The PNAS study looked at the number of confirmed cases of Covid-19 from January 23, 2020, to May 9, 2020, with a focus on Italy, New York City, and Wuhan, China, the epicenters of the outbreak. The authors tracked how cases rose and fell, comparing those changes to when policies like lockdowns went into effect.
In particular, the authors examined how cases fell when governments issued orders to wear face masks. New Yorkers, for instance, were ordered to start wearing face masks on April 17. The analysis showed that from when the order was implemented until May 9, face masks averted 66,000 new infections in New York City.
“After April 3, the only difference in the regulatory measures between NYC and the United States lies in face covering on April 17 in NYC,” according to the paper.
Based on these results, the authors concluded that wearing face masks in conjunction with testing, tracing, and isolation is the most viable way to stop the Covid-19 pandemic without a vaccine.
Lead author Renyi Zhang, a professor of atmospheric sciences at Texas A&M University, and co-author Mario Molina, a chemistry professor at the University of California San Diego, did not respond to requests for comment.
Why the study was so controversial
Kate Grabowski, an assistant professor of pathology at the Johns Hopkins University School of Medicine, keeps an eye out for new Covid-19 papers as part of her work with the Novel Coronavirus Research Compendium.
And the Zhang-led paper in PNAS caught her eye.
“One of the things that immediately jumped out to me when I looked at the paper, you can see that they have a straight line through the peak of an epidemic curve, which is essentially crazy,” Grabowski said. “That’s when I was like, ‘We really need to look at this.’”
In the paper, this was presented as dotted trend lines drawn through the peaks of the number of cases, with vertical lines showing when a given policy was ordered.
Drawing a straight-line trend through complicated curves is an overly simplistic way to infer trends from Covid-19 case counts, and it can be misleading, according to Grabowski. It also implies the policies had an immediate effect, with no time lag. Because it can take several days for changes in infection trends to show up in the data, researchers usually don’t expect a policy change to register in reported cases right away.
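Grabowski’s point can be illustrated with a toy example (synthetic numbers, not the study’s data): fitting one straight line across a window that contains an epidemic peak averages the growth and decline phases together, hiding the turnaround that happened in the middle.

```python
# Toy illustration (synthetic data, not the PNAS study's): why one straight
# trend line through the peak of an epidemic curve can mislead.
# Daily case counts: exponential growth for 10 days, then decline for 10 days.
cases = [100 * 1.3 ** t for t in range(10)] + \
        [100 * 1.3 ** 9 * 0.8 ** t for t in range(1, 11)]
days = list(range(len(cases)))

def slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

rising = slope(days[:10], cases[:10])    # strongly positive: cases climbing
falling = slope(days[10:], cases[10:])   # strongly negative: cases dropping
whole = slope(days, cases)               # near zero: the peak is averaged away
print(rising, falling, whole)
```

A single line fitted across the whole window reports an almost flat trend even though the epidemic reversed direction midway through it.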
But there may be more fundamental problems with the paper. This kind of study is known as causal inference, and it’s one of the more difficult types of analyses to perform. It seeks to establish cause and effect, as opposed to just associations between variables. That requires carefully controlling for other variables that could be in play. In sussing out the impact of masks, for example, one must take into account other factors that can influence viral transmission, like population density, susceptibility of a given group, and other policies. And unlike a laboratory experiment, researchers can’t create their own scenarios — they can only use their observations.
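A minimal sketch of the confounding problem, using invented numbers: if a region adopts masks and stricter distancing at the same time, a naive before-and-after comparison attributes the combined effect to masks alone.

```python
# Hypothetical toy model of confounding (invented numbers, not real estimates).
# Assumed ground truth: distancing reduces daily growth a lot, masks a little.
def daily_growth_factor(distancing: bool, masks: bool) -> float:
    growth = 1.25                     # baseline: cases grow 25% per day
    if distancing:
        growth -= 0.20                # assumed large effect of distancing
    if masks:
        growth -= 0.05                # assumed small effect of masks
    return growth

before = daily_growth_factor(distancing=False, masks=False)
after = daily_growth_factor(distancing=True, masks=True)   # both change at once

naive_mask_effect = before - after    # credits the entire drop to masks
true_mask_effect = 0.05               # by construction of the toy model
print(naive_mask_effect, true_mask_effect)  # ~0.25 vs 0.05: fivefold overestimate
```

Separating the two effects requires a comparison group where only one policy changed, which is exactly the control the critics say the study lacked.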
However, Grabowski said the face mask study waves away these confounding factors with the assumption that the only difference between New York City and the rest of the United States was the mandate to wear face masks. “That’s something that an easy Google search would reveal is not true,” said Grabowski, who is one of the lead cosigners of the letter calling for the study to be retracted. “No research manuscript is perfect, but I think for me where I draw the line is when there are clear statements that are factually incorrect in the body of a manuscript in a major medical journal.”
That’s not to say the conclusions are necessarily wrong. The finding that face masks are the most effective way to limit the spread of the virus could be true, and it aligns with numerous other studies on Covid-19 transmission. But getting the right answer for the wrong reasons in this case could push health officials toward strategies that may not be effective or undermine the foundations of future research building on this work.
A spokesperson for PNAS said in an email that “the journal is aware of concerns raised about this article and is working on the matter.”
For their part, the authors said in their rebuttal statement that the sentence referring to differences between New York City and the rest of the country was taken out of context. It was specifically referring to federal policies across the United States as a whole compared to policies in New York City, rather than comparing the city to other cities or states. (Molina also wrote a rebuttal making similar points in El Universal in Spanish.)
As for the straight trend lines across curves, the authors insist that it was appropriate. “A simple inspection of the data indicates a remarkable linearity in the portions of the figures we highlight,” they wrote.
And the authors said the criticisms stem from academic gatekeeping. “It is truly incredible how the authors [of the retraction petition] could come up with such naïve ideas, merely because no COVID-19 epidemiologist was among the authors of our paper,” the authors wrote in their rebuttal.
How did this happen in one of the most influential scientific journals?
PNAS is the flagship journal of the National Academy of Sciences, a private nonprofit scientific organization incorporated by President Abraham Lincoln in 1863. The group counts some of the most esteemed scientists in the country among its members, and its journal is one of the most influential in the natural sciences.
Papers published in PNAS are widely cited by other researchers as foundations for their own work. Consequently, getting research published in a journal like PNAS is like getting into an elite university. It’s competitive. For papers, it connotes an air of importance for the findings. And for authors, it offers prestige that can boost careers.
In the case of the Zhang et al. study, it was published under the journal’s contributor track, which allows a member of the National Academy of Sciences to submit two papers per year. Crucially, the preferential process allows the submitter to choose the reviewers who will evaluate the study. That’s unlike a typical paper in the journal, where the journal selects the reviewers.
In this case, neither the authors nor the reviewers of the study were epidemiologists, as might be expected for a paper on this topic; both were scientists who study aerosols.
“I really highly doubt it would have been published in PNAS if it had gone through the regular channel,” Grabowski said.
But Andrew Gelman, a professor of statistics and political science at Columbia University, said the preferential review process is beside the point.
The bigger issue, according to Gelman, is that the value of publishing, particularly in major journals, has become too high. Prestigious journals also get a lot of attention from journalists, which in turn bolsters the profile of any studies they contain, sometimes beyond their merit.
All of this adds pressure to defend results rather than scrutinize them. He argued that lowering the stakes around reviewing and publishing research would make it easier for stronger results to stand out.
“It’s just a place to publish stuff,” said Gelman, who was not a cosigner of the retraction petition. “It’s not supposed to be this amazing honor.”
Getting the fundamentals of science right matters more than ever
With the swarm of research on Covid-19, it’s inevitable that some questionable findings would emerge. Part of it stems from the circumstances. The disease has only been circulating for a few months, so there hasn’t been enough time to set up robust controlled studies. Many papers are based on observations rather than experiments, but done right, these studies can still yield useful information.
And retractions and errors do happen in normal times, even in respected journals. It’s just that during a pandemic, everyone is paying close attention and looking for findings that can be used in the real world. People around the world are facing life-or-death decisions around the pandemic, from how to treat critically ill patients to public health guidance for millions of people. Those decisions have to be made based on the information that’s available now, and there’s a rush to fill that void.
That’s led scientists to some controversial decisions in presenting their work. A couple of recent reports about the efficacy of the drugs dexamethasone and remdesivir were criticized for being announced via press release rather than through preliminary papers presenting data from clinical trials.
Some early studies have also been blown out of proportion, like those assessing the effectiveness of the anti-malaria drug hydroxychloroquine. Early studies suggested that it could help fight the infection, leading some people to hoard the drug. But later more robust studies found little to no effect.
More recently, The Lancet and the New England Journal of Medicine retracted Covid-19 papers that may have been based on flawed data.
All this doesn’t mean that there’s a crisis. But it does mean that scientists need to be transparent about their work, and the public needs to be careful about the context of these findings.
The pandemic is an opportunity to rethink how we do science
The Covid-19 pandemic is showing that there are ways to conduct and present research at the pace of an ongoing global crisis. And even some of the older, more esteemed journals are starting to make some changes.
Ordinarily, during peer review, reviewers — often other scientists with expertise in the field — would spend weeks, if not months, assessing a paper submitted for publication, sometimes going back and forth with the authors to address flaws and concerns. The process, from submission to publication, can take up to a year. That’s far too slow to help in a fast-moving pandemic, especially with the surge of new studies in the field.
One new emblem of science in the Covid-19 era is the prominence of preprints. These are studies that are presented online prior to going through peer review so other scientists can start evaluating the results more quickly. The early results may have some problems, but other researchers can peer-review the findings in real time.
Journals themselves have also accelerated their review timelines, as Vox’s Kelsey Piper reported:
Many journals have revamped their process to get those papers peer reviewed and published at a vastly expedited pace. “A process that can take weeks has been condensed to 48 hours or less in many cases,” Jennifer Zeis, a director of communications and media relations at the [New England Journal of Medicine] told me. One preprint posted to the bioRxiv in April looked at 14 journals and found that turnaround times had been, on average, halved.
Daniel Larremore, an assistant professor of computer science at the University of Colorado Boulder and a researcher at the BioFrontiers Institute, said that there are also ongoing experiments with the peer review process itself.
He noted that in natural science journals like PNAS, reviewers usually know the identity of authors, which can bias the process. However, in the social sciences, peer review is often double-blind, meaning the reviewers and the authors of a submitted paper don’t know each other’s identities.
The journal eLife has a peer review process where editors and reviewers make their evaluations public. In computer science, researchers often present their findings in conference papers, submitted for consideration at technical conferences rather than for publication in a journal. There, the review process is collaborative with authors instead of adversarial. “There’s a whole variety out there, but what they have in common is that the authors don’t get to choose reviewers,” Larremore said.
In addition, a pandemic doesn’t mean that epidemiologists are the only people who can advance the field. The Covid-19 pandemic touches all aspects of society, and it requires help from everyone. Economists, sociologists, and physical scientists can and should weigh in, according to Larremore. There’s no reason to be territorial if the methods are sound.
“It’s my feeling here that Covid-19 is a global crisis and it’s really all hands on deck, and we need expertise from everyone,” Larremore said. “At the same time, that doesn’t mean that papers making epidemiological claims should be subjected to less rigor.”
And the scrutiny of a paper shouldn’t end once it’s published. As more information comes out, it’s worth reevaluating some of the foundational work to see if it holds up. “I’m a big believer in post-publication review,” Gelman said.
But that requires removing the hurdles and stigma around correcting and retracting findings. It also requires scientists to have some humility about the limits of their own knowledge. Mistakes will be made. The questions are whether and how they’re corrected.