There’s an oft-repeated mantra among scientists: A single study is rarely the final answer. And yet for science reporters, new studies are irresistible — a bold new finding makes a great headline.
Which explains how we get into confusing situations like this:
Why don't people pay attention to health advice? It's a mystery.
— Christopher Snowdon (@cjsnowdon) February 23, 2017
The problem isn’t necessarily that these studies are poorly designed (although some of them may be). The problem is that each headline gives an incomplete glimpse of how science works. One lab produces a result. Another lab — ideally — tries to replicate that result. Rinse and repeat. Eventually someone needs to do a meta-review of the totality of the evidence on the question to reach a conclusion. That meta-review, rather than any one study in isolation, is likely to get closer to the true answer.
Yet as researchers in PLOS One recently found, journalists typically only cover those initial papers — and skip over writing about the clarifying meta-reviews that come later on.
What’s more, the study finds, journalists “rarely inform the public when [initial studies] are disconfirmed” — despite the fact that around half of the studies journalists write about are later rebutted by follow-up research.
Journalists cover initial studies far more often than follow-ups or meta-reviews
The authors of the PLOS One paper assembled a huge database of studies in biomedical science, follow-ups to those studies, and meta-studies on those follow-ups. And then they searched the Dow Jones Factiva newspaper database to see how often each type of study was covered.
They found that initial studies were around five times more likely to be reported on than follow-up studies. And meta-reviews were barely covered at all.
What’s more, journalists really, really like to report on studies that deliver positive results — even though studies that deliver negative results are equally valuable. Of the 1,475 newspaper articles in the data set, only 75 reported null findings.
Lastly, journalists seem to flock to studies that concern lifestyle choices like diet or exercise (and especially those published in the most prestigious journals). Non-lifestyle studies — on topics like brain imaging or genetics — were much less common. This “preferential coverage,” the researchers surmise, is due to the fact that readers can take direct action on lifestyle choices.
Nearly half of the single studies that get reported on turn out to be wrong
Here’s why this is a problem. The PLOS One analysis found that only 48.7 percent of the 156 studies reported by newspapers were confirmed by a subsequent meta-review.* The share dropped to 34 percent when the researchers looked at initial studies alone.
(*To be sure, meta-reviews aren’t perfect. Publication bias — the tendency for only papers with positive results to get into journals — can skew even the most careful meta-review. But, in general, they provide a more comprehensive answer than a single study.)
And although journalists gravitate toward covering single studies about lifestyle choices such as diet or exercise, those were the studies least likely to be confirmed by a later meta-review; non-lifestyle papers on topics like genetics fared better.
Overall, the authors conclude:
Our study shows that many biomedical findings reported by newspapers are disconfirmed by subsequent studies. This is partly due to the fact that newspapers preferentially cover "positive" initial studies rather than subsequent observations, in particular those reporting null findings. Our study also suggests that most journalists from the general press do not know or prefer not to deal with the high degree of uncertainty inherent in early biomedical studies.
Why do reporters give undue weight to single studies?
A few possible reasons:
1) Journalists need digestible headlines that convey simple, accessible, and preferably novel lessons. That need is fundamentally in tension with how science actually works: through a slow accumulation of knowledge, nuance, and doubt.
2) It’s not all the journalists’ fault. University press shops are less likely to put out press releases on meta-reviews than they are on a striking and dramatic single study. What’s more, the meta-reviews themselves can be dense and difficult to parse.
3) It’s often the scientific papers or press releases themselves that spread hype about initial findings.
The PLOS One authors have some advice for reporters writing about new studies. Namely: Pick up the phone and ask researchers whether a result is an initial finding — and, if it is, tell readers that the discovery is still tentative and must be validated by subsequent studies.
Also note that this PLOS One study has a few limitations itself. For one, it only looked at newspaper articles. In reality, the science media ecosystem is much, much bigger. There are web-only general interest news outlets like Vox, specialty science magazines such as New Scientist and Scientific American, news operations run by journals like Science and Nature, and television news programs — all of which report on science. “Our result only refers to a small sample of the scientific research,” Estelle Dumas-Mallet, the study’s lead author, writes in an email. “Also, we cannot extrapolate these results to other domains such as physics and chemistry.”
The study also only included news articles published within a month of the scientific papers they cite. It’s possible newspapers do a better job when they cover research outside the breaking-news window.
That said, my guess is that the findings still stand for the broader media environment. As reporters, we’re biased toward what’s new and exciting. But in science, truth takes time.