Just because a study has been published in a scientific journal doesn't mean it's perfect — there are plenty of flawed studies out there. But how can we spot them?
The excellent chart below offers "A Rough Guide to Spotting Bad Science." It was put together by the blogger behind the chemistry site Compound Interest. This isn't meant to be an exhaustive list — and not all of these flaws are necessarily fatal. But it's an excellent guide to what to look for when reading science news and scientific studies:
Here's a more detailed breakdown:
1) Sensationalized headlines: Behind sensationalized headlines are often sensationalized stories. Be wary.
2) Misinterpreted results: Sometimes the study is fine but the press has completely messed it up. If it's a big story, MIT's Knight Science Journalism Tracker will often tell you which reporters did a good job and shame those who didn't. Otherwise, it's often a good idea to read the original study itself.
3) Conflicts of interest: Who funded the research in question? If you see a study claiming that drinking grape juice helps your memory and it's funded by the grape industry, then think about that a bit. (That happens all the time. Lots of studies on random foods being good for you, funded by random food councils.)
Be careful: some journals require researchers to reveal conflicts of interest and funding sources, but many do not. And not all conflicts of interest involve funding. For example, be a bit suspicious of a researcher testing a medical device who also consults for a company that makes medical devices.
4) Correlation and causation: Just because two things are correlated doesn't mean that one caused the other. If you want to really find out if something causes something else, you have to set up a controlled experiment. (Compound Interest's infographic brings up the fabulous example of the correlation between fewer pirates over time and increasing global temperature. It's almost certain that fewer pirates did not cause global temperatures to rise, but the two are correlated.)
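The pirates example is easy to reproduce. Here's a small sketch (with made-up numbers for illustration) showing that any two quantities that both trend over time will correlate strongly, even when neither has anything to do with the other:

```python
import random

# Hypothetical data: pirate counts fall over the century while
# temperatures rise. Neither series influences the other.
random.seed(0)

years = range(1900, 2000)
pirates = [50000 - 490 * (y - 1900) + random.gauss(0, 1000) for y in years]
temps = [13.5 + 0.01 * (y - 1900) + random.gauss(0, 0.1) for y in years]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(pirates, temps)
print(f"correlation: {r:.2f}")  # strongly negative, yet no causation
```

The correlation comes out close to -1 simply because both series move steadily with time; the shared trend, not any causal link, drives the number.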
5) Speculative language: You can say anything with the word "could" and it could be true. Jelly beans could be the reason that the average global temperature is increasing. Unicorns might cause cancer. And pygmy marmosets may be living in the middle of black holes.
6) Small sample sizes: The smaller the sample, the shakier the conclusions. A study of a handful of people can produce dramatic-looking results by chance alone.
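To see why small samples are dangerous, here's a quick simulation sketch. The population values (mean 100, standard deviation 15) are arbitrary assumptions for illustration; the point is how much the "result" of a small study bounces around compared with a large one:

```python
import random

random.seed(1)

# Run many simulated "studies" at a given sample size and collect
# each study's estimate (the sample mean).
def sample_means(n, trials=1000):
    return [sum(random.gauss(100, 15) for _ in range(n)) / n
            for _ in range(trials)]

def spread(means):
    """Standard deviation of the study-to-study estimates."""
    m = sum(means) / len(means)
    return (sum((x - m) ** 2 for x in means) / len(means)) ** 0.5

# Small studies scatter roughly ten times as widely as big ones here.
print(f"spread with n=5:   {spread(sample_means(5)):.1f}")
print(f"spread with n=500: {spread(sample_means(500)):.1f}")
```

Every simulated study draws from the exact same population, yet the five-person studies disagree with each other far more than the 500-person ones do.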
7) Unrepresentative samples: If a researcher wants to make claims about how all people think, but she only studies the college students who show up to her university lab, well, then she can only really draw conclusions about how those college students think. One cultural group can't tell you about all of humanity. This is just one example, but it's a pervasive issue.
8) No control group used: Without a comparison group that didn't receive the treatment, there's no way to tell whether the treatment did anything at all. Why would anyone even waste their time doing a study like this?
9) No blind testing used: In medical and psychology studies, participants should not be aware of whether they're in the control group. Otherwise their expectations will muddle the outcomes. And, if at all possible, the researchers who interact with the participants should also be unaware of who is in the control group. Studies should be performed under these double-blind conditions unless there is some really good reason that it cannot be done that way.
10) "Cherry-picked" results:Ignoring the results that don't support your hypothesis will not make them go away. It's possible that the worst cherry-picking happens before a study is published. There's all kinds of data that the scientific community and the public will never see.
11) Unreplicable results: If one lab discovers something once, it's sort of interesting. However, that lab could have some random result or — rarely, but possibly — be filled with liars. If someone else can replicate it, then it becomes far more real.
12) Journals and citations: That something was published in a fancy scientific journal or has been cited many times by others doesn't mean that it's perfect research, or even good research.
A couple more tips for evaluating science news
13) Check for peer review: Just because you saw it in a news story doesn't mean that it's been looked over by an independent group of scientists. Maybe the results were presented at a conference that doesn't review presentations. Maybe it went straight from the operating table to the press, like recent uterus transplants.
14) Results not statistically significant: Generally, researchers want to see a statistical analysis showing that the results have less than a 5 percent likelihood of arising from chance alone (a p-value below 0.05). Some fields are even more strict than that. This is so there's a reasonable degree of certainty that you're looking at a real result, not just a stroke of good luck.
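A p-value is easy to estimate by simulation. Here's a hedged sketch with an invented scenario: a study reports 60 heads in 100 coin flips and claims the coin is biased, so we ask how often a perfectly fair coin would do at least that well by luck:

```python
import random

random.seed(2)

observed = 60   # heads reported in 100 flips (hypothetical study)
trials = 10000  # simulated repeats of the experiment with a fair coin

# Count simulations where a fair coin gets at least as many heads.
extreme = sum(
    1 for _ in range(trials)
    if sum(random.random() < 0.5 for _ in range(100)) >= observed
)
p_value = extreme / trials
print(f"p ≈ {p_value:.3f}")  # roughly 0.03, under the 0.05 threshold
```

Here the estimate lands around 0.03: a fair coin beats 60 heads only about 3 percent of the time, so this result would typically count as statistically significant. Had the study reported 55 heads instead, the same simulation would give a p-value well above 0.05.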
15) Confounding variables: Might something else be causing the effect that you see? Did the statistical analysis take that into account?
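Confounding is also easy to demonstrate. In this sketch (all numbers invented for illustration), hot weather drives both ice cream sales and drowning incidents; the two correlate strongly even though neither affects the other, and controlling for the weather makes the link vanish:

```python
import random

random.seed(3)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

# Confounder: temperature (z) drives both ice cream sales (x)
# and drownings (y); x and y have no direct effect on each other.
z = [random.gauss(0, 1) for _ in range(5000)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

print(f"raw correlation:            {pearson(x, y):.2f}")  # high

# Controlling for the confounder (subtracting it out) erases the link.
rx = [xi - zi for xi, zi in zip(x, z)]
ry = [yi - zi for yi, zi in zip(y, z)]
print(f"after removing temperature: {pearson(rx, ry):.2f}")  # near zero
```

This is the kind of adjustment a good statistical analysis performs: if the reported effect survives after plausible confounders are accounted for, it's far more believable.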