What journalists get wrong about social science: full responses

Brian Resnick is Vox’s science and health editor, and is the co-creator of Unexplainable, Vox's podcast about unanswered questions in science. Previously, Brian was a reporter at Vox and at National Journal.

I recently asked several psychologists and social scientists a simple question: "What do journalists most often get wrong when writing about research?"

Here are their responses.

Sanjay Srivastava, associate professor, University of Oregon

Most basic research in psychology has no short- or even medium-term applied implications. It is too tentative, too early. It hasn't been replicated or independently verified. We don't know if it extends across populations and contexts. We don't know if it will survive the transition from laboratory to field, and from controlled intervention to real-world implementation. Yet it often gets written up with a "news you can use" angle: how to improve your life, change your relationships, improve your organization, etc.

Psychologists are often complicit in this too when we communicate with the media — in part because we've been told (probably correctly) that it's what the media wants, and partly because we can sometimes directly benefit from overselling our work (consulting, writing books, etc.).

But there is also a fundamental tension between the media's desire for novelty and the scientific method. Science proceeds through a systematic and replicated accumulation of knowledge across multiple methods. That means that by the time some piece of scientific knowledge is solid enough to be worth spending money and changing lives over, the idea should not be "new" any more. I don't think that will ever go away.

Patrick A. Stewart, associate professor, University of Arkansas Fayetteville

A part of the blame lies at the feet of us social scientists/psychologists for not communicating as well as we should. However, if there is one problem that I see in the communication process, it is that the media simplifies results (because it has to) — both in terms of whether an effect exists and how strong it is. When this occurs, [results are] communicated as definitive proof.

However, as scientists we understand that our findings are contingent upon a range of factors that might (or might not) influence the results. In other words, we know that we do not know 100 percent, even in finite topics. When the media communicates to the public, the assumption is (sometimes by the media, often by the general public) that we know [the answer] 100 percent. And therein lies the disjoint.

Gordon Pennycook, PhD candidate, University of Waterloo

The biggest issue for journalists (and also for scientists) comes in the communication of uncertainty. The conclusion for almost every experiment is tentative. Relative certainty can only be achieved across a large number of experiments (ideally, undertaken by a large number of scientists in different labs). Of course, by the time this level of certainty is achieved (if ever), the result in question is probably yesterday's news (or, more accurately, last decade's news).

Journalists often include caveats, but those caveats lose force when combined with a flashy, clickbait title. It's a major problem that titles are often not up to the writer of the article, but rather dependent on the whims of an editor who is less concerned with accuracy than with attracting readers. This is a clear example of bullshit in the media.

Jean Twenge, professor, San Diego State University

Too often, the media covers science the way it covers politics. Writers assume that there are two equal sides who both have valid opinions. But science is not politics — it's not a matter of opinion, and there is a right answer out there. We might not know all of the aspects of that answer, or know exactly who it applies to, but usually it's there if you look at all the evidence.

Instead, two sides of a scientific "debate" are presented as if they are equal, when much of the time they are not. There is often much more evidence, and better evidence, on one side than the other. Yet if one study out of 20 finds a different result, the field is considered "controversial." That's not controversy. It's just variation in science, and 95 percent of the evidence still points one way.

Similarly, a contrary study may appear and then be found to actually be consistent with most of the research. But that later conclusion is not mentioned; the original, incorrect result still hangs around even after it's been reanalyzed or reinterpreted.

This issue also crops up when writers interview scientists. Many articles quote a psychologist who has loudly stated opinions about a research area but hasn't actually done any research in that area, and/or who makes statements about that research area that are false. Just because someone is a prominent researcher in one area does not make them an expert in another. Even if they have written a "review" or "debate" piece, they are still not necessarily an expert unless they have actually done a study. This is the origin of many false "debates" around research results.

These debates are harmful because they make it sound like "we don't know anything; everyone disagrees" when that's not the case at all. This would stop immediately if writers looked at what studies (not "debate" pieces — studies) the interviewee has done in the area. Another suggestion: Read the opposing articles instead of just interviewing people. If you read the back and forth of an article, a critique, and the reply, it's often fairly clear which side has more evidence.

Overall: This is science, not opinion. The evidence comes first, not the quotes from interviews. And the truth is out there.

Jason Reifler, associate professor, University of Exeter

The biggest thing that non-technical audiences (whether journalists or readers) get wrong is the limits of any single study. Every study comes with uncertainty and incompleteness or some sort of boundary condition. These nuances are hard for academics to communicate and equally hard for journalists to write about.

While this next comment really isn't about what people get wrong, it does highlight a difficulty that researchers and journalists face when communicating. Researchers are typically focused on understanding a problem. Stories tend to be structured around solutions to a problem. This bridge can be difficult to cross when researchers feel they've identified a novel problem but don't really have a solution. (This is definitely true of my research about correcting misperceptions.)

I love that there is interest in my work from a broader community and that people want to know the answer. I want to know the answer too! But, I don't have more than a tentative answer yet. And it will probably be something I continue to work on the rest of my career.

Katie Corker, assistant professor, Kenyon College

First, journalists often seem to get caught up with the sexiness or curb appeal of psychological research findings, ignoring or downplaying the quality of the evidence in question.

A meta-analysis (a synthesis of many studies, which may be more decisive and informative than any single research finding) will get the same attention as a flashy new study, or less. Frankly, sometimes researchers fall prey to this same bias.

Solid science may not always be sexy, but skilled and ethical journalists choose solid findings to report and do so in a way that's still engaging.

Second, it takes some expertise in an area to be able to critique research in a way that's fresh. I teach research methods (lots), and I often find students reusing the same arguments over and over again, often parroting back formulaic critiques they've internalized from the media: the sample is too small, the participants are too homogeneous, and so on.

The most successful science journalists are, or become, quite expert in the areas on which they report, which renders their critiques both more interesting and more valid.

Finally, there's often a failure to separate correlational and causal evidence. High quality social science reporting makes clear when causal claims are warranted (i.e., when there is a random assignment to conditions and strict experimental control), and when causal claims are unwarranted, such reporting considers alternative explanations and third variables that might underlie effects.

W. Keith Campbell, professor, University of Georgia

Given the tight deadlines and the complexity of much of the work, I think the media generally try to and mostly succeed at covering psychology well. Short, fast turnaround stories are generally hyped or set into a narrative. But it is supposed to be news/entertainment and not a scientific journal. And, frankly, it is often we psychologists and our university media people who (over)simplify and hype the research in the first place.

I think the biggest improvement would be to link the actual research to the story. I had a recent paper that was used as material for a clickbait story (something like people see Star Wars to meet their narcissistic needs). Many readers were annoyed, conversations took off on Reddit, and then people started reading the actual article. [This led] to what I thought was a very interesting online discussion. So even pure clickbait can get people to think about the science if the articles are available online.

John R. Hibbing, regent professor, University of Nebraska Lincoln

The unavoidable dilemma is the need for media outlets to be concise and catchy. This means needed qualifiers and nuance are bound to be omitted. Social science works with weak and complicated relationships that in press accounts often come out as strong and direct, and this makes readers understandably dubious of the research itself. In my experience, this is especially true when genetics are involved.

Having said all that, I have had some very good experiences, especially with higher quality outlets. The science writers at the NYTimes, for example, tend to be very good and thorough.

Problems can arise when reporters write stories from other media accounts, without talking to the authors and without reading the actual research articles. By and large, I have sympathy for what the media are trying to do — but it is still frustrating sometimes when research we are probably proud of ends up getting characterized and then ridiculed as the comment stream gets further and further from the original research and everybody tries to make a volatile ideological vessel out of it.

Seamus A. Power, PhD candidate, University of Chicago

Experimental social psychology is currently the privileged form of psychology that is most often reported in mainstream media. Excellent qualitative research is often overlooked.

The problem here is a very narrow definition of what psychology is. The questions that can be asked and the types of "answers" that can be given are bounded by such a precisely focused framing of psychology.

One potential route to overcoming this problem is to have researchers who primarily use quantitative methods and researchers who primarily use qualitative methods co-write op-eds or co-comment on particular phenomena that are of interest to the media. In this way a broader and more accurate analysis of the problem or issue can be gleaned. Voicing multiple perspectives based on different research traditions can overcome the limitations of specific methods of psychological investigation.

Ingrid Haas, assistant professor, University of Nebraska Lincoln

Honestly, I think the media does a pretty good job reporting on research overall. A couple things that seem like common mistakes:

  • Failing to differentiate between correlational and experimental research, and drawing improper conclusions about whether one can discuss a causal relationship or not (this happens all the time with health research).
  • Reporting on unpublished research.
  • Overgeneralizing from a single study or paper.

Roger Giner-Sorolla, professor, University of Kent (England)

1) Not distinguishing between preliminary, low evidence studies (sometimes not even peer reviewed) and more substantial bodies of work. Often stories are sourced from university press offices or non-peer-reviewed conference presentations, on fairly flimsy evidence, instead of from meta-analyses or reviews of bodies of research.

2) Not being able to distinguish between correlational research and causal conclusions. I have more than once seen this happen in headline writing. The [academic] article author is careful not to use causal language, but then the headline writer says "If You Want A Long Life, Move To Dorset" (to cite one example in the Guardian from 2008).

3) It should not be surprising news that we think with our brain, or that some brain region is preferentially activated by some form of mental activity (although the degree of this kind of localization is also usually overhyped). What is interesting about neuroimaging is the information it provides about psychological process and function, not the discovery of "love regions" or the news that jealousy somehow exists in the brain and not the soul.

4) I have had few interactions with the press and these have been mixed: sometimes very good and insightful, sometimes not getting it right or trying to be exploitative or silly.

Betsy Levy Paluck, associate professor, Princeton University

Small thing: Our technical terms are often translated into folksy, accessible terms. This is important and inevitable. However, please check these terms first with the author! Some folksy terms really do convey a very different idea.

An example from my own work is [a reporter using the term] "cool kids" instead of "highly connected kids in a social network." "Popular kids" might be preferable, [but] some of these highly connected kids are really not cool. Work with the social scientist to find the right kind of term that is accessible.

Another bigger problem is that many journalists take for granted that self-reported behaviors are the same as actual behaviors. The studies that capture "actual" behaviors (spending, test scores, observed interactions, and the like) are much rarer than those that simply ask people to report their behaviors. The latter should be reported in a more measured way.

Daniel Voyer, professor, University of New Brunswick Fredericton (Canada)

The media often confuse correlation with causality.

For example, in the 1970s, the media made quite a fuss about findings that children who watched a lot of violence on TV were also more aggressive in a lab setting. Essentially, in the media, this came out as "violent TV causes aggression in children." It was quickly picked up by politicians and resulted in the censoring of some TV shows, especially those directed at children, like cartoons.

However, it is also possible that children who are more aggressive from the start are also attracted to violent TV shows. So it is possible that existing aggression "causes" watching violence on TV. This ability to reverse an explanation is a hallmark of a correlation. In reality, unless we measure existing aggression, all we have is a relation between aggression and TV-watching habits, and we don't know which direction the causality runs.
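To make that reversibility concrete, here is a minimal simulated sketch (the variable names and effect sizes are hypothetical, and it assumes the numpy library is available): two opposite causal stories produce essentially the same correlation, so the correlation by itself cannot tell us which story is true.

```python
# A minimal simulated sketch of why a correlation cannot reveal its own
# direction. The variable names and effect sizes are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Story 1: watching violent TV drives aggression.
tv_1 = rng.normal(size=n)
aggression_1 = 0.5 * tv_1 + rng.normal(size=n)

# Story 2: pre-existing aggression drives the choice to watch violent TV.
aggression_2 = rng.normal(size=n)
tv_2 = 0.5 * aggression_2 + rng.normal(size=n)

# Both stories yield roughly the same correlation (about 0.45), so observing
# the correlation alone cannot tell us which way the causality runs.
print(np.corrcoef(tv_1, aggression_1)[0, 1])
print(np.corrcoef(tv_2, aggression_2)[0, 1])
```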

All this to say that the media often interpret relations as if there were an underlying causality. This can be very misleading! Since then, controlled studies have shown that violent TV can cause increased aggression. However, it is only one of many factors, so not all children who watch violent TV will necessarily become aggressive.

Kevin Smith, professor, University of Nebraska Lincoln

For me the biggest thing is not what the media gets wrong, but what it leaves out. Most empirical social scientists, at least the good ones, are all too aware that it's a complex world and any single study, regardless of how well done or how strong the results, is just one study. Its key findings, whatever they are, need to be replicated and re-tested before we make any sweeping claims about causality and effects. Even if those causes and effects are robust and replicate, it's extremely rare in the study of humans to find simple, single cause/single effect relationships.

In my experience in dealing with the media, such caution and caveats — second nature to most experienced researchers who take science seriously — can get ignored. Thus we end up with reports of "researchers find political gene" or "scholars report biological trait Y leads to political trait X."

This makes for a snappy headline and a good hook for a media story, but in some cases it stretches — maybe even distorts — the results of our work. Good journalists, and I've dealt with plenty of those, get this, and try to provide some context.

Dave Nussbaum, adjunct assistant professor of behavioral science, University of Chicago, Booth School of Business

One very common mistake in the media is to treat a single published finding as definitive. It's an easy trap to fall into — psychologists frequently fall prey to it themselves — but no single study can say very much on its own.

Placed in the proper context, it may be that the study builds on many previous studies and rigorously tested theory, or that it was a one-off that may very well be a fluke. Very often, reports of studies — and press releases — are devoid of context, and the reader doesn't have any good way to know which of the two he or she is getting.

Psychologists often tout the counterintuitiveness of their findings. There's prestige in discovering something that nobody would have predicted. There is good reason for that, but in recent years counterintuitiveness has become valued for its own sake, rather than for its instrumental value.

These days, many studies aim for counterintuitiveness for the wrong reasons: just to make a splash, even if the finding itself has little value. The media is all too quick to pick that up and run with it, which in turn encourages the behavior to continue.

Michael Grandner, assistant professor of psychiatry, University of Arizona

A few pet peeves. First of all, many studies have limitations in terms of generalizability of findings. One study will say that under certain conditions, a certain outcome is more likely, all else being equal. We don't know how this bears out in the real world (which is very messy) or what happens under other conditions (different age groups, different health status). Those studies are way more expensive and take a lot more time. The constraints of a study need to be better delineated.

Also, there is a lot of confusion about odds ratios. When something is more likely to happen, you need to ask how much more likely, and how often it happens in the first place. If something is 50 percent more likely than an alternative, but that alternative only happens 1 percent of the time, you now have something that happens 1.5 percent of the time. It's still rare. And an odds ratio of 2.0 means that the outcome is twice as likely (a 2-fold likelihood), but that is only a 100 percent increase in likelihood (a 1-fold increase), not a 2-fold increase.
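Here is a minimal worked sketch of that arithmetic (the baseline rate and ratio below are hypothetical, chosen only to mirror the example above):

```python
# The arithmetic behind "X percent more likely" claims. All numbers here are
# hypothetical, chosen only to mirror the example in the text.

baseline_rate = 0.01      # the outcome already happens 1 percent of the time
relative_increase = 0.50  # "50 percent more likely"

new_rate = baseline_rate * (1 + relative_increase)
print(f"New rate: {new_rate:.3f}")  # 0.015 -> still only 1.5 percent of the time

# A ratio of 2.0: the outcome is twice as likely (a 2-fold likelihood),
# which is a 100 percent increase, i.e. a 1-fold increase, not a 2-fold one.
ratio = 2.0
print(f"Ratio {ratio:.1f} = {(ratio - 1) * 100:.0f} percent increase "
      f"({ratio - 1:.0f}-fold increase)")
```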

Another thing — I hate even saying this — but correlation is not causation. If people in group A are more likely to eat carrots, eating carrots may or may not cause you to become a member of group A. It just happens more often. It's the next study that will have to examine cause and effect. If you're looking for cause and effect, you'd better have a control group.

If you're studying college students and trying to make a statement about some group other than college students, you are likely over-extrapolating. College students are not adolescents nor are they adults, they are not typically employed, they don't keep regular schedules, they are socioeconomically different from the general population, etc. They really don't generalize well. And many psychology studies use college students but really try to generalize outside of that group.

Same goes for mice. A study in mice is not a study in humans. It is a starting point and helps human studies know where to look. But you can't generalize an animal study to humans.

Statistical significance is different from clinical significance. If something is truly newsworthy, you should talk about it in terms of effect size. People think "significantly different" means that the difference is salient or relevant. It just means that it is probably greater than nothing. A group of people with an average height of 70 inches and another that is 71 inches may be statistically significantly different, but the difference isn't really that meaningful.
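A minimal simulated sketch of that height example (the numbers are hypothetical, and it assumes the numpy and scipy libraries are available): with large enough groups, a one-inch difference becomes wildly "significant" even though the effect itself is modest.

```python
# Statistical significance vs. the practical size of an effect, with
# simulated heights. All numbers are hypothetical illustrations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000                                         # very large groups
group_a = rng.normal(loc=70.0, scale=3.0, size=n)   # average height 70 inches
group_b = rng.normal(loc=71.0, scale=3.0, size=n)   # average height 71 inches

t, p = stats.ttest_ind(group_a, group_b)

# Cohen's d: the mean difference scaled by the pooled standard deviation.
d = (group_b.mean() - group_a.mean()) / np.sqrt(
    (group_a.var(ddof=1) + group_b.var(ddof=1)) / 2
)
print(f"p-value: {p:.2e}")    # vanishingly small -> "statistically significant"
print(f"Cohen's d: {d:.2f}")  # about 0.33 -> a small, one-inch difference
```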

A big issue is a lack of specificity in describing the predictors and outcomes of a study, which leads to overgeneralization and confusion. For example, let's say that a study found that a certain nutrient often found in eggs is associated with increased cholesterol in the blood. Could you then say that eggs cause high cholesterol? NO!

First of all, the study wasn't about eggs, just that one nutrient — many of the others in eggs may reduce cholesterol for all we know. Second, a higher level of cholesterol in the blood does not mean "high cholesterol," just like a few-degree change in temperature when it's freezing outside doesn't make it hot outside. Is that change in blood cholesterol a relevant amount, given the amount of that nutrient in a typical egg, and does it outweigh any effects of the other nutrients in that egg?

These questions don't get addressed, which leads people to distrust science reporting because they think it's inconsistent ("Eggs are bad, then eggs are good, then eggs are bad again, then eggs are good again...") when really the data may be consistent; it's the interpretations that are inconsistent.

One more thing — more articles should mention where funding for studies came from.

Jay Van Bavel, assistant professor, New York University

The media is too focused on simple effects (e.g. does money increase happiness), but human behavior is far more complex.

Most scientists I know are focused instead on the conditions under which something happens (e.g., money does increase happiness if you're poor). Adding one or two simple qualifiers like this provides much more explanatory power and allows us to get beyond silly debates.

For instance, society is stuck with debates about whether gun access or mental illness causes mass shootings. Of course guns don't lead everyone to become violent. But in the wrong hands they can be devastating. Thinking about these issues in terms of the interaction between two factors offers far more insight, and might even reduce reflexive opposition to certain scientific findings.
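Here is a minimal simulated sketch of what such an interaction looks like in data (the variables and effect sizes are hypothetical, and it assumes the numpy library is available): the same "treatment" has a large effect in one group and essentially none in another, which a single overall effect would obscure.

```python
# A simulated interaction: "money increases happiness if you're poor."
# All variable names and numbers are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

income = rng.uniform(0, 1, size=n)       # 0 = poorest, 1 = richest
windfall = rng.integers(0, 2, size=n)    # 0/1: received extra money or not

# The windfall boosts happiness mainly for the poorer half of the sample.
boost = 0.8 * windfall * (income < 0.5)
happiness = 0.2 * income + boost + rng.normal(scale=0.5, size=n)

low = income < 0.5
for label, mask in [("low income", low), ("high income", ~low)]:
    with_money = happiness[mask & (windfall == 1)].mean()
    without_money = happiness[mask & (windfall == 0)].mean()
    print(f"{label}: windfall effect = {with_money - without_money:.2f}")
# Prints roughly 0.8 for low income and roughly 0.0 for high income.
```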

Joe Magee, associate professor, Stern School, New York University

Taking the most recent science as the most currently validated science. This is probably because of the pressure to publish what is newsworthy. But people without PhDs who are not journalists make this mistake as well. The cumulative body of research on a phenomenon is what is needed, more than the most recent studies. This probably requires longer-form journalism, and it's not that there are no examples of this out there. There are, but it's not common.

Jennifer Lerner, professor, Harvard University

The press lets readers fall victim to hindsight bias ("I knew it all along") by not pointing out that for most results from an experiment — say, that anger triggers a sense of control — the opposite result might have been found — say, that anger triggers a sense of being out of control.

We need the scientific method in order to verify which result is the right one because each could sound equally plausible if we had heard it first.