Brain imaging research is often wrong. This researcher wants to change that.

(Semnic/Shutterstock)

When neuroscientists stuck a dead salmon in an fMRI machine and watched its brain light up, they knew they had a problem. It wasn't that there was a dead fish in their expensive imaging machine; they'd put it there on purpose, after all. It was that the medical device seemed to be giving these researchers impossible results. Dead fish should not have active brains.

The lit-up brain of a dead salmon — a cautionary neuroscience tale. (University of California Santa Barbara research poster)

The researchers shared their findings in 2009 as a cautionary tale: If you don't run the proper statistical tests on your neuroscience data, you can come up with any number of implausible conclusions — even emotional reactions from a dead fish.

In the 1990s, neuroscientists started using massive, round fMRI (functional magnetic resonance imaging) machines to peer into their subjects' brains. But since then, the field has suffered from a rash of false positive results and from studies that lack enough statistical power — the probability of detecting a real effect when one exists — to deliver reliable insights about the brain.

When other scientists try to reproduce the results of original studies, they too often fail. Without better methods, it'll be difficult to develop new treatments for brain disorders and diseases like Alzheimer's and depression — let alone learn anything useful about our most mysterious organ.

To address the problem, the Laura and John Arnold Foundation just announced a $3.8 million grant to Stanford University to establish the Center for Reproducible Neuroscience. The center's aim is to help neuroscience clean house by improving the transparency and reliability of research. To mark the occasion, we spoke with Russ Poldrack, the center's director, about what he sees as neuroscience's biggest problems and how the center will tackle them.

Julia Belluz: The field of neuroscience seems to have a particular problem with irreproducible results — or studies that fail when researchers try to repeat them. What's going on?

Russ Poldrack: I think some parts of neuroscience, like neuroimaging, have features that make it easy for practices that drive irreproducible findings to take hold.

When we do brain imaging, we're collecting data from 200,000 little spots in the brain, which creates a lot of leeway for false positives, bias, and false negatives. If you don't do the proper corrections — to address the fact that you’re doing a statistical test in each of those places — it's very easy to find a highly significant result [that's not actually real].

A group of researchers a few years ago illustrated this problem by putting a dead salmon in an MRI scanner. When they analyzed the data without the proper corrections, they could find activation in the dead salmon's brain. Their point was that you should do the [necessary statistical tests], or you can find activation pretty much anywhere.
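To make the multiple-comparisons problem concrete, here is a minimal simulation in the spirit of the salmon study: run a significance test at 200,000 pure-noise "voxels" and count how many pass an uncorrected threshold versus a Bonferroni-corrected one. Bonferroni is just one of several corrections neuroimagers use; the numbers here are illustrative, not taken from the study.

# A minimal sketch of the multiple-comparisons problem: t-test
# 200,000 pure-noise "voxels" and count how many look "significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels, n_subjects = 200_000, 20

# Null data: there is no real signal anywhere in this "brain".
noise = rng.standard_normal((n_voxels, n_subjects))

# One-sample t-test against zero at every voxel.
t_vals, p_vals = stats.ttest_1samp(noise, popmean=0.0, axis=1)

# Uncorrected threshold: roughly 5% of 200,000 voxels (~10,000) pass.
print("Uncorrected 'active' voxels:", np.sum(p_vals < 0.05))

# Bonferroni divides the threshold by the number of tests; ~0 pass.
print("Bonferroni 'active' voxels:", np.sum(p_vals < 0.05 / n_voxels))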

The field also suffers from generally underpowered studies, especially in neuroimaging. It can cost up to $1,000 for each person we scan. When I started doing [brain scan studies] about 20 years ago, most studies had about eight subjects. Now no one would publish that. We all realize it’s way too underpowered.

The number of subjects in the average study has been going up — it's now on the order of 20 to 30. For some analyses that's reasonably powered, and for others it's way underpowered.
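A quick way to see what "underpowered" means here is a standard power calculation for a one-sample t-test, sketched below with statsmodels. The assumed medium effect size (d = 0.5) is an illustrative assumption, not a figure from the interview.

# Power of a one-sample t-test at various sample sizes, assuming a
# medium effect size (d = 0.5); the effect size is illustrative only.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()
for n in (8, 20, 30, 80):
    power = analysis.power(effect_size=0.5, nobs=n, alpha=0.05)
    print(f"n = {n:3d}: power = {power:.2f}")

# With n = 8, power is only about 0.2, so most real effects of this
# size would be missed; even n = 30 falls short of the conventional
# 0.8 target.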

JB: Wasn't there also just a real seduction in having fMRI machines that allowed scientists to watch the brain at work?

The amazing fMRI machine — its results need to be interpreted with caution. (Levent Konuk/Shutterstock)

RP: fMRI has been around only a little over 20 years, and it took off in the last 10 to 15 years as a technique lots of people use. It is really seductive: You see someone's brain doing something while they're doing a task.

The bigger problem, however, is simply that most people in this field were trained to do statistics on much smaller data sets, or on different types of data sets than we use [today].

For example, there was some prominent work a few years ago about social pain. The idea was that when people experience social rejection, the pattern in their brain looks like it does when they are experiencing physical pain. That got a lot of play — but in the last couple of years, we realized the brain patterns for social pain and physical pain are really distinct.

JB: What will the center do to address these problems?

RP: Our goal is to build an online data analysis platform that people can use that will help them do the right thing. In our case, doing the right thing means doing the data analyses properly.

In part, the reason it’s difficult to do analyses properly is that a lot of people don't have the computational tools. In some cases, these analyses require more computing power than most people have on their desktop. We want to take advantage of high-performance computing systems to allow them to do analyses that would be way too big on their local machine.

Our hope is also that these free, powerful, and innovative computing tools will be an incentive that can get people to share their data so that others can try to reproduce their results or use the data to ask different questions than the original investigators may have been interested in.

This is part of a bigger movement going on across science for openness and transparency and reproducibility. And I think there are people across a lot of different subfields of science who have come to realize that if we don't get this right, the public is not going to continue to fund research.
