For years, doctors have had two main strategies for treating depression: antidepressants and psychotherapy. Both treatments, according to the published research, can be fairly effective.
Or at least that's what we thought. Recent research suggests that we've actually been overestimating the effectiveness of our best treatments for depression — in part because published studies were giving a biased picture of the medical evidence.
The reason has to do with something called "publication bias." Often there are lots of different scientists conducting studies on whether, say, a particular drug or therapy can alleviate the symptoms of depression. Not all of those studies, however, get published. Journal editors tend to be more interested in papers finding that a particular treatment had a big effect than in studies showing little or no effect.
This means that journals can often give readers a skewed view of what scientists are actually finding in their labs. A paper finding that a particular treatment had no effect is just as valuable as a paper finding that a treatment had a big effect. But doctors are far more likely to read about the latter, and as a result they can overestimate the effectiveness of treatments.
It seems this is a big problem in the research around treatments for depression. In 2008, researchers looked at studies on antidepressants for a paper in the New England Journal of Medicine. They found that the drugs' benefits had been exaggerated in the scientific literature because of publication bias.
For a new study in the journal PLOS One, researchers zeroed in on psychotherapies for depression. These include talking therapies, like counseling, group or family therapy, and cognitive behavioral therapy. To find out whether their effects have also been exaggerated by publication bias, the researchers went back to the source, identifying grants that the National Institutes of Health gave out for randomized trials looking at psychological treatments for depression.
Of the 55 grants that helped launch studies, about a quarter were never published. (That's a little better than the estimated half of medical studies that are never published.) When the researchers got their hands on the unpublished data and pooled those results together with the research that had already been published, they found that psychotherapy looked a lot less promising. (Compared with the studies in journals, the efficacy of psychotherapy dropped by about 25 percent.)
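The mechanism at work here can be illustrated with a toy simulation. The numbers below are hypothetical (the true effect size, noise level, and publication cutoff are all made up for illustration, not taken from the study); only the count of 55 trials echoes the article. The point is simply that averaging only the "impressive" results inflates the apparent effect:

```python
import random

# Hypothetical illustration of publication bias -- not the study's actual data.
# Simulate 55 small trials of a treatment with an assumed true standardized
# effect of 0.4, then "publish" only the trials whose observed effect looks big.

random.seed(0)

TRUE_EFFECT = 0.4   # assumed true effect size (made up for illustration)
NOISE = 0.3         # assumed trial-to-trial sampling noise (made up)
N_TRIALS = 55       # matches the number of NIH grants mentioned in the article

trials = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(N_TRIALS)]

# Suppose journals favor trials whose observed effect clears some bar:
published = [effect for effect in trials if effect > 0.3]

def mean(xs):
    return sum(xs) / len(xs)

print(f"True effect:                {TRUE_EFFECT:.2f}")
print(f"Mean over all trials:       {mean(trials):.2f}")
print(f"Mean over published trials: {mean(published):.2f}")
```

Because the selection keeps only the above-average results, the mean of the published trials always lands above the mean of all trials — the same direction of distortion the researchers found when they pooled the unpublished NIH-funded results back in.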
"The efficacy of psychological interventions for depression has been overestimated in the published literature, just as it has been for pharmacotherapy [antidepressants]," the researchers write. These treatments work — just not as well as published studies suggest.
This new finding carries a lesson for science: Medical decisions shouldn't be based on published data only. There's already a movement afoot to make sure the results of all clinical trials are shared, and that raw data is factored into the decision-making process more often. This new research on treatments for depression should add more urgency to the push. As the researchers write, "Clinicians, guidelines developers, and decision makers should be aware that the published literature overestimates the effects of the predominant treatments for depression."