In December, a group of researchers and medical students at Oxford went on a nerdy but important mission. With the Compare Project, they've been tackling a little-talked-about problem in medical research called "outcome switching."
Stay with me for a minute: When studying a drug or device, researchers measure dozens of health outcomes to understand how the new technology works. Before they start a clinical trial, however, they're supposed to pre-specify which outcomes they really care about on public clinical trials registries. For an antidepressant, for example, these might include people's reports about their mood and how the drug affects sleep, sexual desire, and even suicidal thoughts.
The idea is that researchers won't later selectively publish only the most favorable outcomes and quietly drop the negative ones. In theory, when journals are considering a study manuscript, they are supposed to check whether the authors actually reported on those pre-specified outcomes.
In the real world, unfortunately, this check doesn't always happen. This means researchers sometimes cherry-pick results, leaving doctors and patients with a skewed picture of how drugs and devices actually work.
And this is where the Compare group comes in. They're the outcome-switching police. They've been meticulously comparing every newly published clinical trial in the world's top medical journals against the trial's registry entry to see whether the outcomes the researchers pre-specified are the ones they later reported on.
When the group detects outcome switching, the members write a letter to the academic journal pointing out the discrepancy, and then they track how journals respond.
They've now been at it for a few months, so I caught up with Ben Goldacre, the British author and one of the doctors behind the project, to see how journals have been responding to their letters.
So far, they've checked 67 clinical trials. Of those, only nine reported their outcomes perfectly. Across the trials that didn't, they found 301 pre-specified outcomes that were never reported and 357 outcomes that were silently added.
So far, Goldacre said, the responses from journals to these findings have been wildly variable:
BMJ has rapidly issued corrections. NEJM (The New England Journal of Medicine) has dismissed concerns of outcome switching out of hand. JAMA have been friendly but ponderous. Lancet seem to be publishing the correction letters but regarding it as a matter for trialists rather than editors. Annals have been the real surprise for everyone: dismissing concerns, writing error-laden "rebuttals," and even effectively telling trialists that they don't need to worry about replying to corrections on gross misreporting.
You can read, for example, the Compare Project's exchange of 20 letters with NEJM here.
When asked why some journals are more hesitant than others to address the issue, Goldacre said he suspects the project itself demonstrates why outcome switching has persisted for so long.
We do know that people have published prevalence studies demonstrating that misreporting of this kind is very common. They've not really gone into the reasons. I think our project demonstrates why this problem hasn't been fixed. We're eliciting evidence of widespread misunderstanding from editors, but also evidence that many simply don't care about outcome switching. That's fascinating, because they're all listed as "endorsing CONSORT," the guidelines forbidding things like outcome switching, so before our letters and responses, I think everyone assumed journal editors understood the problem and had mechanisms to address it.
He hopes that as the Compare team continues to compile the evidence that outcome switching remains a widespread problem, journals will start to act more meaningfully on their findings.
These are generally good people, with a strong publicly stated commitment to improving reporting standards. I think like a lot of problems in medicine — infection rates in surgery, waiting times for clinics, and so on — you need simple audit and feedback sometimes, as part of a quality improvement cycle. This audit and feedback needs to be presented in ways that are hard to ignore. It's clear that this wasn't happening in any material sense in many journals. I think it's also possible that academic editors on other journals will become more aware of the problem, to solicit organizational change.