
A surprising amount of medical research isn’t made public. That's dangerous.


When the results of clinical trials aren’t made public, the consequences can be dangerous — and potentially deadly.

Consider the case of the anti-depressant Paxil, produced by the drug company SmithKline Beecham (now part of GlaxoSmithKline). GSK got approval from the FDA in 1999 for treatment of depression in adults, but not in teenagers. That meant that while doctors could prescribe the drug to adolescents — a so-called “off label” prescription — GSK could not promote the drug to doctors for that purpose.

But the company did just that, according to criminal and civil complaints filed by the Justice Department and a suit by then-New York Attorney General Eliot Spitzer. What’s more, the Justice Department claimed, GSK selectively and misleadingly released information about three studies it had conducted of the drug: It hired a consulting company to write a journal article that played up evidence from one study suggesting the drug treated pediatric depression better than a placebo, played down stronger evidence from the same study that it did not, and soft-pedaled the side effects.

These side effects included suicidal thoughts and actions.

The company also buried two other studies, the Justice Department noted, in which Paxil had failed to show efficacy in treating depression.

In the end, GSK paid the US government $3 billion in fines for illegal and misleading promotion of Paxil and other drugs, and, in 2004, the FDA required manufacturers to put a “black box” warning label on Paxil and other antidepressants about the potential risks of increased suicidal thoughts and actions when used in children and teenagers.

In 2015, researchers published a second look at the data and clinical study reports underlying the study GSK had relied on for promoting Paxil’s use in adolescents. They affirmed the drug “was ineffective and unsafe in this study.” This was part of a much bigger problem afflicting drug research, they said: “There is a lack of access to data from most clinical randomised controlled trials, making it difficult to detect biased reporting.”

You might think a crisis of that scope, involving teenage suicide and billions of dollars, would rouse the scientific establishment to make sure that the results of all clinical trials be made public. But it didn’t happen. Despite public campaigns, and even legal requirements, many clinical trials still report results publicly late or not at all. What, if anything, will prod researchers — and universities and drug companies — to act?

The issue at stake here isn’t the FDA’s approval process. The FDA makes drugmakers go through an intensive application process before it deems new drugs or medical devices safe and effective. When drug companies seek FDA approval for a drug or device, they aren’t allowed to cherry-pick which results they report. The agency requires that companies submit plans outlining all trials they’ll submit for approval, and scrutinizes the trial results (even conducting its own statistical review). But the FDA does not ensure that all of those trial results also enter the public view.

That means doctors and researchers trying to get a full picture of a drug’s effects are out of luck.

During the Paxil legal battles, there was not yet a law in the United States requiring that clinical trials publicly share their results. What is remarkable is that today there is such a law — yet researchers and companies often ignore it.

Some researchers do share their trial results through journal publications. However, one synthesis of studies on the topic found that between one quarter and one half of clinical trials are never published — or are published only years after the trials end. In that same report, from 2012, new research found that roughly half of all trials funded by the National Institutes of Health remained unpublished 30 months after a trial’s end (though 68 percent were ultimately published at some point). The reasons for delays and non-publication vary, from researchers’ lack of interest in reporting negative results — the infamous “file drawer problem” — to constraints on researchers’ time.

Progress on transparency legislation

The research transparency movement has been gaining steam, but still can’t declare victory. A 1997 federal requirement mandated that researchers register some trials in a public database (those pertaining to serious or life-threatening diseases). Then in 2005, an association of medical journals started requiring that any study published in one of their publications be registered in an online database before the time of first patient enrollment. That didn’t guarantee results would be made public, but it at least provided an incentive to researchers to make some information about the trial available.

A few years later, an even bigger shift occurred. Congress passed the FDA Amendments Act of 2007, which required that “applicable clinical trials” register and publicly report results within one year of trial completion. (The requirement excluded some trials, such as Phase 1 trials of drug safety as opposed to efficacy.) The site ClinicalTrials.gov, run by the National Library of Medicine, had started posting general information about trials in 2000 — so sick people could sign up, for example — but now became the place where those results were posted. And the law included a penalty: Those who failed to report on time could face fines of up to $10,000 per day.

Yet nearly a decade later, it’s clear that many researchers and institutions basically ignore the law. They report trials late or not at all, but the FDA has yet to levy a fine. An investigation by the health journalism organization STAT, published in December 2015, looked at about 9,000 trials across 98 institutions from 2008 to 2015. Among the trials required to report their results, 74 percent of industry-sponsored trials were either reported late or not reported at all. The figure, perhaps surprisingly, was even worse for academic institutions: 90 percent late or unreported.

By STAT’s calculation, if the FDA had enforced the law using the $10,000-per-day fine, it could have collected over $25 billion since 2008, funding the agency several times over.
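For a rough sense of how a penalty on that scale adds up, here is a back-of-envelope sketch. The trial count and average delay below are illustrative placeholders, not STAT’s actual inputs; only the $10,000-per-day figure comes from the law.

```python
# Back-of-envelope illustration only: the trial count and average delay are
# hypothetical placeholders, not figures from the STAT investigation.
FINE_PER_DAY = 10_000            # maximum penalty under the 2007 FDA Amendments Act

noncompliant_trials = 3_500      # assumed number of trials reported late or never
average_days_overdue = 730       # assumed average delay of roughly two years

potential_fines = noncompliant_trials * average_days_overdue * FINE_PER_DAY
print(f"Potential fines: ${potential_fines:,}")  # about $25.6 billion with these inputs
```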

And the thrust of STAT’s conclusions has been echoed by other investigations. (After the Paxil episode, GSK, for its part, has been posting trial results to the company website; it also fares better than many other companies and institutions in several recent transparency scorecards.)

A medical culture too comfortable with non-publication and non-reporting

Why hasn’t the FDA enforced the 2007 law on publicizing results, and why hasn’t it levied financial penalties?

One reason, according to several of the people I spoke with, including Deborah Zarin, director of ClinicalTrials.gov, is that the 2007 law was ambiguous about some of its requirements, including which trials were subject to the law.

Jennifer Miller, founder of Bioethics International, agrees that some researchers have been, at least till very recently, uncertain about whether the 2007 law applied to their trial. The language used in the law to describe applicable studies included the phrase “controlled clinical trials,” and there was some uncertainty about which trials would count as “controlled.” “How can you impose fines on an ambiguous law?” Miller said.

Researchers I spoke to emphasized, however, that reporting clinical trial results is not just a legal issue: It’s an ethical matter, too. Regardless of the law, shouldn’t reporting results be part of the culture of doing clinical trials?

If so, there’s a problem with the current culture. Researchers are rewarded primarily for publishing as much as possible in the highest-ranked journals that they can, says Joseph Ross, an associate professor of medicine at Yale and an associate editor at JAMA Internal Medicine. “There’s no clear incentive for investigators to have a member of their staff do everything required by ClinicalTrials.gov. It gets deprioritized because it is a substantial amount of work, and investigators don’t put it at the top of their list.”

Competition may play a role. Someone who is running a trial might think: “My competitor has similar molecules in the pipeline, why should I tell them why it failed so that they don’t pump money into it?” says Tomasz Sablinski, co-founder of the drug development firm Transparency Life Sciences, who was previously with the pharmaceutical company Novartis.

How can the norms be changed, so that researchers and institutions feel an internal commitment to reporting results? Steven Goodman, an associate dean and professor of medicine at Stanford, notes that it will be important for institutions to educate researchers on how to report results and to pay for staff support.

AllTrials, a nonprofit organization founded by medical doctor and public intellectual Ben Goldacre, took on the mission of pushing for clinical trial transparency. AllTrials, which started in the UK and also has a campaign in the US, thinks the laws don’t go far enough: None of the regulations governing clinical trial reporting require sharing results retroactively (that is, from trials completed before the laws were passed), which leaves many results for already-approved drugs unreported.

Goldacre also collaborated with a web developer and scientist, Anna Powell-Smith, to create the automatically updated Trials Tracker. The tracker scans ClinicalTrials.gov and PubMed to identify how many clinical trials have been reported by companies and institutions with 30 clinical trials or more. After working on transparency for many years, Goldacre believes “naming and shaming” is the main thing that will really grab the attention of those who haven’t reported their trials.
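To give a flavor of the kind of tally such a tracker performs (group trials by sponsor, flag those past their reporting deadline with no posted results, and rank the worst offenders), here is a simplified sketch. It is not the Trials Tracker’s actual code; the input file and its column names are hypothetical stand-ins for real registry data.

```python
# Simplified sketch of a "name and shame" tally in the spirit of the Trials Tracker.
# trials.csv and its column names are hypothetical stand-ins for real registry data.
import csv
from collections import defaultdict
from datetime import date, timedelta

DEADLINE = timedelta(days=365)   # results are due within one year of trial completion
MIN_TRIALS = 30                  # only rank sponsors with 30 or more trials

totals = defaultdict(lambda: {"trials": 0, "overdue": 0})

with open("trials.csv", newline="") as f:
    for row in csv.DictReader(f):
        sponsor = row["sponsor"]
        completed = date.fromisoformat(row["completion_date"])
        has_results = row["results_posted"] == "yes"
        totals[sponsor]["trials"] += 1
        if not has_results and date.today() - completed > DEADLINE:
            totals[sponsor]["overdue"] += 1

for sponsor, t in sorted(totals.items(), key=lambda kv: -kv[1]["overdue"]):
    if t["trials"] >= MIN_TRIALS:
        print(f"{sponsor}: {t['overdue']} of {t['trials']} trials overdue")
```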

Momentum seems to be gathering, although the Trump administration’s commitment to the cause remains uncertain. In September 2016, Health and Human Services, which oversees the FDA, issued a "final rule" clarifying and expanding the requirements of the 2007 law: It specifies what was meant by “controlled clinical trials,” among other things. (“All interventional studies with prespecified outcome measures.”) The rule also expands the scope of the requirement to include results from certain trials of new drugs and devices which haven’t yet been approved by the FDA.

The National Institutes of Health (NIH) also announced a policy in September 2016 requiring that all its grant recipients publicly report their clinical trial results. The NIH policy and HHS final rule took effect on January 18. Will the organizations ramp up pressure to comply with the law, and will researchers take this obligation seriously? It’s too soon to say.

The obligation to research participants

One reason to care about whether clinical trial results are shared is that hundreds of thousands of patients have put themselves on the line as research subjects. We owe it to them not to let the information their participation enabled get stuck in a file drawer.

“If we made a pact with a person to enter into this experiment, then we have an ethical and scientific obligation to have the results out there, no matter what happened,” said Stanford’s Goodman.

Everyone who conducts a clinical trial should report their results, whatever the outcome. It’s the law, and it’s past time that it was followed. When researchers fail to do so, we should point that out early and often — for the sake of public health.

Stephanie Wykstra is a freelance writer and consultant with a focus on research transparency. She has recently worked with nonprofits including AllTrials USA and Robert Wood Johnson Foundation. Twitter: @Swykstr.


The Big Idea is Vox’s home for smart discussion of the most important issues and ideas in politics, science, and culture — typically by outside contributors. If you have an idea for a piece, pitch us at thebigidea@vox.com.
