Sixty-year-old vials of live smallpox discovered in a lab, workers accidentally exposed to anthrax, and a deadly strain of H5N1 bird flu unintentionally shipped to another lab: these are just a few of the recent safety slip-ups by federal scientists in the US.
These potentially deadly accidents have reignited a concern: If these labs, reputed to be among the safest in the world, can make such hazardous mistakes, what about all the others?
In recent years, researchers from around the world have been asking just that question, focusing in particular on "gain of function" research, which involves adding mutations to viruses to see whether they gain functions—increased transmissibility, drug resistance—that they don't already have.
The objective of this work is to give scientists a head start on potentially deadly mutations that might eventually occur in nature. But what if these deadly pathogens escape? Some argue that the latest lab screw-ups demonstrate the need for a moment of sober reflection.
"If we're working with dangerous lethal agents, we need to have a higher standard of safety," said Dr. Michael Osterholm, a leading biosecurity expert and director of the Center for Infectious Disease Research and Policy at the University of Minnesota. "This isn't just a food-borne illness transmitted by dirty hands. These are viruses that can be highly infectious and lethal."
With nearly 20 other concerned scientists, he recently co-founded the "Cambridge Working Group" in response to the lab accidents. "Such incidents have been accelerating and have been occurring on average over twice a week with regulated pathogens in academic and government labs across the country," their website reads. The worry? That the latest anthrax, smallpox, and bird flu mix-ups won't be isolated.
Osterholm and I spoke Thursday to get some context on the lab accidents and why they might ultimately be good for science. What follows is a transcript of our discussion, lightly edited for clarity and length.
Julia Belluz: When you first heard about the incidents at our federal labs over these past few months, what were you thinking?
Michael Osterholm: I thought it was a wake-up call that these things can happen. Some people might be shocked or amazed. But I think for many of us, it's a situation where we have understood for some time that lab safety is a priority, and that the quality of safety practices varies among labs around the world. It's not just a CDC issue; it's a much bigger issue.
JB: We were, in a sense, lucky that no one was hurt and no one died as a result of these accidents. So what is the actual risk that lapses in lab safety could harm not only lab technicians but society at large?
MO: We don't know what the risks are. No risk-benefit analysis has ever been conducted. That's the purpose of the Cambridge Working Group. We need a comprehensive risk-benefit tool so we can accurately and consistently measure the risks and benefits of research and make our decisions based on that.
The recent problems in and of themselves haven't fundamentally changed the underlying argument that was made all along: The safety of labs in this country really needs a risk-benefit analysis.
JB: So you're saying that when research with deadly pathogens is done today, no such risk-benefit analysis happens?
MO: Basically, if it's US federally-funded work, there is an initial review done by an inter-governmental group. We don't know who does it, how they review it, or what they do. There's no agreed-upon risk-benefit approach here. If it's not US-funded, nobody reviews it. And if it's done in the US using private resources, again, no one reviews it. So no one has come up with a comprehensive risk-benefit analysis to use.
JB: Can you describe some of the potential gains and costs of this potentially lethal research?
MO: The benefits could be science that has an impact on vaccines, on antivirals, on surveillance, on our understandings of basic science. Assessing benefits would involve looking at questions such as: 'Is this research going to benefit the influenza vaccine, and how?' 'What is the likelihood?' 'Would it really provide us with an opportunity for new and better drugs?'
The risks questions are: 'Would the virus ever get out?' 'Could it cause disease in the community?' 'Could it spread to become a national and international event?' 'If it is out, can you stop it?' 'How quickly would you have to detect the virus to stop it?' We need to have a legitimate means of doing that and one that is reproducible.
Also, when you're doing (gain of function) work, once you've published the results, you're sharing the detailed genomic information that goes into creating these new enhanced viruses. It enables anyone around the world to do the same work. That's a real concern for me. So even if the lab doing the work is under tight scrutiny and adheres to the finest lab safety protocols, what happens if they publish, and some lab in 'whereverland' decides to do this work under completely opposite conditions? This is like putting a detailed description of a kind of bomb online.
JB: Would you ever say that gain of function research is just too risky to do?
MO: I reject the idea that this is a 'yes/no' proposition. Some can be very helpful, but it needs to be analyzed on a case-by-case basis.
JB: Who should take the next step to address this regulatory gap? Who should be conducting these analyses that would get us a better understanding of what's at stake?
MO: This needs to be a partnership between the international community and the US. Within the US, it needs to be a partnership between government and the private sector. On the government side, this should include a partner like the Department of Health and Human Services, whether it's the NIH, CDC, or any number of groups that need to come to the table. We're in a new day when people are beginning to realize how important that might be.
Right now, we don't know what the risk is. One of the really fortunate things in (the latest CDC mishaps) is that no one was injured. We're all pleased that that's the case. But it illustrates the point that a release could occur. We are hopeful that lab safety standards can be improved all over the world, but no one is naive enough to suggest that this will lead to an overnight change in the culture of lab safety.