
How deadly pathogens have escaped the lab — over and over again

Research into dangerous viruses and bacteria is important, but for the deadliest pathogens, it’s not clear the benefits are worth the risks.

A scientist holds up PCR tubes used for the analysis of the 2009 H1N1 (“swine flu”) outbreak.
Andreas Rentz/Getty Images

In 1977, the last case of smallpox was diagnosed in the wild.

The victim was Ali Maow Maalin of Somalia. The World Health Organization tracked down every person he’d been in face-to-face contact with to vaccinate everyone at risk and find anyone who might have caught the virus already. Thankfully, they found no one had. Maalin recovered, and smallpox appeared to be over forever.

That moment came at the end of a decades-long campaign to eradicate smallpox — a deadly infectious disease that killed about 30 percent of those who contracted it — from the face of the earth. Around 500 million people died of smallpox in the century before it was annihilated.

But in 1978, the disease cropped back up — in Birmingham, in the United Kingdom. Janet Parker was a photographer at Birmingham Medical School. When she developed a horrifying rash, doctors initially brushed it off as chicken pox. After all, everyone knew that smallpox had been chased out of the world — right?

Parker got worse and was admitted to the hospital, where testing determined that she had smallpox after all. She died of it a few weeks later.

How did she get a disease that was supposed to have been eradicated?

It turned out that the building that Parker worked in also contained a research laboratory, one of a handful where smallpox was studied by scientists who were trying to contribute to the eradication effort. Some papers reported that the lab was badly mismanaged, with important precautions ignored because of haste. (The doctor who ran the lab died by suicide shortly after Parker was diagnosed.) Somehow, smallpox escaped the lab to infect an employee elsewhere in the building. Through sheer luck and a rapid response from health authorities, including a quarantine of more than 300 people, the deadly error didn’t turn into an outright pandemic.

Could something like that happen today?

All over the world, bio research labs handle deadly pathogens, some with the potential to cause a pandemic. Sometimes, researchers make pathogens even deadlier in the course of their research (as Science Magazine reported last month, the US government just approved two such experiments after years of keeping them on hold).

Research into viruses can help us develop cures and understand disease progression. We can’t do without this research. But on a few notable occasions, it’s gone dangerously wrong and even killed people.

Reviewing the incidents, it looks like there are many different points of failure: machinery that's part of the containment process malfunctions; regulations aren't sufficient or aren't followed; human error means live viruses are handled instead of dead ones.

Sometimes, these errors could be deadly. “If an enhanced novel strain of flu escaped from a laboratory and then went on to cause a pandemic, then causing millions of deaths is a serious risk,” Marc Lipsitch, a professor of epidemiology at Harvard, told me.

It’s not that there’s a high rate of mistakes in these labs; the rate of mistakes is actually quite low. But it’s one thing to run a one-in-thousands chance of a mistake that kills a handful of people — roughly the odds we face over a lifetime of driving. It’s another thing entirely to accept a similar probability of killing millions of people.

The cost-benefit analysis for pathogens that might kill the people exposed or a handful of others is vastly different from the cost-benefit analysis for pathogens that could cause a global pandemic — but our current procedures don’t really account for that. As a result, we’re running unacceptable risks with millions of lives.

How pathogens can find their way out of the lab

The US government controls research into “select agents and toxins” that pose a serious threat to human health, from bubonic plague to anthrax. There are 66 select agents and toxins regulated under the program and nearly 300 labs approved to work with them. Researching pathogens and toxins allows us to develop vaccines, diagnostic tests, and treatments. New biology techniques also allow for more controversial forms of research, including making diseases more virulent or more deadly to anticipate how they might mutate in the wild.

So this research can be really important — a critical part of public health efforts. Unfortunately, the facilities that do such work can also be plagued by a serious problem: human error.

The 1978 smallpox death was, most analyses found, caused by carelessness — poor lab safety procedures and badly designed ventilation. Most people would like to think that we’re not so careless today. But scary accidents — caused by human error, software failures, maintenance problems, and combinations of all of the above — are hardly a thing of the past.

In 2014, as the Food and Drug Administration (FDA) did cleanup for a planned move to a new office, hundreds of unclaimed vials of virus samples were found in a cardboard box in the corner of a cold storage room. Six of them, it turned out, were vials of smallpox. No one had been keeping track of them; no one knew they were there. They may have been there since the 1960s.

Panicked scientists put the materials in a box, sealed it with clear packaging tape, and carried it to a supervisor’s office. (This is not approved handling of dangerous biological materials.) It was later found that the integrity of one vial was compromised — luckily, not one containing a deadly virus.

In a lengthy report on how the incident happened, the FDA found persistent, horrifying shortcomings in the handling of these incredibly dangerous materials. Among them:

The security and inventory control of orphaned biological materials (material whose owner departed the lab, but did not properly remove, destroy, or transfer the material to a new owner) was not maintained.

...

FDA did not follow the CDC Select Agent Guidelines for the packaging and transfer of samples to a high containment facility for securing the materials.

...

FDA did not conduct a complete inventory of all of its laboratories and associated spaces when smallpox was eradicated in 1980 and all biological agents that cause smallpox were consolidated under the WHO Collaborating Centre repositories at the CDC. FDA also did not conduct a complete inventory when the Federal Select Agent Program was enacted in 2003.

The blizzard of dangerous errors over only a few months in 2014, and the additional errors uncovered by subsequent investigations, inspired the US government to change its practices. The government called on all labs that handle select agents to immediately improve their inventory policies and review their procedures, and to provide written documentation that they’d done so. It launched government-wide reviews to better understand how to safely regulate pandemic pathogens. The FDA began providing better training and conducting periodic audits to make sure that the safety procedures that were ignored in this case are being followed.

The 1978 and 2014 incidents grabbed attention because they involved smallpox, but incidents of unintended exposure to controlled biological agents are actually quite common. Hundreds of incidents occur every year, though not all involve potentially pandemic pathogens.

In 2014, a researcher accidentally contaminated a vial of a fairly harmless bird flu with a far deadlier strain. The deadlier bird flu was then shipped across the country to a lab that didn’t have authorization to handle such a dangerous virus, where it was used for research on chickens.

The mistake was discovered only when the Centers for Disease Control and Prevention (CDC) conducted an extensive investigation in the aftermath of a different mistake — the potential exposure of 75 federal employees to live anthrax, after a lab that was supposed to inactivate the anthrax samples accidentally prepared activated ones.

The CDC’s Select Agents and Toxins program requires that “theft, loss, release causing an occupational exposure, or release outside of primary biocontainment barriers” of agents on its watchlist be immediately reported. Between 2005 and 2012, the agency got 1,059 release reports — an average of an incident every few days. Here are a few examples:

  • In 2008, a sterilization device malfunctioned and unexpectedly opened, exposing a nearby unvaccinated worker to undisclosed pathogens.
  • In 2009, a new high-security bio research facility, rated to handle Ebola, smallpox, and other dangerous pathogens, had its decontamination showers fail. The pressurized chamber kept losing pressure and the door back into the lab kept bursting open while the scientists leaned against it to try to keep it closed. Building engineers were eventually called to handle the chemical showers manually.
  • In 2011, a worker at a lab that studied dangerous strains of bird flu found herself unable to shower after a construction contractor accidentally shut off the water. She removed her protective equipment and left without taking a decontaminating shower. (She was escorted to another building and showered there, but pathogens could have been released in the meantime.)

Now, the vast majority of these mistakes never infect anyone. And while 1,059 is an eye-popping number of accidents, it actually reflects a fairly low rate of accidents — working in a controlled biological agents lab is safe compared to many occupations, like trucking or fishing.

But a trucking or fishing accident will, at worst, kill a few dozen people, while a pandemic pathogen accident could potentially kill a few million. Considering the stakes and worst-case scenarios involved, it’s hard to look at those numbers and conclude that our precautions against disaster are sufficient.

“We have to work with these flu viruses, that is how we can understand them,” Michael Osterholm, director of the Center for Infectious Disease Research and Policy at the University of Minnesota, told Scientific American after the string of 2014 containment disasters. “What’s more important is we have to be able to do this safely. That’s really the key piece. We don’t want to stop this work.”

The challenges of safe handling of pathogens

Why is running labs without such errors so hard?

A look at the CDC’s records of Select Agent containment failures helps answer that question. Errors come from many directions. With worrying frequency, people handle live viruses thinking they’ve been given deactivated ones. Technology that’s a critical part of the containment process can fail unexpectedly. It’s not that there’s a single “problem” piece of technology — it’s that so many pieces are part of the containment process, and all of them carry some small risk of failing. We can guard against showers depressurizing and sterilization equipment bursting open, but many other pieces of hardware are critical to containment, and they too might fail in obscure ways under the wrong conditions.

These problems don’t just occur in the US. In the United Kingdom, a recent investigation found:

more than 40 mishaps at specialist laboratories between June 2015 and July 2017, amounting to one every two to three weeks. Beyond the breaches that spread infections were blunders that led to dengue virus — which kills 20,000 people worldwide each year — being posted by mistake; staff handling potentially lethal bacteria and fungi with inadequate protection; and one occasion where students at the University of the West of England unwittingly studied live meningitis-causing germs which they thought had been killed by heat treatment.

Severe acute respiratory syndrome, or SARS, caused an outbreak in 2003. It hasn’t recurred in the wild since, but the virus has escaped the lab in six separate incidents: once in Singapore, once in Taiwan, and four times at one lab in Beijing.

“These narratives of escaped pathogens have common themes,” argued an analysis of biocontainment failures by medical historian Martin Furmanski in the Bulletin of the Atomic Scientists. “There are unrecognized technical flaws in standard biocontainment, as demonstrated in the UK smallpox [case]. ... The first infection, or index case, happens in a person not working directly with the pathogen that infects him or her, as in the smallpox and SARS escapes. Poor training of personnel and slack oversight of laboratory procedures negate policy efforts by national and international bodies to achieve biosecurity, as shown in the SARS and smallpox escapes.”

It’s easy to see why these problems are hard to address. Adding more rules for those handling pathogens won’t help if the people infected are usually not the ones handling the pathogens. Adding more federal and international regulations won’t help if the regulations aren’t consistently followed. And if there are still unrecognized technical flaws in the standards for biocontainment, how would we know until an incident made those flaws apparent?

This is a worry that’s recently back in the news because the US government has approved research aimed at making certain deadly influenza viruses more transmissible — that is, making it easier for them to spread from person to person. The researchers involved want to learn more about transmissibility and virulence, in order to better equip us to combat these diseases. The labs conducting such research have taken unusual steps to ensure their safety and to reduce the risk of an outbreak.

But have they reduced it enough? “We imagine that when there’s an accident, it’s because a ventilation system fails or someone just forgets to do something, or that it’s sort of avoidable mechanical or human error,” Lipsitch told me.

Yet many of the recent failures don’t fit that pattern. “Rather, it was people doing something that they thought was the right thing and was neutralizing a dangerous pathogen by killing it, and in fact they still had some dangerous pathogen or contamination with a dangerous pathogen,” he said. “My concern is not really that one of these people will do something that’s foolish or reflects poor training. My concern is that there’ll be human error of the kind that’s not really avoidable.”

Lipsitch does not think we should tighten standards for most research. He argues that our current approach, while its error rate will never be zero, strikes a good balance between scientific and global health concerns and safety — that is, for most of the pathogens biologists research. But for the most dangerous pathogens, the ones with the potential to spark a global pandemic, he points out that this calculus doesn’t hold.

The influenza pandemic of 1918 killed 50 million people. Models of how influenza spreads suggest that an escape from containment by an influenza virus might not be contained in the local community where the incident occurred, as Birmingham’s smallpox outbreak thankfully was. Is it sufficient for procedures to look airtight on paper, when the stakes are so high? Are there any labs in the world that we can confidently expect to be sufficiently careful with pathogens — to never, ever make the mistakes that other labs have made thousands of times despite high standards of caution?

So far, too much biosecurity policy has been reactive — tightening standards after something goes wrong. Given how badly things can go wrong, that’s not good enough. It’ll be exceptionally challenging to make our labs safer, but when it comes to the riskiest pathogens, we simply have to be up to the challenge.
