Part of Pandemic-Proof, Future Perfect’s series on the upgrades we can make to prepare for the next pandemic.
Decades ago, when the world first agreed on the norms and guidelines in the Biological Weapons Convention (BWC), designing and producing biological weapons was expensive and difficult. The Soviet Union had a large program, which is suspected to have led to the accidental release of at least one influenza virus that caused tens of thousands of deaths. But the Soviets appear never to have fielded anything deadlier than what nature came up with.
Terrorist groups engaged in biological terrorism — like the Aum Shinrikyo cult, which launched a botched bioattack in Japan in 1993 — have so far largely been unable to improve on anthrax, a naturally occurring pathogen that is deadly to those who inhale it but isn’t contagious and won’t circulate the globe the way a pandemic disease can.
But our ability to engineer viruses has grown by leaps and bounds in recent years, thanks in part to the rapidly falling price of DNA sequencing and DNA synthesis technologies. Those advances have opened the door to innovations in medicine, but they also present a challenge: Viruses as deadly and disruptive as Covid-19, or potentially much worse, are going to be possible to produce in labs worldwide soon, if not right now.
To prevent pandemics that could be far worse than Covid-19, the world has to dramatically change its approach to managing global biological risks. “Amateur biologists can now accomplish feats that would have been impossible until recently for even the foremost experts in top-of-the-line laboratories,” argued Barry Pavel, a national security policy director at the Atlantic Council, and his co-author Vikram Venkatram.
Avoiding a catastrophe in the coming decades will require us to take the risks of human-caused pandemics far more seriously, by doing everything from changing how we do research to making it harder for people to “print” themselves a copy of a deadly virus.
Covid-19 was a warning shot for how fast a pandemic disease can spread around the world, and how ill-equipped we are to protect ourselves from a truly killer virus. If the world takes that warning shot seriously, we can insulate ourselves against the next pandemic, be it naturally occurring or human-made. With the right steps, we could even make ourselves “highly resistant if not immune to human-targeted biological threats,” MIT biologist Kevin Esvelt told me.
But if we ignore the threat, the consequences could be devastating.
Lab origins of pathogens, explained
It isn’t known for certain whether the virus that caused Covid-19 was an accidental release from the Wuhan Institute of Virology (WIV), which was studying similar coronaviruses, or a far more common “zoonotic spillover” from an animal in the wild. An analysis by the US intelligence community found both possibilities plausible. A pair of preprint studies published in 2022 pointed toward a live animal market in Wuhan as the origin of the first outbreak. And recent reporting in Vanity Fair spotlighted risky and reckless research modifying coronaviruses in the lab to study whether they would infect humans more easily, and detailed how the scientists conducting such research closed ranks to ensure their work was not blamed for the pandemic.
The reality is we may never know for sure. It can take years to conclusively trace back a zoonotic disease to its animal source, and China has made it clear it won’t cooperate with further investigations that could clarify any role WIV research may have played in Covid’s origin, however inadvertently.
Whatever chain of events caused Covid-19, we already know that infectious disease outbreaks can originate in a lab. In 1978, a year after the final reported cases of smallpox in the wild, a lab leak caused an outbreak in the UK. Medical photographer Janet Parker died, while her mother got a mild case and recovered; more than 500 people who’d been exposed were vaccinated. (Smallpox vaccination can protect against the disease even after an exposure.) Only that quick, large-scale response prevented what could have been a full-blown recurrence of the nearly eradicated disease.
That’s not our only close brush with the return of smallpox, a disease that killed an estimated 300 million people in the 20th century alone. Six unsecured smallpox vials were discovered sitting in a refrigerator in the US National Institutes of Health (NIH) in 2014, having been forgotten there for decades among 327 vials of various diseases and other substances. One of the vials had been compromised, the FDA found — thankfully not one of the ones containing smallpox or another deadly disease.
Other diseases have been at the center of similar lab mishaps. In March 2014, a Centers for Disease Control and Prevention (CDC) researcher in Atlanta accidentally contaminated a vial of a fairly harmless bird flu with a far deadlier strain. The contaminated virus was then shipped to at least two different agricultural labs. One noticed the mistake when their chickens sickened and died, while the other was not notified for more than a month.
The mistake was communicated to CDC leadership only when the CDC conducted an extensive investigation in the aftermath of a different mistake — the potential exposure of 75 federal employees to live anthrax, after a lab that was supposed to inactivate the anthrax samples accidentally prepared live ones.
After the SARS outbreak of 2002–03, there were six separate incidents of SARS infections resulting from lab leaks. Meanwhile, last December, a researcher in Taiwan caught Covid-19 at a moment when the island had been successfully suppressing outbreaks, going without a domestic case for more than a month. Retracing her steps, Taiwan authorities suspected she’d caught the virus from a bite by an infected mouse in a high-security biology lab.
“The fact is that laboratory accidents are not rare in life sciences,” former Sen. Joe Lieberman told the bipartisan Commission on Biodefense this March. “As countries throughout the world build additional laboratories to conduct research on highly infectious and deadly pathogens, it’s clear that the pace of laboratory accidents will naturally increase.”
According to research published last year by biosecurity researchers Gregory Koblentz of George Mason University and Filippa Lentzos of King’s College London, there are now nearly 60 labs classified as BSL-4 — the highest biosecurity rating, for labs authorized to carry out work with the most dangerous pathogens — either in operation, under construction, or planned in 23 different countries. At least 20 of those labs have been built in the last decade, and more than 75 percent are located in urban centers where a lab escape could quickly spread.
Alongside the near certainty that there will be more lab escapes in the future, engineering the viruses that could conceivably cause a pandemic if they escaped is getting cheaper and easier. That means it’s now possible for a single lab or small group to conceivably cause mass destruction across the whole world, either deliberately or by accident.
“Potential large-scale effects of attempted bioterrorism have been mitigated in the past by terrorists’ lack of expertise, and the inherent challenge of using biotechnology to make and release dangerous pathogens. Now, as people gain greater access to this technology and it becomes easier to use, the challenge is easing,” Pavel argues. The result? “Incidents of bioterrorism soon will become more prevalent.”
Dangerous research and how to combat it
The BWC, which went into force in 1975, was the first international treaty to ban the production of an entire category of weapons of mass destruction.
Developing, producing, or stockpiling bioweapons was made illegal for the 183 nations that are party to the treaty. The treaty also required nations to destroy or make peaceful use of any existing bioweapons. As then-President Richard Nixon put it in 1969 when he announced the US would abandon any offensive bioweapons work of its own, “Mankind already carries in its own hands too many of the seeds of its own destruction.”
But the BWC is underfunded and little-prioritized despite the magnitude of the threat biological weapons pose. It has just a few staff members running its implementation support unit, compared to hundreds at the Organisation for the Prohibition of Chemical Weapons, the body that implements the Chemical Weapons Convention, and a budget smaller than that of the average McDonald’s franchise. The US could easily bolster the BWC significantly with a relatively small funding commitment, and should absolutely do so.
And despite the treaty’s broad aims, much of the work to identify dangerous pathogens that could potentially act as bioweapons is still ongoing — not as part of Cold War-era covert programs deliberately designed to create pathogens for military purposes, but through well-intentioned programs to study and learn about viruses that have the potential to cause the next pandemic. That means the Biological Weapons Convention does little to constrain much of the research that now poses the greatest risk of future biological weapons use, even if the release of those viruses would be entirely inadvertent.
One such type of science is what’s called “gain of function” research, in which researchers make viruses more transmissible or more deadly in humans as part of studying how those viruses might evolve in the wild.
“I first heard about gain of function research in the 1990s, only then we had a different term for it: biological weapons research and development,” Andy Weber, former assistant secretary of defense for nuclear, chemical, and biological defense programs in the Obama administration and now a senior fellow at the Council on Strategic Risks, told me. “The intent is 180 degrees off — NIH is trying to save the world from pandemics — but the content is almost entirely overlapping.”
The status of gain of function research has been hotly contested over the last decade. In 2014, after the series of scary lab safety and containment failures I outlined above and after revelations of alarming gain of function work on bird flu, the NIH, which funds much of the cutting-edge biology research worldwide, imposed a moratorium on gain of function work on pathogens with pandemic potential like influenza or SARS. But in 2017, the moratorium was lifted with little public explanation.
Right now, the US is funding gain of function work at a few select laboratories, despite the objections of many leading biologists who argue that the limited benefits of this work aren’t worth the costs. In 2021, a bill was introduced to prohibit federal research grants that fund gain of function research on SARS, MERS, and influenza viruses.
Beyond the risk that a virus strengthened through gain of function work might accidentally escape and trigger a larger outbreak — which is one theory, albeit unproven, for how Covid-19 began — it can be hard to differentiate legitimate, if risky, research from deliberate efforts to create malign pathogens. “Because of our government support for this risky gain of function research, we’ve created the perfect cover for countries that want to do biological weapons research,” Weber told me.
The No. 1 thing he’d recommend to prevent the next pandemic? “Ending government funding for risky research that plausibly could have caused this and future pandemics.”
Another potentially risky area of virology research involves identifying animal species that act as reservoirs of viruses that have the potential to cross over into humans and cause a pandemic. Scientists involved in this work go out to remote areas to take samples of those pathogens with dangerous potential, bring them back to the lab, and determine whether they might be able to infect human cells. This is precisely what researchers at the WIV apparently did in the years leading up to Covid-19 as they searched for the animal source of the original SARS virus.
Such work was advertised as a way to prevent pandemic-capable pathogens from crossing over into humans, but it was largely useless when it came time to fight SARS-CoV-2, Weber says. “After having done this work for 15 years, I think there’s little to show for it,” Weber told me. That’s not the only view within the virology community, but it’s not a rare one. Weber thinks Covid-19 should lead to a rethinking. “As the intelligence community concluded, it’s plausible that it actually caused this pandemic. It was of zero help in preventing this pandemic or even predicting this pandemic.”
There’s certainly a place for work identifying viruses at the wildlife-human boundary and preventing spillover, but the limited track record of viral discovery work has many experts questioning whether the current approach is a good idea. They argue that the benefits have been overstated while the potential harms have been undercounted.
At every stage of the process, such research generates the possibility of causing the animal-human spillover that the scientists intend to study and prevent. And the end result — a detailed list of all of the pathogens that researchers have identified as incredibly dangerous if released — is a gift to biological weapons programs or to terrorists.
Thanks to improvements in DNA synthesis technology, once you have the digital RNA sequence for a virus, it’s relatively straightforward to “print” the sequence and create your own copy of the virus (more on this below). These days, “there is no line between identifying a thing as pandemic capable and it becoming available as a weapon,” Esvelt told me.
The good news? It shouldn’t be hard for policymakers to change course on dangerous research.
The NIH funds a large share of biology research globally, and a renewed NIH ban on funding dangerous research would significantly reduce how much of that dangerous work takes place. If the US adopts firm and transparent policies against funding research into making pathogens deadlier or identifying pandemic-capable pathogens, it will be easier to exercise the global leadership needed to discourage that work in other countries.
“China funds this research too,” Esvelt told me. It might be that, spooked by Covid-19, they’re open to reconsidering, but “if we don’t stop, it’s going to be really hard to talk to China and get them to stop.”
All of that amounts to a simple prescription for policymakers: Stop funding dangerous research, and then build the scientific and policy consensus necessary to get other nations to also stop funding such research.
Behind that simple prescription lies a great deal of complexity. Many discussions of whether the US should be funding dangerous research have run aground in technical arguments over what counts as “gain of function” work — as if the important thing is scientific terminology, not whether such research might trigger a pandemic that could kill millions of people.
“94% of countries have no national-level oversight measures for dual-use research, which includes national laws or regulation on oversight, an agency responsible for the oversight, or evidence of a national assessment of dual-use research,” a 2021 report by the Johns Hopkins Center for Health Security and the Nuclear Threat Initiative found.
And if an engineered pathogen were ever released, deliberately or by accident, the result could be as bad as or worse than anything nature can cook up. That’s precisely what happened in a pandemic simulation put on in 2018 by the Johns Hopkins Center for Health Security. In the fictional scenario, a terror group modeled on Aum Shinrikyo engineers a virus that combines the high transmissibility of parainfluenza — a family of viruses that generally cause mild symptoms in young children — with the extreme virulence of the Nipah virus. The result is a supervirus that in the exercise eventually kills 150 million people around the world.
DNA synthesis and how it changes the bioweapons calculus
“Advances in synthetic biology and biotechnology make it easier than ever before to make pathogens more lethal and transmissible, and advances in the life sciences are occurring at a pace that governments have been unable to keep up with, which increases the risk of deliberate or accidental releases of dangerous pathogens,” Lieberman told the bipartisan Commission on Biodefense in March.
One of the most exciting recent areas of progress in biology has been the increasing ease of DNA synthesis — the ability to “print” DNA (or RNA, which makes up the genetic material of viruses like influenza, coronaviruses, measles, or polio) from a known sequence. It used to be that creating a specifically desired DNA sequence was incredibly expensive or impossible; now, it is much more straightforward and relatively cheap, with multiple companies in the business of providing mail-order genes. While scientific skill is still very much required to produce a virus, it is nowhere near as expensive as it used to be, and can be done by a much smaller team.
This is great news; DNA synthesis enables a great deal of important and valuable biology research. But progress in DNA synthesis has been so fast that coordination against dangerous actors who might misuse it has lagged.
The obvious safeguard is for DNA synthesis companies to screen every ordered sequence against a list of known dangerous sequences before filling the order. But that requires someone to maintain a list of known dangerous sequences — which is itself something bad actors could use to cause harm. The result is an “information hazard,” what the existential risk scholar Nick Bostrom defines as “risks that arise from the dissemination or the potential dissemination of true information that may cause harm or enable some agent to cause harm.”
“DNA is an inherently dual-use technology,” James Diggans, who works on biosecurity at the industry-leading synthetic DNA provider Twist Bioscience, told me in 2020. What that means is DNA synthesis makes fundamental biology research and lifesaving drug development go faster, but it can also be used to do research that can be deadly for humanity.
That’s the conundrum that biosecurity researchers — in industry, in academia, and in the government — are faced with today: trying to figure out how to make DNA synthesis faster and cheaper for its many beneficial uses while ensuring every printed sequence is screened and hazards are appropriately handled.
If that sounds like a challenging problem now, it’s only likely to get worse in the future. As DNA synthesis gets ever cheaper and easier, many researchers anticipate the creation of tabletop synthesizers that would allow labs to simply print their own DNA as needed for their research, no middleman needed. Something like a tabletop synthesizer could make for awesome progress in biology — and worsen the challenge of preventing bad actors from printing out dangerous viruses.
And as DNA synthesis gets cheaper, screening for dangerous sequences becomes a larger percentage of the cost, and so the financial advantage of cutting corners on screening could become bigger, as companies that don’t do screening may be able to offer considerably lower prices.
Esvelt and the team he works with — which includes US, EU, and Chinese researchers — have developed a framework for a potential solution. They want to maintain a database with hashes of deadly and dangerous sequences — mathematically generated strings that correspond uniquely to each sequence, but can’t be reverse-engineered to learn the dangerous original sequence if you don’t already know it. That will allow checking sequences against a list of deadly ones without risking anyone’s privacy and intellectual property, and without maintaining a public list of deadly sequences that a terror group or bioweapons program could use as a shopping list.
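To make the hashing idea concrete, here is a minimal sketch in Python — a hypothetical illustration of the general approach, not the actual system Esvelt’s team built. The curator publishes only SHA-256 hashes of hazardous sequences (the toy sequences below are made up, not real pathogen genomes), and a synthesis provider can check each order against those hashes without ever seeing the underlying list:

```python
import hashlib

def sequence_hash(seq: str) -> str:
    """Hash a normalized nucleotide sequence with SHA-256."""
    # Normalize so that case and stray whitespace don't change the hash.
    return hashlib.sha256(seq.strip().upper().encode()).hexdigest()

# The curator builds the database from the sensitive list and publishes
# ONLY these hashes. Possessing the hashes does not reveal the sequences.
hazardous_hashes = {
    sequence_hash(s) for s in ["ATGCGTACGTTAGC", "GGCCTTAAGGCCTA"]  # toy data
}

def screen_order(seq: str) -> bool:
    """Return True if an ordered sequence matches a flagged hash."""
    return sequence_hash(seq) in hazardous_hashes

print(screen_order("atgcgtacgttagc"))  # matches after normalization -> True
print(screen_order("ATATATATATATAT"))  # benign order -> False
```

A real screening system would be considerably more sophisticated — for instance, it would need to screen windowed subsequences so a hazardous gene can’t be hidden inside a longer construct, and exact-match hashing is brittle against single-base changes — but the core privacy property is the same: the published database reveals nothing about a flagged sequence unless the person checking already possesses it.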
“Later this year, we anticipate making DNA synthesis screening available for free to countries worldwide,” Esvelt told me.
To make things truly safe, such a proposal should be accompanied by government requirements that DNA synthesis companies send sequences on for screening against a certified database of dangerous sequences like Esvelt’s. But the hope is that such regulations will be welcomed if screening is secure, transparent, and free of charge to consumers — and that way, research can be made safer without slowing down progress on legitimate biology work.
International governance is always a difficult balancing act, and for many of these questions we’re going to need to keep revisiting our answers as we invent and improve new technologies. But we can’t afford to wait. The omicron variant of SARS-CoV-2 infected tens of millions of people in the US in the space of just a few months. When a disease hits, it can hit fast, and it can be too late by the time we know we have a problem.
Thankfully, the risk of a serious catastrophe can be much reduced by our choices in advance, from screening programs to making deadly viruses harder to engineer, to global efforts to end research into developing dangerous new diseases. But we have to actually take those steps, immediately and on a global basis, or all the planning in the world won’t save us.