Do national governments have a plan to prevent global catastrophe? Do policymakers even know enough about those risks to make plans, and do they have access to the expertise they need for effective planning?
The answer is — not really. At least, that’s one takeaway from a new report by the University of Cambridge’s Centre for the Study of Existential Risk, which examines how well equipped policymakers are to deal with global catastrophic risks. Those are dangers with the potential to cause mass disruption around the world: nuclear war, severe warming from climate change, a global pandemic, and other possibilities.
In a lot of ways, governments are not well equipped at all for those decisions, the report concludes. And it’s not hard to see why. Many of the risks we face today are global — things can go wrong for us in the United States even if US policy is itself pretty good, because the world is so interconnected. A nuclear war anywhere in the world would affect the whole world, for example. Climate change is already affecting the entire planet.
And some of the risks discussed in the report — such as the use of bioengineering technology to make pandemic diseases more deadly, or the hazards from advanced artificial intelligence systems — could be set into motion by a single company or research lab.
Then there are the risks we can’t even anticipate yet. A hundred years ago, an organization dedicated to researching catastrophic risks to humanity wouldn’t have known that the nuclear bomb was just over the horizon. Similarly, there might be technologies we are on the brink of discovering that have the potential to be exceptionally dangerous and which we don’t even know to look out for.
The report concludes that “national governments often struggle with understanding, and developing policy for, extreme risks.” The same goes for institutions like the UN, though they’re not the focus of the report.
And that’s awful news for all of us. The thing is, while some of these dangers are speculative, many of them are quite concrete and not as unlikely or distant as they might initially sound. And it takes only one major catastrophe to potentially kill hundreds of millions of people or knock human civilization off course.
We need sound policy to address these issues, and in many cases we need to address them before they get bad, not afterward when it might be too late.
Here’s why governments are bad at dealing with global catastrophic risks
The report identifies a few reasons policymakers struggle so much to develop good strategies for global risk mitigation.
The first reason is that addressing these risks won’t help politicians get elected.
Most of the big challenges the Centre for the Study of Existential Risk analyzes are long-term risks — not issues we face in the next five years, but issues that we’ll likely need to grapple with before the end of this century. No one will ever win reelection based on their sensible biosecurity policy that reduces the risk of problems in 20 years. You can’t even win elections in the US based on your plans for addressing climate change, as Jay Inslee’s departure from the presidential race demonstrates.
Many of the risks that policymakers need to address, from nuclear winter to risks from advanced artificial intelligence, sound so speculative that a candidate might actually be penalized in a campaign for raising them.
And if you can’t win an election through sound policy on an issue, it’s much harder to get policymakers to treat the issue as a genuine priority.
Another problem? “The bureaucracies that support government can be ill-equipped to understand these risks,” the report notes. Policymakers rely heavily on the departments of the government that research issues and produce detailed reports. The US government has strong internal departments for research in some areas.
But in others, there’s very little expertise on hand. In cases like climate change, researchers have been driven out of the government by politically motivated department shakeups. In cases like AI, there are very few experts on the astoundingly fast-moving advances of the last few years — and most of them are in industry, not in civil service. That’s a recipe for disaster.
The report lays out a four-step plan for connecting governments with the resources they need to address catastrophic risks effectively. First, they argue, governments need to hire scientists and develop quality internal teams doing research on risk.
Next, they need to improve the links between science and policy. The US government had scientists doing high-quality research on climate change, but this didn’t produce sound government policy. You need to connect your scientists with legislators and regulatory bodies so that their recommendations actually inform policy.
Third, they recommend supporting academic research on catastrophic risks. The field of catastrophic risk management looks very different from what it looked like a decade ago, largely because of researchers across the US and UK who have published papers defining and shaping the field, establishing its methods, and analyzing the most critical risks. Right now, that research is largely funded by philanthropists. But it should be funded by states, which have (or should have) an interest in preventing mass catastrophes that will affect their people.
Finally, they recommend increased resources in government for science and technology expertise. Making more funding available is a great way to convince researchers and policymakers alike to flock to a new area — and more research is desperately needed to prioritize among the risks we face and plan sensible solutions.
Overall, it’s discouraging how little governments seem to be doing about problems that threaten the continued existence of their societies. But it’s not an impossible problem. With the right commitments, we could develop the tools needed for governments to address risk.