An illustration of Jason Matheny. Rebecca Clarke for Vox

Jason Matheny is helping humanity prepare for the existential threats of the future

From AI to bioengineered risks, Jason Matheny studies what governments will face in the coming years.

Jason Matheny has been described as an “apocaloptimist” — which, according to Matheny, means he sees “that we’re on a really good trajectory, if we can just avoid any threats to our existence.” The blend of hope for a better future alongside an intense focus on potential threats and barriers to that future is a hallmark of his work for the last decade.

Matheny started out at the Future of Humanity Institute (FHI), a research center at Oxford University that studies existential threats to humanity, whether from artificial intelligence, bio-engineered pandemics, or more exotic dangers. Studying the future has, naturally, kept Matheny's work ahead of the curve, and it has serious staying power: his 2007 paper on reducing the risk of human extinction, in which he argues that investing in nearer-term problems like world hunger could indirectly reduce the risk of catastrophic global threats, is still cited today.

Because of his distinctive understanding of existential risk, Matheny joined the US intelligence community to modernize its perspective on what risk could be. In 2009, he left FHI for the Intelligence Advanced Research Projects Activity (IARPA), the US intelligence community's counterpart to DARPA. IARPA invests in a wide range of cutting-edge, speculative research projects in areas like AI and synthetic biology, including a geopolitical forecasting tournament for national intelligence, which Matheny helped run from 2010 to 2015.

In 2018, he moved to the National Security Commission on Artificial Intelligence, a US government commission that advises Congress on how AI affects national security. Around the same time, he founded, with Georgetown University, the Center for Security and Emerging Technology (CSET), an organization likewise aimed at providing data-driven recommendations to US policymakers on changes driven by progress in artificial intelligence.

“There are a range of challenges related to AI, but national security is a critical area of focus,” Matheny said of his work at CSET, citing “cybersecurity, intelligence, and systems for analysis and collection, as well as AI that is embedded in weapon systems of competing nations” as particularly key issues.

In July, Matheny became CEO of the Rand Corporation, the venerable California-based policy think tank that funds research on technology, infrastructure, health care, energy, climate, and many other areas. He is especially focused on preventing "truth decay" — the declining trust in facts and data within American political debate — and on how, across the board, this decay could hold back efforts to improve policy. He still prioritizes preventing technological catastrophe while remaining hopeful that technology can, if used cautiously, solve more problems than it causes.

“We now have a moment where we need to think about what will define the next 75 years,” Matheny says. “If you could read a history book in the year 2098, what are going to be the key themes, the highlights?” He adds that he hopes the histories will include Rand “reducing the risk of human extinction by .00000001 percent or greater. Hopefully greater.”
