
How smart will AI get? Ajeya Cotra has an answer.

Predicting the pace of AI intelligence is the first step to knowing what we should do about it.

Illustration: Rebecca Clarke for Vox

Sigal Samuel
Sigal Samuel is a senior reporter for Vox’s Future Perfect and co-host of the Future Perfect podcast. She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. Before joining Vox, Sigal was the religion editor at The Atlantic.

Let’s say you have hundreds of millions of dollars. You want to help the world as much as possible. How do you know which causes to spend money on and how much to give?

This is exactly the situation that major charitable organizations, like Open Philanthropy, find themselves in. Should they prioritize saving kids from malaria? Preventing a manmade pandemic? What about runaway AI?

Ajeya Cotra, a senior research analyst at Open Phil, works on answering questions like these. Her investigations into specific causes, as well as her meta-investigations into how we can even think through such hard questions, are refreshingly nuanced and thoughtful.

AI risk is the specific cause that Cotra has devoted most of her time to thinking about lately. In 2020, she put out a report that aimed to forecast when we’ll most likely see the emergence of transformative AI (think: powerful enough to spark a major shift like the Industrial Revolution). The question of AI timelines is crucial for figuring out how much funding we should spend on mitigating risks from AI versus other causes — the closer transformative AI is to happening, the more pressing the need to invest in safety measures becomes.

Cotra came up with a way to estimate what might seem to be unknowable. She uses the computational power of the human brain as a benchmark for estimating how much computation we’d need to train an AI that could perform as well as a human. Using this “biological anchor,” she arrived at 2050 as her median estimate for transformative AI.
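Cotra’s actual model is far more detailed, but the core arithmetic of an anchor-based forecast can be sketched in a few lines: estimate the training compute a brain-anchored AI would require, then project when growing compute budgets cross that threshold. All of the numbers below are illustrative placeholders chosen for this sketch, not figures from her report.

```python
import math

def year_transformative_ai(required_flop, flop_today, base_year, doubling_time_years):
    """Project the year when available training compute reaches the
    requirement implied by a biological anchor, assuming compute
    budgets keep doubling at a fixed rate (a toy assumption)."""
    doublings_needed = math.log2(required_flop / flop_today)
    return base_year + doublings_needed * doubling_time_years

# Illustrative placeholder inputs -- NOT the estimates in Cotra's report:
ANCHOR_FLOP = 1e34  # hypothetical training compute implied by a brain anchor
TODAY_FLOP = 1e24   # hypothetical largest training run in the base year
print(round(year_transformative_ai(ANCHOR_FLOP, TODAY_FLOP, 2020, 1.0)))  # → 2053
```

Note how sensitive the output is to the inputs: shaving a few orders of magnitude off the required compute, or shortening the doubling time, pulls the date forward by decades — which is why updated assumptions can move a median estimate from 2050 to 2040.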

This year, though, she updated her timelines in light of the recent explosion in AI development. Her new median estimate is that transformative AI will emerge by 2040, which would make what had seemed to be a long-term risk now possibly right around the corner.

There’s plenty of room to debate whether Cotra’s biological anchor approach is the right one. But if she’s even anywhere in the ballpark, that’s a pretty eye-popping estimate.

And it’s got practical implications. “This update should also theoretically translate into a belief that we should allocate more money to AI risk over other areas such as bio risk,” Cotra writes, “[and] to be more forceful and less sheepish about expressing urgency when ... trying to recruit particular people to work on AI safety or policy.”

Beyond helping us think through specific causes like AI, Cotra has offered a way to think through the meta question of how to allocate resources between different causes. She calls it “worldview diversification.”

In a nutshell, it says that we shouldn’t just divvy up resources based on how many beneficiaries each cause claims to have. If we did that, we’d always prioritize longtermist causes, because anything that shapes the far future affects the hundreds of billions of people who may yet live, not just the 8 billion alive today. Instead, we should acknowledge that there are different worldviews (some that prioritize current problems like, say, malaria, and some that are longtermist) and that each might have something useful to offer. Then we should divvy up our budget among them based on our credence — how plausible we find each one.
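At its simplest, worldview diversification reduces to splitting a budget in proportion to the credence assigned to each worldview. A minimal sketch — the cause areas and credence values below are made up for illustration and are not Open Phil’s actual numbers:

```python
def diversify(budget, credences):
    """Split a budget across worldviews in proportion to the credence
    assigned to each, normalizing in case credences don't sum to 1."""
    total = sum(credences.values())
    return {view: budget * c / total for view, c in credences.items()}

# Hypothetical credences for illustration only:
credences = {
    "near-termist (e.g., malaria)": 0.5,
    "longtermist (e.g., AI risk)": 0.3,
    "animal welfare": 0.2,
}
print(diversify(300_000_000, credences))
# splits $300M as $150M / $90M / $60M under these toy credences
```

The contrast with a pure expected-value approach is that no single worldview captures the whole budget, no matter how many beneficiaries it claims — every view one finds plausible gets a proportional share.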

This approach, which Open Phil has been gravitating toward in practice, has clear advantages over an approach that’s based only on calculating which cause has the most beneficiaries. The charity world is better for Cotra having articulated that.
