
How AI’s booms and busts are a distraction

However current AI companies fare financially, the big AI safety challenges remain.

A photo illustration of GPT-4o is seen on May 14, 2024. CG/VCG via Getty Images
Kelsey Piper
Kelsey Piper is a senior writer at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.

What does it mean for AI safety if this whole AI thing is a bit of a bust?

“Is this all hype and no substance?” More people have been asking that question about generative AI lately, pointing out that model releases have been delayed, that commercial applications have been slow to emerge, that the success of open source models makes it harder to make money off proprietary ones, and that all of this costs a whole lot of money.

I think many of the people calling “AI bust” don’t have a strong grip on the full picture. Some of them are people who have been insisting all along that there’s nothing to generative AI as a technology, a view that’s badly out of step with AI’s many very real users and uses.

And I think some people have a frankly silly view of how fast commercialization should happen. Even for an incredibly valuable and promising technology that will ultimately be transformative, it takes time between when it’s invented and when someone first delivers an extremely popular consumer product based on it. (Electricity, for example, took decades between invention and truly widespread adoption.) “The killer app for generative AI hasn’t been invented yet” seems true, but that’s not a good reason to assure everyone that it won’t be invented any time soon, either.

But I think there’s a sober “case for a bust” that doesn’t rely on misunderstanding or underestimating the technology. It seems plausible that the next round of ultra-expensive models will still fall short of solving the difficult problems that would make them worth their billion-dollar training runs — and if that happens, we’re likely to settle in for a period of less excitement. More iterating and improving on existing products, fewer bombshell new releases, and less obsessive coverage.

If that happens, it’ll also likely have a huge effect on attitudes toward AI safety, even though in principle the case for AI safety doesn’t depend on the AI hype of the last few years.

The fundamental case for AI safety is one I’ve been writing about since long before ChatGPT and the recent AI frenzy. The simple case is this: there’s no reason to think that AI models capable of reasoning as well as humans, and much faster, are impossible, and we know such models would be enormously commercially valuable if developed. We also know it would be very dangerous to develop and release powerful systems that can act independently in the world when we don’t actually know how to provide the oversight and supervision they would require.

Many of the technologists working on large language models believe that systems powerful enough to turn these safety concerns from theory into real-world problems are right around the corner. They might be right, but they also might be wrong. The take I sympathize with most is engineer Alex Irpan’s: “There’s a low chance the current paradigm [just building bigger language models] gets all the way there. The chance is still higher than I’m comfortable with.”

It’s probably true that the next generation of large language models won’t be powerful enough to be dangerous. But many of the people building them believe it will be, and given the enormous consequences of uncontrolled powerful AI, that chance isn’t so small it can be trivially dismissed; some oversight is warranted.

How AI safety and AI hype ended up intertwined

In practice, if the next generation of large language models isn’t much better than what we currently have, I expect that AI will still transform our world, just more slowly. A lot of ill-conceived AI startups will go out of business and a lot of investors will lose money, but people will continue to improve our models at a fairly rapid pace, making them cheaper and ironing out their most annoying deficiencies.

Even generative AI’s most vociferous skeptics, like Gary Marcus, tend to tell me that superintelligence is possible; they just expect it to require a new technological paradigm, some way of combining the power of large language models with some other approach that counters their deficiencies.

While Marcus identifies as an AI skeptic, it’s often hard to find significant differences between his views and those of someone like Ajeya Cotra, who thinks that powerful intelligent systems may be language-model powered in a sense that is analogous to how a car is engine-powered, but will have lots of additional processes and systems to transform their outputs into something reliable and usable.

The people I know who worry about AI safety often hope that this is the route things will go. It would mean a little bit more time to better understand the systems we’re creating, time to see the consequences of using them before they become incomprehensibly powerful. AI safety is a suite of hard problems, but not unsolvable ones. Given some time, maybe we’ll solve them all.

But my sense of the public conversation around AI is that many people believe “AI safety” is a specific worldview, one that is inextricable from the AI fever of the last few years. “AI safety,” as they understand it, is the claim that superintelligent systems are going to be here in the next few years — the view espoused in Leopold Aschenbrenner’s “Situational Awareness” and reasonably common among AI researchers at top companies.

If we don’t get superintelligence in the next few years, then, I expect to hear a lot of “it turns out we didn’t need AI safety.”

Keep your eyes on the big picture

If you’re an investor in today’s AI startups, it matters deeply whether GPT-5 will be delayed six months or whether OpenAI’s next funding round will come at a diminished valuation.

If you’re a policymaker or a concerned citizen, though, I think you ought to keep a bit more distance than that, and separate the question of whether current investors’ bets will pay off from the question of where we’re headed as a society.

Whether or not GPT-5 turns out to be a powerful intelligent system, such a system would be commercially valuable, and thousands of people are working from many different angles to build one. We should think about how we’ll approach such systems and ensure they’re developed safely.

If one company loudly declares they’re going to build a powerful dangerous system and fails, the takeaway shouldn’t be “I guess we don’t have anything to worry about.” It should be “I’m glad we have a bit more time to figure out the best policy response.”

As long as people are trying to build extremely powerful systems, safety will matter — and the world can’t afford to either get blinded by the hype or be reactively dismissive as a result of it.
