
Finally, a realistic roadmap for getting AI companies in check

It’s time for AI regulators to move fast and break things.

A smartphone screen shows GitHub’s Copilot tool, against a computer screen showing the AI GPT-4. CFOTO/Future Publishing via Getty Images
Sigal Samuel is a senior reporter for Vox’s Future Perfect and co-host of the Future Perfect podcast. She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. Before joining Vox, Sigal was the religion editor at the Atlantic.

New AI systems are coming at us so fast and furious that it might seem like there’s nothing we can do to stop them long enough to make sure they’re safe.

But that’s not true. There are concrete things regulators can do right now to prevent tech companies from releasing risky systems.

In a new report, the AI Now Institute — a research center studying the social implications of artificial intelligence — offers a roadmap that specifies exactly which steps policymakers can take. It’s refreshingly pragmatic and actionable, thanks to the government experience of authors Amba Kak and Sarah Myers West. Both former advisers to Federal Trade Commission chair Lina Khan, they focus on what regulators can realistically do today.

The big argument is that if we want to curb AI harms, we need to curb the concentration of power in Big Tech.

To build state-of-the-art AI systems, you need resources — a gargantuan trove of data, a huge amount of computing power — and only a few companies currently have those resources. These companies amass millions of dollars that they use to lobby the government; they also become “too big to fail,” with even governments growing dependent on them for services.

So we get a situation where a few companies get to set the terms for everyone: They can build hugely consequential AI systems and then release them how and when they want, with very little accountability.

“A handful of private actors have accrued power and resources that rival nation-states while developing and evangelizing artificial intelligence as critical social infrastructure,” the report notes.

What the authors are highlighting is the hidden-in-plain-sight absurdity of how much power we’ve unwittingly ceded to a few actors that are not democratically elected.

When you think about the risks of systems like ChatGPT and GPT-4-powered Bing — like the risk of spreading disinformation that can fracture democratic society — it’s wild that companies like OpenAI and Microsoft have been able to release these systems at their own discretion. OpenAI’s mission, for example, is “to ensure that artificial general intelligence benefits all of humanity” — but so far, the company, not the public, has gotten to define what benefiting all of humanity entails.

The report says it’s past time to claw back power from the companies, and it recommends some strategies for doing just that. Let’s break them down.

Concrete strategies for gaining control of AI

One of the absurdities of the current situation is that when AI systems produce harm, it falls to researchers, investigative journalists, and the public to document the harms and push for change. But that means society is always carrying a heavy burden and scrambling to play catch-up after the fact.

So the report’s top recommendation is to create policies that place the burden on the companies themselves to demonstrate that they’re not doing harm. Just as a drugmaker has to prove to the FDA that a new medication is safe enough to go to market, tech companies should have to prove that their AI systems are safe before they’re released.

That would be a meaningful improvement over existing efforts to better the AI landscape, like the burgeoning industry in “audits,” where third-party evaluators peer under the hood of an algorithmic system to see how it works and root out bias or safety issues. It’s a good step, but the report says it shouldn’t be the primary policy response, because it tricks us into thinking of “bias” as a purely technical problem with a purely technical solution.

But bias is also about how AI is used in the real world. Take facial recognition. “It is not social progress to make black people equally visible to software that will inevitably be further weaponized against us,” Zoé Samudzi noted in 2019.

Here, again, the report reminds us of something that should be obvious but so often gets overlooked. Instead of taking an AI tool as a given and asking how we can make it fairer, we should start with the question: Should this AI tool even exist? In some cases, the answer will be no, and then the right response is not an audit, but a moratorium or a ban. For example, pseudoscience-based “emotion recognition” or “algorithmic gaydar” tech should not be deployed, period.

The tech industry is nimble, often switching tactics to suit its goals. Sometimes it goes from resisting regulation to claiming to support it, as we saw when it faced a chorus calling for bans on facial recognition. Companies like Microsoft supported soft moves that served to preempt bolder reform; they prescribed auditing the tech, a much weaker stance than banning police use of it altogether.

So, the report says, regulators need to keep their eyes peeled for moves like this and be ready to pivot if their approaches get co-opted or hollowed out by industry.

Regulators also need to get creative, using different tools in the policy toolbox to gain control of AI, even if those tools aren’t usually used together.

When people talk about “AI policy,” they sometimes think of it as distinct from other policy areas like data privacy. But “AI” is just a composite of data and algorithms and computational power. So data policy is AI policy.

Once we remember that, we can consider approaches that limit data collection, not only to protect consumer privacy, but also as mechanisms to mitigate some of the riskiest AI applications. Limit the supply of data and you’re limiting what can be built.

Similarly, we might not be used to talking about AI in the same breath as competition law or antitrust. But we’ve already got antitrust laws on the books and the Biden administration has signaled that it’s willing to boldly and imaginatively apply those laws to target the concentration of power among AI companies.

Ultimately, the biggest hidden-in-plain-sight truth that the report reveals is that humans are in control of which technologies we deploy and when. Recent years have seen us place moratoria and bans on facial recognition tech; in the past, we’ve also organized a moratorium and created bright-line prohibitions in the field of human genetics. Technological inevitability is a myth.

“There is nothing about artificial intelligence that is inevitable,” the report says. “Only once we stop seeing AI as synonymous with progress can we establish popular control over the trajectory of these technologies.”