Five months ago, when I published a big piece laying out the case for slowing down AI, it wasn’t exactly mainstream to say that we should pump the brakes on this technology. Within the tech industry, it was practically taboo.
OpenAI CEO Sam Altman has argued that Americans would be foolish to slow down OpenAI’s progress. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he told the Atlantic. Microsoft’s Brad Smith has likewise argued that we can’t afford to slow down lest China race ahead on AI.
But it turns out the American public does not agree with them. A whopping 72 percent of American voters want to slow down the development of AI, compared to just 8 percent who prefer speeding up, according to new polling from the think tank AI Policy Institute.
The poll, conducted by data analytics firm YouGov, surveyed 1,001 Americans across the age, gender, and political spectrum: 42 percent of respondents aligned with Donald Trump and 47 percent with Joe Biden. The racial breakdown was a bit less representative: 73 percent of respondents identified as white, 12 percent as Black, and 7 percent as Hispanic. Most respondents did not have a college degree.
Jack Clark, a co-founder of the AI safety and research company Anthropic, took note of the survey in his popular newsletter. “These results are interesting because they appear to show a divergence between elite opinion and popular opinion,” he wrote. Specifically, “this survey shows that normal people are much more cautious in their outlook about the technology and more likely to adopt or prefer a precautionary principle when developing the tech.”
Americans are clearly voicing their wish for AI — slow down! — and it’s important for policymakers to know that this is what their constituents want. That could embolden them to adopt badly needed policies that promote more caution on AI. After all, this tech has the potential to do serious harm, like spreading disinformation that could sway elections. Washington’s job is to protect the interests of American voters, not those of Big Tech execs who seek to wield enormous power despite never having been democratically elected.
Moving toward “zero trust” AI governance
Here’s another striking finding from the AI Policy Institute’s polling: 82 percent of American voters don’t trust AI companies to self-regulate.
To Sarah Myers West, managing director for the research center AI Now Institute, this public distrust is both appropriate and unsurprising. “I think people have learned from the past decade of tech-enabled crises,” she told me. “It’s quite clear when you look at the evidence that self-regulatory approaches don’t work.”
Social media is a prime example. For years, companies like Meta have said they’re eager to be regulated, but in practice they’ve continued to perpetuate serious harms because they refuse to meaningfully change their business model. There’s a reason Mark Zuckerberg keeps getting dragged before Congress.
“Look at the very, very delicate phrasing that OpenAI uses when they make their calls for regulation,” West said. “There’s always a qualifier — like saying, ‘We want regulation for artificial general intelligence, or for models that exceed a particular threshold’ — thus excluding everything that they already have out in commercial use.” Tech companies know regulation is probably inevitable, so they support certain moves that serve to preempt bolder reform.
Aware that blind trust in the benevolence of Big Tech is not an option, West and her team at the AI Now Institute this month published a new framework called “Zero Trust AI Governance.” It’s exactly what it sounds like — a call for lawmakers to take matters into their own hands.
The new framework recommends flipping Big Tech’s favorite strategy on its head. Whereas tech execs embrace regulatory approaches that slow-roll action with drawn-out processes, require complicated and hard-to-enforce regimes, and place the burden of accountability onto under-resourced members of the public, Zero Trust AI Governance proposes these three principles:
1. Time is of the essence — start by vigorously enforcing existing laws.
2. Bold, easily administrable, bright-line rules are necessary.
3. At each phase of the AI system lifecycle, the burden should be on companies to prove their systems are not harmful.
The AI Policy Institute’s polling shows strong public support for these ideas. Here are some more of the topline findings:
58 percent of voters — including a plurality of Republicans — want the federal government to thoroughly regulate AI.
76 percent of voters want AI-generated images to be required to contain proof they were generated by a computer.
65 percent of voters support requiring advanced AI models to demonstrate they are safe before they are released, while just 11 percent oppose that policy.
“Powerful and potentially harmful AI development is not an inevitability,” Daniel Colson, the executive director of the AI Policy Institute, said in a statement. “Our political leaders, and we as a society more broadly, need to choose what risks we are willing to endure for the sake of the potential of technological progress.”
Crucially, that requires not making the same mistake we made in the social media era: equating tech progress with social progress. “That’s been like a foregone conclusion,” West said. “I’m not convinced that AI is associated with progress in every instance.”
If the survey results are any indication, it seems as though a good portion of the American public may also be outgrowing that naiveté.