It’s a big week for Americans who’ve been sounding the alarm about artificial intelligence.
On Tuesday morning, the White House released what it calls a “blueprint” for an AI Bill of Rights that outlines how the public should be protected from algorithmic systems and the harms they can produce — whether it’s a recruiting algorithm that favors men’s resumes over women’s or a mortgage algorithm that discriminates against Latino and African American borrowers.
The bill of rights lays out five protections the public deserves. They boil down to this: AI should be safe and effective. It shouldn’t discriminate. It shouldn’t violate data privacy. We should know when AI is being used. And we should be able to opt out and talk to a human when we encounter a problem.
It’s pretty basic stuff, right?
In fact, in 2019, I published a very similar AI bill of rights here at Vox. It was a crowdsourced effort: I asked 10 experts at the forefront of investigating AI harms to name the protections the public deserves. They came up with the same fundamental ideas.
Now those ideas have the imprimatur of the White House, and experts are excited about that, if somewhat underwhelmed.
“I pointed out these issues and proposed the key tenets for an algorithmic bill of rights in my 2019 book A Human’s Guide to Machine Intelligence,” Kartik Hosanagar, a University of Pennsylvania technology professor, told me. “It’s good to finally see an AI Bill of Rights come out nearly four years later.”
It’s important to realize that the AI Bill of Rights is not binding legislation. It’s a set of recommendations that government agencies and technology companies may voluntarily comply with — or not. That’s because it’s created by the Office of Science and Technology Policy, a White House body that advises the president but can’t advance actual laws.
And the enforcement of laws — whether they’re new laws or laws that are already on the books — is what we really need to make AI safe and fair for all citizens.
“I think there’s going to be a carrot-and-stick situation,” Meredith Broussard, a data journalism professor at NYU and author of Artificial Unintelligence, told me. “There’s going to be a request for voluntary compliance. And then we’re going to see that that doesn’t work — and so there’s going to be a need for enforcement.”
The AI Bill of Rights is mostly a tool to educate America
The best way to understand the White House’s document might be as an educational tool.
Over the past few years, AI has been developing at such a fast clip that it’s outpaced most policymakers’ ability to understand, never mind regulate, the field. The White House’s Bill of Rights blueprint clarifies many of the biggest problems and does a good job of explaining what it could look like to guard against those problems, with concrete examples.
The Algorithmic Justice League, a nonprofit that brings together experts and activists to hold the AI industry accountable, noted that the document can improve technological literacy within government agencies.
“This blueprint provides necessary principles & shares potential actions. It is a tool for educating the agencies responsible for protecting & advancing our civil rights and civil liberties. Next, we need lawmakers to develop government policy that puts this blueprint into law.” — Algorithmic Justice League (@AJLUnited), October 4, 2022
Julia Stoyanovich, director of the NYU Center for Responsible AI, told me she was thrilled to see the bill of rights highlight two important points: AI systems should work as advertised, but many don’t. And when they don’t, we should feel free to just stop using them.
“I was very happy to see that the Bill discusses effectiveness of AI systems prominently,” she said. “Many systems that are in broad use today simply do not work, in any meaningful sense of that term. They produce arbitrary results and are not subjected to rigorous testing, and yet they are used in critical domains such as hiring and employment.”
The bill of rights also reminds us that there’s always “the possibility of not deploying the system or removing a system from use.” This almost seems too obvious to need saying, yet the tech industry has proven it needs reminders that some AI just shouldn’t exist.
“We need to develop a culture of rigorously specifying the criteria against which we evaluate AI systems, testing systems before they are deployed, and re-testing them throughout their use to ensure that these criteria are still met. And removing them from use if the systems do not work,” Stoyanovich said.
When will the laws actually protect us?
The American public, looking across the pond at Europe, could be forgiven for a bit of wistful sighing this week.
While the US has just now released a basic list of protections, the EU released something similar way back in 2019, and it’s already moving on to legal mechanisms for enforcing those protections. The EU’s AI Act, together with a newly unveiled bill called the AI Liability Directive, will give Europeans the right to sue companies for damages if they’ve been harmed by an automated system. This is the sort of legislation that could actually change the industry’s incentive structure.
“The EU is absolutely ahead of the US in terms of creating AI regulatory policy,” Broussard said. She hopes the US will catch up, but noted that we don’t necessarily need much in the way of brand new laws. “We already have laws on the books for things like financial discrimination. Now we have automated mortgage approval systems that discriminate against applicants of color. So we need to enforce the laws that are on the books already.”
In the US, there is some new legislation in the offing, such as the Algorithmic Accountability Act of 2022, which would require transparency and accountability for automated systems. But Broussard cautioned that it’s not realistic to think there’ll be a single law that can regulate AI across all the domains in which it’s used, from education to lending to health care. “I’ve given up on the idea that there’s going to be one law that’s going to fix everything,” she said. “It’s just so complicated that I’m willing to take incremental progress.”
Cathy O’Neil, the author of Weapons of Math Destruction, echoed that sentiment. The principles in the AI Bill of Rights, she said, “are good principles and probably they are as specific as one can get.” The question of how the principles will get applied and enforced in particular sectors is the next urgent thing to tackle.
“When it comes to knowing how this will play out for a specific decision-making process with specific anti-discrimination laws, that’s another thing entirely! And very exciting to think through!” O’Neil said. “But this list of principles, if followed, is a good start.”