Facial recognition is having a moment.
Across the US, local politicians and national lawmakers on both sides of the aisle have started introducing rules that bar law enforcement agencies from using facial recognition technology to surveil everyday citizens.
In just the past few months, three cities — San Francisco, Oakland, and Somerville, Massachusetts — have passed laws to ban government use of the controversial technology, which analyzes pictures or live video of human faces to identify the people they belong to. Cambridge, Massachusetts, is also moving toward a government ban. Congress recently held two oversight hearings on the topic, and there are at least four pieces of current federal legislation to limit the technology in some way.
And now, Recode has learned that two top lawmakers, Rep. Elijah Cummings (D-MD) and Rep. Jim Jordan (R-OH), plan this fall to introduce a new bipartisan bill on facial recognition, according to representatives from both legislators’ offices. The specifics of the bill are still being hashed out, but it could include issuing a pause on the federal government’s acquisition of new facial recognition technology, according to a staffer from Jordan’s office.
Facial recognition is a rare case where regulators are working together — on a bipartisan level, no less — to try to get ahead of technology instead of catching up to it. That’s because this powerful new technology has the potential to infringe on Americans’ civil liberties — no matter their political persuasion — and to have a chilling effect on free speech.
“It seems like there’s a huge moment right now for regulating facial recognition. We’ve been working on surveillance issues broadly for years, and now something about this is striking a nerve with people,” said Evan Greer, director of the advocacy group Fight for the Future, which has been pushing for a national ban on the technology. Greer said she believes that, in the next few months, many more cities or states will regulate the technology; four states have already introduced facial recognition legislation.
So far, this impending wave of legislation hasn’t prevented private companies from using the technology. Even in cities like San Francisco with facial recognition bans, companies like Apple are still free to sell cellphones with facial recognition built in. That’s a different and less controversial application of the technology, mainly because consumers have the choice not to use it. And if companies fail to properly notify users when they employ this tech, as a federal court recently ruled Facebook had done, they could face legal and financial penalties.
But when law enforcement agencies use facial recognition technology, members of the general public often unknowingly have their faces monitored, scanned, and tracked. That’s why lawmakers are increasingly setting a higher bar for government agencies that use these tools.
Some police departments and advocates of the technology have argued that outright bans are going too far, and that the technology can help law enforcement more effectively stop crime. Already, dozens of local police departments across the US use the technology to match driver’s license pictures and mug shots to criminal databases. But the recent uptick in regulation has the potential to slow down that process.
An Orwellian use case
One of the main reasons why facial recognition technology is being legislated is simple: It’s scary.
Unlike your cellphone or computer, there’s no way to turn off your face. Indeed, in China, the technology is already ubiquitous and used for mass surveillance of ordinary citizens in public life — most alarmingly to target the Uighur Muslim ethnic minority in what’s been called “automated racism.” For many Americans, a society where the government is always watching you is inherently un-American.
“I think people are understanding that this technology is different,” said Neema Singh Guliani, senior legislative counsel with the American Civil Liberties Union (ACLU). “The idea that the government may be able to identify or track people when they’re at a protest or visiting a doctor is especially concerning.”
People are also concerned about the way facial recognition has been rolled out; in many jurisdictions, it’s been done in secret and without much community input. The fear is that the powerful software can be used by law enforcement agencies to track anyone they deem suspicious, without any reasonable evidence that they’ve committed a crime.
A few years ago, police in Orlando, Florida, started piloting Amazon’s facial recognition software, called Rekognition, which connects live video feeds to facial recognition systems in order to watch and track people in real time. The pilot, which placed four surveillance cameras in public areas around the city (and which the police department said was used only to test the software on its own officers), was rolled out without any public notice or legal guidelines. After sustained public scrutiny and reports of technical limitations, the department decided in June to drop its contract with Amazon. The Orlando case was an example of how, in a lawless environment, facial recognition technology can escalate into a PR nightmare.
Three years ago, the ACLU started working with cities to pass legislation that would let those cities take control over the rollout of surveillance technology — including facial recognition as well as other tools such as license plate scanners — in what’s called the Community Control Over Police Surveillance ordinance. Now, 13 US cities have passed such laws, and several other cities and states are working on passing similar legislation.
But all this local regulation doesn’t stop federal law enforcement from using facial recognition. The Center on Privacy & Technology at Georgetown Law Center discovered last month that US Immigration and Customs Enforcement (ICE) had access to databases of driver’s license photos from 21 states. That means if you live in one of those states, your driver’s license photo could be used without your knowledge in a digital version of a criminal lineup.
The fact that ICE is using facial recognition to help deport immigrants is a major reason liberal politicians, particularly those in sanctuary cities, want to rein in government use of the technology. But though the first cities to ban facial recognition have been liberal strongholds like San Francisco and Oakland, several Republican lawmakers are also raising concerns. Rep. Jim Jordan (R-OH) is the ranking member of the House Oversight Committee, which held recent hearings on the technology, and he has been a conservative leader on the topic.
“Facial recognition is concerning from the perspective of government having too much power,” said a spokesperson from Jordan’s office at the House Oversight Committee. “That’s where the congressman is coming from. It’s an instinctive civil libertarian and constitutionalist perspective.”
Jordan plans to introduce legislation in the coming months with Cummings, and his office anticipates the bill will have broad support in Congress.
A uniquely biased tool
Another big reason why regulators are putting a pause on facial recognition is that it has been shown to be biased and less accurate when applied to women and people of color.
In a particularly egregious example, the ACLU ran a test of Amazon’s facial recognition software and found it incorrectly identified 28 black members of Congress as criminals.
Researchers at MIT have found that, overall, such software returned worse results for women and darker-skinned individuals. The researcher behind the MIT project, Joy Buolamwini, started studying the topic after noticing that facial analysis software had more difficulty recognizing her face than those of her lighter-skinned classmates. This can have serious consequences when facial recognition is used to make life-altering decisions, such as whether or not someone should be arrested.
In both the ACLU and MIT cases, Amazon disputed the findings, saying that testers didn’t apply the correct settings. The company argued that results should only be used if they meet a 99 percent confidence threshold for accuracy. But in the real world, critics argue, facial recognition isn’t performed under perfect conditions. In a police trial in Oregon, officers were found to be acting on matches from Amazon’s software that fell below the 99 percent threshold in order to investigate suspects, according to reporting by Gizmodo.
In places like Maryland, police agencies have used facial recognition technology more heavily in black communities and to target activists. For example, police in Baltimore used it to identify and arrest people who protested Freddie Gray’s death at the hands of law enforcement.
“There is certainly a long history of certain groups and people being surveilled by government agencies. We can’t allow that to go unchecked — that’s the bigger issue,” Cambridge Mayor Marc McGovern told Recode. The Cambridge City Council recently moved legislation forward for a facial recognition ban, which it will vote on in the coming months.
McGovern, who authored the ban, said facial recognition is simply too inaccurate and poses too great a risk of chilling free speech. He said that, in Cambridge, police have also acknowledged those risks and have supported his call for a ban.
“For me, it’s always been the question of how do we balance civil liberties and rights to privacy? How do we not target individuals but also give the police department the support it needs?” he told Recode.
San Francisco as a model
San Francisco was the first city to ban city government use of facial recognition, in what’s now considered by civil liberties advocates to be the gold standard for regulating the technology.
When San Francisco passed the ban, it also passed a broader surveillance oversight ordinance that requires city agencies to get city approval before purchasing other kinds of surveillance technologies, such as automatic license plate readers and camera-enabled drones.
Lee Hapner, a legislative aide for San Francisco Supervisor Aaron Peskin, who was the leading author of the legislation, said he was excited to see other cities follow San Francisco’s lead in banning facial recognition.
“We’ve said for a long time that ‘As goes San Francisco, so goes the rest of California and the US,’” said Hapner, pointing to the city’s leadership on topics like legalizing gay marriage and setting a $15-an-hour minimum wage.
Matt Cagle, a technology and civil liberties attorney at the ACLU who worked on a coalition supporting the legislation in San Francisco, said that in the case of facial recognition, the city made the right move in placing an all-out ban. The ACLU recently called for a nationwide temporary ban on face recognition for law enforcement and immigration enforcement purposes, co-signed by over 60 civil liberties, privacy, investor, and faith groups, “until Congress fully debates what, if any, uses should be permitted.”
“When San Francisco, which is the center of innovation, sounds the alarm bell and takes facial recognition off the table for government use, that’s something we should listen to,” Cagle said.
It may have taken a few years, but a federal bill from Jordan and Cummings — respectively the ranking member and chairman of a powerful House committee — is an indication that DC politicians are finally taking the threat of facial recognition misuse seriously.