The news that IBM will no longer produce facial recognition technology might not sound huge at first. The company’s commitment to opposing this type of racially biased surveillance technology fits into a welcome trend of actions being taken after anti-police brutality protests have swept the nation. Although some are already warning that IBM’s move won’t end the age of facial recognition, others say it’s a significant step in the right direction.
IBM is taking a stand against the development of technology that can lead to human rights abuses. Activists and researchers have sounded the alarm for years about facial recognition technology’s myriad problems, including its racial and gender biases and privacy risks. Some are hailing IBM’s announcement as a notable move, emphasizing that the major technology company’s resources will now be directed elsewhere. IBM’s decision to back away from facial recognition could also send a signal to other major sellers of this technology.
This was not a quiet announcement by IBM. In a letter to members of Congress, IBM CEO Arvind Krishna said the company would no longer make general-purpose facial recognition and analysis software, citing concerns about the technology’s use by law enforcement agencies. He clarified that IBM “firmly opposes” the use of facial recognition “for mass surveillance, racial profiling, violations of basic human rights and freedoms.” The letter also outlined various efforts the company would take in response to ongoing anti-police brutality demonstrations, such as endorsing a federal registry for police misconduct.
The news follows extensive efforts by organizers and researchers highlighting how facial recognition can have baked-in racial and gender biases. Notably, in 2018, Joy Buolamwini and Timnit Gebru co-authored a widely influential paper highlighting bias in facial recognition systems. This groundbreaking research showed racial and gender disparities in the accuracy of AI-powered facial classification software, especially for women with darker skin. It specifically featured analysis of disparities in IBM’s technology, among systems sold by other companies. Research from the National Institute of Standards and Technology published earlier this year has contributed even more evidence that the technology can be, and often is, biased along lines of gender, age, and race.
IBM wasn’t super specific in its announcement. However, a person familiar with the matter said that IBM will limit itself to the development of visual object detection and will no longer make APIs that could be used to power facial recognition available to outside or internal developers. The company would not comment further on its reasoning to halt the use of the technology. Deborah Raji, a technology fellow at the AI Now Institute, noted that the company somewhat quietly removed the ability to do face detection from its API last fall, while cautioning that IBM was far from the most dangerous provider of this technology.
“[Big companies] heavily influence the public discussion on it, and they heavily influence the policy discussions on it,” Raji said of facial recognition technology. “If they can disavow the technology, it can send a signal to policymakers and the public that this technology is not okay.”
“IBM is definitely moving in the right direction and will hopefully provide some pressure for these other companies to make that move,” she added, though she hesitated to declare a complete victory until other companies, such as Amazon, stop pushing their technology to law enforcement.
The Associated Press, meanwhile, reported that IBM’s decision “is unlikely to affect its bottom line,” as the company is heavily focusing on areas like cloud computing and other companies had more success in the market for facial recognition technology. Still, Mutale Nkonde, a research fellow at Harvard and Stanford who leads the nonprofit AI for the People, told the AP that “the symbolic nature of this is important.”
As the public has become increasingly aware of the use of facial recognition in recent years, several localities have moved to bar use of the tech by their government agencies. But the ongoing demonstrations against police brutality have brought increased scrutiny of law enforcement’s use of the technology. Last week, during a Philadelphia public comment session on facial recognition, citizens expressed concern that the police would receive funding for the technology. Then, on Tuesday, the Boston City Council held a hearing about an ordinance to ban facial recognition by the government.
Buolamwini, who is the founder of the Algorithmic Justice League, spoke at the Boston hearing, where she warned about the risks and problems that come with facial recognition. She also published a Medium post on Tuesday in which she commended IBM’s decision, calling it a “first move forward towards company-side responsibility to promote equitable and accountable AI.”
“This is a welcome recognition that facial recognition technology, especially as deployed by police, has been used to undermine human rights, and to harm Black people specifically, as well as Indigenous people and other People of Color (BIPOC),” Buolamwini wrote. She also noted that IBM had responded to her research demonstrating disparities in its technology by promptly issuing a statement, in contrast with the response of other companies selling facial recognition.
Some of these other companies in the space have a much larger business focused on facial recognition. Amazon’s facial recognition technology, Rekognition, launched in 2016 and has since been sold to police departments across the United States. Following an American Civil Liberties Union (ACLU) investigation into law enforcement’s use of Rekognition, Amazon employees protested the company’s practices, and although Amazon has called for regulation of facial recognition technology, the company continues to sell its Rekognition technology.
Others developing and selling facial recognition are not technology companies with household names. Clearview AI, for instance, is a firm that has reportedly offered the technology to hundreds of police departments in the US. The company controversially scraped billions of images from the web without people’s consent, enabling law enforcement to potentially identify someone with just an image of their face. On June 8, Sen. Edward Markey demanded to know if Clearview was working with law enforcement to use its technology on protesters.
So it remains unclear how IBM’s announcement will ultimately impact what other companies are doing with facial recognition. Amazon did not reply to repeated requests for comment about whether it plans to rethink the sale of its technology to law enforcement. Clearview AI did not respond to Recode’s request for comment regarding the use of facial recognition on protesters.
Meanwhile, NEC, which reportedly sells facial recognition to government agencies in the US, has seemed willing to discuss its use of the technology. Ahead of IBM’s announcement, NEC America President and CEO Mark Ikeno sent a statement to Recode about the risks of facial recognition being used to target protesters.
“We would be completely opposed to any efforts to use facial recognition or other technologies to persecute people for exercising their First Amendment rights,” Ikeno said, “and we are unaware of any law enforcement customers that are using it in this manner.”
Still, organizers and researchers have been clear about what they want next. In her Medium post, Buolamwini called on technology companies that work in artificial intelligence to make donations of at least $1 million to organizations that focus on racial justice within the tech sector, such as Data for Black Lives and Black in AI, emphasizing that technology companies must fund groups that can hold them accountable externally. She also called for companies working on facial recognition to make a commitment to sign the Safe Face Pledge.
Raji, of AI Now, believes the actions of major technology companies like IBM could help broaden acceptance for regulation. “That regulation should hopefully affect the NECs and the Clearviews of the world that are willing to do it,” she said.
In an interview with NPR this week, Nate Freed Wessler, a lawyer with the ACLU’s Speech, Privacy, and Technology Project, wondered if anyone would follow IBM’s lead.
“It’s good that IBM took this step, but it can’t be the only company,” Freed Wessler said. “Amazon, Microsoft, and other corporations are trying to make a lot of money by selling these dangerous, dubious tools to police departments. That should stop right now.”
But ultimately, regardless of what major technology companies are doing, the lack of strong government regulations will leave facial recognition technology open to abuse or misuse.
“There’s always going to be some small, shady firm that is willing to do the worst things with technology that you can do, and will sell it to whoever will buy it,” Evan Greer, the deputy director of the digital rights group Fight for the Future, told Recode. “It’s not going to be enough to just kind of call on companies to cancel their contracts or back away from making this type of technology. We need lawmakers to step in and do their jobs.”
Though there have been several proposals, there is currently no federal law that regulates facial recognition. The technology is facing growing skepticism from national politicians, including Sen. Jeff Merkley, who has been working on relevant legislation. In May, the Algorithmic Justice League released a white paper calling for the creation of a federal office to oversee facial recognition technologies. Meanwhile, local governments have charged ahead with their own regulations and proposals for using or prohibiting facial recognition technology.
Even if IBM’s commitment is just symbolic, the company is big enough and powerful enough to move the needle on the perception of facial recognition. That alone could shape the future of the technology.
Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.