Facebook is changing how it uses facial recognition technology — the feature on the platform that allows faces to be tagged in photos. From now on, if you want that feature to be part of your social media experience, you’ll have to opt in. It will be off by default for all new users, and for current users who do nothing when they receive an upcoming notice about the change, Facebook announced in a blog post on Tuesday.
It’s a sign of the company’s increasing awareness that it needs to project a more privacy-conscious image — and of the public’s increasing wariness toward facial recognition technology, which can identify an individual by analyzing their facial features in images, in videos, or in real time.
Facebook popularized facial recognition years ago in a context that seemed totally innocent: It tagged our friends’ faces for us in photos we’d posted to the social media network. That was, for many of us, the first introduction to facial recognition, and we barely gave the convenient tech a second thought.
Fast forward to 2019, and the headlines are saturated with stories decrying its ill effects. The technology now plays a role in policing and surveillance, and AI experts point to evidence that it can disproportionately harm people of color. Behemoth companies like Apple, Amazon, and Microsoft are all mired in controversy over the tech. Some cities, like San Francisco, Oakland, and Somerville, have already instituted local bans on it.
Last month, Democratic presidential candidate Sen. Bernie Sanders called for a total nationwide ban on the use of facial recognition software for policing. Sen. Elizabeth Warren said she’d create a task force to “establish guardrails and appropriate privacy protections” for surveillance tech, including facial recognition.
And as the tech pops up everywhere from our apartment buildings to our airports, it’s increasingly sparking controversy among citizens. Take, for example, the black tenants in Brooklyn who objected to their landlord’s plan to install the tech in their rent-stabilized building. Or the traveler who complained via Twitter that JetBlue had checked her into her flight using facial recognition without her consent.
Against this backdrop, it’s no wonder Facebook is doing all it can to be (or at least appear to be) more judicious about its use of the controversial technology. The company has been embroiled in a series of privacy scandals — of which Cambridge Analytica is only the most glaring — and the last thing it needs is more accusations that it’s failing to take users’ privacy seriously.
In fact, one of the Federal Trade Commission’s recent accusations against Facebook was that the company misled users years ago by not properly explaining its use of facial recognition. The $5 billion settlement Facebook reached with the FTC in July requires the company to be clearer on its use of that technology.
In its Tuesday blog post, under a section titled “Privacy Matters,” Facebook noted: “We’ve continued to engage with privacy experts, academics, regulators and people on Facebook about how we use face recognition and the options you have to control it.”
The company went on to explain that back in December 2017, it had given some users a new setting providing “an easy on or off switch for a broader set of uses of face recognition, such as helping you protect your identity on Facebook” by alerting you if someone’s using a photo of your face as their profile photo.
As of now, every user — not just some — will get this setting, prompting them to opt in to all uses of facial recognition tech on Facebook or to opt out.
Facebook’s notification emphasizes the pros of opting in. It says making that decision means “we can help protect against someone using your photo to impersonate you” and “we can tell people with visual impairments when you’re in a photo or video through a screen reader.” But to the extent that opting in to this tech means further normalizing it, there are some very serious cons to consider.
What makes facial recognition tech so controversial?
First of all, not everyone thinks facial recognition tech is necessarily problematic. Advocates say the software can help with worthy aims, like finding missing children and elderly adults or catching criminals and terrorists. Microsoft President Brad Smith has said it would be “cruel” to altogether stop selling the software to government agencies. This camp wants to see the tech regulated, but not banned.
But consider the well-documented fact that human bias can creep into AI. Often, this manifests as a problem with the training data that goes into AIs: If designers mostly feed the systems examples of white male faces, and don’t think to diversify their data, the systems won’t learn to properly recognize women and people of color. And indeed, facial recognition systems have been found to misidentify those groups far more often, which could lead to their being disproportionately held for questioning when law enforcement agencies put the tech to use.
In 2015, Google’s image recognition system labeled African Americans as “gorillas.” Three years later, Amazon’s Rekognition system wrongly matched 28 members of Congress to criminal mug shots. Another study found that three facial recognition systems — IBM, Microsoft, and China’s Megvii — were more likely to misidentify the gender of dark-skinned people (especially women) than of light-skinned people.
Even if all the technical issues were to be fixed and facial recognition tech completely de-biased, would that stop the software from harming our society when it’s deployed in the real world? Not necessarily, as a recent report from the AI Now Institute explains.
Say the tech gets just as good at identifying black people as it is at identifying white people. That may not actually be a positive change. Given that the black community is already overpoliced in the US, making black faces more legible to this tech and then giving the tech to police could just exacerbate discrimination. As Zoé Samudzi wrote at the Daily Beast, “It is not social progress to make black people equally visible to software that will inevitably be further weaponized against us.”
Woodrow Hartzog and Evan Selinger, a law professor and a philosophy professor, respectively, argued last year that facial recognition tech is inherently damaging to our social fabric. “The mere existence of facial recognition systems, which are often invisible, harms civil liberties, because people will act differently if they suspect they’re being surveilled,” they wrote. The worry is that there’ll be a chilling effect on freedom of speech, assembly, and religion.
Luke Stark, a digital media scholar who works for Microsoft Research Montreal, made another argument in a recent article titled “Facial recognition is the plutonium of AI.” Comparing software to a radioactive element may seem over the top, but Stark insists the analogy is apt. Plutonium is the biologically toxic element used to make atomic bombs, and just as its toxicity comes from its chemical structure, the danger of facial recognition is ineradicably, structurally embedded within it, because it attaches numerical values to the human face. He explains:
Facial recognition technologies and other systems for visually classifying human bodies through data are inevitably and always means by which “race,” as a constructed category, is defined and made visible. Reducing humans into sets of legible, manipulable signs has been a hallmark of racializing scientific and administrative techniques going back several hundred years.
The mere fact of numerically classifying and schematizing human facial features is dangerous, he says, because it enables governments and companies to divide us into different races. It’s a short leap from having that capability to “finding numerical reasons for construing some groups as subordinate, and then reifying that subordination by wielding the ‘charisma of numbers’ to claim subordination is a ‘natural’ fact.”
In other words, racial categorization too often feeds racial discrimination. This is not a far-off hypothetical but a current reality: China is already using facial recognition to track Uighur Muslims based on their appearance, in a system the New York Times has dubbed “automated racism.” That system makes it easier for China to round up Uighurs and detain them in internment camps.
It may seem strange to talk about an authoritarian government’s repression of a minority in the same breath as Facebook’s convenient photo-tagging feature. But that’s exactly the point: The same technology underlies them both, and the risk of uncritically normalizing one is that we inch toward a world where the other becomes more prevalent.