An illustrated portrait of Joy Buolamwini. Rebecca Clarke for Vox

Joy Buolamwini saw first-hand the harm of AI bias. Now she’s challenging tech to do better.

How a personal experience with facial recognition tech sparked a broad campaign for algorithmic justice.

Sigal Samuel is a senior reporter for Vox’s Future Perfect and co-host of the Future Perfect podcast. She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. Before joining Vox, Sigal was the religion editor at the Atlantic.

What started as a small art project has turned into a movement that’s challenging the biggest tech companies in the world.

Joy Buolamwini was a grad student at MIT who, as part of a class project, wanted to make a mirror that could inspire her every morning. It would use software to track her face and overlay the image of one of her heroes, like Serena Williams, on top. Just one problem: When Buolamwini, who is Black, looked in the mirror, the software wouldn’t detect her. To get it to work, she literally had to cover her face with a white mask.

What was going on? Buolamwini did her own research and discovered that bias had crept into facial recognition systems from companies like Google, Microsoft, IBM, and Amazon. Because the AI systems were trained on datasets of mostly white men’s faces, they were great at recognizing pale males — but bad at recognizing faces like hers.

In 2019, publications like the New York Times started reporting on her findings, and companies like Amazon went on the defensive. She had put major tech companies on notice. And that was just the start.

Buolamwini realized that AI bias was creating real-world harms across an array of fields: It was deciding who gets hired, who gets a mortgage, who gets a college acceptance letter — and who doesn’t. The problem was broad, and it needed a broad social movement to tackle it.

So she founded the Algorithmic Justice League, where researchers work with activists to hold the AI industry to account. Buolamwini has testified before Congress and state hearings and advocated on behalf of specific communities of color harmed by facial recognition technology. She’s also used storytelling to raise public awareness of AI bias.

Buolamwini refers to herself as a “poet of code,” and for good reason: She’s a gifted science communicator. Her spoken-word poetry conveys the problems with AI in accessible, visceral ways. For example, in “AI, Ain’t I a Woman,” she shows how facial recognition misgenders famous Black women like Oprah Winfrey and Michelle Obama.

Works like these caught the eye of director Shalini Kantayya, who made a film, Coded Bias, that follows Buolamwini as she fights for algorithmic justice. After making waves at festivals in 2020, it’s now on Netflix. The film cements Buolamwini’s role as someone who is revealing, popularizing, and aiming to fix AI ethics problems — a role she’s excelled in thanks to her gifts in both science and art.
