While much of Silicon Valley worries about doomsday scenarios where AI will take over human civilization, Kriti Sharma has a different kind of concern: What happens if disadvantaged groups don’t have a say in the technology we’re creating?
Sharma, an engineering leader at London-based Sage, has become a leading voice in the discussion among governments, companies and the developer community about the importance of building ethical AI. She has given expert testimony on the matter before the U.K. Parliament and took part in the inaugural Obama Foundation summit in Chicago.
At Sage, she crafted a set of "Ethics of Code" AI core principles. No. 1: "AI should reflect the diversity of the users it serves."
This article originally appeared on Recode.net.