A woman’s job application is rejected because a recruiting algorithm favors men’s résumés. A girl dies by suicide after social media algorithms push graphic images of self-harm onto her feed. A black teen steals something and is rated high-risk for future crime by an algorithm used in courtroom sentencing, while a white man who steals something of similar value is rated low-risk.
In recent years, advances in computer science have yielded algorithms so powerful that their creators have presented them as tools that can help us make decisions more efficiently and impartially. But the idea that algorithms are unbiased is a fantasy; in fact, they still end up reflecting human biases. And as they become ever more ubiquitous, we need to get clear on what they should — and should not — be allowed to do.
In a new book, A Human’s Guide to Machine Intelligence, Kartik Hosanagar, a University of Pennsylvania technology professor, argues we need an algorithmic bill of rights to protect us from the many risks AI is introducing into our lives, alongside the various benefits. People have called for such protections in the past, and in April, Sens. Cory Booker (D-NJ) and Ron Wyden (D-OR) introduced the Algorithmic Accountability Act. If passed, it would require companies to audit their algorithms for bias and discrimination. Some AI experts praised it as a “great first step” but noted that it leaves a number of concerns unaddressed.
All this got me wondering: Which demands, exactly, belong on an algorithmic bill of rights?
So I reached out to 10 experts (including Hosanagar) who are at the forefront of investigating how AI risk is creeping into the mundane aspects of life as well as high-stakes fields like immigration, medicine, and criminal justice. I asked them each to name a protection the public needs enshrined in law.
Allow me to present the result: a crowdsourced algorithmic bill of rights.
Transparency: We have the right to know when an algorithm is making a decision about us, which factors are being considered by the algorithm, and how those factors are being weighted.
Transparency is the No. 1 concern on people’s minds, judging by the responses I received. “We’re not even fully aware of when an algorithm is being used to make decisions for us or about us,” Hosanagar told me.
Say you’re applying for a mortgage. You deserve to know: Is an algorithm being used to make a decision about you? Is that decision based solely on the information you put down on your application form, or are your social media posts and other data obtained from third-party sources also being used? How does the mortgage approval algorithm rate different factors — does it place the greatest weight on income, medium weight on education, and low weight on current address, for example?
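To see what such a disclosure could look like in practice, here is a minimal sketch in Python of the kind of weighted scoring a mortgage algorithm might use. The factor names, weights, and approval threshold are invented for illustration, not drawn from any real lender; the point is that every one of them could be stated plainly to the applicant.

```python
# A minimal, hypothetical sketch of the kind of weighted scoring a mortgage
# approval algorithm might use. The factor names, weights, and threshold are
# invented for illustration, not taken from any real lender; the point is
# that all of them could be disclosed to the applicant.

APPROVAL_WEIGHTS = {
    "income": 0.6,             # greatest weight
    "education": 0.3,          # medium weight
    "address_stability": 0.1,  # low weight
}
APPROVAL_THRESHOLD = 0.7

def mortgage_score(applicant: dict) -> float:
    """Weighted sum of factors, each normalized to the range 0.0 to 1.0."""
    return sum(weight * applicant[factor]
               for factor, weight in APPROVAL_WEIGHTS.items())

def decide(applicant: dict) -> str:
    score = mortgage_score(applicant)
    # Transparency would mean telling the applicant the factors, the weights,
    # the score, and the threshold, not just "approved" or "denied".
    return "approved" if score >= APPROVAL_THRESHOLD else "denied"

print(decide({"income": 0.8, "education": 0.9, "address_stability": 0.2}))
```

With the factors, weights, and cutoff on the table, an applicant (or a regulator) could see exactly why a given application landed where it did.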
Cathy O’Neil, the author of Weapons of Math Destruction, put it this way in an email: “In situations where our financial lives, our livelihood, or our liberty is at risk — so not in the case of every algorithm under the sun — we should know what attributes about us are being used [and] we should know how our ‘scores’ depended on the values of those attributes.”
Explanation: We have the right to be given explanations about how algorithms affect us in a specific situation, and these explanations should be clear enough that the average person will be able to understand them.
Related to transparency is the demand for explainability. All algorithmic systems should carry something akin to a nutritional label laying out what went into them, according to Amy Webb, author of The Big Nine and founder of the Future Today Institute, which researches emerging technologies. “The terms of service for an AI application — or any service that uses algorithmic decision-making processes — should be written in language plain enough that a third grader can comprehend it,” she said. “It should be available in every language as soon as the application goes live.”
Consent: We have the right to give or refuse consent for any AI application that has a material impact on our lives or uses sensitive data, such as biometric data.
A demand for the right to consent has been gathering steam as more people realize that images of their faces are being used to power facial recognition technology. NBC reported that IBM had scraped a million photos of faces from the website Flickr — without the subjects’ or photographers’ permission. The news sparked a backlash. People may have consented to having their photos up on Flickr, but they hadn’t imagined their images would be used to train a technology that could one day be used to surveil them. Some states, like Oregon and Washington, are currently considering bills to regulate facial recognition.
The issue of consent extends well beyond that particular technology. Imagine you’re applying for a new job. Your prospective bosses inform you that your interview will be conducted by a robot — a practice that’s already in use today. Regardless of what they tout as the benefits of this AI system, you should have the right to give or withhold consent, according to MIT computer scientist Joy Buolamwini. “Permission must be granted,” she said, “not taken for granted.”
Freedom from bias: We have the right to evidence showing that algorithms have been tested for bias related to race, gender, and other protected characteristics — before they’re rolled out. The algorithms must meet standards of fairness and nondiscrimination and ensure just outcomes.
Like Buolamwini, who founded the Algorithmic Justice League to fight bias in automated decision-making systems, Yeshimabeit Milner is deeply concerned about AI that discriminates against people of color. As the founder and executive director of Data for Black Lives, Milner has drawn attention to problems with predictive policing (algorithmic systems for predicting where crime is likely to occur) and criminal risk assessments (algorithmic systems for predicting recidivism). Police officers and judges use both these systems to guide their decisions, despite evidence that they’re biased against black people.
Algorithmic bias can result when the data used to train an AI system isn’t diverse enough (say, if it includes mostly white men) or when it reflects biased decisions authorities made in the past. For example, if officers overpolice a certain neighborhood, producing a high arrest rate there, and that arrest data is then used to train an AI, the system can end up reinforcing the old bias.
Explaining why it’s so crucial for the law to protect against algorithmic bias, Milner said, “If a defendant is labeled ‘high risk’ by a recidivism algorithm, that can mean the difference between a fine and a prison sentence.” She added that where discriminatory algorithms go unchecked, they extend the shelf life of racist public policy: “The harms caused by decades of redlining have been amplified by new forms of ‘digital redlining’ like credit scores and predictive policing. It is not a coincidence that the communities that were labeled hazardous on redlining maps in 1933 are the predictive policing hotspots of today.”
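To make the demand for pre-rollout testing concrete, here is a minimal sketch of one check an auditor might run: comparing how often a model labels members of two groups “high risk,” sometimes called a demographic-parity check. The predictions, group labels, and single metric are assumptions chosen for illustration; a real audit would use production data and several fairness measures.

```python
# A minimal, hypothetical bias audit: compare how often a model labels members
# of two demographic groups "high risk" (a demographic-parity check). The
# predictions and group labels below are invented for illustration; a real
# audit would use production data and multiple fairness metrics, not just one.

def high_risk_rate(predictions, groups, group):
    """Fraction of people in `group` whom the model labeled high risk (1)."""
    in_group = [p for p, g in zip(predictions, groups) if g == group]
    return sum(in_group) / len(in_group)

# Hypothetical model outputs: 1 = "high risk", 0 = "low risk".
predictions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = high_risk_rate(predictions, groups, "A")
rate_b = high_risk_rate(predictions, groups, "B")
print(f"Group A high-risk rate: {rate_a:.0%}")   # 80%
print(f"Group B high-risk rate: {rate_b:.0%}")   # 20%
print(f"Disparity: {abs(rate_a - rate_b):.0%}")  # a large gap warrants scrutiny
```

A gap this large would not by itself prove discrimination, but under the right described above, the public would be entitled to evidence that checks like this were run before deployment, and to see what they found.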
Feedback mechanism: We have the right to exert some degree of control over the way algorithms work.
The appropriate type and level of control we have over a given algorithm will depend on its specific use. But we should always be able to communicate with an algorithmic system that’s making decisions for us. As Hosanagar explains in his book, “It can be as limited and straightforward as giving a Facebook user the power to flag a news post as potentially false; it can be as dramatic and significant as letting a passenger intervene when he is not satisfied with the choices a driverless car appears to be making.”
Portability: We have the right to easily transfer all our data from one provider to another.
The big companies running our data through their algorithms — such as Facebook, Twitter, Google, and Microsoft — haven’t generally made it easy for us to take back our data and opt out. Pedro Domingos, a University of Washington computer science professor, wants us to be able to transfer our data from one provider to another with one click. Without that portability, he said, “we risk being locked into one of the big providers, with increasingly negative consequences as the data becomes more important.”
Redress: We have the right to seek redress if we believe an algorithmic system has unfairly penalized or harmed us.
Too often, companies view their data as proprietary and don’t want to release it to researchers for external audit. “Corporate secrecy laws are a barrier to due process,” said Jason Schultz, the AI Now Institute’s research lead for law and policy. “They contribute to the ‘black box effect,’ rendering systems opaque and unaccountable, making it hard to assess bias.”
The right to redress — and all of the above rights — should supersede corporate secrecy laws that stand in the way of due process.
Algorithmic literacy: We have the right to free educational resources about algorithmic systems.
Because every citizen will be affected by algorithms, every citizen should have the opportunity to learn what algorithms are, how they work, and which risks they pose. Yet not everyone has the disposable income and time required to learn about them. Governments should offer this education for free.
Finland offers a promising example. Last year, the Nordic country announced plans to teach the basics of AI to 1 percent of its population — about 55,000 people — using a free online course called Elements of AI. The idea was to start with that relatively modest number and slowly build up. In short order, 140,000 people around the world registered for it (encouragingly, 40 percent were women). The scheme took off in part because it was pitched as a national challenge, and in part because it assured people with zero tech background that they could come to understand AI “with no complicated math or programming required,” as the course website says. The program has since spread to Sweden.
Independent oversight: We have the right to expect that an independent oversight body will be appointed to conduct retrospective reviews of algorithmic systems gone wrong. The results of these investigations should be made public.
Just as important as making sure algorithms are tested for bias before they’re rolled out is making sure they’re examined for unintended effects after they’re used. Eric Topol, a physician and the author of Deep Medicine, told me too many algorithms are validated only on computers, not in real-world clinical environments. “We have already learned that there is a chasm between the accuracy of an algorithm, especially determined this way, and a favorable impact on clinical outcomes,” he said, explaining that just because an algorithm appears to work great in a computer simulation doesn’t mean it’ll work as intended in all doctors’ offices.
Topol believes that once an algorithm is deployed in clinical practice, its results should be assessed and made available to doctors and patients.
The broader concept here is that we need to make sure problematic incidents are investigated after they occur. Ben Shneiderman, a computer science professor at the University of Maryland, argues that we need to create a National Algorithms Safety Board for this purpose.
“Other fields, like aviation safety, have come to understand that independent oversight helps to prevent deadly outcomes,” Shneiderman told me. “Therefore, I have proposed a National Algorithms Safety Board, which would investigate deadly accidents, like the Boeing 737 Max and Tesla crashes.”
Federal and global governance: We have the right to robust federal and global governance structures with human rights at their center. Algorithmic systems don’t stop at national borders, and they are increasingly used to decide who gets to cross borders, making international governance crucial.
Immigration is a high-stakes domain increasingly guided by automated decision-making. In Europe, three countries are planning to test AI lie detectors on asylum seekers at border checkpoints. Even Canada, a country usually seen as refugee-friendly, is using AI to vet immigrants and refugees, sparking an outcry from human rights lawyers.
“The use of algorithms in migration can create a high-risk laboratory of technological experiments,” Petra Molnar, a Toronto-based lawyer, told me. “The nuanced and complex nature of refugee and immigration decisions may be lost on these technologies.” To keep algorithms from leading to serious breaches of human rights at the border, we need to promote international norms around them.
This raises the question: Who should be tasked with enforcing these norms? Government regulators? The tech companies themselves?
Inspired by Shneiderman’s vision, Hosanagar advocates for the creation of an independent Algorithmic Safety Board, modeled on the Federal Reserve Board. Each country would have its own board at the federal level, and these boards would coordinate with one another to ensure some consistency across countries. Right now, some places have aggressive regulatory legislation (like the EU’s General Data Protection Regulation), while other countries have almost no regulation at all. Hosanagar wants to see international coordination handled by a body modeled on the International Telecommunication Union, which sets global standards for telecom networks, including cellphone communication.
He argues for a mix of government regulation and self-regulation from tech companies. We need both, in his opinion, because tech companies can’t always be trusted to police themselves, and government regulators can’t always understand the complexities of fast-developing AI technology by themselves.
How the algorithmic bill of rights was made
When I reached out to the 10 experts, some of them got back to me with more than one recommendation. But there was a lot of overlap between their ideas, so I streamlined them into the 10 demands listed above. Then I sent the completed bill to the experts so they could see what their peers had come up with and offer ideas for improvement.
Not every expert agreed with each and every item on the list. One or two took issue with specific recommendations, while others agreed with the broad strokes of all but had different ideas about how they should be implemented. Hosanagar, for example, said auditing doesn’t need to be performed by an oversight body, but could instead be done “by any independent team, including another company or another team within the organization.”
Finally, I want to emphasize that by its very nature, a bill of rights like this is a work in progress. AI is developing so fast that as time ticks on, we’ll almost certainly become aware of the need for other protections from risks we haven’t yet imagined. For now, having a concrete, if provisional, list of demands may help catalyze public conversation and action. If you believe something important has been left off the list, you’re welcome to drop me a note.