Though many people might not realize it, complicated algorithms, including AI in some cases, help cities like New York make decisions every day about everything from where kids go to school to who receives extra screening from police to which neighborhoods get more fire stations.
These systems have the potential to help government be more efficient by processing large volumes of information — like, say, DNA samples at crime scenes — rapidly. But if these systems are implemented poorly, they can also introduce bias along racial, gender, and class lines, exacerbating societal inequalities. And while researchers have shown that AI can be biased at an aggregate level, the individual victims of these biases often don't know when it's happening to them.
Take, for example, how black individuals are flagged at a higher rate by a risk assessment algorithm used across the country to decide whether someone convicted of a crime should receive parole, among other key legal decisions in the justice process. A ProPublica investigation found that the tool wrongly labeled black defendants as future criminals at almost twice the rate of white defendants.
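The disparity ProPublica reported is, at its core, a comparison of error rates across groups. Here is a minimal sketch of that kind of audit in Python, using invented synthetic records (the field names, groups, and numbers are all assumptions for illustration, not ProPublica's actual data or methodology):

```python
# Sketch of a group-wise error-rate audit: compute the false positive rate
# per group, i.e. the share of people flagged "high risk" among those who
# did NOT go on to reoffend. All data below is synthetic.
from collections import defaultdict

# Each record: (group, flagged_high_risk, reoffended)
records = [
    ("A", True, False), ("A", True, False), ("A", False, False),
    ("B", True, False), ("B", False, False), ("B", False, False),
]

def false_positive_rates(records):
    """False positive rate per group: flagged but did not reoffend,
    out of everyone in the group who did not reoffend."""
    fp = defaultdict(int)   # flagged, yet did not reoffend
    neg = defaultdict(int)  # all who did not reoffend
    for group, flagged, reoffended in records:
        if not reoffended:
            neg[group] += 1
            if flagged:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg}

print(false_positive_rates(records))
```

In this toy data, group A's false positive rate is twice group B's even though the flagging tool looks neutral at a glance — which is exactly the kind of aggregate pattern an individual defendant has no way to see from their own case.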
Unlike a decision made by a human being, there’s often no way to appeal an incorrect decision made by an algorithm — because of the often inscrutable “black box” of logic that determines the AI’s analysis. And it’s not only agencies in the legal and criminal justice system that use these types of tools. According to a report released last year by the research group AI Now Institute, social welfare agencies use algorithms to decide which families should receive a secondary visit from a social worker. Housing agencies use them to prioritize who should receive temporary or permanent shelter. Health agencies use software from vendors like IBM to assess who should be eligible for Medicaid.
The New York City Council passed a law in 2017 that would create a special task force to investigate city agencies’ use of algorithms and deliver a report with recommendations. Many applauded the move as a rare example of politicians getting ahead of technology’s impact on society rather than scrambling to grapple with its consequences.
But now, a year and a half later, several members of the task force and outside experts such as representatives from AI Now and the Data & Society Research Institute say they’re worried the group won’t be able to see its mission through. They cite a lack of information about how exactly the city uses these algorithms, many of which are still shrouded in secrecy.
These experts also point to the lack of public input from people whose day-to-day lives are potentially harmed by the use of algorithms. The concern about the once-promising project is a sign that even when a government puts effort into regulating new technology like AI, implementation can prove too complicated to handle. Compounding the problem, many of the algorithms are owned and run by private companies, including Microsoft, Amazon, and IBM, and exactly how they work can be protected as trade secrets. Without the information it needs, the task force is stalled in its analysis.
New York’s reckoning with artificial intelligence
“New York has a tremendous opportunity,” said Janet Haven, executive director of the Data & Society Research Institute, who was one of several outside experts who gave testimony about the task force at a public hearing on Thursday. “But time is growing short to deliver on the promise of this body.” The task force's public report is due later this year.
The biggest roadblock: People on the task force don't have a list of the systems they're studying. Despite repeated requests, the group hasn't been able to get hold of a list of all the types of automated decision-making technologies being used by city agencies. Representatives from the mayor's office who co-chair the task force have said that compiling such a list would be too onerous and that in some cases, the information is proprietary. Many outside advocates have characterized these as inadequate excuses. But this presents an existential problem for the task force: How can it identify specific biases in technology without knowing what exactly that technology is?
“You cannot build a road map for the future if you don’t know where you are today,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, one of the groups invited to speak at the recent public hearing. “You cannot build a comprehensive framework for [automated decision-making systems] if you don’t know what those tools look like.”
It's no surprise that city agencies wouldn't want to reveal sensitive information about the kinds of technologies they use, such as predictive policing tools, since doing so could potentially make it easier for would-be criminals to fly under the radar.
The mayor’s office says another reason it’s not sharing more information about how it’s using these tools is that it lacks a clear definition of what technologies are included in the term “automated decision systems.” Some have argued that even a spreadsheet could technically be an example of an automated decision-making system.
While it certainly seems like overkill for the city to spend its time compiling a list of every Excel document used by bureaucrats, critics say that shouldn't stop it from identifying the most complex technologies, such as AI, especially those with the potential to inflict the most harm if they perpetuate biases.
Without details, experts fear the task force will only be able to give general policy advice rather than specific examples of how the city can make its algorithms more fair.
“Generic recommendations will be ineffective,” said Haven. “If the city wanted generic advice, they could look to existing scholarship.”
Secrecy wrapped in secrecy
Critics are also taking exception to what they’re calling a lack of public outreach on the topic.
In the past year, the task force has met 18 times in closed meetings. Those meetings haven't been recorded. That's because the group wanted to create a “safe space” for discussion, according to Jeff Thamkittikasem, director of the Mayor's Office of Operations and a co-chair of the task force, who spoke at the recent hearing. Later this month, the task force will hold the first of two meetings open to the public — which outside experts say is long overdue. There's also an online form for the public to give input, but it was announced with little fanfare and is vague about what input it seeks.
“We need those individuals who are being impacted by this — whether that’s what school their kid goes to ... or housing issues — we need them to have a way to meaningfully understand how those tools work and how this task force is potentially addressing those issues,” said Cahn.
This isn't to say that AI can't have — or isn't already having — a positive impact as well. One city council member, Eric Ulrich, spoke of the potential for algorithms to help identify individuals at risk of drug addiction or diabetes and then help government provide those individuals with preventive services. He said such tools “could be very helpful with the opioid crisis right now” or with helping prevent diabetes, but acknowledged the need to balance privacy rights. “Of course it's nobody's business what someone had for dinner last night,” he said.
Another New York City council member, Brad Lander, asked experts at the hearing for “real-time consulting” about the fairness of a new system he's proposing to reduce reckless driving. His plan would create a system to flag individuals with an abnormally high number of reckless driving violations. The government would then take steps to intervene, such as by requiring those individuals to take driving lessons or, in more extreme cases, even revoking their driver's licenses if they fail to make changes.
It was an interesting use case, and an example of the kind of thing the task force might be able to assess more comprehensively as part of a larger framework for how to ensure such systems are fair. But in the absence of that process, what Lander was essentially asking for was on-the-fly advice from the two AI-focused nonprofit leaders in the room. They mentioned that the type of data going into Lander’s proposed system matters. For example, if it’s picking up red light violations from traffic cameras concentrated in low-income neighborhoods, then the system could reflect an inherent bias against those low-income populations.
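The experts' warning can be made concrete with a small sketch. In the hypothetical below (all names, thresholds, and coverage figures are invented for illustration), two drivers behave identically, but only violations that occur in view of a camera get recorded, so the driver in the heavily monitored neighborhood is the one who gets flagged:

```python
# Hypothetical sketch of the bias concern raised at the hearing: if the
# cameras feeding a "reckless driver" flagging system are concentrated in
# certain neighborhoods, drivers there get flagged more often even when
# actual behavior is identical. All numbers here are invented.
FLAG_THRESHOLD = 5  # violations before the city intervenes (assumption)

def detected_violations(actual_violations, camera_coverage):
    """Only violations that happen in view of a camera are recorded."""
    return int(actual_violations * camera_coverage)

drivers = [
    # (driver_id, neighborhood, actual_violations)
    ("d1", "downtown", 8),
    ("d2", "uptown", 8),  # identical behavior, different neighborhood
]
coverage = {"downtown": 0.9, "uptown": 0.3}  # camera density per area

flagged = [
    d for d, hood, v in drivers
    if detected_violations(v, coverage[hood]) >= FLAG_THRESHOLD
]
print(flagged)
```

Here only the downtown driver crosses the threshold, despite both drivers committing the same eight violations — the disparity comes entirely from where the sensors are, not from how people drive.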
What’s worrying is that the public doesn’t know enough about the other systems that, similar to Lander’s plan, could introduce inequities if they’re not implemented with care.
It would be a shame if New York — a city that's home to some of the leading experts on the ethical, technical, and legal complexities of AI — can't fulfill its goal of taking a first stab at developing guidelines for the public use of these systems. As major tech companies have struggled, over and over, to self-regulate their AI technologies, it increasingly falls to government to lead the way. If a place with as many resources as New York City can't figure it out, who will?
This article originally appeared on Recode.net.