Software is already pervasive in our society, but artificial intelligence software raises unique concerns even among the technological elite. The recent announcement that tech titans, including Elon Musk, have committed $1 billion to an artificial intelligence research center out of concern for what AI may become raises an important question: Will AI software obey the law of the land and adhere to our ethical standards?
It is true that AI software is increasingly autonomous and potentially self-modifying, but it is our view of AI as a hegemonic, monolithic entity that drives our fear of it. Just as there is no one “software” entity, there will be no one AI entity. So here is a new viewpoint on how AI will be kept in check — more AI.
Who will guard the guardians?
Think about it. Software security solutions are in fact software. Software-watching software is how we combat most dangers presented on the Internet today. So instead of treating AI as a monolithic entity, we foresee AI software designed by many different parties, where one AI can be used to check and counterbalance others. Economic markets and the political process are shaped by the interplay of actors with conflicting interests, with laws and mechanisms in place to prevent both collusion and the concentration of power in any single entity. We advocate a similar checks-and-balances structure for AI, by AI.
For illustration, consider a bank being sued for denying loan applications from African Americans at a higher rate than those from white Americans. The head of the bank vehemently denies the charge. She points out that for the last three years the bank has relied on an AI program to sort applications. The bank required the program’s designers to determine which applicants pose more risk than others, under one condition: that the AI software be colorblind. The bank stipulated that the software must not only refrain from using race as a category, but also avoid any surrogate variables, for instance, ZIP codes. Still, the plaintiffs showed that the program discriminated. They pointed to cases in which African-American applicants were denied loans despite having credit scores as good as or better than those of white applicants whose loans had been approved.
The finding of discrimination does not settle the matter. The question of intent remains. The law treats deliberate offenses much more harshly than unintended ones — for example, murder versus involuntary manslaughter. Let’s assume that the bank, and the software designers, are honest and transparent. Could an AI program discriminate, even if it was not designed to do so?
The answer lies in the distinct nature of AI programs: They mine large amounts of data to draw their own conclusions, in effect making choices that could take the program far from the designers’ original guidelines. Indeed, these programs continually change their models as new data are entered. It is hence quite possible that the program developed a model that is indeed discriminatory. Likewise, how do we ensure that autonomous cars obey traffic laws, and make ethical on-the-spot driving decisions as they evolve and encounter unanticipated situations?
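This proxy effect is easy to reproduce. The following is a minimal sketch with entirely invented data: a decision rule that never sees race, only a neighborhood score (a stand-in for ZIP-code-linked variables), still produces sharply different approval rates when that score happens to correlate with race in the population.

```python
import random

random.seed(0)

# Synthetic applicants. Race is never shown to the decision rule, but the
# "neighborhood score" (an invented stand-in for ZIP-code-linked data) is
# correlated with race in this toy population.
def make_applicant():
    race = random.choice(["A", "B"])
    # In this invented population, group B lives in lower-scored areas on average.
    base = 70 if race == "A" else 55
    neighborhood_score = base + random.gauss(0, 10)
    return race, neighborhood_score

def colorblind_decision(neighborhood_score):
    # The rule sees only the proxy variable, never race.
    return neighborhood_score >= 60

applicants = [make_applicant() for _ in range(10_000)]
approved = {"A": 0, "B": 0}
total = {"A": 0, "B": 0}
for race, score in applicants:
    total[race] += 1
    approved[race] += colorblind_decision(score)

for race in ("A", "B"):
    print(race, "approval rate:", round(approved[race] / total[race], 2))
```

Despite the rule being formally colorblind, the two groups end up with very different approval rates, which is exactly the pattern the hypothetical plaintiffs could point to.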
What we need is for the AI community to research and design “guardian AI”: second-order AI software that will police AI. That is, use AI to check AI in both senses of the term: to verify its claims and to keep it in line. A person cannot keep up with the complex and evolving nature of AI software, and ordinary software will fall short of keeping AI legal and ethical. Guardian programs could potentially determine whether a problem stems from an error in the original program or from its automated learning of new models.
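One of the simplest checks such a guardian could run is a disparate-impact audit of a black-box decision program. Here is a minimal sketch; the function names, data and the 0.8 threshold (an echo of the “four-fifths rule” from US employment law, borrowed here purely for illustration) are all our assumptions, not an established lending standard.

```python
# A toy guardian: treat the audited AI as a black box and compare its
# approval rates across groups known to the auditor (not to the model).
def audit_disparate_impact(decide, applicants, group_of, threshold=0.8):
    """decide:    black-box function, applicant -> bool (approve or not).
    group_of:  maps an applicant to its externally known group label.
    Returns (ratio of lowest to highest group approval rate, passes?)."""
    approved, total = {}, {}
    for a in applicants:
        g = group_of(a)
        total[g] = total.get(g, 0) + 1
        approved[g] = approved.get(g, 0) + bool(decide(a))
    rates = {g: approved[g] / total[g] for g in total}
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio >= threshold

# Hypothetical usage with an invented decision rule and applicant pool:
applicants = [{"group": "A", "score": s} for s in range(50, 90, 2)] + \
             [{"group": "B", "score": s} for s in range(40, 80, 2)]
ratio, ok = audit_disparate_impact(
    lambda a: a["score"] >= 60,   # the black-box "loan AI" under audit
    applicants,
    lambda a: a["group"],
)
print(f"impact ratio = {ratio:.2f}, passes threshold: {ok}")
```

The key design point is that the guardian needs no access to the audited program’s internals, only to its decisions, which is what would let it protect trade secrets while still verifying compliance.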
In fact, we need a slew of AI guardians that would monitor, audit, interrogate, verify compliance and, in extreme cases, enforce the law and even shut down some programs. These AI guardians will have their work cut out for them, given the rapidly growing use of AI in fields as diverse as marketing (IBM has put Watson to use here), warfare (DARPA is developing AI-driven robot soldiers and pilotless bombers) and entertainment (see the new Barbie).
Consider the following examples:
- Manipulation of searches: An organization called FairSearch has charged that Google searches are “not in the best interests of its users,” but are instead aimed at advancing the company’s business interests. In the future, an AI audit could help assure the public that Google’s AI algorithm is beyond reproach, and do so without direct human involvement, thus protecting Google’s proprietary trade secrets.
- Car safety: The need for AI checking on AI will become particularly evident when driverless cars are sold on a large scale, as AI programs begin to interact with one another on the road, with potentially unpredictable consequences. The public will want to know how these cars are programmed to handle encounters with small animals, bike riders and pedestrians. AI inspectors will play a key role here. Given that the AI programs of driverless cars will continue to evolve as these cars are more widely used, we will need an AI monitoring program that goes along for the ride and determines on the spot how the AI drivers are performing.
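The “goes along for the ride” idea can be sketched as a runtime monitor: a separate program that watches the driving AI’s commanded actions and flags any that break a hard rule. Everything here is illustrative; the action format, the rule (a speed limit) and the trace are invented, and a real monitor would cover far richer rules.

```python
# Sketch of an on-board AI inspector: it never drives the car itself,
# it only checks each commanded action against a hard rule and records
# violations for auditing or escalation.
class SpeedLimitMonitor:
    def __init__(self, limit_kmh):
        self.limit_kmh = limit_kmh
        self.violations = []  # (timestamp, commanded speed) pairs

    def check(self, timestamp, commanded_speed_kmh):
        """Return True if the command is allowed; log and veto otherwise."""
        if commanded_speed_kmh > self.limit_kmh:
            self.violations.append((timestamp, commanded_speed_kmh))
            return False
        return True

monitor = SpeedLimitMonitor(limit_kmh=50)
# Invented trace of speeds commanded by a hypothetical driving AI:
trace = [(0, 42), (1, 48), (2, 55), (3, 49)]
for t, speed in trace:
    monitor.check(t, speed)
print("violations:", monitor.violations)
```

Because the monitor is independent of the driving program, it keeps working even as that program evolves, which is the point of having a guardian ride along.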
- Privacy: American laws ban the disclosure of certain particularly sensitive kinds of information, especially the disclosure of medical information by physicians, clinics and hospitals. However, it is not illegal to use nonsensitive information to ferret out sensitive information, for instance, to divine that a woman has cancer after she missed several days of work each month for six months and purchased a wig and a large amount of vitamins. Such information can be used by banks to call in loans or refuse to make them, and by employers to refuse to hire or promote sick people. If such a practice is deemed illegal, AI audits could be used to ensure the law is heeded.
- Shutting down malware: American adversaries are reportedly installing malware in our power grid that could be employed to shut down American power grids, a threat recently spelled out in “Lights Out,” a book by Ted Koppel. In the future, such malware is very likely to be married with AI programs that would allow it to morph when it encounters new cybersecurity measures. AI guardians would be needed to detect such self-modifying malware and to shut it down.
There is no doubt that as AI continues to advance into our lives, other AI programs need to be developed to help keep it in check. The brainteaser, then, is: Who will guard the guardians? Who will prevent the AI inspectors, auditors, compliance agents and other such guardians from violating the law themselves? So far, no one seems to have come up with a satisfactory answer to this challenge. However, we do know that society benefits when no AI is all-powerful. Power needs to be divided among guardians so that they can check and balance one another. Above all, the people subject to software should have the final say.
Amitai Etzioni is a professor of international affairs at The George Washington University. He previously served as senior adviser at the Carter White House; has taught at Columbia University, Harvard and the University of California at Berkeley; and served as president of the American Sociological Association. His newest book, “Privacy in a Cyber Age,” was recently published by Palgrave Macmillan.
Oren Etzioni is CEO of the Allen Institute for Artificial Intelligence. He has been a professor at the University of Washington’s Computer Science department since 1991. He has also founded or co-founded several companies, including Farecast (sold to Microsoft in 2008) and Decide (sold to eBay in 2013), and is the author of more than 100 technical papers that have garnered over 23,000 citations. Reach him @etzioni.
This article originally appeared on Recode.net.