
Artificial intelligence doesn’t have to be evil. We just have to teach it to be good.

There’s a pressing need to find a moral compass to direct the intelligent machines we’re increasingly sharing our lives with.

[Image: Microsoft’s problematic Tay bot. Source: Twitter]

Sensational reports surfaced earlier this year about Google’s DeepMind AI growing “highly aggressive” when left to its own devices. Researchers at Google had AI “agents” face off in 40 million rounds of a fruit-gathering computer game. When apples grew scarce, the agents started attacking each other, killing off the competition—humanity’s worst impulses echoed ... or so the critics said.

Nor is it hard to find other examples of AI “learning” the wrong types of behavior, like Microsoft’s infamous Tay bot. Deployed on Twitter in early 2016, Tay was supposed to “learn” from user interactions. (“The more you talk, the smarter Tay gets,” boasted her profile.) But she was beset by racist, anti-Semitic and misogynistic commentary almost from the start. Learning from her environment, Tay began spitting out a string of inflammatory responses, including, infamously, “bush did 9/11, and Hitler would have done a better job than the monkey we have now.” Microsoft developers pulled the plug a mere 16 hours after Tay’s release.

This is a simple example. But herein lies the challenge. Yes, billions of people contribute their thoughts, feelings and experiences to social media every single day. But training an AI platform on social media data, with the intent to reproduce a “human” experience, is fraught with risk. You could liken it to raising a baby on a steady diet of Fox News or CNN, with no input from its parents or social institutions. In either case, you might be breeding a monster.

The reality is that while social data may well reflect the digital footprint we all leave, it’s neither true to life nor necessarily always pretty. Some social posts reflect an aspirational self, perfected beyond human reach; others, veiled by anonymity, show an ugliness rarely seen “in real life.”

Ultimately, social data — alone — represents neither who we actually are nor who we should be. Deeper still, as useful as the social graph can be in providing a training set for AI, what’s missing is a sense of ethics or a moral framework to evaluate all this data. From the spectrum of human experience shared on Twitter, Facebook and other networks, which behaviors should be modeled and which should be avoided? Which actions are right and which are wrong? What’s good ... and what’s evil?

Coding religion and ethics into AI

Grappling with how to build ethics into AI isn’t necessarily a new problem. As early as the 1940s, Isaac Asimov was hard at work formulating his Laws of Robotics. (The first law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.)

But these concerns aren’t science fiction any longer. There’s a pressing need to find a moral compass to direct the intelligent machines we’re increasingly sharing our lives with. (This grows even more critical as AI begins to make its own AI, without human guidance at all, as is already the case with Google’s AutoML.) Today, Tay is a relatively harmless annoyance on Twitter. Tomorrow, she may well be devising strategy for our corporations ... or our heads of state. What rules should she follow? Which should she flout?

Here’s where science comes up short. The answers can’t be gleaned from any social data set. The best analytical tools won’t surface them, no matter how large the sample size.

But they just might be found in the Bible.

And the Koran, the Torah, the Bhagavad Gita and the Buddhist Sutras. They’re in the work of Aristotle, Plato, Confucius, Descartes and other philosophers both ancient and modern. We’ve spent literally thousands of years devising rules of human conduct — the basic precepts that allow us (ideally) to get along and prosper together. The most powerful of these principles have survived millennia with little change, a testament to their utility and validity. More importantly, at their core, these schools of thought share some remarkably similar dictates about moral and ethical behavior — from the Golden Rule and the sacredness of life to the value of honesty and virtues of generosity.

As AI grows in sophistication and application, we need, more than ever, a corresponding flourishing of religion, philosophy and the humanities. In many ways, the promise — or peril — of this most cutting-edge of technologies is contingent on how effectively we apply some of the most timeless wisdom. The approach doesn’t have to, and shouldn’t, be dogmatic or aligned with any one creed or philosophy. But AI, to be effective, needs an ethical underpinning. Data alone isn’t enough. AI needs religion: A code that doesn’t change based on context or training set.
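One way to read “a code that doesn’t change based on context or training set” is as a fixed constraint layer sitting above whatever the model learns from data. The sketch below is purely illustrative — the action names, scores and rule set are hypothetical, not drawn from any real system:

```python
# Minimal sketch of a fixed ethical rule layer above a learned policy.
# The rule set is hand-written and never updated by training.
# All action names and scores here are hypothetical.

FORBIDDEN = {"deceive_user", "harm_human"}  # the invariant "code"

def constrained_decision(scored_actions):
    """Return the highest-scoring action that violates no fixed rule.

    scored_actions: list of (action_name, model_score) pairs produced
    by a learned model; the rule layer only filters, never rescores.
    """
    for action, _score in sorted(scored_actions, key=lambda p: -p[1]):
        if action not in FORBIDDEN:
            return action
    raise RuntimeError("no permissible action available")

# The model may rank a forbidden action highest; the rule layer
# overrides it regardless of context or training data.
print(constrained_decision([("harm_human", 0.9), ("assist_user", 0.7)]))
# -> assist_user
```

The design point is the separation of concerns: the data-driven component proposes, but a small, human-authored rule set disposes.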

In place of parents and priests, responsibility for this ethical education will increasingly rest on frontline developers and scientists. Ethics hasn’t traditionally factored into the training of computer engineers; this may have to change. Understanding hard science alone isn’t enough when algorithms have moral implications. As emphasized by leading AI researcher Will Bridewell, it’s critical that future developers are “aware of the ethical status of their work and understand the social implications of what they develop.” He goes so far as to advocate study in Aristotle’s ethics and Buddhist ethics so they can “better track intuitions about moral and ethical behavior.”

On a deeper level, responsibility rests with the organizations that employ these developers, the industries they’re part of, the governments that regulate those industries and — in the end — us. Right now, public policy and regulation on AI remains nascent, if not nonexistent. But concerned groups are raising their voices. OpenAI — co-founded by Elon Musk and Sam Altman — is pushing for oversight. Tech leaders have come together in the Partnership on AI to explore ethical issues. Watchdogs like AI Now are popping up to identify bias and root it out.

What they’re all searching for, in one form or another, is an ethical framework to inform how AI converts data into decisions — in a way that’s fair, sustainable and representative of the best of humanity, not the worst.

This isn’t a pipe dream. In fact, it’s eminently within reach. It’s worth pointing out that in the case of Google’s “highly aggressive” fruit-gathering AI, researchers eventually switched up the context. Algorithms were deliberately tweaked to make cooperative behavior beneficial. In the end, it was the agents that learned to work together who triumphed. The lesson: AI can reflect the better angels of our nature, if we show it how.
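One way such a tweak can work is reward shaping: changing the payoffs so that cooperating, rather than attacking, is the winning strategy. The toy model below uses made-up numbers, not DeepMind’s actual parameters — it only shows how a small bonus for mutual gathering flips the best response:

```python
# Toy two-agent gathering game. Without a cooperation bonus the
# payoffs form a prisoner's dilemma and attacking dominates; adding
# a shared-surplus bonus makes mutual gathering the best response.
# All numbers are hypothetical, not DeepMind's actual experiment.

PAYOFF = {  # (my_action, other_action) -> my reward
    ("gather", "gather"): 3.0,
    ("gather", "attack"): 1.0,
    ("attack", "gather"): 4.0,
    ("attack", "attack"): 2.0,
}

def reward(mine, other, coop_bonus=0.0):
    r = PAYOFF[(mine, other)]
    if mine == other == "gather":
        r += coop_bonus  # the deliberate tweak rewarding cooperation
    return r

def best_response(other, coop_bonus=0.0):
    """The action maximizing my reward against the other's action."""
    return max(("gather", "attack"),
               key=lambda a: reward(a, other, coop_bonus))

# Scarce apples, no bonus: attacking a gatherer pays best.
print(best_response("gather"))                   # -> attack
# With the cooperation bonus, mutual gathering wins.
print(best_response("gather", coop_bonus=2.0))   # -> gather
```

The agents’ learning algorithm is unchanged; only the incentives differ — which is the point of the lesson above.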


Ryan Holmes is the founder and CEO of Hootsuite. He started the company in 2008, and has helped grow it into the world’s most widely used social relationship platform, with 16 million-plus users, including more than 800 of the Fortune 1000 companies. Reach him @Invoker.


This article originally appeared on Recode.net.
