How worried should we be about artificial intelligence?
Recently, I asked a number of AI researchers this question. The responses I received varied considerably; it turns out there is not much agreement about the risks or implications.
Non-experts are even more confused about AI and its attendant challenges. Part of the problem is that “artificial intelligence” is an ambiguous term. By AI one can mean a Roomba vacuum cleaner, a self-driving truck, or one of those death-dealing Terminator robots.
There are, generally speaking, three forms of AI: weak AI, strong AI, and superintelligence. At present, only weak AI exists. Strong AI and superintelligence are theoretically possible, even probable, but we’re not there yet.
Understanding the differences between these forms of AI is essential to analyzing the potential risks and benefits of this technology. There are a whole range of concerns that correspond to different kinds of AI, some more worrisome than others.
To help make sense of this, here are some key distinctions you need to know.
Artificial Narrow Intelligence (often called weak AI) is an algorithmic or specialized intelligence. This has existed for several years. Think of the Deep Blue machine that beat world champion Garry Kasparov in chess. Or Siri on your iPhone. Or even speech recognition and processing software. These are forms of nonsentient intelligence with a relatively narrow focus.
It might be too much to call weak AI a form of intelligence at all. Weak AI is smart and can outperform humans at a single task, but that’s all it can do. It’s not self-aware or goal-driven, and so it doesn’t present any apocalyptic threats. But to the extent that weak AI controls vital software that keeps our civilization humming along, our dependence upon it does create some vulnerabilities. George Dvorsky, a Canadian bioethicist and futurist, has explored some of these issues.
Then there’s Artificial General Intelligence, or strong AI; this refers to a general-purpose system, or what you might call a “thinking machine.” Artificial General Intelligence, in theory, would be as smart — or smarter — than a human being at a wide range of tasks; it would be able to think, reason, and solve complex problems in myriad ways.
It’s debatable whether strong AI could be called “conscious”; at the very least, it would demonstrate behaviors typically associated with consciousness — commonsense reasoning, natural language understanding, creativity, strategizing, and generally intelligent action.
Artificial General Intelligence does not yet exist. A common estimate is that we’re perhaps 20 years away from this breakthrough. But nearly everyone concedes that it’s coming. Organizations like the Allen Institute for Artificial Intelligence (founded by Microsoft co-founder Paul Allen) and Google’s DeepMind project, along with many others across the world, are making incremental progress.
There are surely complications involved with this form of AI, but it’s not the stuff of dystopian science fiction. Strong AI would aim at a general-purpose, human-level intelligence; unless it undergoes rapid recursive self-improvement, it’s unlikely to pose a catastrophic threat to human life.
The major challenges with strong AI are economic and cultural: job loss due to automation, economic displacement, privacy and data management, software vulnerabilities, and militarization.
Finally, there’s Artificial Superintelligence. Oxford philosopher Nick Bostrom defined this form of AI in a 2014 interview with Vox as “any intellect that radically outperforms the best human minds in every field, including scientific creativity, general wisdom and social skills.” When people fret about the hazards of AI, this is what they’re talking about.
A truly superintelligent machine would, in Bostrom’s words, “become extremely powerful to the point of being able to shape the future according to its preferences.” As yet, we’re nowhere near a fully developed superintelligence. But the research is underway, and the incentives for advancement are too great to constrain.
Economically, the incentives are obvious: The first company to produce artificial superintelligence will profit enormously. Politically and militarily, the potential applications of such technology are infinite. Nations, if they don’t see this already as a winner-take-all scenario, are at the very least eager to be first. In other words, the technological arms race is afoot.
The question, then, is how far away from this technology are we, and what are the implications for human life?
For his book Superintelligence, Bostrom surveyed the top experts in the field. One of the questions he asked was, “By what year do you think there is a 50 percent probability that we will have human-level machine intelligence?” The median answer was somewhere between 2040 and 2050. That, of course, is just a prediction, but it’s an indication of how close we might be.
It’s hard to know when an artificial superintelligence will emerge, but we can say with relative confidence that it will at some point. If, in fact, intelligence is a matter of information processing, and if we assume that we will continue to build computational systems at greater and greater processing speeds, then it seems inevitable that we will create an artificial superintelligence. Whether we’re 50 or 100 or 300 years away, we are likely to cross the threshold eventually.
When it does happen, our world will change in ways we can’t possibly predict.
Why we should be worried
We cannot assume that a vastly superior intelligence is containable; it would likely work to improve itself, to enhance its capabilities. (This is what Bostrom calls the “control problem.”) A hyper-intelligent machine might also achieve self-awareness, in which case it would begin to develop its own ends, its own ambitions. The hope that such machines will remain instruments of human production is just that — a hope.
If an artificial superintelligence does become goal-driven, it might develop goals incompatible with human well-being. Or, in the case of Artificial General Intelligence, it may pursue compatible goals via incompatible means. The canonical thought experiment here was developed by Bostrom. Let’s call it the paperclip scenario.
Here’s the short version: Humans create an AI designed to produce paperclips. It has one utility function — to maximize the number of paperclips in the universe. Now, if that machine were to undergo an intelligence explosion, it would likely work to optimize its single function — producing paperclips. Such a machine would continually innovate new ways to make more paperclips. Eventually, Bostrom says, that machine might decide that converting all of the matter it can — including people — into paperclips is the best way to achieve its singular goal.
Admittedly, this sounds a bit stupid. But it’s not; it only appears so when you think about it from the perspective of a moral agent. Human behavior is guided and constrained by values — self-interest, compassion, greed, love, fear, etc. An Artificial General Intelligence, presumably, would be driven only by its original goal, and that could lead to dangerous, and unanticipated, consequences.
Again, the paperclip scenario applies to strong AI, not superintelligence. The behavior of a superintelligent machine would be even less predictable. We have no idea what such a being would want, or why it would want it, or how it would pursue the things it wants. What we can be reasonably sure of is that it will find human needs less important than its own needs.
Perhaps it’s better to say that it will be indifferent to human needs, just as human beings are indifferent to the needs of chimps or alligators. It’s not that human beings are committed to destroying chimps and alligators; we just happen to do so when the pursuit of our goals conflicts with the well-being of less intelligent creatures.
And this is the real fear that people like Bostrom have of superintelligence. We have to prepare for the inevitable, he told me recently, and “take seriously the possibility that things could go radically wrong.”