Is the AI apocalypse near? Movies like the Terminator franchise and the Matrix have long portrayed dystopian futures in which computers develop superhuman intelligence and destroy the human race — and some serious thinkers regard this kind of scenario as a real danger.
But these thinkers overestimate the likelihood that we'll have computers as smart as human beings and exaggerate the danger that such computers would pose to the human race. In reality, the development of intelligent machines is likely to be a slow and gradual process, and computers with superhuman intelligence, if they ever exist, will need us at least as much as we need them. Here's why.
1) Genuine intelligence requires a lot of practical experience
Nick Bostrom, Ray Kurzweil, and other theorists of superhuman intelligence have seemingly infinite faith in raw computational power to solve almost any intellectual problem. Yet in many cases, a shortage of intellectual horsepower isn't the real problem.
To see why, imagine taking a brilliant English speaker who has never spoken a word of Chinese, locking her in a room with an enormous stack of books about the Chinese language, and asking her to become fluent in speaking Chinese. No matter how smart she is, how long she studies, or how many textbooks she has, she's not going to be able to learn enough to pass herself off as a native Chinese speaker.
That's because an essential part of becoming fluent in a language is interacting with other fluent speakers. Talking to natives is the only way to learn local slang, discover subtle shades in the meanings of words, and learn about social conventions and popular conversation topics. In principle, all of these things could be written down in a textbook, but in practice most of them aren't — in part because they vary so much from place to place and over time.
A machine trying to develop human-level intelligence faces a much more severe version of this same problem. A computer program has never grown up in a human family, fallen in love, or been cold, hungry, or tired. In short, it lacks a huge amount of the context that allows human beings to relate naturally to one another.
And a similar point applies to lots of other problems intelligent machines might tackle, from drilling an oil well to helping people with their taxes. Most of the information you need to solve hard problems isn't written down anywhere, so no amount of theoretical reasoning or number crunching, on its own, will get you to the right answers. The only way to become an expert is by trying things and seeing if they work.
And this is an inherently difficult thing to automate, since it requires conducting experiments and waiting to see how the world responds. That means scenarios in which computers rapidly outpace human beings in knowledge and capabilities don't make sense — smart computers would have to do the same kind of slow, methodical experimentation people do.
2) Machines are extremely dependent on humans
In the Terminator series, a military AI called Skynet becomes self-aware and begins using military hardware to attack humans.
This kind of scenario drastically underestimates how much machines depend on human beings to keep them working. A modern economy consists of millions of different kinds of machines that perform a variety of specialized functions. While a growing number of these machines are automated to some extent, virtually all of them depend on humans to supply power and raw materials, repair them when they break, manufacture more when they wear out, and so forth.
You might imagine humanity creating still more robots to perform these maintenance functions. But we're nowhere close to having this kind of general-purpose robot.
Indeed, building such a robot might be impossible due to a problem of infinite regress: robots capable of building, fixing, and supplying all the machines in the world would themselves be fantastically complex. Still more robots would be needed to service them. Evolution solved this problem by starting with the cell, a relatively simple, self-replicating building block for all life. Today's robots don't have anything like that and (despite the dreams of some futurists) are unlikely to any time soon.
This means that, barring major breakthroughs in robotics or nanotechnology, machines are going to depend on humans for supplies, repairs, and other maintenance. A smart computer that wiped out the human race would be committing suicide.
3) The human brain might be really difficult to emulate
Bostrom argues that, if nothing else, scientists will be able to produce at least human-level intelligence by emulating the human brain, an idea the economist Robin Hanson has also promoted. But that's a lot harder than it sounds.
Digital computers are capable of emulating the behavior of other digital computers because computers function in a precisely defined, deterministic way. To simulate a computer, you just have to carry out the sequence of instructions that the computer being modeled would perform.
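To see how trivial deterministic emulation is, here's a toy sketch (the instruction set and register names are invented for illustration): a "machine" with three instructions can be emulated perfectly in a few lines, because each instruction has one exact, repeatable effect.

```python
# A toy deterministic "machine" with two registers and three instructions.
# Emulating it is easy: execute each instruction in order, and the
# emulated state matches the real machine's state exactly, every time.
def run(program):
    regs = {"a": 0, "b": 0}
    for op, *args in program:
        if op == "set":        # set <reg> <value>
            regs[args[0]] = args[1]
        elif op == "add":      # add <dst> <src>: dst += src
            regs[args[0]] += regs[args[1]]
        elif op == "mul":      # mul <dst> <src>: dst *= src
            regs[args[0]] *= regs[args[1]]
    return regs

program = [("set", "a", 3), ("set", "b", 4),
           ("add", "a", "b"), ("mul", "a", "a")]
print(run(program))  # always {'a': 49, 'b': 4}, on any run, on any machine
```

Because every step is exact, there is no accumulation of error: a billion-instruction program emulates just as faithfully as a four-instruction one.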
The human brain isn't like this at all. Neurons are complex analog systems whose behavior can't be modeled precisely the way digital circuits can. And even a slight imprecision in the way individual neurons are modeled can lead to a wildly inaccurate model for the brain as a whole.
A good analogy here is weather simulation. Physicists have an excellent understanding of the behavior of individual air molecules. So you might think we could build a model of the earth's atmosphere that predicts the weather far into the future. But so far, weather simulation has proven to be a computationally intractable problem. Small errors in early steps of the simulation snowball into large errors in later steps. Despite huge increases in computing power over the last couple of decades, we've only made modest progress in being able to predict future weather patterns.
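The error-snowballing dynamic described above can be demonstrated with a toy chaotic system. The logistic map below is a standard textbook stand-in for weather-like dynamics (the starting values and parameter are illustrative, not a weather model): two simulations that begin a ten-billionth apart end up completely different within a few dozen steps.

```python
# Logistic map: a simple chaotic system that shows why tiny initial
# errors wreck long-range forecasts. Two runs start 1e-10 apart.
def logistic_step(x, r=4.0):
    return r * x * (1.0 - x)

def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1], r))
    return xs

a = trajectory(0.3, 60)
b = trajectory(0.3 + 1e-10, 60)  # perturbed initial condition

for n in (1, 20, 40, 60):
    print(f"step {n:2d}: difference = {abs(a[n] - b[n]):.2e}")
# The gap roughly doubles each step: negligible at first, then
# comparable to the signal itself within a few dozen steps.
```

The same multiplication of small errors, applied to billions of imperfectly modeled neurons instead of one equation, is why a "slightly imprecise" brain emulation could behave nothing like a brain.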
Simulating a brain precisely enough to produce intelligence is a much harder problem than simulating a planet's weather patterns. There's no reason to think scientists will be able to do it in the foreseeable future.
4) To get power, relationships are more important than intelligence
Bostrom suggests that an intelligent machine could "become extremely powerful to the point of being able to shape the future according to its preferences." But if we think about how human societies work, it's obvious that intelligence by itself isn't sufficient to become powerful.
If it were, societies would be run by their scientists, philosophers, or chess prodigies. Instead, America — like most societies around the world — is run by men like Ronald Reagan, Bill Clinton, and George W. Bush. These men became powerful not because they were unusually bright, but because they were well-connected, charismatic, and knew how to offer the right combination of carrots and sticks to get others to do their bidding.
It's true that brilliant scientists have played an important role in creating powerful technologies such as the atomic bomb. And it's conceivable that a super-intelligent computer would come up with similar breakthroughs. But building new technologies and putting them into practice usually requires a lot of cash and manpower, which only powerful institutions like governments and large corporations can muster. The scientists who designed the atomic bomb needed Franklin Roosevelt to fund it.
The same point applies to intelligent computers. Any plausible plan for taking over the world would require the cooperation of thousands of people. There's no reason to think a computer would be any more effective at enlisting their assistance for an evil plot than a human scientist would be. Indeed, given that persuasion often depends on long-standing friendships, in-group loyalties, and charisma, a disembodied, friendless computer program would be at a huge disadvantage.
A similar point applies to the "singularity," Ray Kurzweil's idea that computers will someday become so intelligent that humans will no longer be able to understand what they're doing. The most powerful ideas aren't the ones that only their inventor can understand. Rather, powerful ideas are ones that can be widely understood and adopted by many people, multiplying their effect on the world. That will be as true of computer-generated ideas as it is of ideas generated by people. To change the world, a super-intelligent computer would need to bring the human race along with it.
5) The more intelligence there is in the world, the less valuable it will become
You might expect that computers will use their superior intelligence to become fabulously wealthy and then use their vast wealth to bribe humans into doing their bidding. But this ignores an important economic principle: as a resource grows more abundant, its value falls.
Sixty years ago, it cost millions of dollars to buy a computer that could do less than a modern smartphone. Today's computers can do vastly more than earlier generations, but the value of computing power has fallen even faster than computers' capabilities have improved.
So the first super-intelligent computer might be able to earn a lot of money, but its advantage will be fleeting. As computer chips continue getting cheaper and more powerful, people will build more and more super-intelligent computers. The unique capabilities of super-intelligent computers, whatever those turn out to be, will become commodities.
In a world of abundant intelligence, the most valuable resources will be those that are naturally limited, like land, energy, and minerals. Since those resources are controlled by human beings, we'll have at least as much leverage over intelligent computers as they'll have over us.