Robots that can kill people aren't science fiction anymore: they're reality. Russia has deployed armed robots to guard its missile bases; unlike drones, these machines can select targets and decide to fire on them without any human input. Russia wants to expand its robotic capabilities considerably, and several other countries likely do as well. We're slouching towards a future where robots play a frontline role in combat.
The armed robots issue is becoming so real, so fast, that 87 countries sat down at a United Nations-convened conference from May 13th to the 15th to discuss banning the things. Those nations, including Russia, China, and the United States, discussed amending the UN Convention on Certain Conventional Weapons, which 117 countries have accepted, to prohibit the use of armed robots during wartime. A lot of the news coverage on this issue has treated robot arms control as if it's a joke or a novelty. It's neither: for over a year, Human Rights Watch has been building a campaign pressing for a ban on military robots, arguing that they pose an unacceptable threat to civilian populations. Are they right? Should we be banning what HRW calls "killer robots"?
The debate about robots in warfare comes down to the question of whether they would make war crimes more or less likely. There are serious arguments on either side. In many ways, this new argument about robots is an extension of a much older argument about why war crimes happen and how to prevent them. This isn't a joke anymore: the debate over military robotics is about preventing horrific abuse of real people.
UPDATE: 87 countries attended CCW UN experts meeting on lethal autonomous weapons systems at @UNGeneva this week! pic.twitter.com/R9cYJfJKoA
— Stop Killer Robots (@BanKillerRobots) May 16, 2014
You may have reasonably assumed that the debate over military robots would come down to programming; for example, whether robots can be taught how to distinguish civilians from combatants. And often this is where the discussion centers. But the much bigger issue isn't the robots; it's how their human controllers will use them.
Russia's crude robots notwithstanding, everyone agrees our current AI isn't adequate to make moral distinctions. Debating the morality of hypothetical robot AI may be interesting, but at this point it's just that: hypothetical. So it's hard to argue for or against military robots based on the strengths or weaknesses of an artificial intelligence that does not yet exist.
Robotics advances in small steps: automated systems designed to shoot down missiles and homing torpedoes are semi-robotic. As technology like that advances, we'll get a better sense of whether robots will be safe for war zones.
There are in-principle arguments that aren't dependent on the specifics of programming. At the UN, a number of experts argued that robots would be more humane warfighters because they'd "never rape." The assumption is that wartime rape is a crime of passion. And robots, by their nature, don't have passions.
Charli Carpenter, a political scientist at the University of Massachusetts-Amherst, makes a compelling argument that robots could be used to commit war crimes. War crimes, contrary to what we might prefer to believe, are often not crimes of passion committed by rogue soldiers but deliberate tools of terror engineered by top commanders. In the Bosnian War, for example, Bosnian Serb soldiers were ordered by their commanders to use rape as a tool of terror, and soldiers who refused were threatened with castration.
Robots, unlike people, always do what they're told. Carpenter's point is that human-rights-abusing governments could program robot warriors to do whatever they want, and the robots would do it without compunction or thought. If the reality of wartime atrocities is that they tend to be intentional, not crimes of passion, then that's a huge count in favor of banning military robots today.
Of course, the story is different in modern liberal-democratic countries. Democratic countries generally don't launch extermination campaigns, and at least nominally attempt to punish human rights abuses by their soldiers. So robots programmed by liberal-democratic regimes could, at least theoretically, improve on inherently flawed human judgments and cut out the cases where war crimes really are the result of bad apples or out-of-control rage and hatred on the ground level.
This insight is at the heart of the most sophisticated case against a ban, made by American University's Ken Anderson and Columbia's Matt Waxman. Anderson and Waxman's argument is that countries are likely to start using robots if they think they're militarily useful, regardless of what international law says about them. Indeed, there is something of a track record of nations ignoring arms control agreements when it's militarily convenient.
Anderson and Waxman propose that, instead of imposing a ban on robots that is likely to fail, democratic countries should push to regulate the robotics industry so that military robots are more responsible and less dangerous. The United States, in particular, leads the world in military robotics. If the US pushed global robotics innovators to design technology only with certain humanitarian constraints in mind, that might prevent technology that could be abused in the way Carpenter envisions from ever coming to be. Even dictators have to buy their robots from someone.
We're not even close to resolving this debate. The UN deliberations this week are just the beginning of discussions about banning armed robots. But the academic debate, however abstract, underscores why this issue matters so much. We live in an age where a genocidal robot army is becoming a real possibility. We owe it to ourselves to think about how to prevent that bleak future.