
Robots waging war? Science fiction no more.

How autonomous weapons could change the future battlefield.

This advertising content was produced in collaboration between Vox Creative and our sponsor, without involvement from Vox Media editorial staff.

Artificial intelligence has already crept into our cars, our homes, and our phones. It’s changed how nations communicate, but can it transform how they wage war? Maybe. Do we want it to? Maybe not. These questions sound like they came out of a science fiction novel, but autonomous weapons — those that can select and attack targets without human intervention — have been under development for decades.

A robot on the battlefield is no longer a science fiction trope.

We’ve already seen imagined technology become reality in war: tanks, landmines, nuclear bombs, and long-range missiles are all prime examples. Armed forces have been using remotely controlled, unmanned vehicles since as early as World War II, and the development of autonomous military technology has accelerated since the Vietnam War. One study at Arizona State University’s Global Security Initiative shows that while there were no weapons with homing capabilities in 1965, there were almost 175 such weapons in 2015.

Simple autonomous functions like homing and navigation first emerged in the 1960s and ’70s, and expanded in later decades to include more advanced technology like target identification and firing. The surveillance drones of the ’90s gave way to remotely controlled armed drones for counterterrorism efforts after 9/11.

Military systems have grown more autonomous over the years and are now veering close to full autonomy. Advances in computer image processing, already being incorporated into weapons, allow them to better distinguish targets. “They represent a new frontier of autonomy,” writes Dr. Heather Roff, author of the Arizona State University study.

The barrier to full autonomy — the human operator — could be removed in the future, triggering a larger ethical debate around the use of autonomous weapons.

Advocates say the precision of autonomous weapons could keep civilians safer.

Proponents of autonomous weapons suggest that they could actually make war more humane. They’d be so precise in their targeting that they’d perform even better than their human counterparts, keeping civilians safe and only attacking what they’ve been programmed to attack. They wouldn’t fall prey to human emotions, firing out of fear or vengeance or committing torture. “You don’t get emotion. You don’t get anger. You don’t get revenge. You don’t get panic and confusion. You don’t get perversity, in the sense that machinery won’t go rogue,” says Dr. William Boothby, a former air commodore and lawyer in the Royal Air Force.

Champions even remind us that technologies originally developed as weapons have gone on to enable some of the greatest innovations of our time. Ballistic missiles modified into rockets reached outer space and became the predecessors of space exploration vehicles. The toxic compounds in World War II-era mustard gas served as models for substances that could kill cancer cells. And many of the researchers and scientists behind the atom bomb aimed to harness nuclear power and turn it into an energy solution.

Opponents argue autonomous weapons lack the context clues to make life and death decisions.

Critics of autonomous weapons argue that machines lack the vital context clues they need to make decisions, putting civilians at risk. A human combatant can distinguish between a soldier with a gun and a civilian trying to protect his or her home. The combatant can rely on practical knowledge and an understanding of social cues to make a judgment call, and can draw on his or her own ethics to judge whether a command is questionable. A robot can’t necessarily do all that; distinction is difficult to build into a machine. It could kill the wrong people, or even commit war crimes, depending on how it’s programmed. “While [lethal autonomous robotics] may thus in some ways be able to make certain assessments more accurately and faster than humans, they are in other ways more limited, often because they have restricted abilities to interpret context and to make value-based calculations,” writes Christof Heyns, a UN special rapporteur on extrajudicial executions.

Even extremely precise weapons are powerful enough that a mistake could result in excessive force and unnecessary death. Some nations have already made tragic miscalculations. In 2003, a remotely controlled air defense system mistook allied planes for enemy jets and shot them down on two separate occasions. “The causes of the fratricides were a complex mix of human and machine failures,” writes Dr. John Hawley at the Center for a New American Security. And opponents worry that autonomous weapons could be mass-produced and fall into the wrong hands.

Machines can’t follow the rules of war. Only humans can.


So how can militaries prevent machines from violating the rules of war? Maintain human control. At the end of the day, international humanitarian law (paraphrased below) is meant to be followed by people:

You have to be able to tell apart combatants and civilians.

You can only use force that’s necessary.

You can’t cause excessive civilian death.

If there’s any doubt about the situation, you shouldn’t attack.

Only humans can be held accountable for acts of war. They are the ones who wage battle, after all. The International Committee of the Red Cross, along with all those concerned about the impacts of war, advocates for human control over all weapons and methods of warfare. By keeping people at the helm, the world is doing what it can to keep civilians and combatants safe.
