Elon Musk-Backed Group Attempts to Avert Judgment Day With AI Rules

A research group aims to build a practical and ethical framework for AI -- and wrest the story from the Terminator.

Photo: Skydance Productions

The Terminator is back, and with him, the running quips (usually tongue-halfway-in-cheek) that the machines are getting closer and closer to taking over.

A consortium of artificial intelligence researchers is trying to wrest that narrative away, while laying the foundation for technical and ethical ground rules in the field.

Earlier this week, the Future of Life Institute, a Boston-based group, doled out 37 grants to researchers with projects focused on “keeping AI robust and beneficial.” They range from the highly academic — building probabilistic models for AI software — to the vividly abstract — a philosophic framework for the “human control” of autonomous weapons. The bulk of the grants came from Elon Musk, the Tesla and SpaceX founder, who has pledged $10 million in support.

The grants happened to coincide with the release of “Terminator Genisys,” the newest entrant in the movie franchise created by James Cameron.

“The danger with the Terminator scenario isn’t that it will happen,” Max Tegmark, the FLI president and an MIT physicist, said in a statement, “but that it distracts from the real issues posed by future AI.”

Daniel Dewey, the program officer for grants at FLI, added more context. While the group welcomes the attention the blockbuster brings to AI, it isn’t so pleased with the obsession with Skynet. “I’m glad that science fiction exists so that people get interested in the future. But we are pushing back to a certain extent,” he told Re/code. “We’re interested in creating the research for the problems that do exist.”

Problems around super-intelligent or weaponized robots may arise and compound down the road, Dewey added, but their emergence is not an immediate concern; we won’t soon have super-smart, gun-toting robots.

Instead, the group is aiming to develop engineering best practices and ethical ground rules for universities, companies and individuals tinkering with advanced machinery.

The foundation gave $136,000 to a University of Denver researcher, Heather Roff, to investigate the deployment of AI-enhanced weaponry (an issue the United Nations has on its radar). Another $200,000 went to an initiative around AI cyber security. A quarter million is set aside for a philosophical project with the audacious title “Aligning Superintelligence With Human Interests.”

“It explicitly motivated research to address problems of reliability and safety and beneficialness, for lack of a better word, before it gets powerful,” Dewey said. “And we think it’s best to do this ahead of time. We don’t want to be thinking about autonomous weapons only when we’re making them.”

Recently, the world’s Internet companies have poured considerable resources into AI and machine learning, as computing power and technical capabilities are starting to catch up with the ambitious futuristic visions of tech founders.

In January, the FLI unfurled its manifesto — along with Musk’s funding pledge — during a convention in Puerto Rico. The open letter was signed by most of the industry’s luminaries, including the research directors of Google and Microsoft; Yann LeCun, the head of Facebook’s AI lab; and Geoffrey Hinton, who leads an AI division within Google. Other co-signers included the three co-founders of DeepMind, the deep learning company Google bought last year.

When it was acquired, DeepMind reportedly insisted that Google set up an internal AI ethics board.

This article originally appeared on Recode.net.