
Researchers Want to Teach Robots Right From Wrong

Can a robot weigh the consequences of its decisions, and the consequences of those consequences?


Researchers from Tufts, Brown and Rensselaer Polytechnic Institute are exploring the possibility of developing technology to help robots make moral decisions, through a project funded by the Office of Naval Research.

“Moral competence can be roughly thought about as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree,” said Matthias Scheutz, a principal investigator and professor of computer science at Tufts, in a statement. “The question is whether machines — or any other artificial system, for that matter — can emulate and exercise these abilities.”

The lab plans to develop computer frameworks that can model the moral reasoning of humans — incorporating, for instance, the flexibility for a robot to override certain instructions when there are unexpected ethical implications.

The news release cites the example of a battlefield medical robot that is assigned to deliver urgently needed medication but encounters an injured soldier en route. Rather than have the robot strictly carry out its programming, the researchers want robots to factor this kind of unanticipated information into their actions and weigh the consequences of their decisions.

Should it stop and treat the soldier? Are even more lives at risk if it doesn’t follow through on its mission? Is it okay to apply treatment that will hurt the soldier if it’s in his or her best long-term interests?
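One way to picture the kind of trade-off the researchers describe is a toy utilitarian scoring sketch. Everything below — the option names, the numeric weights, and the scoring rule itself — is an illustrative assumption, not the actual model being developed at Tufts, Brown, or RPI:

```python
# Hypothetical sketch: a medical robot weighing "continue the mission"
# against "stop and treat a soldier encountered en route". The numbers
# and the benefit-minus-risk rule are made up for illustration only.
from dataclasses import dataclass


@dataclass
class Option:
    name: str
    lives_helped: float   # expected lives aided by choosing this option
    mission_risk: float   # expected lives endangered if the mission slips


def score(option: Option) -> float:
    # A naive consequence-weighing rule: expected benefit minus the
    # expected cost of delaying or abandoning the original mission.
    return option.lives_helped - option.mission_risk


def choose(options: list[Option]) -> Option:
    # Pick the highest-scoring option. A real morally competent system
    # would also reason under uncertainty and justify its choice.
    return max(options, key=score)


options = [
    Option("continue mission", lives_helped=3.0, mission_risk=0.0),
    Option("stop and treat soldier", lives_helped=1.0, mission_risk=2.5),
]
print(choose(options).name)  # with these made-up weights: continue mission
```

The sketch also makes Bringsjord's point below concrete: a fixed score table like this one is exactly the kind of finite, pre-written rule set that cannot anticipate every scenario, which is why the project aims at deeper on-board reasoning instead.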

“When an unforeseen situation arises, a capacity for deeper, on-board reasoning must be in place, because no finite rule set created ahead of time by humans can anticipate every possible scenario,” said Selmer Bringsjord, head of the cognitive science department at RPI, in a statement.

This article originally appeared on