Robots are getting easier to control

A new interface by researchers at Georgia Tech makes controlling a robot arm as easy as tapping a touchscreen.

If robots are ever going to make their way into our homes, they’re going to have to be easy to control. At the moment, the interfaces designed to control robots aren’t super intuitive, unless you’re a roboticist.

For starters, the machines have to either be programmed in advance or follow directions given through a user interface, like a controller or a touchscreen.

The standard interface associated with ROS, the open source Robot Operating System, lets the operator move the machine along three axes — up and down, forward and back, left and right — and rotate it around each of those axes, for six degrees of freedom in total, using arrows on a screen.

So if a person wants a robot arm to pick something up, she has to nudge the robot through each of those motions, one at a time, until the gripper lands on the object and grabs it. Think of it like trying to grab a stuffed animal from a claw machine at an arcade. Not easy.
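To make that concrete, here's a minimal sketch of what specifying all six degrees of freedom looks like in code, using MoveIt's Python bindings for ROS. The planning-group name "arm" and the pose values are hypothetical placeholders; this illustrates the general ROS workflow, not the exact interface used in the study.

```python
# Minimal sketch: commanding a robot arm to a six-degree-of-freedom pose
# with MoveIt's Python interface. The group name and pose values are
# illustrative placeholders, not details from the Georgia Tech system.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("pose_goal_demo")
group = moveit_commander.MoveGroupCommander("arm")  # hypothetical group name

goal = Pose()
# Three translational degrees of freedom (meters, in the planning frame).
goal.position.x = 0.4
goal.position.y = 0.1
goal.position.z = 0.3
# Three rotational degrees of freedom, packed into a quaternion.
# This value is a 180-degree rotation about the x axis.
goal.orientation.x = 1.0
goal.orientation.y = 0.0
goal.orientation.z = 0.0
goal.orientation.w = 0.0

group.set_pose_target(goal)
group.go(wait=True)  # plan a trajectory to the goal and execute it
group.stop()
moveit_commander.roscpp_shutdown()
```

Every one of those values has to be right for the grasp to succeed, which is exactly the burden the Georgia Tech interface removes.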

Researchers from the Georgia Institute of Technology have created a new interface for manipulating robots that's far more foolproof. Instead of steering the robot through six degrees of freedom, their interface lets an operator simply tap on an object on a touchscreen. The robot figures out the best way to navigate to that object and grab it.

In a video, the researchers demonstrate their new interface alongside the one commonly used today.

In the course of their research, the team — led by Sonia Chernova, a professor of interactive computing and director of the Robot Autonomy and Interactive Learning Lab at Georgia Tech — had 45 non-roboticists try to manipulate a robot using the current standard interface. On average, participants made four errors each time they tried to grasp something.

But with the new point-and-click interface developed by Chernova's team, participants averaged just one error per grasping attempt.

“The robot analyzes the surface around the place where the person clicked and looks for geometric properties, for places where it can grip something,” Chernova said in an interview. That means the robot is not relying on the user for specific instructions, but rather analyzing the scene around it.
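Chernova's description suggests the general shape of the computation: turn the 2D tap into a 3D point using a depth camera, then examine the surface geometry around that point to find a graspable spot. Here's a rough sketch of that idea in Python; the camera intrinsics, function names, and plane-fitting approach are illustrative assumptions, not the lab's actual code.

```python
# Illustrative sketch of tap-to-grasp reasoning (not the Georgia Tech code):
# deproject the tapped pixel into 3D via a pinhole camera model, then
# estimate the local surface normal so a gripper could approach along it.
import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5  # assumed depth-camera intrinsics

def deproject(u, v, z):
    """Pinhole model: pixel (u, v) at depth z (meters) -> 3D camera-frame point."""
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

def surface_normal_at_tap(depth_img, u, v, win=7):
    """Fit a plane to the 3D points in a small window around the tap;
    the direction of least variance approximates the surface normal."""
    pts = []
    for dv in range(-win, win + 1):
        for du in range(-win, win + 1):
            z = depth_img[v + dv, u + du]
            if z > 0:  # skip pixels with no depth reading
                pts.append(deproject(u + du, v + dv, z))
    pts = np.array(pts)
    centered = pts - pts.mean(axis=0)
    # The right-singular vector with the smallest singular value is the
    # normal of the best-fit plane through the neighborhood.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    # Flip the normal so it points back toward the camera.
    return -normal if normal[2] > 0 else normal
```

A real grasp planner would go further, scoring candidate gripper poses against that local geometry and checking for collisions, but this tap-to-geometry step is what lets a single touch stand in for six separate adjustments.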

Making robots that are easier for non-roboticists to use is going to be an increasingly important problem to solve, especially for assistive robots that may one day live in people’s homes and have to follow directions.

Companies like Toyota, for example, are currently working on robots to help aging people do tasks around the house. And Pepper, the humanoid from SoftBank, is already in homes across Japan, though it isn’t designed to do housework.

Having a robot that can be reliably controlled from a touchscreen also means that a remote operator can take over if the machine ever misbehaves, another feature that may prove important for future consumer robots working in our homes. The interface could also prove useful for remote applications, like outer space or search-and-rescue missions, where it's unsafe or impractical for a human to work.


This article originally appeared on Recode.net.
