When security cameras capture someone leaving a suspicious package at a train station, a person monitoring the camera feeds may, if they act fast enough, be able to coordinate with an agent on the ground to follow the perpetrator before they dodge out of the security camera’s field of vision.
Not super efficient. But in the future, the job may be better done by robots.
Imagine if the camera that saw the crime was a wheeled robot equipped with facial recognition technology that could share information instantly with other nearby robotic cameras — all programmed to surveil a scene and pursue suspects to keep them in sight.
Researchers from Cornell University are building a system for networked coordination between camera robots, drones and mounted smart cameras that can swap information instantly and move around a scene to chase a suspect, change their perspective and even reason about their environment as the machines look for questionable activity.
Funded by a $1.7 million grant from the U.S. Office of Naval Research, the researchers will be using Segway robots with automatic cameras that can be programmed to pan, tilt and zoom for their experiments. The research team is led by Professor Silvia Ferrari, director of the Laboratory for Intelligent Systems and Controls at Cornell.
“We are trying to teach robots to follow things of interest, like people, cars and animals, and to reason about what they are seeing, what kind of activity is happening and what the agent might be doing next,” Ferrari told Recode in an interview.
One day, the software might be able to manage and coordinate hundreds of robotic cameras, but for the initial experiment, the team at Cornell plans to trial up to 12 camera systems operating simultaneously.
When the area under surveillance is large, a network of mobile robots could be a big help, since one robot — or even an array of mounted cameras — can’t capture everything.
The idea here, says Ferrari, is to make the robotic cameras as autonomous as possible. The researchers are programming their surveillance robots to fuse together all available video data to reason about a scene, and the robots will be connected to the web so they can access more data when they detect holes in their understanding.
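For a flavor of what that fusion step could involve, here is a minimal sketch in Python of merging sightings from several cameras into one shared position estimate per target. Every class, field and value below is invented for illustration; the article doesn’t describe the Cornell system’s actual data formats.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    """One camera's sighting of a target (hypothetical schema)."""
    camera_id: str
    target_id: str    # e.g. a face-recognition match
    position: tuple   # (x, y) in shared world coordinates
    confidence: float # 0.0 to 1.0


@dataclass
class FusedTrack:
    """A shared, network-wide estimate of one target's location."""
    target_id: str
    position: tuple
    confidence: float


def fuse(detections: list[Detection]) -> dict[str, FusedTrack]:
    """Merge per-camera detections into one track per target,
    weighting each camera's position report by its confidence."""
    by_target: dict[str, list[Detection]] = {}
    for d in detections:
        by_target.setdefault(d.target_id, []).append(d)

    tracks: dict[str, FusedTrack] = {}
    for target_id, group in by_target.items():
        total = sum(d.confidence for d in group)
        x = sum(d.position[0] * d.confidence for d in group) / total
        y = sum(d.position[1] * d.confidence for d in group) / total
        # Use the most confident camera's score as the fused confidence.
        best = max(d.confidence for d in group)
        tracks[target_id] = FusedTrack(target_id, (x, y), best)
    return tracks


# Two robots and a mounted camera all spot the same person:
reports = [
    Detection("robot-1", "suspect-7", (10.2, 4.1), 0.9),
    Detection("robot-2", "suspect-7", (10.6, 4.4), 0.7),
    Detection("mast-cam", "suspect-7", (9.9, 4.0), 0.5),
]
print(fuse(reports))
```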
Typically, surveillance systems send data back to a human operator, who interprets the scene to make tactical decisions about what other information is needed and how to collect it.
“Our intention is to automate that side of the network so that the robots are actually in charge of perception,” said Ferrari.
The surveillance robots will be communicating with each other in a computer language, Ferrari says, but will also be able to translate what they’re thinking into “some syntax” that a human can understand.
“This is basically the only time they’ll be interfacing with a human being,” says Ferrari.
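As an illustration, a translation layer like that might look something like the following sketch, which renders a machine-to-machine status message as an English sentence. The message fields and template are hypothetical, not the team’s actual syntax.

```python
# Hypothetical message format; the Cornell team's real inter-robot
# syntax is not described in the article.
TEMPLATE = "{sender} is tracking {target} near {location} (confidence {confidence:.0%})"


def to_human(message: dict) -> str:
    """Render one inter-robot status message as an English sentence."""
    return TEMPLATE.format(**message)


msg = {
    "sender": "robot-2",
    "target": "suspect-7",
    "location": "platform B",
    "confidence": 0.85,
}
print(to_human(msg))
# -> robot-2 is tracking suspect-7 near platform B (confidence 85%)
```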
To get robots to reason and make decisions about what to pursue and where to go, the team at Cornell is building artificial intelligence navigation algorithms that are coupled with the ability to perceive and understand the information they are collecting.
That means these robots won’t be programmed simply to avoid obstacles or get from point A to point B, like most navigation algorithms; they will also be able to deduce what needs to be focused on and which agent in their video feed is the right one to pursue.
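A toy sketch of what coupling perception with navigation could mean in practice: score candidate targets using the perception side’s output, pick one, and move toward it. The scoring rule, field names and parameters here are all assumptions made for illustration, not the team’s algorithms.

```python
import math

# Hypothetical per-target scores from the perception side:
# higher "suspicion" means the activity classifier flagged it.
targets = {
    "suspect-7":  {"position": (10.2, 4.1), "suspicion": 0.8},
    "pedestrian": {"position": (3.0, 1.5),  "suspicion": 0.1},
}

robot_position = (0.0, 0.0)


def pick_target(robot_pos, targets, distance_penalty=0.02):
    """Couple perception with navigation: prefer the most
    suspicious target, discounted by how far away it is."""
    def score(info):
        dist = math.dist(robot_pos, info["position"])
        return info["suspicion"] - distance_penalty * dist
    return max(targets, key=lambda name: score(targets[name]))


def step_toward(robot_pos, goal, speed=0.5):
    """One motion step toward the chosen target (no obstacles here;
    a real planner would also have to avoid them)."""
    dx, dy = goal[0] - robot_pos[0], goal[1] - robot_pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:
        return goal
    return (robot_pos[0] + speed * dx / dist,
            robot_pos[1] + speed * dy / dist)


chosen = pick_target(robot_position, targets)
robot_position = step_toward(robot_position, targets[chosen]["position"])
print(chosen, robot_position)
```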
For now, though, this roving robotic surveillance fleet is still being built, and there’s a lot of work to do before it’s ready to be deployed in the field.
Ferrari says her team should have a working demonstration in the next three years.
This article originally appeared on Recode.net.