Boston Researcher Cynthia Breazeal Is Ready to Bring Robots Into the Home. Are You?

The MIT researcher has dedicated her career to building robots that can perceive and emulate emotion.

Vjeran Pavic for Re/code

The MIT Media Lab’s Personal Robots Group flanks the soaring atrium on the fourth floor of the Wiesner Building, a wall of metal panels along the southern edge of Cambridge, Mass.

The space looks like the set of an ill-advised Terminator-meets-the-Muppets crossover. Mechanical arms, grippers and eyeballs clutter workbenches alongside colorful anemones, fairies and teddy bears, machines that defy sci-fi conventions demanding robots look like rolling trashcans or hard-shelled humans.

Another stereotype buster, named Jibo, sits on the edge of the desk in researcher Cynthia Breazeal’s cramped office. It’s 11 inches, six pounds and stationary, and resembles nothing so much as a desk lamp. But Breazeal believes it could be the first machine to fulfill the potential of personal robotics, offering average consumers a friendly, affordable helper.

It speaks in a childlike voice, swivels its screen in the manner of a puppy’s head tilt, and winks with a cartoonish eye. It can read to children, snap pictures, flag upcoming appointments, facilitate video chats and order up Chinese delivery.

In many ways, Jibo represents the culmination of decades of research for Breazeal, who pioneered the field of social robotics in the late 1990s.

Working at the intersection of psychology, computer science and engineering, she developed a series of machines that could interact with people in more natural ways, conveying and responding to emotional cues. Breazeal believed that in order for robots to assist humans in everyday settings like homes, hospitals or schools, they first had to behave in ways that put us at ease.

It was an ambitious undertaking, so much so that nearly two decades later, robots still rarely reach the front door, save for a few vacuums and toys. But today, Breazeal and her group are moving machines into the real world that can function as personal assistants, teachers’ aides and caretakers for the sick and elderly.

Given failing public schools, soaring chronic-disease rates and an aging population, she believes that humans are simply going to need more help from machines of this sort.

“I’m looking at these huge societal challenges coming our way,” Breazeal said. “We need technology to do better for us in all these ways.”

But for all the promise of social robots, they’ll arrive in our lives weighed down with considerable baggage, carrying along a host of tricky questions about privacy, security, jobs, digital manipulation and the appropriate boundaries between humans and machines.

“The droids we’re looking for”

Breazeal, 47, mostly grew up in Livermore, Calif., where San Francisco’s suburbs give way to the rural Central Valley. She was the youngest child of two mathematicians: a mother who worked at Lawrence Livermore National Laboratory and a father at Sandia National Laboratories.

In 1977, when she was 10, the family drove to a local movie theater to see “Star Wars.”

“It was jaw-dropping, and I was in awe of the starships flying overhead,” she said. “And then these amazing robots came on and I, like so many people around the world, just fell in love with those droids.”

“They weren’t only super-cool and capable, but they were friends of the people,” she said. “They had emotions, they cared, they were loyal, they were full-fledged characters. I think that that forever changed my ideas of what robots could be and should be.”

She earned degrees in electrical and computer engineering at UC Santa Barbara, a sprawling campus along the Southern California coastline, then headed east to pursue her graduate and doctoral studies at MIT.

In 1990, Breazeal joined the Mobile Robots group (better known as the Mobot Lab), where she worked under prominent researcher Rodney Brooks on the development of a set of insect-like robots named Hannibal and Attila. They were prototype micro-rovers, light, rugged machines that could be sent en masse to Mars, a swarm that offered the built-in redundancy that a single rover couldn’t.

The project directly influenced NASA, which ultimately built a roughly 25-pound (on Earth) rover named Sojourner. On July 6, 1997, the six-wheeled machine rolled onto the surface of Mars, marking the first time a wheeled rover explored the fourth planet from the sun.

In that moment, it occurred to Breazeal that robots had explored the depths of the ocean, the insides of volcanoes and the craggy topography of the Red Planet. But they still hadn’t reached the living room.

That, after all, required navigating the far trickier terrain of human personalities. The next day, Breazeal marched into Brooks’ office.

“I’m changing everything,” she recalls announcing. “I’m stopping everything I’m doing, and I am now going to work on robots that interact with people, because the future is robots in the home.”

Kismet

Breazeal shifted the focus of her doctoral work to explore how machines could convey and respond appropriately to human body language, vocal tone and mood.

But to start, she had to learn how we pull off those tasks. She studied up on developmental psychology and cognitive development, as well as animal behavior, human-computer interaction and the techniques of cartoon animators.

Humans use their entire bodies to communicate, but the face is a particularly expressive canvas, telegraphing messages through coordinated eyebrow, eyelid and mouth movements. The lips alone can convey disgust, fear, sadness, surprise, anger and joy through six distinct positions that we all intuitively understand, even if we never think about them.

Breazeal started to develop a new robot for her doctoral project that could emulate those expressions and allow her to investigate her emerging theories. She started at the beginning, focusing on modeling our earliest and simplest social interactions, those between an infant and its caregiver.

The robot’s name was Kismet.

It had big eyes, pink, piglike ears, and a set of red lips made from surgical tubing. It was controlled by 15 computers and equipped with 21 motors, and took in the world through trios of cameras and microphones.

It couldn’t talk or understand language. It didn’t have a body. But it could babble along in a baby-like singsong, maintain eye contact, recognize movement and respond to shifting emotions. Kismet lowered its head, for example, when spoken to in a scolding voice.

Breazeal’s cute, personable robot clinched her Ph.D., helped land her a faculty position at MIT, and drew widespread media coverage, including a Time magazine article that caught the attention of Hollywood.

Warner Bros. hired her as a consultant on Steven Spielberg’s 2001 movie, “A.I. Artificial Intelligence,” a gig that would also lead to Breazeal’s next robot.

She collaborated with Stan Winston Studio, which did the effects for the film, on a far more complicated, expressive and expensive robot named Leonardo.

It resembled Gizmo from “Gremlins,” the 1984 film Spielberg executive produced, and featured more than 60 tiny motors that enabled subtle, natural facial expressions.

Dozens of papers and additional robots followed, including a terrarium featuring a serpent-like sea creature dubbed Public Anemone, and a “robonaut” designed to help NASA with maintenance work in space.

“Robots can”

At its core, social robotics is about human-computer interaction: finding the most effective ways for people to deliver instructions to machines and receive information from them.

Through the history of computing, this has largely occurred on technology’s terms.

We learned to use punchcards, keyboards, mice and touchscreens. But technology is tiptoeing ever closer to our language instead. We can now speak in plain English to Apple’s Siri (sometimes, anyway) or wave our hands in front of Microsoft’s Kinect.

Social robotics goes further still, because human communication occurs across levels beyond words and gestures. It includes enunciation, volume, facial expressions and body language, evolutionary hangovers from ancestors forced to find friends and suss out threats without the benefit of a common tongue.

“There’s a whole language of the nonverbal cues that are really key for how people form judgements of: Do I trust you? Do I like you? Do you like me? Are you an ally?” Breazeal said.

The scientific literature suggests that when people receive the right kind of signals from doctors, nurses, coaches, therapists, tutors, trainers and teachers — cues that suggest emotional support — outcomes improve: They learn more, progress faster, get well more often.

The surprising finding of Breazeal’s research was that those better outcomes could carry over to technology that emulates the same behaviors.

“If you design them in this way, if you design them according to these principles of how people interact, and even how companion animals interact with one another, it turns out the human mind is so attuned to that, that it naturally responds and benefits from that sort of interaction,” she said.

The influence of this work is evident throughout the robotics marketplace today.

Take Baxter, developed by Rethink Robotics of Boston. The machine can work directly alongside humans, rather than secured behind some factory cage, thanks to a combination of safety features and social ones.

Before it reaches out to grab an object, its eyes first look in that direction — providing a cue of its intentions that humans near it naturally understand.

“People just intuitively get it,” said Brooks, who went on to co-found Rethink. “It gives people a clue, and you can trace that back to Cynthia’s work.”

“She really pushed social interactions with the robot for the first time,” he said. “She was the founder of that field.”

A social robot at the MIT Media Lab (James Temple for Re/code)

“Work in partnership”

When I visited the Personal Robots Group, a plush blue bear with a green nose and paws sat on a bench near the door.

“I’m Huggable,” it announced. “What’s your name?”

It’s designed to comfort children facing long hospital stays, and monitor their mood.

For now, it’s effectively a marionette. A researcher provides the voice, and operates the movements remotely. But the robots group plans to use interaction data from early trials to begin building in autonomous control.

Using Huggable, Boston Children’s Hospital is beginning one of the first studies of the effect of social robots on inpatient experiences, recently opening enrollment for 90 patients in its intensive care and oncology units.

The idea is that the machine can monitor children’s emotional states, as well as pain levels, over long periods. It can hold conversations in which the child might reveal more information about how they feel than they would to their care team. It also picks up feedback through sensors, including one that can detect how hard the child is squeezing its hand.

“Our specialists can’t be at the bedside all the time, so it’s attractive to think that a social robot could work in partnership, spend time with children, measure where they are accurately and then empower the specialists to provide the most effective and targeted therapy,” said Dr. Peter Weinstock, director of the hospital’s simulator program.

That’s the extent of what is being evaluated in the first study. But the broader hope is that Huggable could intervene, grant some level of comfort by telling stories, provide encouragement or simply offer company.

“Peeping Toms”

But the arrival of social robots presents dangers, as well.

“I get very concerned about appropriate use, the potential for deception, who’s responsible if something goes wrong, and what kind of harms they might cause intentionally or unintentionally,” said Wendell Wallach, a scholar at Yale University’s Interdisciplinary Center for Bioethics and author of the forthcoming book, “A Dangerous Master: How to Keep Technology From Slipping Beyond Our Control.”

“In every single context, if you sit back and think about it, there are a lot of issues that they raise,” he said.

Indeed, these are machines equipped with microphones and cameras, connected to the Internet and, in many cases, capable of moving around and manipulating objects.

They present security and privacy issues that go well beyond the already intractable ones accompanying our laptops and smartphones. Under the invisible control of bad actors, our personal robots could be turned against us, made to serve as Peeping Toms, spies or thieves.

In 2009, researchers at the University of Washington acquired three different toy robots to test their susceptibility to malicious attacks.

They discovered it was relatively easy to detect the presence of the robots on a home network and take control of them. Using packet-sniffer tools, which are readily available across the Internet, they were able to intercept the user name, password, audio and video transmitted from one. And they got another to pick up a set of keys.

University of Washington assistant law professor Ryan Calo raises a separate set of concerns: the ability of personal robots to flex their social skills for manipulative purposes.

The same “persuasive technology” that can be harnessed to encourage people to exercise or eat better might also be used to nudge consumers to upgrade their robot, pay for a new software package or click on third-party ads.

“It turns out that computers can avail themselves of all the same kind of social pressures as people,” Calo said.

A few leaps farther down that thought path lands us in the world of the 2013 film “Her,” which explored whether increasingly life-like technology could lead people to retreat from the messier world of human relationships in favor of the comforting confines of artificial-intelligence surrogates.

That may be science fiction, but some academics and AI experts are raising similar concerns.

“Technology is seductive when what it offers meets our human vulnerabilities,” writes Sherry Turkle, director of the MIT Initiative on Technology and Self, in “Alone Together: Why We Expect More from Technology and Less from Each Other.” “And, as it turns out, we are very vulnerable indeed.”

“We are lonely but fearful of intimacy,” she said. “Digital connections and the sociable robot may offer the illusion of companionship without the demands of friendship.”

Finally, there’s the multibillion-dollar question permanently latched to robotics: What will all this stuff mean for jobs?

What’s to stop robots that can act in increasingly humanlike ways from marching into our offices, stores and hotels, occupying job ranks that were immune to such intrusions in the now-almost-quaint age of industrial robots?

Smart observers across industries are evenly divided on what the coming robotic era will ultimately mean for the human labor force, but plenty expect at least an ugly economic transition period.

Brooks, on the other hand, actually sees the opposite problem fast approaching.

“I’m not worried we’re going to have too many robots and not enough jobs,” he said. “I’m worried we’re not going to have enough robots.”

He points to the coming demographic inversion that will occur as growing numbers of baby boomers reach old age, leaving behind a dwindling workforce to support them.

“There won’t be enough people to be nurses,” he said. “We’re going to have a real need for robots to help with health care. It’s the only way we’re going to get through the demographic change.”

Cynthia Breazeal interacting with a robot at the MIT Media Lab (Vjeran Pavic for Re/code)

“What really matters”

In the end, the weight of such questions isn’t likely to slow the spread of robotics. We tend to embrace the capabilities of emerging technologies first — and grapple with the societal consequences after the fact.

Plenty of consumers appear more than ready for robots. When Breazeal’s startup kicked off a crowdfunding campaign for Jibo in July, offering preorders for $499, thousands jumped at the chance: They hit the fundraising goal of $100,000 in four hours, soared past $1 million within a week, and finally stopped accepting orders after reaching nearly $2.3 million, making it the seventh-largest campaign to date on Indiegogo.

The consumer version of Jibo is scheduled to arrive ahead of the 2015 holidays.

Jibo isn’t alone in the market, but it’s arguably the most capable robot at its price point. Intel has been showing off Jimmy, a 3-D printable robot kit that will cost $1,600, while French robotics company Aldebaran recently unveiled its nearly $2,000 social machine, Pepper.

“Based on the success of its Indiegogo campaign, it seems the concept of Jibo is already widely accepted,” said Bruno Maisonnier, chief executive of Aldebaran, in an email. “This makes us excited as well and shows us that the market is growing increasingly accepting of robots, especially social ones.”

Breazeal has taken a leave of absence from MIT to focus on getting her startup off the ground.

She stresses that the company is taking great care to guard against privacy and security risks, following industry best practices and abiding by relevant regulations.

Despite the thorny questions they raise, Breazeal believes social robots will ultimately serve the role that technology always has, from the cotton gin to the steam engine to the supercomputer: freeing up people to spend more of their lives on creative and fulfilling pursuits.

“The new, enlightened way of viewing robots is not replacing people but enhancing and complementing and supporting people in what we care about,” she said.

This article originally appeared on Recode.net.