The moral dilemmas in Westworld offer a scary glimpse of our future

We should probably think about them now.

HBO’s Westworld isn’t quite the best show on TV, but it may be the most interesting.

The premise is irresistible: Robots that look exactly like people staff a futuristic theme park modeled on the Wild West. For a mere $40,000 a day, guests can treat the robots however they like — including having sex with them and “murdering” them — and the guests can’t be harmed in return.

The core appeal of the park is that it’s totally free of moral consequences: Customers can play out their weirdest or most violent fantasies with no fear of retribution.

In the first season of the show, the humans who patronize Westworld are predictably awful. They treat the robots as playthings, abusing them at will for their own pleasure. (You can read Vox’s recap of season one here.)

But suppose a place like Westworld really existed. Imagine that we’ve managed to create robots that look, act, and think just like human beings. In fact, they’re so humanlike that we can’t tell whether they’re robots or people.

What ethical responsibilities would we have to these machines? Would it be wrong to torture them, to kill them, to treat them like objects?

These questions are not as abstract as they appear. It’s entirely possible that we will, eventually, create robots that are similar in form and function to the robots on Westworld. If that happens, the answers to these questions will really matter.

To answer them, I reached out to Paul Bloom, a moral psychologist at Yale University. Along with Sam Harris (yes, that Sam Harris), Bloom recently published an essay in the New York Times asking whether it’s wrong to treat the robots in Westworld cruelly.

I spoke to Bloom about the ethical and psychological problems he raised in the essay. I wanted to know what an experiment like Westworld would likely reveal about human nature, and whether he thinks creating machines that allow us to indulge our worst instincts would make us less moral.

A lightly edited transcript of our conversation follows.

Sean Illing

If we create conscious machines that look and behave just as we do, would it be wrong to abuse them?

Paul Bloom

That’s the easy one, right? If we create machines that are just like us, and feel pain and anguish and suffering and shame and all of that stuff, then it would be as wrong to hurt them as it would be to hurt each other. That’s the low-hanging fruit in all of this.

To the extent we have clones, to the extent we create creatures in laboratories, to the extent that we create anything that can feel pain, then we’re morally bound not to hurt them. And that seems pretty uncontroversial.

Sean Illing

Well, it’s easy if you assume they can actually feel pain, but it’s not so easy if it’s a machine that’s been programmed to replicate human suffering. In that case, is it actually feeling pain or is it just mechanically signaling the experience of pain?

Paul Bloom

Yeah, that’s when the hard questions arise. I think it really matters whether the robot is feeling pain or signaling the experience of pain. There’s all the difference in the world between a creature that feels pain and really suffers versus something that has no more sentience than a toaster.

So it’s possible that you could have Westworld-like robots that look and talk like us but literally have nothing going on inside. They’re no more conscious than my iPhone. In that case, there’s nothing morally wrong about mistreating [them].

On the flip side, it’s possible that we could create machines that don’t seem conscious, because they don’t have human faces and bodies, but that actually are fully conscious beings, in which case making them suffer would be as wrong as making a person suffer.

Sean Illing

As a psychologist, what are you thinking about when you’re watching the characters on Westworld rape, torture, and kill the robot hosts? What are you seeing?

Paul Bloom

The main question that runs through my mind is an honest curiosity as to how many people would do this. On the show, it costs $40,000 to spend a day in the park, but suppose it were free. What would you do there? What would most people do there? Would they don the white hat or the black hat?

The show assumes there’s no shortage of people who want to kill innocents, who want to rape and pillage and torture. And as a psychologist, I’m honestly not sure that’s the case. And I strongly suspect there would be major differences between men and women, as the data we have suggests that violent impulses are much more prevalent in men.


Sean Illing

That’s an interesting question. I’m actually curious what you think an experiment like Westworld would reveal about human nature. Are we likely to become sadists like the characters on the show when there are no consequences for our actions?

Paul Bloom

I don’t think so. You’re asking a very old question. As you know, Plato asked what you would do if you had a ring that made you invisible. A lot of my research, and a lot of other research out there, suggests that, for the most part, we have altruistic and kind impulses.

There are always exceptions, but we’ve been socialized a great deal to not harm people without cause, and I think that holds for most people most of the time. I guess I just find it hard to believe that a lot of people would seize this as an opportunity to do terrible things.

Sean Illing

I guess I find it less hard to believe than you do, and I wonder how much our behavior would change once we’re in a space where there are no consequences for our actions, where many of the incentives to be good are stripped away. I imagine a lot of people would discover things about themselves that they didn’t know.

Paul Bloom

Sure, I could imagine a slippery slope happening. But I think a lot of our moral inhibitions, both biological and cultural, are sort of built in and work even if we consciously know that the rules of society are gone.

Sean Illing

Would creating robot slaves like we see on Westworld actually make us less moral? In effect, we’re creating objects that allow us to indulge our worst instincts more often. That can’t be good for us, right?

Paul Bloom

I don’t know what it will do to us. I recently read a few articles about sex bots, and there are all sorts of opinions about what they’ll do to us. One view is that the presence of sex bots will corrode men’s interactions with real women. But I’ve also heard people argue the opposite: that sex bots might alleviate some frustrations and problems, particularly for men who have trouble meeting and interacting with women.

One good thing that could arise from the creation of robot slaves is the alleviation of loneliness, which is a terrible, terrible thing that literally kills people. A humanoid robot might have enough kindness and sympathy, or seeming kindness and seeming sympathy, to make people’s lives happier.

I have a feeling that in the end, creating more and more sophisticated robots will have hugely positive and hugely negative consequences, and that it will be hard to predict what they will actually look like.


Sean Illing

What does psychology tell us about how humans interact with machines now? How easily do we anthropomorphize or attribute emotions to lifelike objects?

Paul Bloom

Most psychologists would say it’s incredibly easy for us to do this. People obviously get very attached to their pets, but they also get attached to simple machines with lovable faces. You see this in a lot of research on artificial intelligence. People interact with low-level robots and start to treat them like they’re people.

I’m a little skeptical of this because it’s not clear to me whether people are really responding emotionally or just play-acting. People get attached to their toys, but toys never elicit the same feelings that dogs do. Very few people get truly attached to mechanical things.

Sean Illing

Not yet, but once those machines become more humanlike, that seems likely to change, no?

Paul Bloom

No doubt. Westworld is an interesting demonstration of what might happen when artificial intelligence becomes sufficiently advanced that we can no longer tell the difference between robots and people.

I can’t imagine anyone watching Westworld and saying to themselves about Dolores, the robot protagonist, “Who gives a shit about this box of bolts called Dolores? It’s just a machine.” Nobody feels that way. You’re meant to feel outraged that she’s mistreated; you treat her as indistinguishable from a person.

Sean Illing

It also seems likely that these future robots will know us better than we know ourselves, and will be able to respond to our emotional cues in ways that further blur the line between machine and person.

Paul Bloom

Everything you’re saying is in principle possible, and Westworld illustrates that. From a practical point of view, how close are we to this? I don’t have the foggiest idea. As a psychologist who’s interested in these issues, I feel we’re incredibly far from it. It’s possible that we’re closer than I think, because so much of this research is done in secret. But I suspect you and I will go to our graves without meeting a creature like Dolores. It will not be done before we die.
