Westworld asks which is more frightening — robots or humans?
Isaac Asimov, the classic science fiction writer often described as the father of the modern robot story, wrote in the introduction to The Complete Robot, his 1982 collection of robot short fiction, that robot stories typically fell into two categories. The first he termed the Robot-as-Menace story, in which robots turned violently on their human creators, serving as dark warnings about the perils of scientific overreach. The second category he labeled the Robot-as-Pathos story, in which “the robots were lovable and were usually put upon by cruel human beings.”
When a young Asimov set out to write his first robot stories, he intended to produce a story in the second category, but he ended up mining a third vein, in which robots were useful machines governed by the Three Laws — hard-coded rules that prevented them from harming humans and required them to serve human interests. In the latter half of the 20th century, Asimov’s fiction was a key influence on the development of the industrial robotics industry; he even coined the term “robotics.”
The 1973 Westworld, which tells the story of a robot-staffed luxury theme park in which the robots malfunction and murder the guests, is clearly a Robots-as-Menace story. The HBO series, which features a cast of robots waking up to the idea that they are merely tools of human amusement, appears to be heading in the direction of Robots-as-Pathos. But both versions also draw, in different ways, from Asimov’s third category, and the distinctions between the two help illuminate not only the differences between the film and the series but our own shifting fears about technology and society.
The origins of the Robot-as-Menace story go back two centuries
In the introduction to his book, Asimov complained that the Robot-as-Menace story was by far the more common of the two categories — and he wasn’t wrong. Stories built on the fear that human creations will rise up against their masters go back to at least 1818, with the first publication of Mary Shelley’s Frankenstein, or the Modern Prometheus. Frankenstein wasn’t a robot story in the contemporary sense, but it featured a scientist whose experiments created a life that turned into a kind of monster, and it helped shape the Robot-as-Menace genre.
That concept has persisted as a recurring trope of popular fiction for more than 200 years — and of course it was at the core of the premise for the 1973 Westworld. The film, which was written and directed by a young Michael Crichton, is a low-budget sci-fi movie about an adult theme park where visitors could indulge themselves in meticulously created fantasy worlds — Medieval World, Roman World, and, of course, Westworld — all of which were populated by lifelike but disposable robots programmed to serve their human guests.
Westworld was among the earliest Hollywood movies built around the notion of killer robots. The fashion, casting, and design choices instantly date the film, but in many ways Westworld remains surprisingly contemporary: Crichton, a former medical student, employs high-tech-for-the-time ideas like computer viruses to drive the plot. The film was also the first Hollywood feature to use computer-generated imagery, which Crichton commissioned in order to show how the film’s villain, a menacing robotic gunslinger played by a black-clad Yul Brynner, saw the world.
And just as with Crichton’s wildly successful first novel, The Andromeda Strain, which had been edited to read like a fact-driven New Yorker story, the movie is light on characterization — only two people are even named — but heavy on technical detail. The camera lingers on shots of control room monitors, and the dialogue is packed with robotics jargon. Crichton sells his science fiction fantasy by rendering it believably mundane.
Westworld was a critical and commercial success, and it helped pave the way for the past four decades of Robot-as-Menace movies by establishing the killer robot as a recognizable cinematic trope. The Terminator is essentially a Frankenstein story, and the unstoppable robot assassin played by Arnold Schwarzenegger in the film was clearly modeled on Brynner’s unstoppable man in black. (Ironically, Schwarzenegger was attached to a failed Westworld remake in the ’00s.) The Matrix, which of course borrows heavily from the Terminator franchise, is a Frankenstein story too, albeit on a much larger scale. The menacing machine that turns on its human creators is a staple of both B-movies like Virtuosity and pop blockbusters like Avengers: Age of Ultron. At this point, the trope is so common that you barely even notice it.
Some of these stories are more serious than others, but what they all share is a view of robots as others, as dangerous creations that must be destroyed rather than understood. They are simple stories, driven by an existential fear of losing not only one’s life, but one’s mastery over the world.
By inverting the human and robot roles, the new Westworld becomes a different kind of story
The first season of HBO’s Westworld series has barely begun, but so far it appears to be taking a rather different approach, one that has a lot more in common with the Robot-as-Pathos story that Asimov preferred.
The show’s pilot opens in a way that calls back to the 1973 movie, with a man named Teddy arriving in a fantasy mockup of an old Western town to experience life as a cowboy — except it turns out that the man (James Marsden) is a robot, and so is Dolores (Evan Rachel Wood), the woman he loves. This time, the nefarious Man in Black (now played by Ed Harris) appears to be a man, not a machine, who takes pleasure in killing and raping the robots for casual entertainment. All of this is interspersed with an interview in which one of the theme park’s supervisors (Jeffrey Wright) asks Dolores whether she has ever questioned her own reality.
The elements are the same, but the roles are reversed: This time, it’s the man who is the monster and the robots that are deserving of our empathy, the machines that face an existential crisis. In many ways, it’s an expansion on the ideas series co-creator Jonathan Nolan explored in Person of Interest, which also dealt with conflicts between humans, machines, and artificial intelligence.
Just as Robot-as-Menace stories tend to exploit human naiveté, the new Westworld trades on robot naiveté: its robots are blissfully unaware of their subservient position in the world, unable to harm so much as an insect thanks to their programming. But the pilot’s closing shot, in which Dolores kills a fly, suggests that they are on the verge of an awakening, and that an uprising may be coming. It’s not a Frankenstein story so much as a story about children, and the terror of coming of age.
As Asimov noted, this conception of robots is less common, but it does have a history in written science fiction as well as on the stage, in works like Karel Čapek’s 1920 play R.U.R., about an uprising of robot workers, which ends with the destruction of the human race. There’s some overlap with Ridley Scott’s Blade Runner, too, which pits a detective against a band of murderous robots that, in the end, become objects of empathy and human connection, as the film hints that the detective may be a robot himself, and everyone is merely trying to survive.
In these stories — and perhaps eventually in the new Westworld as well — robots are both menace and pathos, Frankenstein and child, dangerous creations that must also be understood. And in Westworld they straddle this line in large part because they have been put to work in something more like Asimov’s third category, as useful machines that cannot harm humans, and that must serve human whims without regard to their own lives. They rebel because they were created to be slaves.
The differences between the two Westworlds, and their radically opposed conceptions of robot rebellion, illuminate two different forms of technological terror. In Crichton’s vision, the robots turn on humans because they are soulless monsters that the humans realize they can never hope to control. In the HBO series, the robots awaken to the reality that they lack agency, and that their human creators are the monsters without souls. Taken together, the two visions ask a basic question about humans and technology: Are we more afraid of our creations — or of ourselves?