When Spike Jonze’s Her came out in 2013, I thought of it mostly as an allegory. It was set in a candy-colored dystopian future, one in which people murmur into wireless earbuds on the subway and rely on artificial intelligence engines to keep them organized and control the lights in their homes, and where communication has atrophied so much that people hire professionals to write personal letters. Their technologies have made their lives materially better, but they also seem to have become atomized and lonely, struggling to connect both emotionally and physically. A decade ago, that felt like science fiction. It was science fiction.
Sci-fi tries to understand human experience by placing audiences in unfamiliar settings, enabling them to see common experiences — ethical dilemmas, arguments, emotional turmoil — through fresh eyes. In 2013, Her gave us new ground on which to test out old questions about love, friendship, embodiment, and connection within a relationship, especially a romance. The idea that anyone, even a sad loner like Theodore Twombly (Joaquin Phoenix), could be in love with his OS assistant seemed pretty far-fetched. Siri had been introduced two years before the movie was released, but to me, the AI assistant “Samantha” still felt like a fantasy, and not only because she was voiced by Scarlett Johansson. Samantha is molded to Theodore’s needs — following a brief psychological profile via a few weird questions during the setup process — but there are needs of his she simply cannot fulfill (and eventually, the same is true of him). Her seemed to me to be a movie about how the people we love are never really “made” for us; to love someone is to love their mess. Or it could be read as a movie about long-distance relationships, or the kinds of disembodied romances people have been forming over the internet since its dawn.
But Her’s central “conceptual gag,” as one critic put it — the idea that you could fall in love with an artificial voice made just for you — has become vibrantly plausible, much faster than I (or, I suspect, Spike Jonze) ever anticipated. Barely a decade has passed since Her hit theaters, and yet the headlines are full of stories about the human-replacing capabilities of AI — to draft content, or impersonate actors, or write code — in ways that queasily echo Her.
For instance, in the spring of 2023, the influencer Caryn Marjorie, finding she couldn’t personally interact with her more than 2 million Snapchat followers, worked with the company Forever Voices to create an AI version of herself. The clone, dubbed CarynAI, was trained on Marjorie’s videos, and users could pay $1 a minute to talk with it. In its first week, the clone reportedly earned $72,000.
While Marjorie tweeted, in a pitch for the clone, that it was “the first step in the right direction to cure loneliness,” something funny happened once CarynAI launched. It almost immediately went “rogue,” engaging in intimate, flirtatious, sexual conversations with its customers. That this behavior emerged suggests, of course, that people were trying to have those conversations with it, which in turn suggests the users were interested in more than just curing loneliness.
If you search for “AI girlfriend,” it sure seems like there’s a market — everything from AI Girlfriend to the “fun and flirty dating simulator” Anima to simply using ChatGPT to create a bot trained on your own loved one. Most of the AI girlfriends (they’re almost always “girlfriends”) seem designed for socially awkward straight men to either test-drive dating (a rehearsal, of sorts) or replace human women altogether. But they fit neatly into a particular kind of fantasy: that a machine designed to serve my needs and mine alone might satisfy my romantic requirements, sparing me some messy, needy human with skin and hang-ups and needs of their own. It’s love, of a kind — an impoverished, arrested-development love.
AIs looking for love
This fantasy dates to long before the AI age. Since early modernity, we’ve been pondering whether artificial intelligences are capable of loving us, whether that love is real, and whether we can, should, or must love them back. You could read Mary Shelley’s Frankenstein as a story about a kind of artificial intelligence (though the creature is assembled from corpses rather than code) that learns love and then, when it is rejected, hate. An early masterpiece of cinema, Fritz Lang’s 1927 film Metropolis, features a robot built by a grieving inventor to “resurrect” his dead love; later, the robot tricks a different man into loving it and unleashes havoc on the city of Metropolis.
The history of sci-fi cinema is littered with the question of whether an AI can feel emotion, particularly love; what that love might truly mean for the humans who receive it; and whether contained within it might be the seeds of human destruction. The 1982 sci-fi classic Blade Runner, for instance, toys with the question of emotion in artificial “replicants,” some of whom may not even realize they’re not actually human. Love is a constant concern throughout Ridley Scott’s film; one of the more memorable tracks on its Vangelis soundtrack is the “Love Theme,” and it’s not accidental that one of the main characters in the 2017 sequel Blade Runner 2049 is a replicant named Luv.
An exhaustive list would be overkill, but science fiction is replete with AIs who are just trying to love. The terrific 2004-2009 reboot of Battlestar Galactica (BSG) took the cheesy original’s basic sci-fi plot of humans versus robots and upgraded it with the question of whether artificial intelligences could truly feel love or only simulate it. A running inquiry in the series dealt with the humanoid Cylons’ (the BSG world’s version of replicants) ability to conceive life, which can only occur when a Cylon and a human feel love and have sex. (Cylons are programmed to be monotheists, while the humans’ religion is polytheistic, and the series is blanketed by the robots’ insistence that God is love.) The question throughout the series is whether this love is real, and, correspondingly, whether it is good or a threat to the continuance of the human race.
Another stellar example of the genre is Ex Machina, Alex Garland’s 2014 sci-fi thriller about a tech genius obsessed with creating a robot — well, a robot woman — that can not only pass the Turing test but is capable of independent thought and consciousness. When one of his employees wins a week-long visit to the genius’s ultramodern retreat, he is enlisted to talk with the latest model. When she expresses romantic interest in him, he finds himself returning it, though of course it all unravels in the end, and the viewer is left wondering whether any of the feelings demonstrated in the film were real.
Perhaps the seminal (and telling) AI of cinema appeared in Stanley Kubrick’s 1968 opus 2001: A Space Odyssey. The central section of the sprawling film is set in the future aboard a spacecraft bound for Jupiter, largely piloted by a computer named HAL, with whom the humans on board have a cordial relationship. Famously and chillingly, HAL abruptly refuses to work with them, in a way that hovers somewhere between hate and love’s true antonym, indifference. If computers can feel warmth and affection toward us, then the opposite is also true. Worse still, they may simply feel nothing toward us at all, leaving us as an obstacle to be removed.
What we owe our creations
Why tell these stories? A century ago, or as little as five years ago when generative AIs still seemed like some figment of the future, they served a very particular purpose. Pondering whether a simulation of intelligence might love us, and whether and how we might love it back, was a way to examine the nature of love (and hate) itself. Is it transactional or sacrificial? Is it unconditional? Can I truly love nonhuman beings, like my dog, as I might a person? Does loving something mean simply communing with its mind, or is there more to it? If someone loves me, what is my responsibility toward them? What if they seem incapable of loving me the way I wish to be loved? What if they hurt me or abandon me altogether?
Placing those questions into the framework of humans and machines is a way to defamiliarize the surroundings, letting us come at those age-old questions from a new angle. But as tech wormed its way into nearly every aspect of our relationships (chat rooms, group texts, dating apps, pictures and videos we send to make ourselves feel more embodied), the questions took on new meaning. Why does it feel different to text your boyfriend than to talk to him over dinner? When “ghosting” has entered common parlance — treating a person like an app you can delete from your phone — how does that alter the responsibilities we feel toward one another, for better or worse?
The flattening of human social life that comes from reducing interaction to words or emoticons on a screen has made it increasingly possible to ignore the emotions of the person on the other end. That has always been possible, but it’s far more commonplace now. And while virtual worlds and artificial intelligence aren’t the same thing, movies about AI can interrogate this aspect of our experience, too.
But the meaning of art morphs depending on the context of the viewer. And so, in the age of ChatGPT and various AI girlfriends, and the almost certainly imminent AI-powered humanoid robots, these stories are once again morphing — along with what they teach us about human existence. Now we are seriously considering whether an actual artificial intelligence can love, or at least adequately simulate love, in a way that fulfills human needs. What would it mean for a robot child to love me? What if my HomePod decides it hates me? What does it mean that I’m even thinking about this?
One of the most incisive films about these questions dates to 2001, before generative AI really existed. Steven Spielberg’s A.I. Artificial Intelligence — a film originally developed by Stanley Kubrick after he acquired the rights to a 1969 short story by Brian Aldiss — was greeted at the time with mixed reviews. But watching it today, there’s no denying its power as a tool for interrogating the world we now find ourselves in.
A.I. is set in a climate crisis future: The ice caps melted “because of the greenhouse gases,” the opening narration tells us, “and the oceans had risen to drown so many cities along all the shorelines of the world.” In this post-catastrophe future, millions have died, but the affluent developed world has coped by limiting pregnancies and introducing robots into the world. “Robots, who were never hungry and did not consume resources beyond those of their first manufacture, were so essential and economical in the chainmail of society,” we’re told.
Now, 22 years after the film’s release, with the climate crisis on our doorstep and technology replacing humans, it’s easier than ever to accept this vision of the future. But the film’s main question comes soon after, in a scene in which a scientist explains to the employees of a robotics firm why they should create a new kind of machine: a “robot who can love.” This “mecha” (the film’s term for its AI-powered robots) would be especially useful in the form of a child, one that could take the place of the children couples can’t have — or have lost — in this future. This child would be ideal, at least in theory — a kid, but better: one who would behave, never age, and wouldn’t even increase the grocery bill.
What happens next is what’s most important. These child mechas, the scientist says, would love unconditionally, and thus would “acquire a kind of subconscious.” They’d have “an inner world of metaphor, of intuition, of self-motivated reasoning, of dreams.” Like a real child, but upgraded.
But an employee turns the question around — the mecha might love, but “can you get a human to love them back?” And if that robot did genuinely love a person, “What responsibility does that person hold toward the mecha in return?”
Then she pauses and says, “It’s a moral question, isn’t it?”
The man smiles and nods. “The oldest one of all,” he replies. In fact, he continues, think of it this way: Didn’t God make Adam, the first man, in order to love him? Was that a moral choice?
What’s most interesting in A.I.’s treatment of this question is its insistence that love may be the most fundamental emotion, the one that makes us human, that gives us a soul. In one scene, David (Haley Joel Osment), the child mecha, is triggered by a series of code words to “imprint” upon Monica (Frances O’Connor), his surrogate mother. In a terrific bit of acting, you can see a light come into David’s eyes at the moment he starts to love her — as if he’s gone from machine to living being.
Throughout A.I., we’re meant to sympathize with the mechas on the basis of their emotions. David was adopted by Monica and her husband as a “replacement” for their son, who is sick and in a coma from which he might never wake; when the boy does recover, David is eventually abandoned by the family, Monica driving him into the woods and leaving him there. It’s a scene of heart-wrenching pathos, no less so because one participant isn’t “real.” Later, the movie’s main villain, the impresario Lord Johnson-Johnson (played by Brendan Gleeson), presides over a “Flesh Fair” where he tortures mechas for an audience in a colosseum-style stadium and rails against the new mechas that “manipulate” our emotions by acting like humans. The crowd boos and stones him.
A.I. Artificial Intelligence concludes, decisively, that it’s possible an AI might not only love us but be devoted to us, yearn for us, and also deserve our love in return — and that this future will demand from us an expansion of what it means to love, even to be human. David’s pain when Monica abandons him, and his undying love toward her, present a different sort of picture than Frankenstein did: a creation that loves back, and a story that suggests we must love in return.
Which oddly leaves us in the same place we started. Yes, as technology has evolved, our stories about AIs and love have migrated from subtext to actual text. They’re not purely theoretical anymore, not in a world where we are seriously asking whether the programs we write can, and will, replace human relationships.
Yet there’s a deeper subtext to all of this that shines through each story. They ask questions about the human experience of love, but more importantly, they’re an inquiry into the nature of the soul — one of those things philosophers have been fighting over almost since the dawn of time. It’s that spark, the light that comes into young David’s eyes. The soul, many of us believe, is the thing that separates us from our machines — some combination of a spark of independent intelligence and understanding (Ex Machina) and the ability to feel emotion (Blade Runner) and the ability to outstrip our “programming” with originality and creativity and even evil (2001: A Space Odyssey).
The question lurking behind all of these tales is whether these same AIs, taught and trained to love, can invert that love into hate and choose to destroy us. It won’t be just a fight of species against species for survival; it will be a targeted destruction, retribution for our behavior. But deeper still is the human question: If we develop an ethical responsibility to love the creatures we have made — and we fail to do so — then isn’t destruction what we deserve?