Why Reddit’s face-swapping celebrity porn craze is a harbinger of dystopia

Reddit just wants to put Emma Watson’s head on the bodies of adult film actresses, but that has dire implications for reality itself.

Aja Romano writes about pop culture, media, and ethics. Before joining Vox in 2016, they were a staff reporter at the Daily Dot. A 2019 fellow of the National Critics Institute, they’re considered an authority on fandom, the internet, and the culture wars.

“REALITY CHECK,” wrote a deeply troubled user on Reddit’s r/deepfakes subreddit, after news broke that the forum had become a repository for AI technology with far-reaching implications. “ALL OF THIS IS FUCKING INSANE.”

The technology in question? A new tool, driven by machine learning, that lets users easily swap the faces of their favorite celebrities onto preexisting video images.

In other words, endless videos in which the faces of porn stars have been replaced by celebrity faces — or rather, algorithmic approximations of celebrity faces that reside deep within the Uncanny Valley.

On r/deepfakes, eerie approximations of Emma Watson, Emilia Clarke, Sophie Turner, Natalie Portman, Kristen Bell, Daisy Ridley, Ariana Grande, and many others borrow the expressions, moves, sultry-eyed camera stares, and orgiastic glee of the porn stars upon whose faces they’ve been transplanted. You can find an example here. (Warning: this is full-on porn, and quite jarring and eerie at that. Do not click if you don’t want to see jarring, eerie, full-on porn.)

The Redditor alarmed by all of this, poshpotdllr, came to the subreddit to express the fear that this new face-swapping craze would result in a planet full of celibate men using digital enhancements to feed their unattainable fantasies, while women would be forced to “[choose] between solitude and polygamy.”

That argument is a tad dire — we’re still a long way off from robot-based porn driving us all to gender-based separatism. The sudden rise of the face-swapping tool and the plausibility of its output does, however, raise a host of serious questions about where all this technology is headed.

Most important: What happens to issues of consent when videos like this proliferate across the internet? And what implications does it hold for the integrity of any video in a digital age?

How did this happen?

The concept of the “deepfake” grew out of the subreddit r/CelebFakes, a community devoted to photo manipulations of celebrities. CelebFakes has 50,000 users and has existed on Reddit since 2011 — long before the infamous non-consensual leak of celebrity nude photos in 2014.

The subreddit has been mainly devoted to photoshopping celebrities to appear nude, including an array of famous women made to look topless on red carpets. These photos often spread onto porn sites, and while all of this is clearly dicey, the Reddit forum responsible for purveying them at least mandates no celebrities under 18 and no creepy “fakes of your neighbor.”

Until recently, the top post on CelebFakes was a seamless video manipulation from two years ago that spliced an interview with Emma Watson over footage of an adult film actress removing her top. The result was a Black Mirror-ish image of Watson stripping down in a newsroom. While the 2,000 upvotes said much, at least one commenter was disturbed by the implications.

“Wow, that is so well done that it’s a bit scary,” the commenter wrote. “Imagine if some[one] that hates you puts your face on some bestiality or [child pornography].”

What was about to come, however, was far creepier than the video splicing. On September 30, 2017, a user named deepfakes posted a series of videos to CelebFakes in a thread requesting manipulations of Game of Thrones actress Maisie Williams.

The footage was clearly a virtual recreation of Williams’s face — but the resemblance was striking enough to compel one respondent to ask deepfakes to share the algorithm he was using.


As deepfakes posted more of the simulations, including one of a fake bob-haired Emma Watson in a porn template that made her potentially interchangeable with, say, a fake bob-haired Taylor Swift, more requests for the AI source followed.

Eventually, deepfakes launched a new subreddit specifically for the video celebrity swaps, r/deepfakes, and released the script for his face-swapping process to Reddit.

Examples of deepfakes’ script applied to a training set for combining Donald Trump and Nicolas Cage.
u/deepfakes

Deepfakes, the user, told Motherboard in December that he was using popular open-source tools for machine learning combined with a wide internet search of publicly available images of various celebrities.

In the past, this kind of algorithm-based machine learning has been used to train neural networks to do everything from watch and recreate movies to draw human faces on their own. It’s easier than ever for programmers to train computers to simulate all kinds of things for our enjoyment, education, and advancement.
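To make that more concrete, here is a minimal sketch of one widely used face-swapping approach (a single shared encoder paired with one decoder per identity), written in Python with the open-source PyTorch library. The layer sizes, 64x64 face crops, placeholder data, and training loop are illustrative assumptions, not the actual script that circulated on Reddit.

```python
# A minimal, hypothetical sketch of shared-encoder / per-identity-decoder face swapping.
# All sizes and data below are illustrative placeholders, not the original Reddit script.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder learns a generic "face" representation; each identity gets
# its own decoder. The swap: encode a frame of person A, decode with person B's decoder.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()
params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# Placeholders standing in for batches of aligned 64x64 face crops scraped from public images.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, run a frame of A through B's decoder to produce the swapped face.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The point is simply that nothing in a sketch like this requires more than freely available libraries and a pile of publicly scraped photos.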

While there’s been plenty of praise for these developments, others have warned that unfettered algorithmic development amounts to a “weapon of math destruction,” both because algorithms essentially enshrine biased data into computer training, and because there’s very little consensus on how, or even if, they should be regulated.

Lawmakers aren’t generally in a position to quash programmers who are awash in free, open-source programs that help you set up your own neural networks and then teach them how to do stuff.

As the rise of r/deepfakes illustrates, it’s easy for the technology to rapidly advance when many users are collectively working with and learning about it. Before long, r/deepfakes users were building on each other’s data sets to create even more convincing facial swapping models — like this training set illustrating the morphing of adult film actress Little Caprice into simulated versions of Watson and Kate Mara.


The legality of all this is seriously sketchy. The ethical implications are worse.

In response to recent news coverage of their technological advancements, posters on r/deepfakes have largely responded flippantly, claiming that their works are “legally protected parody.” To be clear, that’s almost certainly not the case, at least in the United States — which is why their works are often removed from the third-party sites they’re uploaded to, under the Digital Millennium Copyright Act. (Plenty of r/deepfakes videos, including a fake of Emma Watson showering that spread to third-party porn sites, have remained online.)

The Reddit users seem to be basing this claim on the “fair use” doctrine, a part of US copyright law that is frequently applied to “remix culture” in an age when material is liberally reworked and distributed across the internet. Fair use principles are what protect your right to make parodies and remixes, or to sample and copy material for artistic or educational purposes.

The basic concept of fair use is that works that are meaningfully “transformative” of their original source — rather than a purely derivative copy — constitute a form of parody (even if they’re not strictly humorous), and are thus protected against claims of copyright infringement.

But despite the insistence of r/deepfakes users, this is most likely not a simple case of fair use. Adult films are copyrighted material, and even though the deepfake videos aren’t straightforward copies of their source material, they’re arguably a market substitute, intended to replace the original source (porn) rather than expand or transform its value. (To offset the massive profit losses that online piracy sites have caused within the adult film industry, the porn business has gotten creative about its own online offerings.)

The r/deepfakes users’ defense of their work would likely be that this type of simulation is transformative because creating the faked videos requires training a computer to “transform” the original images into a new composite. But legal precedent around fair use is very clear about distinguishing a technology from the way it’s being used. In this case, AI technology can’t be used to create a market substitute for porn, even if the machine learning method itself involves transforming the original image.

There’s just one (big) problem: Setting a legal precedent and actually enforcing it are two very different things — especially when it comes to the internet. The reality is that, legality aside, it’s extremely difficult to purge this kind of content from the web once it’s out there. Although the original videos and GIF files can be yanked off mainstream websites like Giphy and Imgur fairly quickly, it’s harder to get them deleted from largely anonymous porn websites, many of which were set up specifically for this kind of celebrity image-sharing, and which typically don’t regulate against “revenge porn.”

“Revenge porn” is a catch-all term for explicit or incriminating material that’s taken and uploaded to the internet without the consent of the subject or the owner of the materials. It’s an increasingly common form of online abuse, and although several states have enacted laws criminalizing it, in recent years its victims have included everyone from celebrities to members of the military to members of Congress.

Reddit, as the original home of the nude photos of Jennifer Lawrence that leaked in 2014 — which constitute revenge porn, in that they were uploaded and distributed across the internet without her knowledge and consent — ultimately took far-reaching action to keep revenge porn from appearing on the site. But it’s maintained a hands-off attitude toward forums like r/CelebFakes, which manipulate photos of real people without the consent of the person whose image is being manipulated. So far it’s shown no indication of stepping in to halt the activity of r/deepfakes, either.

Whether or not it’s Reddit’s responsibility to do so, the cat is out of the bag. In the two months since it was created, r/deepfakes has gained 25,000 members. And it would seem that few of them are thinking very hard about the feelings or autonomy of the people whose bodies and faces they’re exploiting.

Not only are the celebrities having their likenesses used in pornography without their knowledge or consent, but the adult film actors whose faces are being replaced are having their professional work essentially devalued and treated as interchangeable. And all of these women — and presumably some men, though those videos are so far harder to find — are having these digital manipulations spread across the internet without their knowledge, consent, or control.

The glaringly obvious concern here is, if this technology can be used on celebrities without a second thought, who’ll be next? We’ve already seen that no one, regardless of status or private or public identity, is immune from having revenge porn used against them. There’s every reason to expect that by the same token, this technology will inevitably be applied to, and used against, private citizens.

The potential for harm extends well beyond porn — and highlights the increasing need for skepticism in a digital age


Awareness of the technology’s ramifications exists on r/deepfakes right alongside the experimentation.

“Everybody is talking about celebrity fakes, but once this is easy to use there will be tons of fake porn vid of that hot colleague, kids who do their classmates, etc etc,” predicted one member of the subreddit when deepfakes released his tool. “It will be the golden era of creeps and digital sexual harassment. Women will suspect anyone who takes too many photos, uploading photos to social media of other people without explicit permission might be frowned upon.”

In response, Deepfakes argued that the technology was already available and free for anyone to use — and he’s absolutely correct. Around the time he started uploading face-swaps to Reddit, Pornhub announced the use of facial recognition technology on its platform to enhance database searching for users. And facial recognition search engines for porn stars already exist.
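For a sense of how low the barrier already is, here is a minimal sketch of off-the-shelf face matching using the open-source face_recognition Python library. The file names and threshold below are hypothetical placeholders, and this is not the system any particular site uses.

```python
# A minimal sketch of face matching with the open-source face_recognition library.
# File names and the match threshold are hypothetical placeholders.
import face_recognition

# Encode a reference photo of the person being searched for.
reference = face_recognition.load_image_file("reference_photo.jpg")
reference_encoding = face_recognition.face_encodings(reference)[0]

# Compare every face found in a set of candidate frames against the reference.
for path in ["frame_001.jpg", "frame_002.jpg"]:
    frame = face_recognition.load_image_file(path)
    for encoding in face_recognition.face_encodings(frame):
        distance = face_recognition.face_distance([reference_encoding], encoding)[0]
        if distance < 0.6:  # the library's commonly used default tolerance
            print(f"{path}: likely match (distance {distance:.2f})")
```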

That’s because academic researchers have been publishing advances in machine learning and facial recognition like this for years — as with this 2016 project that allowed researchers to manipulate video in real time using face-capture modification technology. Building on this technology, Radiolab conducted a similar experiment in 2017 involving a real-time simulation of President Obama, to make the point that this technology is advancing faster than any of us realize, or are really thinking much about.

This body of research has already demonstrated the potential to manipulate video footage of politicians with relative ease; it’s not that big a leap from manipulating images of porn stars and celebrities to manipulating those of politicians and other public figures.

“It’s destabilizing,” Professor Deborah Johnson told Vice in reaction to the rapid rise of r/deepfakes. “The whole business of trust and reliability is undermined by this stuff.”

In this era of fake news — where false information is intentionally proliferated by destabilizing agents like trolls, propagandists, and bots controlled by state governments — it’s not exactly reassuring to realize that we now have to battle another potential form of reality manipulation.

So how do we do that? Cathy O’Neil, a mathematician who’s been outspoken about the dangers of putting our trust in “Big Data” tools like algorithmic learning without demanding accountability, told Vox via email that she wasn’t fazed by news of the face-swapping craze, because the technology has been around for years.

“To be honest I think it’s better that it’s used a bunch for porn than in some political scandal that gives people the wrong impression and starts a war,” she said.

O’Neil described the moment as “an education issue” for the general populace.

“Until we, as a group, realize that video is corruptible, we will be shocked over and over. … In other words, it’s only a problem because we expect something else when we see a video. If we get used to it, it ceases to be a problem.”

While acknowledging that tech like this is a potentially destabilizing force, with “the nature of evidence” itself becoming “more complicated,” O’Neil argued that acknowledging the corruptibility of video should be an obvious component in the cultural recognition that all technology, and indeed the entire current technological moment — the simulations, the bots, the algorithms, the spread of fake news, and the era of Big Data — is potentially hugely destabilizing.

The answer, she said, lies in evolving our expectations accordingly.

“It’s better for us to learn how to be skeptical of what we see, what we read, and for that matter what we hear, because sound can be edited as well,” she said.

She added that she foresees “an ensuing tech war between people who doctor videos and people who spot doctored videos.” Indeed, some scientists and visual effects experts are already on the job, applying their expertise to detecting fake videos and teaching the public how to identify them, too.

In other words, it’s possible that the same kinds of technological advances that seem to be damning us could also be the agents that save us.

Update: In response to the rise of r/deepfakes, Discord, Pornhub, and Giphy all moved to ban the distribution of algorithmically generated “fake” pornography on their sites. On February 7, Reddit banned the r/deepfakes community, as well as the longstanding r/CelebFakes community, and updated its sitewide rules regarding its ban on “involuntary pornography” and “sexual or suggestive content involving minors.”
