At least 10 would-be Harvard students had their admission revoked because they shared offensive memes on Facebook. The memes used jokes about racial slurs, sexual assault, and child abuse, and when they were discovered by administrators, Harvard rescinded their admissions offers.
I’ve been posting on internet forums since I was a preteen. It’s easy to look at Harvard’s punishment as being too harsh on a bunch of kids sharing things online, but there is more to this story than just kids sharing memes.
For these students, Harvard’s decision is the first real-world consequence of posting content that is racist, sexist, and violent. When this kind of behavior is left unchecked, it can lead to bad places: harassment, the normalization of offensive language, and radicalization.
It also highlights the fact that today’s internet is sorely missing content moderation, something prioritized by the message boards where memes got their start. Compared to today’s social media, the admins of early forums like Something Awful drew much clearer lines in the sand to shut down offensive posts. But for Facebook and Twitter today, the techno-libertarian ethos of free speech has allowed hateful content to flourish.
That means for some posters, it ends up falling to outside institutions like Harvard to step in to stop this behavior.
Why proper moderation is key to stopping hate speech online
On the internet, the attitude of sharing hateful content and dismissing it as “just memes” is pervasive. Racist and sexist memes fill the same space that racist and sexist jokes do. After all, a meme is just a joke in image format. And just as a hateful joke makes space for hateful attitudes, these memes make offensive things seem normal.
Once you’ve normalized all kinds of vile speech, it becomes easy to make the jump to full-on hate speech and white nationalism. This is how the alt-right got a foothold in 4chan, and this is the way some people end up on the other side. When nobody’s there to tell you it’s wrong and when the entirety of your life comes through online interaction, the only people who can steer you are the ones you talk to online.
This is why it is absolutely crucial for social media companies to properly moderate what goes on between users. If you didn’t grow up knowing the differences between these kinds of proto-social media, it becomes easy to be tricked into thinking this is a free speech issue. In reality, it’s about keeping hate speech from quietly taking root. Otherwise you’ll see more situations like Harvard’s, where an external entity gets a glimpse inside and is shocked by just how far things have gone.
I grew up posting on forums. I understand this kind of culture.
My personal story is instructive here: While I never participated in the darkest corners of the web, the culture where racist and violent memes are shared is familiar to me. Richard Spencer, the alt-right leader famous for being punched in the face, said, “The average alt-right-ist is probably a 28-year-old tech-savvy guy working in IT.” Alt-right affiliation aside, this describes me to a T. The difference is, I ended up a Bernie Sanders-supporting, Chapo Trap House-listening urbanite. I've always wondered what put me on that path. I think it has a lot to do with the culture of the places I posted online — and, more importantly, the places I avoided.
I started posting online in the pre-social media days. Back in the early 2000s, when I was a preteen, I violated the 13-year-old age restriction to post on message boards. It was different back then. You would join a forum aligned with your interests, which usually centered on pretty dorky topics like video games or Dragon Ball Z, and slowly migrate to its off-topic boards to goof off about anything else.
You would build this online personality for yourself through jokes, images, and arguing in debates. It was early enough that nobody knew what a meme was — they were just funny pictures that you’d save in a folder on your computer and post elsewhere.
There was a hierarchy to those boards: an administrator who ran higher-level things, and then moderators who handled the day-to-day work of making sure posters didn’t violate the rules of the forum. Those moderators made sure the pot didn’t boil over into vile things, because the internet has always been an awful place.
There was one of these forums, a place called Something Awful, that became kind of a beacon for all of these other message boards. Two things set Something Awful apart — the quality of the content (I can say with certainty that you’ve laughed at memes from Something Awful, where LOLcats and Slender Man found their start), and the fact that in order to post, you had to pay $10.
It was an excellent community, maintained by a large number of moderators from varied backgrounds, and its rules about quality of content had teeth: if you got banned, re-registering cost another $10. This meant that if the moderation team wanted to crack down on content, they could. It also meant that posters had to keep things relatively civil.
This made the environment too restrictive for some posters, and one day a young man who posted on a subforum called Anime Death Tentacle Rape Whorehouse under the name “moot” decided to create a new forum, modeled on a Japanese site called 2chan, free from these moderating forces. He named his new site 4chan.
I was always afraid of 4chan. Even before 4chan was known for spawning Anonymous, the morally dubious group of politically active hackers, and for the rise of the alt-right, it was a place where you could go and be instantly greeted with something unspeakably offensive. It got its start as an anime forum but quickly grew into a thicket of misogyny and racism, fed by a laissez-faire moderation approach in which anything that wasn’t explicitly illegal was allowed. Anonymity and a snarling user base full of bullying, threats of violence, and encouragement of self-harm were the norm. If Something Awful was a governed online city-state, 4chan users were the barbarians at the gates.
What happened with the teens at Harvard happened on Facebook, but its roots are in 4chan. I sometimes forget what it’s like to be a teenager, but thinking about this case reminded me just how much of being a teenage boy is pushing the envelope to feel rebellious. You push those boundaries to see when they give, and being online gives you the ability to push them very, very far.
The trouble is that the internet can be an echo chamber. You curate whom you follow based on your tastes, and that can give the false impression that everything you’re doing is perfectly fine, even as you swim out into darker and more dangerous waters. You keep pushing to be edgier, and everyone else seems to be encouraging you. That’s the rabid groupthink that keeps 4chan going.
Today’s social media moderators simply aren’t as effective
The problem is that social media enables it. Something Awful was capable of some truly incredible moments of delusional groupthink, but its moderation always stopped the kind of pile-on that 4chan invited on a nightly basis: hordes of anonymous posters calling for someone to kill themselves.
But places like Facebook and Twitter come out of the libertarian ethos of Silicon Valley and believe that freedom is paramount. This creates a moderation environment that is far more reminiscent of 4chan than Something Awful. I’ve experienced Twitter’s terrible moderation policy. I’m not Jewish, but I’ve had anti-Semitic slurs and pictures thrown at me on Twitter, and Twitter has done nothing about them after I’ve reported the posts. I’ve reported my fair share of what I consider to be hate speech on Facebook, and have never had a post removed. You can try this on your own — Facebook will tell you if they don’t find it in violation of their community guidelines.
All sorts of vile things are lurking on Facebook, and the teens who were denied admission to Harvard are proof of that. When there are no consequences, and everyone is telling you that this is okay by not stepping in, I understand the environment that made these Harvard kids think they weren’t doing anything wrong.
Facebook’s guidelines for what to moderate leaked online, and they are more permissive than those of any other forum I have posted on in my life: anything not deemed a credible threat of violence is allowed, which creates a tremendous amount of leeway for content that an outside observer would deem inappropriate. Based on the standards Twitter has revealed through whom it censors and whom it lets go free, it’s easy to imagine that platform’s rules are similar.
I would assume that if a prospective student were caught posting (content warning: violence) “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat,” Harvard would reconsider their admission. But that quote appears verbatim as an example of a non-credible threat in the leak of Facebook’s standards, meaning it is perfectly acceptable for their platform. That gap, between what Silicon Valley’s free speech paradigm sets up as acceptable and what society at large considers okay, is at the crux of what allowed these kids to think they could get away with sharing these images with their peers.
The internet is a wild and largely untamed place, and despite all of its advantages and its full integration into our day-to-day lives, there’s still a lot of darkness lurking around the edges. Proper moderation keeps that darkness from encroaching further, and it’s effective; it just has to have teeth. It’s a lot easier to have a moderator take down a post, and to learn from why they did, than it is to explain why you aren’t going to Harvard anymore.
Cooper Lund is an IT systems administrator currently living and posting in Brooklyn. You can find him on Twitter @cooperlund.
First Person is Vox's home for compelling, provocative narrative essays. Do you have a story to share? Read our submission guidelines, and pitch us at firstname.lastname@example.org.