This week, the Guardian unveiled an impressive large-scale analysis of its comments section — a noble attempt at assaying the internet adage that you should "never read the comments." The research is staggering partly because of its volume: The media outlet analyzed a whopping 70 million comments posted over a 10-year period, from 2006 to March 2016.
The result is a decisive statistical upholding of anecdotal wisdom: Of all the abusive comments posted to the site, more of them are leveled at female journalists and journalists of color than at anyone else.
However, one of the most important parts of the Guardian's report on its findings is nonstatistical: its delineation of the kinds of comments it views as abusive. "Snarky comment[s]" and "spiteful tweets" are included in its assessment, along with generally mean and ad hominem comments. Even milder insults, the report explains, can contribute to an "avalanche" of hate.
After noting that it blocks hundreds of comments daily due to what it considers abusive language and content (most of which is formulated along sexist, racist, or queerphobic lines), the Guardian ultimately declined to join the popular trend of media outlets (including Vox) opting to remove comments sections rather than spend resources policing them and battling abuse. Instead, the organization revealed that it's curiously optimistic about its findings:
[T]he Guardian has no plans to close comments altogether. For the most part, Guardian readers enrich the journalism. Only 2% of comments are blocked (a further 2% are deleted because they are spam or replies to blocked comments); the majority are respectful and many are wonderful. A good comment thread is a joy to read – and more common than the "don’t read the comments" detractors believe.
But is that really true? The Guardian's take on its comments section may be statistically accurate, but it's also a tidy sidestep of the issue, in that it doesn't change the experience of those who are most frequently targeted by abusive comments — and that's true no matter how quickly those comments get blocked.
The Guardian's methodology has invited comments questioning its definition of "criticism"
In presenting its findings, the Guardian provided examples of the types of comments it blocks with a simple eight-question quiz; they range from sexist and racist statements like "black people are their own worst enemies" to more general insults like "stupid ugly woman writes stupid ugly steaming pile of dog-shite." Comments that don't meet the site's specific criteria for blocking are allowed to remain, even though they may be angry or aggressive in tone.
The comments section of the Guardian's methodology explication is currently locked — the site is no longer accepting new comments on the topic. But the most upvoted comment of those posted before the section was closed illustrates, in a nutshell, the fundamental culture war at the center of most comment sections on the internet.
User Phazer suggests that the Guardian is employing a double standard in allowing biased moderators to block what Phazer sees as basic criticism: "Several of the comments [in the Guardian's quiz examples] are not abusive, but merely put forth criticism of the piece," Phazer writes.
Let's take a closer look. The first blocked comment example in the Guardian quiz argues that "feminists... pollute journalism..." and are "harpies." The second states that "black people are their own worst enemies." The third cites the "disproportional political influence Jews have in most Western societies."
The fourth dismisses factual research regarding rape statistics and the existence of a gender pay gap in order to rant about "feminist crap portraying women as victims and men as perpetrators." The fifth is labeled as "author abuse" for smearing the author and calling the editor a "disgrace to the profession." The sixth and final blocked example is both sexist and a personal attack on the journalist.
In other words, the comments that Phazer insists are "merely" criticizing their corresponding articles all contain either 1) an element of irrelevant ad hominem attacks on the writer, or 2) an offensive statement about or dismissal of the humanity of the writer and/or the subject.
The question of what constitutes "criticism" isn't just about comments — it's about an entire subculture war
This fundamental question of whether it's valid criticism to attack, question, or dismiss someone on the basis of their core humanity as a woman, person of color, and so on is the crux not only of the Guardian's comments analysis but of the giant subculture war that has consumed the internet in recent years. What was once a general morass of meanness on the internet has coalesced rapidly since feminists, activists, and minorities began to speak and advocate regularly for a number of progressive issues that fall under the umbrella of "social justice."
The subsequent backlash of conservative men's rights movements, Gamergate, and offshoot subcultural movements in response has led to near-constant conflict — for every internet protest, there's an equal and opposite protest full of hatred.
Increasingly, we see this question applied to the practice of commenting on everything from someone else's humanity to a piece of media whose narrative centers identities other than your own. And while this may sound baffling in the abstract, the reality of life lived on the internet if you're not a straight, cisgender white man is often bleak.
For instance, during the peak of Gamergate, an analysis of 72 hours' worth of tweets bearing relevant hashtags revealed that the vast majority of tweets were negative, even if they were posted in support of women undergoing harassment. Many of the tweets were similar in nature to comments the Guardian highlighted as grounds for blocking, comprising ad hominem attacks on an individual or their appearance or personality, dismissals of someone's identity, or insults related to feminism or gender.
On the other side of the spectrum, even the attempt to diversify conversations and make them more inclusive can provoke attack. This week the online video game Rust launched a long-publicized plan to permanently assign players random avatars as black, white, male, or female characters, in an attempt to simulate the randomness of gender and identity assignation in nature. Many male players have revolted against this choice, and players who've tried to discuss the changes rationally have been stonewalled both by opposing arguments and by the reality of discussion on the internet.
"All you people did was turn this into a flame war," wrote a frustrated user on one Rust forum last year. "[A]ll i got was a giant flame war because no one can voice [their] opinion nicely ... they all have to curse and belittle each other."
In other words, the core question of identity is often polarized enough to spark massive debate. When trying to objectively discuss whether we embrace the existence of diverse identities in our narratives (and in real life, for that matter), it's hard not to run up against the ugliest parts of the internet.
The Guardian's approach to comment moderation is great — but impractical for most websites, let alone individuals
The Guardian’s willingness to invest time and effort into analyzing its comments sections and educating its readers about what it considers abuse is a good start. But its conclusion to essentially "just keep blocking" the worst offenders falls short. Not only is such a stance unlikely to effect meaningful change across the larger internet, but it doesn't consider the many factors that make the "just keep blocking" approach ineffective in the bigger picture.
The question of how to deal with online harassment is daunting. The Guardian might have the resources to read, analyze, and block thousands of comments a day, but the average person, and even the average media outlet, does not. According to the US Justice Department, 40 percent of women report being harassed or threatened online. And when social media platforms like Twitter and Tumblr make it extremely easy to talk to (and about) strangers online, the question of how to mitigate abuse and harassment increasingly falls to the individual, which can be exhausting, debilitating, and ineffective.
For the average human, a filtered internet, one in which we lock down our public personas and closely monitor whom we interact with, might be as good as it gets, if impractical. But even that's not a perfect solution: An innocent bystander can still be targeted by cyberbullies — statistics show that more than half of all teens have been cyberbullied — or even by a website they've never visited.
A recent example: the controversy over the annotation site News Genius, which gives users the ability to annotate any webpage, inadvertently turning some websites into targets for mockery and harassment. Even though a site's owner can choose to ignore the annotations themselves, it's harder to ignore the users who've now turned their attention to the website as a result.
Even though the Guardian is optimistic, the question of curing an abusive internet is still complicated and unwieldy
The law remains woefully behind in its ability to prosecute or seriously deal with online harassment. Danielle Keats Citron, in her book Hate Crimes in Cyberspace (which the Guardian cites in its comments analysis), writes of an Ivy League law student who was harassed by trolls after someone placed her on a list of the "hottest" women in law school on a web forum, labeling her a "stupid bitch."
The student, who had never visited the site before, found herself faced with death threats and hate speech, and her job recruitment chances were damaged due to the popularity of the forum thread as a Google search result. When she did get a job, posters from the forum contacted the law firm's partners and slandered her. When she attempted to contact police, the response was dismissal and victim blaming: She was told to "clean up" her online image herself.
We've since made strides in prosecuting online harassment — there are 45 state and federal cyberstalking laws on the books — but the question of what cyberstalking actually is remains up for debate, and HaltAbuse.org notes that many of those laws protect only minors.
Cases of internet users taking online comments and harassment into the arena of real life are rare, but they do happen. Take the Wattpad author who stalked and brutally attacked a negative reviewer, the male redditors who harassed a woman into canceling a real-life Reddit meetup, or the online men's rights forums Elliot Rodger frequented for years before he went on a Santa Barbara shooting spree targeting women.
That these real-life attacks generally seem to involve men targeting women is an extension of the pattern the Guardian observed in its comments: Women were subject to far more abuse than men. Eight of the 10 regular Guardian writers who received the most abuse over the lengthy analysis period were women.
Numerous women online have begun proactively training others in their communities to preempt internet abuse. Anita Sarkeesian, a prominent target of Gamergate, recently teamed up with other women to promote useful tips and tools for women facing harassment, including enabling two-factor authentication on accounts and devices to keep them from being hacked by a third party with an agenda, and scrubbing personal info from the web.
All of this is part of a much larger conversation about how we foster a nicer and less sexist, racist, and queerphobic internet while still maintaining free speech and allowing for differences of opinion.
Ultimately, the Guardian report upholds the weak idea that the best we can get when it comes to dealing with criticism online is to block and move on. But when the dark underbelly of internet "opinion" degrades the fundamental humanity of the person on the receiving end, blocking is only a minor fix. The real problem will take a lot more work from all of us to solve.