It’s hard to know what to say about what happened with Twitter last week after the tragic suicide of Robin Williams.
While his death caused a massive surge of online activity on the social communications service, the bulk of it expressing shock and sadness, two trolls also subjected his daughter Zelda to vicious attacks that included tasteless photoshopped images and cruel taunts.
It caused her to post this:
https://twitter.com/zeldawilliams/status/499432576872755201
Also last week, in another situation that got less attention, the editors of women’s site Jezebel called out its parent company Gawker in a public post titled “We Have a Rape Gif Problem and Gawker Media Won’t Do Anything About It.”
Wrote its editors quite eloquently, especially given the circumstances:
For months, an individual or individuals has been using anonymous, untraceable burner accounts to post gifs of violent pornography in the discussion section of stories on Jezebel. The images arrive in a barrage, and the only way to get rid of them from the website is if a staffer individually dismisses the comments and manually bans the commenter. But because IP addresses aren’t recorded on burner accounts, literally nothing is stopping this individual or individuals from immediately signing up for another, and posting another wave of violent images (and then bragging about it on 4chan in conversations staffers here have followed, which we’re not linking to here because fuck that garbage). This weekend, the user or users have escalated to gory images of bloody injuries emblazoned with the Jezebel logo. It’s like playing whack-a-mole with a sociopathic Hydra. …
In refusing to address the problem, Gawker’s leadership is prioritizing theoretical anonymous tipsters over a very real and immediate threat to the mental health of Jezebel’s staff and readers. If this were happening at another website, if another workplace was essentially requiring its female employees to manage a malevolent human pornbot, we’d report the hell out of it here and cite it as another example of employers failing to take the safety of its female employees seriously. But it’s happening to us. It’s been happening to us for months. And it feels hypocritical to continue to remain silent about it.
I wish I could say either of these incidents surprised me in any way. Because while they were about as low as you could go — prolonged and exhausted sigh — such behavior has become all too common on Twitter and Tumblr, on Facebook and in texts, on Secret and Whisper and endlessly, so endlessly, in comments.
Being a jackass online, too often anonymously, has been one of the most persistent diseases of the Internet — really, from its very beginnings. And, as each incident happens, most especially the bullying of girls and women, we have collectively wrung our hands over the issue and agonized over what to do.
And still we have embraced an ever-increasing number of tools to share ourselves and our stories with the world without pause, with social communications apps and networks growing exponentially everywhere across the globe.
This is also no surprise: Humans love to share, to chatter, to observe, to pontificate and to vent. The impulse is normal and it bleeds into the online space to become even more amplified. In its most benign forms, the negative feedback is a lot like heckling — you put yourself out to the crowd and take the heat.
Everyone groks that on some level and is used to some amount of testy back-and-forth online. I am guilty as charged of engaging, many, many — well, too many — times in a fair bit of snapping and sniping with people I think are obnoxious (and vice versa). Earlier this week, for example, I had a lively argument with someone about the police action in Ferguson, Mo., that got heated, although it was never less than civil.
But civil is not the trend these days online. And when it gets to the state that it did with both Zelda Williams and Jezebel, and as it does too often on this site (I pity our editors who have to wade through some of the more vile comments and delete them), it’s clear that no one individual has an ability to push back the wave of vitriol with any modicum of success.
That is why it is clearly up to sites like Twitter, Tumblr, Facebook and others to try to be more transparent and clear about what they are doing to combat this abuse, well beyond the words of comfort that they inevitably roll out when these incidents happen.
The response from Twitter after the Robin Williams trolling was typical, with Del Harvey, head of its Trust and Safety Team, responding: “We will not tolerate abuse of this nature on Twitter.”
Okay, fine. Except Twitter does tolerate such abuse in way too many cases, despite continually touting its commitment to user trust, terms-of-service enforcement and safety advocacy.
Specific to the Williams incident, Harvey said: “We have suspended a number of accounts related to this issue for violating our rules and we are in the process of evaluating how we can further improve our policies to better handle tragic situations like this one. This includes expanding our policies regarding self-harm and private information, and improving support for family members of deceased users.”
Okay, fine again. But that, too, is entirely unspecific and not as detailed as it needs to be. Suspension seems obvious. But how difficult would it have been to anticipate that human beings would act badly when allowed to be anonymous on social platforms, and to have the most ironclad rules in place from the start?
Gawker, too, tightened up its approach to comments after the Jezebel staff sent up its flare, bringing back a “pending comments” system: “Only comments from approved commenters — those who are followed by Jezebel — will automatically be visible. There will also be a pending comment queue; comments in the pending queue are visible to readers, but only if they choose to see them.”
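The mechanics of such a system are simple enough to sketch. Assuming a hypothetical model — the class and names below are purely illustrative, not Gawker's actual implementation — comments from commenters the site follows go live automatically, while everyone else lands in a pending queue that readers must opt in to see:

```python
# Illustrative sketch of a "pending comments" moderation queue.
# Assumption: the site maintains a set of approved (followed) commenters;
# all other comments are held in a pending queue visible only on request.
from dataclasses import dataclass, field


@dataclass
class CommentQueue:
    approved_commenters: set = field(default_factory=set)  # commenters the site follows
    visible: list = field(default_factory=list)            # shown to all readers
    pending: list = field(default_factory=list)            # shown only if readers opt in

    def post(self, commenter: str, text: str) -> str:
        """Route a new comment: approved commenters go live, others are queued."""
        if commenter in self.approved_commenters:
            self.visible.append((commenter, text))
            return "visible"
        self.pending.append((commenter, text))
        return "pending"


queue = CommentQueue(approved_commenters={"alice"})
print(queue.post("alice", "Great piece."))   # approved commenter, goes live
print(queue.post("burner123", "spam gif"))   # unknown burner account, held in queue
```

The point of the design is that a fresh burner account can still post, but its output never reaches readers by default, which blunts the whack-a-mole dynamic Jezebel described.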
As to how these sites enforce such rules, I am not sure what the big secret is about the tools being used to stop this kind of abuse, or why they cannot be more explicit about exactly how it is done.
On Twitter, the form for reporting abuse is not difficult, but neither is it very clear or easy, requiring answers to a series of questions and form-filling that feels like you might be returning a misfit sweater to Lands' End.
In fact, Twitter’s pages related to online abuse are surprisingly thin, with advice that has the feel of a nebulous high-school pamphlet handout and is not as detailed as one might imagine, given how obvious the pitfalls of the service can be.
And while its abusive-behavior policy is perfectly fine, it also lacks the kind of teeth and deep information that people should see.
Could we get concrete examples of how Twitter has dealt with a variety of abuse situations?
Could we know exactly what happens when a complaint is filed?
Could we know more specifics about the tools used to combat abusers?
Could we know exactly who at Twitter is dealing with this and how they conduct their investigations?
And could it all be in plain English?
Without getting on our Re/code high horse (such issues on our site pale in comparison with those Twitter faces), let me contrast Twitter’s comments policy with our own.
Here’s Twitter’s:
Abusive behavior policy
If you need to report abusive behavior to Twitter, please file a report here.
If you believe you may be in danger, please contact your local law enforcement authority in addition to reporting the content to Twitter so that the situation can be addressed both online and offline.
User disputes and false statements
Twitter provides a global communication platform which encompasses a variety of users with different voices, ideas and perspectives. As a policy, we do not mediate content or intervene in disputes between users.
Threats and abuse
Users may not make direct, specific threats of violence against others, including threats against a person or group on the basis of race, ethnicity, national origin, religion, sexual orientation, gender, gender identity, age, or disability. Targeted abuse or harassment is also a violation of the Twitter Rules and Terms of Service.
For frequently asked questions about reporting abusive behavior on Twitter, click here. To learn more about what you can do when you encounter abusive behavior on Twitter and other websites, click here.
Offensive content
Users are allowed to post content, including potentially inflammatory content, provided they do not violate the Twitter Rules and Terms of Service. Twitter does not screen content and does not remove potentially offensive content unless such content is in violation of the Twitter Rules and Terms of Service.
If you believe the content or behavior you are reporting is prohibited in your local jurisdiction, please contact your local authorities so they can accurately assess the content or behavior for possible violations of local law. If Twitter is contacted directly by law enforcement, we can work with them and provide assistance for their investigation as well as guidance around possible options. You can point local law enforcement to our Law Enforcement Guidelines.
And here is ours, written by Walt Mossberg and me, in which we — you know — talk like regular people would to other regular people:
A Higher Standard for Comments
We invite comments on the blog posts, columns, guest blogs and news reports here on the Re/code website. We know our readers and viewers have plenty to contribute and that we are not the source of all wisdom. But we are determined to set as high a standard for comments as we are trying to maintain for our own work. To that end, we are establishing a more rigorous policy for comments and commenters than a lot of other websites maintain.
One of the great things about the Web is that it allows so many people to post what they think. But one of the worst things about the Web is that so many of those comments are juvenile, venomous, self-promotional or irrelevant. We won’t be publishing that stuff here. And, while we want diverse inputs, please understand that this is our site, not a user-owned or user-run site.
So, here are the rules for comments:
First of all, we won’t publish anonymous comments. We think people should stand behind what they post here, just as we do. While we realize that there is some value in allowing people to disguise their identities, we think that is outweighed by the tendency of people to hide behind anonymity to post flames and slander, or to make seemingly unbiased comments that actually serve their business purposes, rather than to truly advance the discussions here.
To that end, everybody who wishes to comment will have to register, even though the site is free. And every comment will be signed with the commenter’s registered name. Now, we’re not idiots. We know some people will register with fake names, and we don’t have the resources or inclination to try and verify every name. But we figure we’ll get a fairly high percentage of real names behind the comments, which is way better than just using anonymity.
We also will not tolerate comments that contain personal attacks or question the motives, rather than the views, of others. If you think we, or our guest bloggers, or the people or companies we write about, are wrong, feel free to say so and to argue why you think so. We welcome such critiques and alternate views. But we will remove comments that call people corrupt, or stupid, or contend that they are taking positions because they are evil or immoral.
For instance, if you think Walt is wrong to prefer one product to another, or Kara is wrong to predict that one company might buy another, please feel free to say so. But we won’t publish comments that say these views with which you disagree stem from bad motives or corruption. Civility is the rule.
The same thing goes for people we cover. There are plenty of places on the Web where you can accuse tech moguls of being greedy or vain. This isn’t one of them. But we’d be glad to publish your comments on why they are following the wrong strategy or why their products are flawed or great — even if we have written just the opposite.
And that also applies to other commenters. You are free to say another commenter is wrong, and why. But we won’t publish personal attacks on other commenters. So, feel free to say “I disagree with Jane that social networking will die …” but we won’t allow “Jane, you ignorant slut!”
We also won’t allow comments containing allegedly factual claims that we know to be wrong, or which we simply lack the time or resources to verify. If you think the security companies are spreading malware to boost sales, or that Macs cure, or cause, cancer, post those theories to your own blog. And there may be other kinds of comments we will pull if we find them antithetical to the spirit of the site and to civil, on-topic discussion.
We also won’t tolerate comments that we consider racist or sexist or derogatory to any religion, sexual orientation or ethnic group — or that we believe promote hate or bigotry, or are offensive.
We won’t be moderating the comments — that is, pre-screening them. But we will be monitoring them — that is, reading them after they are posted and removing those that violate our standards. We will also provide an easy way for readers to flag comments, or commenters, as being in violation of our standards, and we will look into these reports.
Finally, we reserve the right to ban from commenting anyone who is violating these standards, or who we believe is disrupting the site, or creating a tone or atmosphere we consider inconsistent with the tone and atmosphere we want here.
I am not contrasting these to make fun of the bad writers of Twitter, but to say that we need to handle this growing cancer of commentary in a much different and more human way.
That certainly was the aim of Walt Kelly, the amazing cartoonist, from whom I borrowed the title of this piece. It was uttered by Kelly’s Pogo, the affable opossum who always was the most sensible character in the swamp.
“We have met the enemy and he is us” was one of the most famous utterances of Pogo, a commentary by Kelly about how humans are, unfortunately, all too human.
That has not changed over the decades. In 1953, Kelly wrote in the foreword of a book of collected comic strips, “The Pogo Papers,” something that resonates to this day, even though he was attacking McCarthyism at the time:
“Traces of nobility, gentleness and courage persist in all people, do what we will to stamp out the trend. So, too, do those characteristics which are ugly. It is just unfortunate that in the clumsy hands of a cartoonist all traits become ridiculous, leading to a certain amount of self-conscious expostulation and the desire to join battle. There is no need to sally forth, for it remains true that those things which make us human are, curiously enough, always close at hand. Resolve then, that on this very ground, with small flags waving and tinny blasts on tiny trumpets, we shall meet the enemy, and not only may he be ours, he may be us. Forward!”
Forward indeed. As the editors of Jezebel noted at the end of their post last week: “It’s time that Gawker Media applied that principle to promoting our freedom to write without being bombarded with porn and gore. We’re real, we’re here, and we matter.”
We all matter.
This article originally appeared on Recode.net.