
The fundamental change that might actually solve Twitter's harassment problem

It’s internal, has nothing to do with filtering, and it’s huge.

Photo: Twitter debuts on the New York Stock Exchange. Bethany Clarke/Getty Images
Aja Romano writes about pop culture, media, and ethics. Before joining Vox in 2016, they were a staff reporter at the Daily Dot. A 2019 fellow of the National Critics Institute, they’re considered an authority on fandom, the internet, and the culture wars.

Twitter has long been criticized for its frequently slow (or, in the worst cases, nonexistent) response to the many users who’ve experienced harassment and abuse on the social media network. At times, the company has insisted it’s doing everything it can to fix the problem, but in practice the results have been inconsistent at best.

But a recent development in Twitter’s anti-harassment efforts may be a huge step in the right direction. On November 15, the service announced a new round of major changes to its harassment policies, along with new tools to help users report instances of harassment and abuse. The changes include two key new functions: the ability to mute entire conversation threads and to flag reported tweets as hate speech.

Twitter’s blog post about the changes noted that allowing users to specifically identify hate speech when reporting abuse “will improve our ability to process these reports, which helps reduce the burden on the person experiencing the abuse, and helps to strengthen a culture of collective support on Twitter.” With this statement, Twitter seems to be directly addressing the common accusation that it places too much onus on victims of abuse and harassment to take action, and then fails to take their complaints as seriously as it should.

If you’ve ever spent time attempting to report hate speech to Twitter, you may justifiably be skeptical about whether the site will actually enforce its new guidelines. In my experience, almost every tweet I’ve reported — often for violent, gendered language — has been ruled inoffensive by Twitter’s abuse team. In the rare instances when Twitter has taken action, it has only required the reported user to delete the offending tweets, essentially delivering a slap on the wrist.

But I’m hopeful that this might change, not only because of Twitter’s expanded abuse reporting functionality, but because targets of hate speech can actually report it as such. And I’m especially optimistic regarding one particularly illuminating statement in Twitter’s blog post: The site says that it has “retrained all of our support teams on our policies, including special sessions on cultural and historical contextualization of hateful conduct.”

In other words, it seems Twitter is teaching its support staff to think about more than whether individual words are offensive or not. “Cultural and historical contextualization” allows for a larger picture to unfold, one in which repeated patterns of behavior from specific Twitter users can be assessed and analyzed.

Twitter also promised that it will be implementing “an ongoing refresher program” to help its employees understand and stay up to date on how harassment spreads.

This could be easier said than done: As BuzzFeed pointed out in response to Twitter’s announcement, trolls on the site are evolving their language to dodge crackdowns on hate speech. For example, Operation Google was a recent attempt by members of 4chan’s alt-right haven /pol/ to substitute the names of social media platforms for actual racial slurs, in order to avoid censorship.

Then there are low-level insults like “cuck” — a relatively new, shortened form of “cuckold” popularized by the alt-right. “Cuck” doesn’t exactly fall under the category of hate speech until you dig deeper and realize that many members of the alt-right have imbued it with meaning derived from white supremacy. And hate speech itself comes in all forms; for example, Twitter users can’t simply block or filter parentheses, even though triple parentheses were used to single out Jewish names during a recent wave of anti-Semitic alt-right harassment on the platform.
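To see why context matters more than any static word list, consider a toy example. The following Python sketch is entirely hypothetical — the blocklist, the function, and the sample tweets are invented for illustration and have nothing to do with Twitter’s real moderation systems — but it shows how a naive keyword filter misses exactly the kinds of coded language described above.

```python
# A minimal sketch (not Twitter's actual moderation code) of why naive
# keyword filtering fails against coded language. The blocklist and the
# example tweets below are hypothetical placeholders.

BLOCKLIST = {"slur_a", "slur_b"}  # stand-ins for a real list of banned terms

def naive_filter(tweet: str) -> bool:
    """Flag a tweet only if it contains a blocklisted word verbatim."""
    tokens = {word.strip(".,!?()").lower() for word in tweet.split()}
    return bool(tokens & BLOCKLIST)

# "Operation Google"-style substitutions sail straight through, because
# every surface token is an ordinary word:
print(naive_filter("I can't stand googles"))     # False: slur swapped for a brand name
print(naive_filter("what a cuck"))               # False: insult isn't on any slur list
print(naive_filter("(((they))) run the media"))  # False: the punctuation carries the meaning
```

Catching any of these requires knowing what “googles,” “cuck,” or triple parentheses signal in alt-right usage — which is precisely the “cultural and historical contextualization” Twitter says it is now training its support teams to apply.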

Still, all of this is a hopeful step forward. Most social media platforms have no real systematic way to contextualize larger patterns of harassment and hate speech.

But it seems Twitter may have finally realized that if it truly wants to make its users feel safer, it must work harder to understand why they feel threatened in the first place.