Long decried as one of the most visible and toxic arenas of the internet’s harassment-fueled culture war, Twitter rolled out a number of major changes to the site on November 15 in an attempt to give users more control over what appears in their timelines.
New functions include a broad filtering ability that users can apply to their notification pages, allowing them to mute specific keywords, hashtags, and entire conversation threads.
Twitter has also expanded its anti-harassment tools to allow users to directly report hate speech — a change that might finally let the site make a meaningful dent in online harassment, rather than the small, superficial gestures it’s drawn criticism for in the past.
There’s no question the changes were long overdue. However, not everyone is happy, including many alt-right users who are angry at the site’s attempts to “purge” itself of its more controversial users.
How to use Twitter’s new muting tools
Muting a conversation you’ve been tagged into is easy — just click on the options for any one of those tweets and you’ll see “Mute this conversation.”
To mute individual phrases and words, you’ll have to go to your account settings. There, sandwiched between options for muting and blocking accounts, you have the option to mute keywords:
Twitter’s approach to muting words isn’t all-encompassing. Muting a word means you won’t get an alert for any tweet that mentions you and uses the word, and that the tweet won’t show up in your notifications feed. But you can still see the word in your timeline or any hashtag feed where the word appears.
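To make those semantics concrete, here is a minimal illustrative sketch of how a notification filter with this behavior might work. This is not Twitter’s actual code or API; the keyword list and function names are invented for illustration. The key point it models is that a muted keyword suppresses the notification for a mention, while the tweet itself would still be visible in timelines and hashtag feeds.

```python
# Illustrative sketch only -- not Twitter's implementation.
# A muted keyword blocks the *notification* for a mention;
# the tweet itself still appears in timelines and hashtag feeds.

MUTED_KEYWORDS = {"spoilers", "#finale"}  # hypothetical muted-words list

def should_notify(tweet_text: str, muted=MUTED_KEYWORDS) -> bool:
    """Return True if a mention should produce a notification."""
    words = tweet_text.lower().split()
    return not any(keyword.lower() in words for keyword in muted)

print(should_notify("@you no spoilers here"))  # muted word -> False (no alert)
print(should_notify("@you great episode!"))    # no muted word -> True (alert)
```

Note that the filter only decides whether to alert you; nothing in it hides the tweet from anyone’s timeline, which mirrors the limitation described above.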
In other words, the new filter tools allow you to protect yourself from direct harassment to some degree, but they won’t do much to actually prevent harassment from occurring, or punish users whose tweets contain hate speech or threats of violence. This is where Twitter’s expanded reporting system comes in.
How to report someone for hate speech on Twitter
Reporting someone for hate speech is now easier than ever. Where you would normally report someone for other kinds of violations, you can now choose the “It’s abusive or harmful” option and be redirected to an expanded report page that lets you choose “It directs hate against a race, religion, gender, or orientation.”
Of course, reporting a tweet doesn’t mean that Twitter support staff will agree that it “directs hate.” But it’s a long overdue step in the right direction.
Twitter’s filters are a long time coming, and they still may not be enough
In 2014, at the height of Gamergate, software engineer Randi Lee Harper created the Online Abuse Prevention Initiative, which included a mass autoblock tool that Twitter users could download and deploy against a large number of Gamergate-related accounts. In February, she published a well-received list of 23 ways to “put out the Twitter trash fire”; it featured micro-level, practical suggestions for Twitter itself, like allowing users to mute all replies to a tweet and block content instead of users — the two major changes Twitter just made.
Harper also argued that when you block someone, Twitter should ensure that the block applies to everything — by preventing you from seeing the blocked user’s retweets, as well as other users’ replies to tweets made by the blocked user. She also advocated for making sure that blocked users can’t continue to read the feeds of people who’ve blocked them simply by logging out or doing a sitewide search for them, which was possible in the past. In short, she strongly advocated for tools that were generally more decisive and far-reaching than Twitter’s existing implementations.
Harper’s initial reaction to the new filtering tools was highly positive:
First reaction: Twitter is about to be a more viable place for conversations than meatspace. I don't know how to feel about that.— ❄️ mei irl ❄️ (@randileeharper) November 15, 2016
In an expanded response published on Medium, Harper noted that Twitter’s changes looked “solid, if not a little confusing.” She pointed out that while the mute options now seem to make Twitter a “viable place for conversations,” there’s at least one oversight: Many Twitter users have profile names that contain potentially unwanted words (the words that most often correlate with harassment on Twitter tend to be politically loaded terms associated with the alt-right and its progressive counter, social justice activism). But since the words appear in their profiles and not in the content that would appear on notifications pages, the new muting option doesn’t encompass them. For example, muting the word “cuck” will still allow a user with the word “cuck” in their profile name to message you. Presumably, however, muting the user would work preemptively here as well.
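The gap Harper describes can be sketched in a few lines. Again, this is a hypothetical illustration, not Twitter’s code: if the filter inspects only the tweet text, a muted word that appears in the sender’s display name is never checked, so the mention still triggers a notification.

```python
# Illustrative sketch only -- models the oversight Harper points out.
# The filter checks the tweet text but never the sender's profile name.

MUTED = {"cuck"}  # hypothetical muted-words list

def notifies(display_name: str, tweet_text: str) -> bool:
    """Return True if this mention would still trigger a notification."""
    tokens = tweet_text.lower().split()        # only the tweet text is scanned
    return not any(word in tokens for word in MUTED)  # display_name is ignored

print(notifies("Cuck Crusader", "@you hello"))  # muted word in name slips through -> True
print(notifies("anon", "@you cuck"))            # muted word in text is caught -> False
```

Closing the gap would mean scanning the display name as well — which, as Harper implies, is the kind of edge case the muting feature doesn’t yet cover.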
Harper also noted, as have many before her, that Twitter took its sweet time getting to this point. Founded in 2006, the site has been incredibly slow to make the changes its users have been requesting for years, even though its now-languishing app TweetDeck, originally a beloved third-party client, offered these new filtering options years ago, well before Twitter bought it. “Thanks, I guess, for finally pushing out a feature that existed in one of your clients until you let it rot,” Harper commented.
Finally, Harper stated that while her abuse reports had been more effective lately, Twitter didn’t ban the users she reported, opting instead to ask them to delete harmful tweets. (According to Twitter’s abuse policy, it engages in “temporary locking” prior to the suspension of a routinely abusive account; this gives the account owner a chance to have their account unlocked by complying with specific requested actions, which could include deleting the offensive tweets.) If this becomes the routine action Twitter takes, its new tools may not actually do much to combat harassment.
And this may really be the crux of Twitter’s problem: it’s taken so long to enact real change that many longtime users, potential users, advertisers, and would-be buyers have been scared away from the site. If Twitter can’t take sweeping action to eliminate hate, it might not be able to recover from stagnating growth and shy investors.
In response to the new policies and tools, Twitter trolls are leaving the site in droves (or claiming to)
Meanwhile, a new “alternative” to Twitter, Gab.ai, has launched in beta as an appeal to members of the alt-right — a group which includes many of the users who Twitter has been trying to deal with. Many of them have either been banned from Twitter or seen their heroes, like Milo Yiannopoulos, banned from Twitter for directly harassing other users or promoting harassment. (Gab’s CEO was kicked out of Y Combinator’s alumni network on November 11 for his behavior on the private social platform.)
Members of the white supremacist alt-right and other supporters of Donald Trump have been flocking to Gab since Tuesday, with many of them describing Twitter’s policy changes as a “purge.” Though a site like Gab is certain to reinforce an echo chamber that distorts its users’ beliefs, it’s also somewhat interesting to see the internet equivalent of “white flight” in action.
On November 16, the most popular post on Gab mocked Twitter for having the audacity to think that a “purge” would be effective, pointing out that the alt-right had already achieved a victory over its targets by electing Trump to the White House:
Twitter could have purged the #AltRight BEFORE we memed a President into the White House. They didn't because they never believed it was possible. Banning us now is too little & too late, a futile gesture of impotence.
Gab insists that “All are welcome on Gab, and always will be,” and encourages its users to “Try to be nice and kind to one another.” But given that some of the site’s most popular posts so far have contained gendered and homophobic slurs, it’s hard to imagine that mantra is a sincere one.
Still, regardless of its already toxic environment, Gab does already have one feature worth mentioning: the ability to mute both users and words, a concept it took Twitter 10 years to implement.