Twitter will let you block tweets with nasty words in its latest attempt to combat abuse

Twitter is also training its employees to better understand what constitutes abuse.

Twitter has a new idea to shield its users from the abuse they face at the hands of other users: a word filter that hides tweets containing words people don't want to see.

Twitter is rolling out a feature Tuesday that lets people block words, hashtags and even emojis from appearing in their mentions. Add a word to the list and you won't get a notification when someone mentions you in a tweet that includes it.

It’s a feature Instagram has already implemented, and it could give users a welcome way to shield themselves from abusive or offensive tweets.

The downside, of course, is that hiding certain words doesn’t actually remove the offensive content from Twitter. It’s a masking approach, much like the update Twitter shipped back in August that allowed users to block mentions from people they don’t follow.

Simply masking the abusive stuff isn’t Twitter’s ultimate goal, though, says Del Harvey, Twitter’s VP of trust and safety. According to Harvey, Twitter is also making changes to remove more abusive material, not just sweep it under the rug.

In that vein, Twitter announced a few other updates on Tuesday. The first is a reporting option specifically for “hateful” content, a new category Harvey hopes will make things clearer for users who want to report a tweet but aren’t sure how to label it. (Twitter already offers a similar “abusive or harmful” category, though.)

The second is that Twitter has re-trained hundreds of employees who handle moderation to better understand what to look for when analyzing reported tweets. Put simply, Twitter employees were not catching abuse the way they should, Harvey said.

“Not everyone has the same cultural background or framework to even be able to recognize why certain types of content or certain phrases or the like are actually abusive,” she said in an interview with Recode. “A lot of times, what we found was [that] whoever got that initial report didn’t have that background to understand what the content was actually referencing.”

So Twitter is trying to better educate its own employees on how to spot abusive or inappropriate content.

This is a step in the right direction for Twitter, which has a history of abuse on its platform, especially for women and minority users. The question, though, is whether these updates are too little, too late. The abuse problem is so old now that many users have already written Twitter off. It’s also clearly impacting Twitter’s business — abuse has driven away numerous celebrity users and even potential acquirers. Once people feel burned or unsafe on an internet platform, it’s tough to win them back.

But here, Twitter is trying to do just that. And it has a message for those who have started to lose faith in the company’s commitment to safety.

“I think that a lot of times people assume that this is something Twitter either doesn’t care about or isn’t a priority or that ‘you only care because it’s about money,’” Harvey said. “I just would love for people to know that there are people at Twitter who are thinking about this constantly.”

Here’s Twitter’s step-by-step guide to actually adding words to your list:

[Image: Twitter’s step-by-step guide to muting words. Credit: Twitter]

This article originally appeared on Recode.net.