Facebook is hiring 3,000 people to stop users from broadcasting murder and rape

Mark Zuckerberg delivers the keynote address at Facebook's F8 conference. Photo by Justin Sullivan/Getty Images.

In recent weeks, Facebook has faced a string of incidents where users have filmed shocking events — like rape and murder — and uploaded them to the site. Critics argued the company wasn’t doing enough to address the problem.

Today, Facebook CEO Mark Zuckerberg took action to address those complaints, announcing that the company would hire 3,000 people, in addition to the 4,500 staff it already had, to help it respond more quickly to reports of abusive behavior on the platform.

It’s a laudable move. Facebook is betting that having thousands of additional bodies policing its platform will allow it to more quickly and effectively remove offensive content. If it works, it will illustrate something important about how big internet companies can deal with problems on their platforms.

For years, Twitter has faced criticism for the rampant abuse some of its users inflict on others. More recently, Facebook faced criticism for promoting fake news stories on its platform.

A common response has been that it’s too difficult to control what gets posted to a vast electronic platform. And that’s true if a company insists on taking an automated approach. But when a company really cares about addressing a problem like this, executives don’t restrict themselves to writing algorithms. If necessary, they hire thousands of human beings to apply human judgment to the problem.

Google reportedly has “a 10,000-strong army of independent contractors to flag offensive or upsetting content” in search results. One expert estimated that across the global internet, “well over 100,000” people, many of them low-paid workers in countries like the Philippines, are paid to police online content.

To be fair, Twitter said in 2015 that it was tripling the size of its staff handling abuse complaints. But Twitter didn’t say how many people that was. And judging from continued complaints about online abuse over the past 18 months, it evidently wasn’t enough.

As for Facebook’s fake news problem, it’s true that the situations aren’t strictly comparable. Determining whether a video contains graphic nudity or violence is easier than determining whether a news story is accurate. If Facebook wanted to improve the quality of news in the News Feed, it would probably have to hire more experienced and educated staffers — perhaps professional journalists — and think carefully about how to do it.

But the point here is that Facebook could be devoting vastly more resources to the problem if it cared about solving it. Until last year, for example, Facebook had a “trending news” section on the site that was edited by a team of 15 to 18 moderators — moderators Facebook laid off in the face of controversy about alleged left-wing bias.

One possible lesson from the incident is that it just isn’t possible for human moderators to curate news stories on Facebook without sparking controversy from one or both ends of the political spectrum.

But another interpretation is that Facebook has drastically underinvested in the quality of the news articles promoted on the platform. An overworked and underpaid team was inevitably going to make mistakes that came back to haunt the company. If Facebook were as concerned about shoddy journalism as it is about offensive images, it would be devoting a lot more human resources to addressing the problem.