
Facebook is reportedly rating users’ trustworthiness when they report fake news

Apparently, a lot of people flag news as fake just because they disagree with it.

Facebook CEO Mark Zuckerberg speaks during the F8 Facebook Developers conference on May 1, 2018, in San Jose, California.
Justin Sullivan/Getty Images
Emily Stewart covers business and economics for Vox and writes the newsletter The Big Squeeze, examining the ways ordinary people are being squeezed under capitalism. Before joining Vox, she worked for TheStreet.

Facebook has a feature that allows users to flag questionable content, including what they believe might be fake news. But there’s a flaw in the setup: users often flag content as false simply because they disagree with it. And so, according to a new report, Facebook is now rating its users’ trustworthiness.

Elizabeth Dwoskin at the Washington Post reported on Tuesday that the Menlo Park, California-based company has started assigning its users a “reputation score” between zero and one to predict how trustworthy they are when they report problematic content on Facebook.

Tessa Lyons, the product manager in charge of dealing with misinformation at Facebook, told the Post that it’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher.” And so it’s trying to identify users who do just that.

The full criteria Facebook uses to assign its ratings are unclear. Nor is it clear whether all users have scores, how the scores are used, or whether they would ever be disclosed to the users themselves.

Social media companies rating people might feel a little Black Mirror-esque, and the lack of transparency about how the system works is a little unsettling. But the more Facebook discloses about the trustworthiness rankings, the easier it becomes for bad actors to game the system and for activist groups to successfully silence content they don’t like.

Facebook seems to be using the rating system more as a way to direct resources and deploy fact-checkers to review content. Just because a post is flagged as potentially false — or because a user has a high or low trustworthiness rating — doesn’t mean it will automatically be left up or taken down.

Facebook, Twitter, and other social media companies have been under heavy scrutiny in recent months. They’ve faced questions about how Russia uses their platforms in its disinformation campaigns and have seen increasing calls for regulation.

As the chorus calling for platforms to crack down on conspiracy theorist Alex Jones and others has grown, Republicans, including the president, have continued to lob accusations that Facebook and Twitter are biased against conservative voices. Meanwhile, Facebook’s and Twitter’s efforts to police their platforms and make improvements that could affect growth have been severely punished on Wall Street.

A Facebook spokesperson pushed back on the Post’s characterization of its rating system in an email, saying the idea that Facebook has a “centralized ‘reputation’ score for people that use Facebook is just plain wrong” and criticizing the story’s headline as well.

“What we’re actually doing: We developed a process to protect against people indiscriminately flagging news as fake and attempting to game the system,” the spokesperson said. “The reason we do this is to make sure that our fight against misinformation is as effective as possible.”