Over the past month, Facebook has begun rolling out a striking new addition to the site: a bright red warning label that identifies fake news stories as “disputed” and asks you to think twice before sharing them.
The rollout has drawn mixed reactions. Many users feel the warning is long overdue, though some worry it doesn't go far enough. Others, primarily from right-wing groups, question whether Facebook's fact-checking sources are themselves accurate.
Still, the move signals a serious effort from Facebook to quell the spread of fake news, even if it’s unlikely to actually end debate over the disputed articles themselves.
Facebook’s ongoing attempt to fix its fake news problem has evolved into a multi-faceted approach
In November, facing a barrage of post-election media coverage about how “fake news” on Facebook had influenced the 2016 presidential election, CEO Mark Zuckerberg announced that the company would be working to combat the spread of misinformation on the platform. This effort, he said, would include deploying an algorithm to detect and flag misinformation, using third-party sites to help fact-check articles that were being shared, and notifying users if a story they tried to share appeared to be false.
The third-party sites Facebook is now using to assess articles shared on the platform are all part of Poynter's fact-checking network, and include Snopes, the urban legend–debunking site turned general fact-checker, as well as news organizations like ABC News and the Associated Press.
Facebook sends articles that have been flagged by users or caught by its automated detection systems to fact-checkers at the sites in Poynter's network for investigation. If at least two of the fact-checking sites dispute an article's credibility, the label appears when you attempt to share it, as Quartz noted in the screenshot below:
[Screenshot via Quartz: the "disputed" warning shown when sharing a flagged story]
Clicking on the red “hazard” symbol then generates a pop-up message explaining which sites have debunked the article in question:
[Screenshot: the pop-up listing the fact-checking sites that disputed the article]
If you ignore the message and proceed to post anyway, you’ll see an additional prompt:
[Screenshot: the confirmation prompt shown if you try to post anyway]
If you click “post anyway,” the warning label remains attached to the article when it appears on your Facebook page, as Gizmodo captured in the screenshot below:
[Screenshot via Gizmodo: the "disputed" label attached to the published post]
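Facebook has not published the code behind this system, but the threshold rule described above (at least two fact-checkers must dispute a story before the label appears) is simple enough to sketch. The Python below is purely illustrative; every name in it is hypothetical, not Facebook's actual API.

```python
# Illustrative sketch only: Facebook's real system is not public.
# This models the reported rule that a story is labeled "disputed"
# once at least two fact-checking sites in Poynter's network dispute it.

DISPUTE_THRESHOLD = 2  # per the reported two-fact-checker rule

def is_disputed(verdicts):
    """Return True if enough fact-checkers dispute the article.

    verdicts: dict mapping a fact-checker's name to True if that
    site disputed the article's credibility, False otherwise.
    """
    disputing_sites = [site for site, disputed in verdicts.items() if disputed]
    return len(disputing_sites) >= DISPUTE_THRESHOLD

# Example: two of the three checkers named in this article dispute
# the story, so the warning label would be attached.
verdicts = {"Snopes": True, "Associated Press": True, "ABC News": False}
print(is_disputed(verdicts))  # True
```

In practice, the harder engineering lives upstream of this check, in the user-flagging and automated detection systems that decide which stories get sent out for review in the first place.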
The early response to Facebook’s latest effort to combat fake news has been mixed
Facebook users have been able to report hoaxes and false stories on the site since 2015, but reporting did little to actually stop those stories from spreading. An earlier iteration of the warning label merely inserted a disclaimer in tiny gray print above any story that many users had reported as false.
This new approach, in contrast, draws on the authority of “independent” fact-checkers outside of Facebook. And the warning and pop-up confirmation are much harder for would-be sharers and readers of fake news to ignore.
Facebook first previewed the warning in December, but only recently began rolling it out, and the rollout has not been without controversy. Because some stories may be highly misleading yet still have some basis in reality, not every story that some people consider "fake" will end up with the tag.
Also, the tool doesn’t seem to be universal, at least not yet: The Guardian reported that its attempts to post a known fake news story triggered the warning label in its San Francisco office, but not in Sydney or London. And since the time-consuming process of assessing articles can result in delays — in February, one article remained untagged for nearly a week — false or misleading stories have a chance to spread before they’re hit with a “disputed” label.
Meanwhile, members of the alt-right, including Infowars writer Paul Joseph Watson, have criticized Facebook's implementation of the label because they believe the fact-checkers themselves are biased.
This is now appearing on Facebook posts. Snopes is a bias, far-left outfit. It is not a responsible "fact-checker". pic.twitter.com/IMB0RVJklz
— Paul Joseph Watson (@PrisonPlanet) March 18, 2017
Poynter describes its commitment to fact-checking as "nonpartisan and transparent," and says its mission is to provide "accountability journalism" without bias. But because many on the far right have a contentious relationship with the mainstream media, that assurance is unlikely to ease their skepticism.
Facebook has not responded to a request for comment about whether the warning label will undergo future changes based on user feedback and evolving approaches to the question of what constitutes “fake news.”
For now, it seems Facebook is hedging its bets with the “disputed” label, which gives users just enough leeway to argue that a “disputed” story could still be real. All in all, people will probably continue to share “disputed” articles. But if nothing else, the new warning system might help mold Facebook’s culture into one where people share links and news sources more responsibly.
After all, the warning doesn’t only help you identify fake stories: It helps you identify which of your friends are willing to share them anyway, regardless of the consequences.