Facebook has recently come under fire for allowing fake news, exaggerated news claims, and other forms of rumor-mongering and irresponsible “journalism” to circulate throughout the site, with numerous pundits speculating that its practices may have influenced the presidential election. On November 12, Facebook CEO Mark Zuckerberg responded to these claims, insisting in a Facebook post of his own that less than one percent of all content on the site could be classified as “fake news and hoaxes.”
But even if Zuckerberg’s math is accurate, that low quantity of fake content is potentially rendered irrelevant by a scary new finding: namely, that fake news may be “more viral” than real news.
The quantity of fake news on Facebook doesn’t necessarily matter — but the number of people who share it absolutely does
Earlier this year, investigations conducted by BuzzFeed found not only that Facebook’s news feed algorithm was promoting numerous false stories, but also that nearly 40 percent of the content published by far-right Facebook pages and 19 percent of the content published by far-left Facebook pages was false or misleading. BuzzFeed also found that in one town in Macedonia, a ring of teens was making money by publishing thousands of fake right-wing news stories across hundreds of fake news websites; anytime a story went viral, they’d make money on the ads that accompanied it.
Meanwhile, recent research has found that 44 percent of all adults get their news from Facebook, and that social media can directly influence the political views of people on platforms like Facebook and Twitter. So, it’s reasonable to assume that fake news shared on Facebook — whether it’s left-leaning or right-leaning — has the potential to reach and even sway a considerable number of the site’s 1 billion active users.
This scenario is what prompted Mike Caulfield, the director of blended and networked learning at Washington State University Vancouver, to dig into the question of “whether fake news or real news is more viral.” As Caulfield explained in a blog post published November 13, he used Facebook’s publicly available share-count API to compare several news stories, from real news websites as well as fake websites attempting to pass themselves off as real news sites, in hopes of determining which ones users were more likely to share.
“If Facebook is truly a functioning news ecosystem,” he wrote, “we should expect large local newspapers like the Boston Globe and LA Times to compete favorably with fake ‘hoax’ newspapers like the Baltimore Gazette or Denver Guardian — fake ‘papers’ that were created purely to derive ad views from people looking for invented Clinton conspiracies.”
That seems like a reasonable supposition. But as Caulfield shows, again using Facebook’s publicly available share-count API, it’s probably not a realistic one. He found that at least one article from a fake news site was shared far more widely, and thus reached far more people, than concurrently “trending” articles from respected news sources like the Boston Globe and the Washington Post. Ultimately, the fake article garnered hundreds of thousands more shares than several of the real news stories he looked at.
[Chart: Facebook share counts for the fake Denver Guardian story compared with stories from real news outlets]
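Caulfield’s basic check is also easy to reproduce. Below is a minimal Python sketch of that kind of comparison, assuming the Graph API URL node and its engagement field (the publicly queryable share-count endpoint around this time); the access token and story URLs are placeholders, not the exact stories he studied.

```python
import requests

# A rough reproduction of the kind of comparison Caulfield ran: ask Facebook's
# Graph API how often each link has been shared. The "engagement" field of the
# URL node exposed public share counts around this time; the token and story
# URLs below are placeholders, not the exact stories from his post.
GRAPH_TOKEN = "APP_ID|APP_SECRET"  # placeholder app access token

URLS = [
    "https://www.bostonglobe.com/example-election-story",  # real outlet (illustrative)
    "http://denverguardian.com/example-fbi-agent-story",   # fake outlet (illustrative)
]

def share_count(url: str) -> int:
    """Return Facebook's share count for a URL via the Graph API URL node."""
    resp = requests.get(
        "https://graph.facebook.com/v2.8/",
        params={"id": url, "fields": "engagement", "access_token": GRAPH_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("engagement", {}).get("share_count", 0)

for url in URLS:
    print(f"{share_count(url):>8}  {url}")
```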
The story in question, headlined “FBI Agent Suspected in Hillary Email Leaks Found Dead,” came from a website called the Denver Guardian, a fake news site that appears to exist solely to disseminate anti-Clinton and right-wing propaganda. Though its posts are easily unmasked as hoaxes with minimal fact-checking, Facebook, as many have already lamented, has no built-in way to help users tell the difference between an article from a legitimate news site and one from the “Denver Guardian” masquerading as real news in their feeds.
The Denver Guardian post, which was published to Facebook on November 5, three days before the election, implies that the Democratic presidential nominee arranged for the double homicide of a fictional FBI investigator and his fictional wife, and that the couple’s deaths were made to look like a murder-suicide. It’s completely fake. But that hasn’t stopped it from racking up over 560,000 shares on Facebook, which means potentially millions of voters had the opportunity to see it, read it, or hear about it before Election Day.
Tallying fake news shares isn’t an exact science, but the evidence is strong enough that people are taking action
The correlations in Caulfield’s cursory research aren’t perfect. One of his readers, Dan Barker, reported that at least one LA Times story not included in Caulfield’s survey had been shared 200,000 times on Facebook as of Caulfield’s November 13 blog post. Vox was not able to verify Barker’s statement, but if it’s accurate, that LA Times story attained roughly a third of the fake FBI murder story’s 560,000-plus shares: not promising, but not as bad as Caulfield’s findings suggest.
Also, the fake Denver Guardian piece had been published to Facebook more than a week before Caulfield’s post, giving it more time to make the rounds than several of the real trending stories in his analysis, which were published later.
But Caulfield was quick to recognize those flaws, and his point remains the same: that Facebook needs to be held accountable for the news it’s helping circulate, and that all Facebook users need to have a better idea of what they’re looking at when they scroll through their feed. “Let’s make better comparisons,” he wrote. “Build the tools to do it, and get Facebook to open access to its data so it can be done systemically.” He also pointed out the difficulty in quantifying “content” on the social network as “news” versus other types of routine Facebook content: “Is my status update content? Each photo I upload?”
And he’s now just one of many voices calling for Facebook to take action. As my colleague Tim Lee has argued, “limiting the distribution of obviously fake news is the bare minimum Facebook can do.” And as Zeynep Tufekci wrote in the New York Times on Monday, Facebook may be in denial about the role it might be playing in miseducating the American public.
Even Facebook’s own employees are fed up; “renegade” staff have reportedly formed an internal task force to deal with the site’s fake news problem. And it looks as though Facebook found inspiration in Google: On Monday, the search giant announced that it will severely curtail the benefit of churning out fake news by banning fake news sites from receiving ad revenue from its search pages. Shortly thereafter, Facebook announced that it, too, would explicitly ban fake news sites from displaying ads on Facebook pages.
The public is also doing its part: A Merrimack College communications professor named Melissa Zimdars has released a simple Google Doc that lists “False, misleading, clickbait-y, and/or satirical ‘news’ sources,” with a loose category system to help you determine the level of deceit you’re dealing with. It includes satirical sites like the Onion and the New Yorker’s Borowitz Report; misleading and hyperbolic clickbait like Infowars, Breitbart, and Political Blindspot; and fake sites that intentionally try to pass themselves off as real ones, like MSNBC.com.co. Zimdars isn’t alone: The website Fake News Watch also lists categories of fake news sites to be on the lookout for, and New York magazine has created a Chrome extension that alerts the user to fake news sites based on Zimdars’s list.
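Under the hood, an extension like that is mostly matching a page’s hostname against a blacklist. Here’s a minimal sketch of such a check in Python (the actual extension is JavaScript), using an illustrative three-domain stand-in for Zimdars’s much longer, categorized list; exact or subdomain matching is what catches lookalike domains like MSNBC.com.co without flagging the real msnbc.com.

```python
from urllib.parse import urlparse

# Illustrative stand-in for Zimdars's much longer, categorized list.
FAKE_DOMAINS = {"denverguardian.com", "baltimoregazette.com", "msnbc.com.co"}

def is_flagged(url: str) -> bool:
    """Flag a URL whose host matches a blacklisted domain exactly or as a
    subdomain. Suffix matching on "." + domain catches lookalikes such as
    msnbc.com.co without flagging the real msnbc.com."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in FAKE_DOMAINS)

print(is_flagged("http://www.msnbc.com.co/some-story"))  # True: lookalike domain
print(is_flagged("http://www.msnbc.com/some-story"))     # False: the real site
```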
At the very least, the share counts that Caulfield examined clearly show that false information about a presidential candidate, shared by Facebook users who may or may not have known it was false, reached a wide audience just days before one of the most significant elections in US history.
Update: A new analysis by BuzzFeed has borne out Caulfield’s cursory findings by revealing that fake news on Facebook went viral far more often than real news. Over the 10 months leading up to the election, total engagement (“shares, reactions, and comments”) with the top 20 fake news articles on Facebook skyrocketed from 3 million to nearly 9 million, while engagement with top mainstream media articles declined from 12 million in February to just 7.3 million by Election Day. As seen in this astonishing chart, the inverse relationship between fake news and real news intensified in the crucial final three months of the campaign.
A Facebook spokesperson dismissed the findings to BuzzFeed, claiming that the “long tail” of Facebook content can make it “seem like the top stories get a lot of traction, but they represent a tiny fraction of the total.” However, BuzzFeed measured the fake news against “the 20 best-performing election stories from 19 major news websites”; assuming the real stories also had “long tails,” fake news clearly won the war of misinformation on Facebook.