CAMBRIDGE, Massachusetts — Four days after an election in which Facebook was increasingly criticized for helping “fake news” proliferate, company founder Mark Zuckerberg put up a defensive post in which he proclaimed, without citing a source, that “99% of what people see” on Facebook “is authentic.”
But when faced with a room full of journalists and political professionals at Harvard’s Campaign Managers Conference Wednesday night, Facebook executive Elliot Schrage had a very different message: Fake news is a problem, and we know we have to do something about it — though we aren’t yet sure what.
“For so long, we had resisted having standards about whether something’s newsworthy because we did not consider ourselves a service that was predominantly for the distribution of news. And that was wrong!” Schrage said during a panel on the media’s role in the election.
He added: “We have a responsibility here. I think we recognize that. This has been a learning for us.”
This would be a major shift for Facebook, which has insisted that it is a “technology company” and not a “media company.”
But so far, it’s unclear whether this shift will be more than a rhetorical one. Because Schrage — who’s Facebook’s vice president of global communications, marketing, and public policy — also signaled that the company still had very serious misgivings about what it could do.
“Until this election, our focus was on helping people share,” Schrage said. “This election forced us to question whether we have a role in assessing the validity of content people share. And I have to tell you all, and one of the reasons I came here — that’s a pretty damn scary role to play.”
“I think we need a ‘think before you share’ program”
When it came to specifics, though, Schrage expressed deep skepticism about two potential paths for the company.
First, he said Facebook was uninterested in hiring editors who would choose certain types of content to elevate in the newsfeed. “It is not clear to me that with 1.8 billion people around the world, lots of different users and lots of different languages, the smart strategy is to start hiring editors,” he said. “That’s just not what we do.”
Second, he said that a company taking it upon itself to determine what’s “newsworthy” is going down “a very dangerous road.” And that’s fair enough — much of the content shared on Facebook, of course, isn’t intended to be “newsworthy” at all, and any suppression of certain topics would bear a disturbing resemblance to censorship.
Schrage did say that Facebook already had tools that let users report fake news, but he acknowledged that those tools were “not well-done” and had to be improved.
Furthermore, he argued that even if Facebook did incorporate “signals” that certain brands have “higher quality” or might have more reliable factual information, it wouldn’t “solve the problem.”
Instead, Schrage seemed to prefer potential solutions that would nudge users to act differently without playing favorites among different sites or blacklisting them. “We’re in the business of giving users the power to share. Part of that is helping them share thoughtfully and responsibly, and consume thoughtfully and responsibly.”
“I think we need a ‘think before you share’ program so that people don’t share stuff that’s stupid,” he added. “On the left or on the right.”
It’s not really clear what this would entail — or whether it would work
Yet there are downsides to any user-centric approach. For instance, any tool that improves users’ ability to flag certain articles as “fake” could (and almost certainly would) be weaponized by committed users who merely dislike a certain article or media outlet.
Can Facebook encourage users to, for instance, click through before sharing an inflammatory headline on a story they haven’t read? Perhaps. But it’s unclear whether that would change much. As Schrage alluded to, fake news spreads on Facebook because users enjoy it, and Facebook’s newsfeed algorithm is designed to show users what they enjoy.
“Facebook’s algorithm prioritizes ‘engagement’ — and a reliable way to get readers to engage is by making up outrageous nonsense about politicians they don’t like,” Vox’s Tim Lee has written.
So when Schrage says he thinks the problem is primarily with user behavior, he’s reiterating that Facebook does not want to be in the business of deeming particular sites or outlets “fake” and punishing them somehow.
Still, the reality is that many websites have been created for the sole purpose of spreading entirely made-up news on Facebook, and they have benefited greatly from Facebook’s algorithm.
If Facebook’s response to this is merely to pass the ball to users, then it risks becoming something akin to Twitter’s response to its harassment problem — a constant chorus of “we hear you” that never results in much substantive change.