A version of this essay was originally published at Tech.pinions, a website dedicated to informed opinions, insight and perspective on the tech industry.
“Fake news” has been the phrase du jour across the political, media and technology worlds over the past couple of weeks, as a number of people have suggested that false news stories may have swung the result of the U.S. presidential election. There seems to be widespread agreement that something more needs to be done and, though initial comments from Facebook CEO Mark Zuckerberg suggested he didn’t consider it a serious problem, Facebook now appears to be taking the issue more seriously. Yet even with broad agreement on the nature and seriousness of the problem, there is little consensus so far on how to solve it.
As I see it, there are four main approaches that Facebook, and to some extent other companies that are major conduits for news, can take at this point:
- Do nothing — keep things more or less as they are.
- Leverage algorithms and artificial intelligence — put computers to work to detect and block false stories.
- Use human curation by employees — put teams of people to work on detecting and squashing false stories.
- Use human curation by users — leverage the user base to flag and block false content.
Do nothing
This is, in many ways, the status quo, though it’s becoming increasingly untenable. Through a combination of a commitment to free and open speech, a degree of apathy and perhaps even despair at finding workable solutions, many sites and services have simply kept the doors open to any and all content, with no attempt to detect or downgrade that which is not truthful. Zuckerberg has offered in Facebook’s defense the argument that truth is in the eye of the beholder, and that to take sides would be a political statement in at least some cases. There is real merit to this argument — not all the content some people might consider false is factually so, and in some cases the falsehood is more a matter of opinion. But the reality is that much of the content likely to have most swayed votes is demonstrably incorrect, so the argument has its limits. No one is arguing that Facebook should attempt to adjudicate between one set of op-eds and another, merely that it stop allowing clearly false and, in some cases, libelous content.
Put the computers to work
When every big technology company under the sun is talking up its AI chops, it seems high time to put machine learning and other computing technology to work on detecting and blocking fake news. If AI can analyze the content of your emails or Facebook posts to serve up more relevant ads, then surely the same AI can be trained to analyze the content of a news article and determine whether it’s true. I am, of course, being slightly facetious here — we’ve already seen Facebook’s Trending Stories algorithm fail to filter out fake stories. But the reality is that computers likely could go a long way toward making some of these determinations. Both Google and Facebook have now banned their ad networks from being used on fake news sites, so it’s clear they have some idea of how to determine whether entire sites fit into that category. It shouldn’t be too much of a leap to apply the same algorithms to the News Feed and Trending Stories. Still, computers working alone will almost certainly produce both false positives and false negatives, so the answer isn’t to rely entirely on machines to make these determinations.
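To make this concrete, here is a minimal sketch, assuming a labeled training set of example headlines, of how a basic text classifier might score a story’s likelihood of being fabricated. It is illustrative only, not a description of Facebook’s or Google’s actual systems; the sample data, model choice and 0.8 threshold are hypothetical.

```python
# Minimal sketch of a fake-news text classifier, assuming a labeled corpus.
# Illustrative only: a production system would draw on far richer signals
# (source reputation, sharing patterns, fact-check databases), not just text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = fabricated, 0 = legitimate.
articles = [
    "Pope endorses candidate in shocking secret letter",
    "Senate passes appropriations bill after lengthy debate",
    "Celebrity cures cancer with miracle fruit, doctors stunned",
    "Central bank holds interest rates steady, citing inflation data",
]
labels = [1, 0, 1, 0]

# Bag-of-words features (unigrams and bigrams) feeding a logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(articles, labels)

# Score a new story; anything above the (arbitrary) threshold gets flagged
# for human review rather than being blocked outright.
story = "Secret memo reveals election was decided weeks in advance"
prob_fake = model.predict_proba([story])[0][1]
if prob_fake > 0.8:
    print(f"Flag for review (score={prob_fake:.2f})")
else:
    print(f"No action (score={prob_fake:.2f})")
```

Routing high-scoring stories to reviewers, rather than removing them automatically, is one way a system like this could limit the damage from the false positives discussed above.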
Human curation by employees
The next option is to put employees to work on this problem, scanning popular articles to see whether they are fundamentally based on fact or fiction. That might work at a very high level, focusing only on those articles being shared by the greatest number of people, but it obviously wouldn’t work for the long tail of content — the sheer volume would be overwhelming. Facebook, in particular, has tried this approach with Trending Stories and then, in the face of criticism over perceived political bias, fired its curation team. Accusations of political bias are certainly worth considering here — any group of human curators will bring its own interpretations to the task. But given clear guidelines that err on the side of letting content through, such concerns need not be prohibitive. The reality is that any algorithm will have to be trained by human beings in the first place, so the human element can never be eliminated entirely.
Crowdsourcing
The last option (and I need to give my friend Aaron Miller some credit for these ideas) is to allow users to play a role. Mark Zuckerberg hinted in a Facebook post this week that the company is working on projects to let users flag content as false, so it’s likely this is part of Facebook’s plan. How many of us during this election cycle have seen friends share content we knew to be fake, but were loath to leave a comment pointing it out for fear of being sucked into a political argument? On the other hand, the option to flag the content anonymously to Facebook, if not to the friend sharing it, might be more palatable. If Facebook could aggregate this feedback in such a way that the data was eventually fed back to those sharing or viewing the content, it could make a real difference.
Such content could come with a “health warning” of sorts — rather than being blocked, it would simply be accompanied by a statement suggesting a significant number of users had marked it as potentially being false. In an ideal world, the system would go further still and allow users (or Facebook employees) to suggest sources providing evidence of the falsehood, including myth-debunking sites such as Snopes or simply mainstream, respectable news sources. These could then appear alongside the content being shared as a counterpoint.
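As a rough illustration of how that aggregation step might work, the sketch below counts distinct users who flag a story and attaches a “health warning” once a threshold is crossed. The thresholds, function names and data structures are my own assumptions for the sake of the example, not anything Facebook has described.

```python
# Sketch of aggregating anonymous user flags and attaching a "health warning"
# once enough distinct users have reported a story. All values are illustrative.
from collections import defaultdict

FLAG_THRESHOLD = 50      # distinct users required before a warning appears
MIN_FLAG_RATIO = 0.02    # or 2% of viewers, whichever comes first

flags = defaultdict(set)   # story_id -> set of user_ids who flagged it
views = defaultdict(int)   # story_id -> number of users who saw it

def record_view(story_id: str) -> None:
    views[story_id] += 1

def record_flag(story_id: str, user_id: str) -> None:
    # Using a set de-duplicates repeat flags from the same user.
    flags[story_id].add(user_id)

def needs_warning(story_id: str) -> bool:
    n_flags = len(flags[story_id])
    n_views = max(views[story_id], 1)
    return n_flags >= FLAG_THRESHOLD or n_flags / n_views >= MIN_FLAG_RATIO

def render(story_id: str, body: str) -> str:
    # The story is never blocked; it simply carries a note, ideally with
    # links to debunking sources, when enough readers have flagged it.
    if needs_warning(story_id):
        return ("Note: a significant number of readers have flagged this "
                "story as potentially false.\n" + body)
    return body
```

Counting distinct users and weighing flags against total views, rather than using a raw flag count, would make it harder for a small, motivated group to bury a legitimate story.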
Experimentation is the key
Facebook’s internal motto for developers was, for a long time, “move fast and break things,” though it has since been replaced by the much less iconoclastic “move fast with stable infrastructure.” The reality is that news sharing on Facebook is already broken, so moving fast and experimenting with various solutions isn’t likely to make things any worse. The answer to the fake news problem probably doesn’t lie in any one of the four approaches I’ve outlined, but in a combination of them. Computers have a vital role to play, but they need to be trained and supervised by human employees, and for any of this to work at scale, they will also need training from users. Doing nothing can no longer be the default option. Facebook and others need to move quickly to find solutions, and while there will be teething problems along the way, it’s better to work through some challenges than to throw our hands up in despair and walk away.
Jan Dawson is founder and chief analyst at Jackdaw, a technology research and consulting firm focused on the confluence of consumer devices, software, services and connectivity. During his 13 years as a technology analyst, Dawson has covered everything from DSL to LTE, and from policy and regulation to smartphones and tablets. Prior to founding Jackdaw, Dawson worked at Ovum for a number of years, most recently as chief telecoms analyst, responsible for Ovum’s telecoms research agenda globally. Reach him @jandawson.
This article originally appeared on Recode.net.