Minutes before he drove a car into a crowd of people in Westminster on March 22, killing four and injuring others, Khalid Masood sent an encrypted message over WhatsApp.
British law enforcement was initially unable to access the contents of the message, and authorities seized on the incident, arguing that services like WhatsApp had an obligation to grant law enforcement access to encrypted messages in this kind of situation. But WhatsApp employs a technology called end-to-end encryption designed to prevent anyone — even WhatsApp itself — from unscrambling customers’ messages.
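The idea behind end-to-end encryption can be sketched in miniature: the two endpoints derive a shared key that the service in the middle never learns, so a server that only relays public values and ciphertext cannot read the message. The sketch below uses a toy Diffie-Hellman exchange and an XOR stand-in cipher, both deliberately simplified placeholders; WhatsApp’s real design is the Signal protocol, built on vetted cryptographic primitives.

```python
# Toy sketch of end-to-end encryption: both endpoints derive the same key
# via Diffie-Hellman; the relay server sees only public values + ciphertext.
# Parameters and the XOR "cipher" are illustrative placeholders, NOT secure.
import hashlib
import secrets

P = 2**127 - 1   # a Mersenne prime; toy parameter, not a real DH group
G = 3

def keypair():
    private = secrets.randbelow(P - 3) + 2
    public = pow(G, private, P)
    return private, public

def shared_key(my_private, their_public):
    secret = pow(their_public, my_private, P)      # G^(a*b) mod P on both ends
    return hashlib.sha256(str(secret).encode()).digest()

def xor_cipher(key, data):   # stand-in for a real authenticated cipher
    stream = (hashlib.sha256(key + bytes([i // 32])).digest()[i % 32]
              for i in range(len(data)))
    return bytes(b ^ s for b, s in zip(data, stream))

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# The server relays only alice_pub, bob_pub, and the ciphertext.
key_a = shared_key(alice_priv, bob_pub)
key_b = shared_key(bob_priv, alice_pub)
assert key_a == key_b        # both endpoints derive the same key

ciphertext = xor_cipher(key_a, b"meet at noon")
print(xor_cipher(key_b, ciphertext))   # -> b'meet at noon'
```

The point of the sketch is structural: the provider can relay every byte of the conversation and still hold nothing that decrypts it, which is why compliance with a decryption order would require redesigning the system itself.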
When officials eventually did gain access to Masood’s WhatsApp message, they discovered that he had “declared that he was waging jihad in revenge against Western military action in Muslim countries in the Middle East.”
In the wake of another deadly terrorist attack in London earlier this month, British Prime Minister Theresa May called for new regulations to prevent extremists from spreading their message online.
“We cannot allow this ideology the safe space it needs to breed,” May said in a televised address the day after the attack. “Yet that is precisely what the internet and the big companies that provide internet-based services provide.”
May didn’t say which “big companies” she had in mind, but it’s a safe bet the list includes several based in Silicon Valley. Ro Khanna, a member of Congress whose district covers much of Silicon Valley, told me in an interview last week that he was “somewhat surprised” by her remarks. “Technology companies often suspend users or remove content well before law enforcement even contacts them,” he said.
The issue is complicated because there are actually two different debates when it comes to terrorist attacks and online communications. One debate focuses on whether social media sites should be more aggressive about taking down content that glorifies terrorism and attempts to recruit people to terrorist causes. The other focuses on terrorists’ possible use of encrypted messaging services like Facebook’s WhatsApp to plan attacks themselves.
If she survives the fallout from last week’s election — her Conservative Party lost its majority but May is trying to negotiate a deal with the Democratic Unionist Party to remain prime minister — May plans to press technology companies to do more on both fronts. But critics say that the proposals would be ineffective and counterproductive.
Social media sites do a lot to fight online extremism
The obvious area where technology companies have faced criticism concerns public posts glorifying terrorism or seeking new recruits. The UK government has been pressing technology companies to be more aggressive about taking down this kind of content — though the sites already do a significant amount on this front.
Facebook has had a policy since at least 2014 of removing content from terrorist groups, and the company says it’s working to improve further on those policies.
“We're starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda so we can quickly remove anyone trying to use our services to recruit for a terrorist organization,” wrote Facebook CEO Mark Zuckerberg in his February manifesto, “Building Global Community.”
Twitter says it takes the issue seriously, too.
“Terrorist content has no place on Twitter,” Nick Pickles, Twitter’s UK policy chief, affirmed in an emailed statement. “We continue to expand the use of technology as part of a systematic approach to removing this type of content.”
Twitter pointed to a recent report showing that in the second half of 2016, “a total of 376,890 accounts were suspended for violations related to promotion of terrorism.” Twitter brags that three-quarters of those suspensions were triggered by its own “internal, proprietary spam-fighting tools.”
There is reason for optimism that technology companies will be able to do even better in the future. Machine learning algorithms are allowing computers to develop a more sophisticated understanding of the context of written communication.
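The kind of context-sensitivity involved can be sketched with a tiny bag-of-words classifier: the same group name scores differently depending on the words around it. The training phrases and labels below are invented placeholders for illustration; production systems use vastly larger corpora and neural models, not a four-example Naive Bayes.

```python
# Minimal sketch of context-aware text classification: a Naive Bayes model
# over word counts, so reporting language ("authorities said") and
# recruitment language ("join", "recruit") pull toward different labels.
# Training data is an invented placeholder, not any platform's real corpus.
import math
from collections import Counter, defaultdict

TRAIN = [
    ("police reported the attack and authorities said arrests were made", "news"),
    ("officials said the investigation into the group continues", "news"),
    ("join us and support the cause fight with us today", "promotion"),
    ("spread the message recruit your friends to the cause", "promotion"),
]

word_counts = defaultdict(Counter)
label_counts = Counter()
for text, label in TRAIN:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    scores = {}
    for label in label_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for w in text.split():
            # Laplace smoothing so unseen words don't zero out a class
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("authorities said the attack was reported"))  # -> news
print(classify("join the cause and recruit your friends"))   # -> promotion
```

Even this toy shows why the problem is about context rather than keywords, and also hints at the failure modes: a model trained on counts like these will inevitably misfile some legitimate reporting and commentary.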
Still, this probably isn’t a problem technology companies will ever be able to fully solve. The line between legally protected free speech and outright incitement to terrorism isn’t clear cut. So aggressive efforts to weed out terrorist-recruitment content could wind up squelching legitimate political discussion. That, in turn, could drive people with anti-Western sympathies to shift to overseas sites that are harder for Western governments to regulate.
And Jillian York, a free speech advocate at the Electronic Frontier Foundation, notes that on top of other concerns, technology companies also have to grapple with the fact that different countries have different ideas about who counts as a terrorist. Some authoritarian regimes consider anyone who opposes the current government to be terrorists. For example, the Turkish government considers some Kurdish separatist groups to be terrorist organizations, while many Kurds disagree.
While York sees no problem with taking down content from ISIS or al-Qaeda, she says that global technology companies will struggle to balance free speech values against the demands of various countries around the world.
The UK government wants access to encrypted messages
After Masood’s March 22 message was decrypted, British authorities determined that the recipient of the message had no prior knowledge of his plans and was not connected to any terrorist network.
In other words, there’s little reason to think that British law enforcement could have prevented the March attack — or others that have happened recently — if they’d had access to encrypted messages on WhatsApp or other services.
In her speech after the June attack, May expressed optimism that terrorist planning in particular could be thwarted by targeted cooperation. “We need to work with allied democratic governments to reach international agreements that regulate cyberspace to prevent the spread of extremist and terrorism planning,” she said.
But Khanna warns that compelling technology companies to provide access could have big downsides. To comply with law enforcement requests, WhatsApp would have to redesign its technology to give itself a “back door” into users’ communications.
“Providing a back door to encryption will make our world less safe and compromise international security by weakening the strength of encryption, which in turn would make it easier for hackers and criminals, as well as foreign governments, to steal people’s personal information and technology,” Khanna told me in an emailed statement.
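The trade-off Khanna describes can be sketched in a few lines. The toy below assumes a hypothetical provider that escrows a copy of each user’s key so it can answer lawful access requests; the names and the XOR stand-in cipher are invented for illustration and do not reflect any real system’s design.

```python
# Toy illustration of the back-door trade-off: if the provider keeps an
# escrow copy of each user's key to satisfy warrants, anyone who obtains
# the escrow database, lawfully or not, can decrypt the same traffic.
# All names here are hypothetical; the XOR "cipher" is a placeholder.
import hashlib
import secrets

escrow_db = {}   # the provider's copy of user keys: a single point of failure

def provision_user(user):
    key = secrets.token_bytes(32)
    escrow_db[user] = key        # the "back door": provider retains the key
    return key

def encrypt(key, message):       # stand-in for a real cipher (XOR is its own inverse)
    stream = hashlib.sha256(key).digest() * (len(message) // 32 + 1)
    return bytes(b ^ s for b, s in zip(message, stream))

alice_key = provision_user("alice")
ciphertext = encrypt(alice_key, b"meet at noon")

# Lawful access works, but so does any breach of escrow_db:
stolen_key = escrow_db["alice"]
print(encrypt(stolen_key, ciphertext))   # -> b'meet at noon'
```

The sketch captures the core of the objection: the same mechanism that serves a warrant serves whoever steals the database, which is why critics argue a back door weakens security for every user, not just for suspects.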