
WhatsApp is at risk in India. So are free speech and encryption.

India is proposing new content laws that could be a “sledgehammer” for free speech.

Members of the Rajan band Kotla Mubarakpur check their cellphones at a wedding on November 22, 2011, in New Delhi, India.
Daniel Berehulak/Getty Images

India is WhatsApp’s biggest market. It’s also suddenly one of the company’s biggest threats.

Regulators in India, where both WhatsApp and parent company Facebook have more than 200 million users, are proposing what amounts to a radical change to the country’s internet privacy and liability laws.

The new set of rules, first published in late December and still under consideration, would require, among other things, that internet companies proactively screen user posts and messages to ensure that people don’t share anything “unlawful.” It’s an attempt by the Indian government to hold technology companies accountable for the content that appears on their platforms — content that can be misleading, create confusion, and has even led to real-world violence in India.

But the new rules would also create something else: a system where technology companies are suddenly the gatekeepers to what can be shared online. It would be up to Facebook — or Twitter or WhatsApp or YouTube — to determine what content is acceptable and what content is “unlawful” before it’s ever even shared.

The new rules would be “a sledgehammer to online free speech,” wrote Apar Gupta, the executive director of the Internet Freedom Foundation, an international nonprofit in India.

The rules would force tech companies to make technical changes. Companies that don’t have technology to monitor content would need to build it (though one issue with the proposed Indian rules is that it’s unclear, for now, what the punishment will be for failing to comply).

WhatsApp, for example, encrypts all of its messages so the company can’t read them, which makes it impossible for WhatsApp to monitor user posts. It would likely have to eliminate encryption to comply with a law like this.

Not everyone thinks this is a bad idea.

“A lot of the people defending WhatsApp in this particular context ... they say WhatsApp doesn’t kill people, people kill people,” said Prashant Reddy, a resident fellow at the Vidhi Centre for Legal Policy, a nonprofit political think tank in India, in an interview with Recode. “But the fact is, WhatsApp enables people to do this at a much larger scale.”

But creating headaches for tech companies isn’t the biggest issue: The future of internet privacy and encryption is suddenly on the line for the world’s second-largest country, and digital rights advocates are terrified that privacy will lose out. These new rules could open the door to widespread government censorship and surveillance.

“I think honestly the biggest [technology] story around the world is India trying to bring these intermediary guidelines,” said Jayshree Bajoria, a researcher with the nonprofit organization Human Rights Watch, in an interview with Recode. “We are talking about China-style surveillance here.”

The end of end-to-end encryption?

This proposed law, known colloquially as Intermediary Guidelines, isn’t specific to WhatsApp. If passed, it would apply to all internet companies that host, publish, or store user information, including social networks, messaging platforms, and even internet service providers.

Understanding WhatsApp’s problems in India, though, is key to understanding why a law like this is suddenly on the table.

WhatsApp, the private messaging service that Facebook bought in 2014 for $19 billion, is incredibly popular in India. More than 200 million people use the app every month, in part because WhatsApp has traditionally been simple and reliable in markets where internet connectivity is weak.

One of WhatsApp’s key features is end-to-end encryption. Enabling that level of encryption means that messages sent using the app are only visible to the message’s sender and receiver. WhatsApp can’t read them and therefore can’t reproduce them if ever required to by government or law enforcement agencies. And if the company can’t read them, it can’t proactively monitor them, either.

But WhatsApp also has a content issue in India. The app has become a vehicle for widely distributing misinformation and fake news — in some cases, false stories that have gone viral on WhatsApp have led to actual offline violence and deaths. In 2017, rumors of a band of child kidnappers made the rounds on WhatsApp, sparking an angry mob that ultimately killed four innocent men, according to the New York Times. Later, three more men were killed by another mob. There was no evidence any of the men were actual kidnappers.

WhatsApp has been in discussions with Indian officials for months about how to stop information like this from spreading; one approach has been limiting people’s ability to forward messages to a large number of groups at one time. But policing content is a real challenge, given WhatsApp’s encryption, and a real concern, given that India’s national elections are just months away. WhatsApp has become a central service for spreading campaign-related news and updates, and is a key part of the campaign strategy for politicians.

The new proposed rules would eliminate that encryption challenge, but at a real cost. They would require companies in India with more than 5 million users to make a number of changes, including incorporating the company in India and maintaining an office in the country, with a physical address.

But the two most important and concerning rules are related to censorship and encryption.

  • The first is that tech companies would be required to hand over any information demanded of them by government or law enforcement agencies, and would also need to “enable tracing out of such originator of information on its platform.”

In other words, tech platforms need to be able to trace content back to the original users who shared it on the platform to begin with. You can’t do that if content and messages are encrypted.

  • The second is that tech companies “shall deploy technology based automated tools” — like artificial intelligence algorithms — with the purpose of “proactively identifying and removing or disabling public access to unlawful information or content.”

In other words, tech companies will be required to use algorithms to scan user posts and prevent them from sharing anything deemed “unlawful.”

Many tech companies, such as Facebook and YouTube, do some form of this monitoring today, but that’s to catch content like child pornography or copyrighted music. Indian regulators are asking companies to also search for things that are much harder to define, like content that is “grossly offensive” or “blasphemous.” Some of this was once considered illegal in India, but a law forbidding “grossly offensive” content was deemed unconstitutional in 2015 by India’s Supreme Court.

These new proposed rules include a long list of content that wouldn’t be permissible, an apparent attempt to reinstate some of that content censorship.

“[Beyond] breaking the end-to-end encryption requirement, which is horrible, there are other horrible proposals also in these proposed changes,” said the IFF’s Gupta in an interview with Recode. He’s worried that the new rules would “turn the internet in India into an incredibly censored place.”

Indian Prime Minister Narendra Modi in 2014. Modi is up for reelection later this year.
Kevin Frayer/Getty Images

A “safe harbor” no longer

There’s a strong incentive for tech companies to comply with these new rules: If they don’t, they could be held liable for “illegal” content that their users share online.

Internet services like WhatsApp and Facebook in India have so far enjoyed what are broadly referred to as “safe harbor” laws — laws that say tech platforms won’t be held liable for the things their users share or post online. If you defame someone on Facebook, for example, the company may be required to take down your post following a court order, but it probably won’t be held legally responsible for what you say.

Safe harbor laws also exist in the United States — under the Communications Decency Act — and India’s version of the protections is part of the 2000 Information Technology Act. But these new rules have been proposed under the section of India’s IT Act that includes the safe harbor distinction — which means that companies would have to follow the new rules if they want to continue to receive those protections.

That’s a big deal, as no tech company wants to be held legally liable for the stuff their users post. In fact, these laws are a big reason tech giants became giant to begin with.

“It’s important to remember ... these laws had nothing to do with free speech. It had to do with commercial supremacy of the early US internet companies,” said Alex Abdo, an attorney at Columbia University’s Knight First Amendment Institute. “It was meant to give them an edge, to make sure they weren’t hobbled by liability as they were trying to compete to develop the latest and most interesting service.”

Big tech platforms like Facebook and YouTube have used that protection to their advantage. Facebook has 2.3 billion monthly users worldwide; YouTube has nearly 2 billion.

But safe harbor protections have also allowed tech companies to move slowly when it comes to taking down content like hate speech or threats. Until recently, many weren’t proactively searching for that kind of content — primarily because they didn’t have to. Most of that content is not technically illegal, at least in the US; in other countries, like India, the law only requires that they take it down following a court order.

Twitter, for example, still doesn’t proactively search for these kinds of posts, and instead waits for users to flag them for the company. (CEO Jack Dorsey says that needs to change.)

The proposed Indian rules are presented as a way to keep internet users safe and force big tech companies to work harder to prevent bad stuff from making its way online.

But the side effects of these new rules are easy to spot. “Proactively” monitoring content means tech company surveillance of everything that users share. Gupta calls it “proactive censorship.”

Monitoring content also means being able to read it — and that combined with the “tracing” requirement would likely mean that products like WhatsApp would need to break encryption to operate in India at all.

WhatsApp is against that change. Here’s part of a statement sent to Recode from a company spokesperson:

What is contemplated by the rules is not possible today given the end-to-end encryption that we provide and it would require us to re-architect WhatsApp leading us to a different product, one that would not be fundamentally private. Imagine if every message that you sent was kept with a record of the fact that you sent it and with a record of your phone number. That would not be a place for private communications.

Still, some in India believe that the extra oversight is necessary, given the state of the internet today.

Prashant Reddy, the resident fellow at the Vidhi Centre for Legal Policy, has come out in favor of the guidelines, primarily because he believes tech companies are big enough and make enough money that they should be held accountable for the information they carry.

“The traditional legal rule has always been that you’re liable for the actions that you allow other people to do and for the profits that you make off such actions,” Reddy said in an interview with Recode. “[These companies] have grown, they have the money; I think it’s only logical to withdraw these subsidies or special immunities.”

Reddy isn’t worried about tech companies monitoring user content — they do that anyway, he argues. This would just mean that they do it before a post is shared, not after.

“These safe harbour provisions ... led to the world’s greatest experiment with mass communication that was not moderated by editors,” Reddy wrote in a story for India’s The Wire. “The results have not been good. The troll armies, the toxic hate targeted against women have shown us the consequences of handing over the internet to the mob.”

What happens next?

India’s current safe harbor protections appear to be teetering on a knife’s edge. The Intermediary Regulations have already been put to public comment, a process that officially closed last week. At this point, it is believed that the rules could be implemented by the government at any time.

India’s general election, meanwhile, is just two months away. The New York Times reported that some people fear that these changes may give the Indian government and Prime Minister Narendra Modi more “power to remove social media posts by political opponents in the coming election.”

There is also fear that what happens in India could set a precedent for other parts of the world. Digital privacy groups are closely watching new laws that are being proposed in Europe, too, like the “Terrorist Content Regulation,” which would require tech platforms to quickly identify and remove terrorist content. Members of Parliament in the UK have also suggested that Facebook “assume legal liability” for what their users post.

Opponents of that regulation believe that, like India’s proposed rules, the sweeping requirement could create too much government and tech company surveillance.

The reality is that once tech companies comply with laws in one country — if WhatsApp breaks encryption to survive in India, for example — it’s clear to the world that they can comply with similar laws in other countries.

“Once they have that machinery in place, it becomes a lot easier for them to apply it to all countries, not just the one where they happen to be required to,” Abdo said.

There is still much to be decided. It’s unclear, for example, what the punishment will be for companies that fail to follow these rules, should they be implemented. It’s also likely that, if these rules are enacted, they may be challenged in Indian courts. Some believe these Intermediary Guidelines could be shut down the way that other censorship laws were struck down by the Indian Supreme Court in 2015.

“Often these kinds of regulatory moves by governments are framed as ‘the government versus big technology companies’ — this is like a fight between Facebook and WhatsApp and the government,” said Amba Kak, a lawyer and policy adviser for Mozilla who has come out against the proposed India legislation. “That kind of headline is very misleading.

“At the end of the day, sure, WhatsApp and Facebook ... they will have an additional burden,” she continued. “But we will be the eventual losers, because it will be our freedom of speech which will be restricted.”
