The cries to regulate Facebook are getting louder and louder.
This week they’re coming from a familiar foe: the UK’s Digital, Culture, Media and Sport Committee, which has been investigating Facebook’s role in spreading disinformation. The committee issued a new report on Sunday that said “Facebook intentionally and knowingly violated both data privacy and anti-competition laws” in the UK — a conclusion it reached based on a cache of internal Facebook emails it collected last fall.
The report also calls for more regulation of Facebook, and describes the company as “digital gangsters” for how it handles its users’ data.
The committee suggested a number of ways to regulate Facebook:
- It suggested that UK regulators “investigate whether Facebook specifically has been involved in any anti-competitive practices.” In other words, is Facebook a monopoly?
- It suggested that Facebook be regulated as “a new category of tech company” that is “not necessarily either a ‘platform’ or a ‘publisher.’” The committee would like Facebook to “assume legal liability for content identified as harmful after it has been posted by users.” Today, tech platforms like Facebook and Twitter are not held liable if their users post something illegal as long as they remove it.
- It suggested that a “Code of Ethics” be created to identify what is considered “harmful content.” Facebook and other platforms would then be regulated to ensure that they don’t spread that content.
In a lengthy statement from its UK public policy manager, Karim Palant, Facebook says it is “open to meaningful regulation” and also “supports effective privacy legislation that holds companies to high standards in their use of data and transparency for users.” You can read the whole statement below.
The UK report was an update of a previous report, but it still got a lot of attention on Sunday, in part because it was scathing — “digital gangsters” makes for a great headline — and in part because those internal emails the committee gathered from Facebook last fall were a big deal. They showed, among other things, how Facebook uses personal user data to strengthen or weaken its competitors.
But the DCMS Committee is not the only group that wants to regulate Facebook. In fact, Facebook is facing calls for regulation across the globe.
In India, regulators are hoping to pass new rules that would threaten end-to-end encryption for Facebook-owned WhatsApp and require Facebook to more aggressively monitor user posts for “unlawful” content. Germany has ordered Facebook to change its data collection practices. And in the United States, the company is in talks with the Federal Trade Commission, which is investigating Facebook; those talks could lead to a “multibillion-dollar fine,” according to the Washington Post.
Facebook is no longer just battling US regulators upset about the 2016 election or a perceived conservative bias. Facebook is battling regulators everywhere. Sunday’s report from the DCMS Committee is yet another reminder of how global the social giant’s problems have become.
It seems that no one believes Facebook can or should be able to police itself, so everyone is trying to do the job for it.
Here’s the full statement from Facebook’s Palant:
“We share the Committee’s concerns about false news and election integrity and are pleased to have made a significant contribution to their investigation over the past 18 months, answering more than 700 questions and with four of our most senior executives giving evidence.
“We are open to meaningful regulation and support the committee’s recommendation for electoral law reform. But we’re not waiting. We have already made substantial changes so that every political ad on Facebook has to be authorised, state who is paying for it and then is stored in a searchable archive for 7 years. No other channel for political advertising is as transparent and offers the tools that we do.
“We also support effective privacy legislation that holds companies to high standards in their use of data and transparency for users.
“While we still have more to do, we are not the same company we were a year ago. We have tripled the size of the team working to detect and protect users from bad content to 30,000 people and invested heavily in machine learning, artificial intelligence and computer vision technology to help prevent this type of abuse.”
This article originally appeared on Recode.net.