The Facebook free speech battle, explained

The debate over what Facebook can and can’t moderate is more about politics than law.

[Photo: The Facebook logo is displayed during the F8 Facebook Developers conference on April 30, 2019, in San Jose, California. Justin Sullivan/Getty Images]

Editor’s note 5/14: This article has been updated significantly to correct the record and include more information about Section 230 and the fight over Facebook’s ability to moderate content and users.


Facebook booted a hodgepodge of extremist figures recently, inflaming a faction on the right that is challenging the prevailing legal consensus on what is and isn’t protected speech on digital platforms.

Some conservatives argue that Facebook is unfairly targeting conservative voices, or voices that seemingly abut conservative ideas, like Infowars’ Alex Jones. And a few politicians, like Republican Sens. Ted Cruz and Josh Hawley, are questioning whether it’s even legal for Facebook to do this.

Facebook says it’s not targeting the right, but rather responding to a broad public push demanding that it crack down on extremism and misinformation — and that it has specific legal protections to do so.

Behind the debate is a bigger question about what social media companies are and, therefore, to what extent they can moderate content. The prevailing legal consensus among scholars, the courts, tech companies, and the lawmakers who wrote the relevant law is that Facebook is well within its rights to exclude users as it has. This camp sees the challenge from a handful of voices on the right, like Cruz, as politically — not legally — motivated.

For its part, Facebook is relying on Section 230 for legal protections, but trying to cast itself as a place that “welcomes all viewpoints,” despite some moderation, as CEO Mark Zuckerberg said at a Capitol Hill hearing earlier this year.

The result of all this is that Facebook is standing in the middle of a debate that’s being shaped by politics and public relations more than by the law that protected the growth of sites like Facebook in the first place.

How Facebook’s ban on extremist users exploded online

Facebook announced last Thursday that the platform was banning a grab bag of users: failed white nationalist House candidate Paul Nehlen, pundit Milo Yiannopoulos, right-leaning YouTube personality Paul Joseph Watson, alt-right political activist Laura Loomer, Nation of Islam leader Louis Farrakhan, and Alex Jones and his Infowars media outlet.

In a statement, Facebook said, “We’ve always banned individuals or organizations that promote or engage in violence and hate, regardless of ideology. The process for evaluating potential violators is extensive and it is what led us to our decision to remove these accounts today.”

But that’s not how some conservatives and right-leaning observers have interpreted Facebook’s decision. And they do have some high-profile support for their position, at least in part. Last year, technology reporter Charlie Warzel (then at BuzzFeed News) described Facebook’s approach to bad actors on its platform as “vague content rules and arbitrary enforcement.”

Combined with recent Twitter suspensions of right-leaning figures like actor James Woods and ongoing drama over alleged “shadowbanning” of conservative social media users, Facebook’s decision last week just added to the sense among some on the right that social media companies unfairly treat right-wing users. (This is debatable.)

That’s prompted outrage not just from the media figures and outlets that have been banned or suspended but also from prominent conservative politicians, including President Trump, Newt Gingrich, and Cruz.

What is Facebook, anyway? And does it matter?

Facebook and other social media companies have been careful to present themselves as platforms, not publishers. Though Facebook is one of the biggest sources of news media for Americans and has encouraged media companies to produce content specifically for Facebook, Zuckerberg has assiduously argued that it is just a platform that hosts users’ content.

This matters because being treated as the publisher — as Vox is the publisher of this article — opens websites up to lawsuits over what they’ve chosen to put out into the world. The current legal consensus, by contrast, is that platforms have protections that allow them to moderate the content users put on their sites without being legally responsible for it.

But there’s a big problem with how this dichotomy is being interpreted by laypeople and lawmakers alike. The difference in legal responsibility for content isn’t between platform and publisher, but between sources of content — whether the content is created by a user of Facebook, or by Facebook itself.

But how far those protections — and the ability to moderate the content — go is under debate.

The prevailing view on Section 230

The legal standard at issue is Section 230 of the Communications Decency Act. The law passed in 1996 with the goal of protecting websites and social networks from being sued for what third-party users (i.e., you) say or do on those sites.

In a floor speech given last year, Sen. Ron Wyden (D-OR), who helped write Section 230, explained that the point of the section was so that websites could moderate content. “I wanted to make sure that internet companies could moderate their websites without getting clobbered by lawsuits. I think everybody can agree that’s a better scenario than the alternative, which means websites hiding their heads in the sand out of fear of being weighed down with liability.”

The idea of the law is to give Facebook editorial control over its content: the ability to monitor, edit, and even delete content (and users) it considers offensive or unwelcome according to its terms of service. These rights theoretically existed before Section 230 (thanks to the First Amendment), but Section 230 made them explicit:

No provider or user of an interactive computer service shall be held liable on account of—(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected ...

Legal scholars and courts have interpreted this to mean that Section 230 gives social media companies the power to moderate not just content but users.

In the past three years, Section 230 has protected Twitter against two lawsuits, one filed by white nationalist Jared Taylor, the other filed by noted far-right internet troll Chuck Johnson. Both sued Twitter after the service deleted their accounts and banned them from using Twitter (Johnson was banned in 2015, Taylor in 2017).

Taylor argued that his lawsuit was aimed at ending the “highly controversial practice by social media companies of banning users simply because they do not like what those users say”; Johnson, who was banned for asking Twitter users to help him “take out” a prominent Black Lives Matter activist, told BuzzFeed News, “This is going to be a very serious case over the freedom of the internet.” Because of Section 230, Twitter won both cases.

According to legal scholars like Eric Goldman, a professor at Santa Clara University School of Law, it doesn’t really matter why Twitter decided to ban either man in the first place.

“We’re not in a position to decide why [a service] published something and why they chose not to publish something else. That’s a losing game,” Goldman told Gizmodo in an interview in March. “Section 230 says let’s not do that, let’s just have a categorical rule: It says third-party content, the online services aren’t liable for it; unless it sits along the statutory exceptions.” (Those exceptions include intellectual property violations and federal crimes, like child pornography.)

Meanwhile, in a 2014 case involving Facebook, the United States Court of Appeals reaffirmed a social media site’s prerogative to ban users as it deemed fit: “It would make nonsense of the statute to say that interactive computer services must lack the capacity to police content when the Act expressly provides them with immunity for doing just that.”

Under the current legal framework, Facebook has the power to kick off users it doesn’t want and remove content it doesn’t like, and it doesn’t need to be “neutral” to enjoy the protections of Section 230. As Gizmodo’s Dell Cameron wrote in that March article, “Section 230 protects all websites, whether they’re politically oriented or not.”

The conservative case against Facebook

Some conservatives are mounting a challenge to the prevailing view of Section 230. It’s an uphill battle.

One of the most prominent critics is Cruz, who has championed a different interpretation of Section 230. While questioning Zuckerberg in a hearing, he argued the opposite of what the courts have so far decided: “The predicate for Section 230 immunity under the CDA is that you’re a neutral public forum. Do you consider yourself a neutral public forum, or are you engaged in political speech, which is your right under the First Amendment?”

David Greene, a senior staff attorney and civil liberties director for the Electronic Frontier Foundation and a critic of Cruz’s view, says he hears this argument a lot. “I don’t know who started it, but we hear it constantly. Not just from Cruz but from tons of people who should know better.”

Cruz’s office didn’t return requests for comment.

Sen. Josh Hawley made the same argument regarding Twitter in November. (Hawley’s office did not respond to a request for comment.)

Conservative media is making the case, too.

“The dominant social media companies must choose: if they are neutral platforms, they should have immunity from litigation,” wrote lawyers Adam Candeub and Mark Epstein for the conservative City Journal last year. “If they are publishers making editorial choices, then they should relinquish this valuable exemption.”

Following the Facebook bans of Paul Joseph Watson and others, the right-leaning magazine Human Events made a similar argument about neutrality:

Both Facebook and Twitter are currently considered to be open “publishers” under Section 230 of the Communications Decency Act, which exempts them from legal liability for the content posted on their websites. As proven above, however, it’s hard to argue that they are acting merely as bystanding “publishers” when they are in fact operating as the exact opposite — promoting the ideas, posts, and people they agree with by allowing them to be viewed on their platforms, and censoring the ideas, posts, and people with whom they do not agree. These platforms would be more accurately classified as content curators, and thus liable for the content they are allowing on their platforms, rather than publishers who enjoy Section 230 immunity.

The future of Section 230 might be in jeopardy

Interestingly, many of those most enraged by Facebook’s decision to ban some users it views as “extremist” don’t seem to actually have a problem with Facebook moderating content (as is their legal right) — unless it applies to them.

Watson, a conspiracy theorist who was among those banned, posted a video on YouTube in which he appeared to argue that some people clearly deserve to be banned from Facebook, but not him:

They put me on a list with terrorists, human traffickers and serial killers, because I criticize modern art and modern architecture. Because I dare criticize mass immigration. Because I dare criticize a belief system, yet you still host Antifa accounts which threaten to assassinate the president. You still host accounts belonging to the sicko who sent death threats to Ben Shapiro’s family.

To be fair, others are making a better case: that the rules Facebook is using to decide what is “hate speech” and what isn’t amount to “selective silencing.” Figures like Farrakhan and Jones have been using Facebook and other platforms for years to spread anti-Semitic bile and conspiracy theories, which raises the question of why they were banned now and not, say, in 2012, when Jones was arguing that the Sandy Hook shootings were a “false flag.”

What was newly offensive about Laura Loomer, best known for spreading conspiracy theories and chaining herself to the doors of Twitter’s New York headquarters while wearing a yellow Star of David, that wasn’t offensive in 2016?

What’s new is that companies like Facebook, YouTube, and Tumblr are trying to be more, as Zuckerberg put it, “responsible” and moderate more content, in response to users tired of being beset by Nazis and trolls.

Tumblr’s recent ban on nudity, Twitter’s continued back-and-forth on suspending and banning extremist users, Facebook’s recent efforts to curtail misleading ads that may have contributed to misinformation surrounding the 2016 presidential campaign: All these moderating efforts are attempts to get out ahead of users who are dismayed by a constant cavalcade of bad actors and bots that make these sites less enjoyable to use (and less profitable for the advertisers that buy space on these platforms, and thus for the platforms themselves).

But that’s landed them in a supercharged political environment, drawing the ire of the figures they’ve deemed dangerous and many others. For companies like Facebook, they’re damned if they do moderate content — if not legally, then politically — and damned if they don’t.

Their increased moderation has annoyed American politicians, some of whom think Section 230 was a big mistake and want to change it. They include Hawley, who told The Verge earlier this year that his objections to Section 230 were more about the power he believed Section 230 gave Facebook and other social media companies than about an argument that the legislation required Facebook to be “neutral”:

Section 230 is significant because it is such a broad exemption from traditional liability. It’s a really sweet deal. It’s allowed these companies to get really, really big and it was supposed to be pro-competition and pro-innovation. But it’s allowed a lot of these mega-companies to get really big, really rich, and really powerful and to avoid competition. And it has allowed these companies to exert editorial influence without being subject to the usual controls on editorial activity.

But Hawley and others aren’t just angry with Section 230; they want to reform the legislation. Hawley told The Verge, “I think we need to consider what reforms need to be made there in order to prevent viewpoint discrimination.” When The Verge responded, “The concern is that, once you start tinkering with Section 230, it would lead to even more censorship, directed by the government instead of the platforms,” Hawley said, “The problem is that the dominant platforms are engaging in censorship now, there’s just no recourse.”

And reforming Section 230 to mandate that online platforms must be “objectively neutral” — though one would wonder what the standard for “neutrality” would be — isn’t a far-fetched idea.

Last year, President Trump signed a bill package — the Fight Online Sex Trafficking Act and the Stop Enabling Sex Traffickers Act, known as FOSTA-SESTA — into law. As my colleague Aja Romano detailed, FOSTA-SESTA “creates an exception to Section 230 that means website publishers would be responsible if third parties are found to be posting ads for prostitution — including consensual sex work — on their platforms.” And with social media companies focusing more on “engagement” and algorithms that push more extreme content to the top of users’ news feeds, it’s not just conservatives arguing that it’s time for Section 230 to change; some on the left are too.

During congressional hearings last year, Sen. Wyden, who wrote Section 230, told Facebook’s Sheryl Sandberg and Twitter’s Jack Dorsey that if their companies weren’t willing to do more to moderate content, they might lose the protections of Section 230. “If you’re not willing to use the sword,” he told them, “there are those who may try to take away the shield.”

But others are making a far more direct appeal, one that doesn’t reference Section 230 but is aimed squarely at conservative social media users: If Facebook and Instagram can take down pages for Infowars and Loomer (who went on Infowars after the ban and complained about how “ruined” her life is now), what will stop the sites from removing pages for Trump supporters, or for conservatives more generally? That’s the argument being made by Donald Trump Jr., who said on Twitter that the “purposeful & calculated silencing of conservatives” should “terrify everyone,” adding, “ask yourself, how long before they come to purge you?”

Or, as Yiannopoulos said in an email when I asked for comment on his ban from Facebook, “You’re next.”