Facebook has temporarily banned ads for gun accessories and tactical gear through at least January 22, two days after President-elect Joe Biden’s inauguration.
The decision comes after BuzzFeed News reported that, despite concerns from employees, the company had been running ads for body armor, gun holsters, and other military gear alongside election misinformation and content about the Capitol riot.
It’s just one of several actions Facebook has been pressured to take after the January 6 insurrection at the Capitol that left five people dead. Several senators called on the company to halt its military gear ads following the insurrection, and other lawmakers went further, arguing that it wasn’t just the advertisements that ought to draw concern but the platform itself. Rep. Alexandria Ocasio-Cortez (D-NY) said during a virtual town hall on Friday that “Mark Zuckerberg and Facebook bear partial responsibility for Wednesday’s events.”
Though the attack seemed to catch Capitol Police by surprise, leaving officers severely underprepared to stop the mob of Trump supporters, QAnon believers, neo-Nazis, and Proud Boys from storming the Capitol, security experts had warned of how severe the protest could become. After all, the plans had been hashed out in plain sight for weeks on social media. Platforms like Facebook and Twitter are now being forced to reckon with how they allowed extremist rhetoric and the organization of a violent protest to thrive and spread online.
“Everyone who was a law enforcement officer or a reporter knew exactly what these hate groups were planning,” DC Attorney General Karl Racine told MSNBC. “They were planning to descend on Washington, D.C., ground center was the Capitol, and they were planning to charge and, as Rudy Giuliani indicated, to do combat justice at the Capitol.”
Facebook has attempted to minimize its responsibility for what happened, instead blaming niche social networks such as Parler, where far-right content goes unchecked. “We again took down QAnon, Proud Boys, Stop the Steal, anything that was talking about possible violence last week,” Facebook COO Sheryl Sandberg said January 11 in a livestreamed interview with Reuters. “Our enforcement is never perfect, so I’m sure there were still things on Facebook. I think these events were largely organized on platforms that don’t have our abilities to stop hate, don’t have our standards and don’t have our transparency.”
Yet evidence suggests Facebook was crucial for organizers to spread misinformation and awareness of the protest. Eric Feinberg, vice president of content moderation at the Coalition for a Safer Web, told the Washington Post that 128,000 people were using the hashtag #StopTheSteal on the site in the days leading up to the attack. Media Matters also reported that two dozen Republican Party officials and organizations used Facebook to coordinate bus trips to Washington, DC, for the rally that led to the insurrection.
The platform was at least critical enough in organizing the event that one senator asked Facebook to keep records of all the related content for use as potential evidence in legal action against the rioters. And it wasn’t until days after the attack that Facebook said it would remove related #StopTheSteal content.
“If you took Parler out of the equation, you would still almost certainly have what happened at the Capitol,” Angelo Carusone, president and CEO of Media Matters, told Salon. “If you took Facebook out of the equation before that, you would not.” Parler has also faced consequences for its ultra-radical approach to free speech. Amazon Web Services, which previously hosted the app, took it offline, and Parler has yet to find a new service provider. Google and Apple also removed it from their app stores.
Facebook and other social platforms face increasing pressure for more oversight
Perhaps the most effective method social networks have used to combat misinformation and hyperpartisan content is the simplest: blocking Trump. Twitter permanently banned Trump on January 8, and online misinformation about election fraud dropped 73 percent over the following seven days. Now that nearly every major social network (as well as companies like Salesforce, which hosted the Trump campaign’s email listserv) has taken similar action, the trend will likely continue.
To some, these actions may come as too little, too late. Facebook was used as an organizing tool for the 2017 white supremacist rally in Charlottesville, Virginia, for the nationwide anti-mask protests in 2020, and for the spread of the QAnon conspiracy theory. When asked why it hasn’t done more to stop the spread of extremist beliefs and groups, the company has typically deferred to the idea that it is protecting users’ free speech.
But Dipayan Ghosh, co-director of the Harvard Kennedy School’s Digital Platforms & Democracy Project and a former adviser to Facebook and the Obama White House, argued in the Washington Post that Facebook can no longer be trusted to moderate its own content without outside regulation:
Ultimately, we must have better protections in place, protections that counteract the opacity of social media algorithms with radical transparency and the uninhibited collection and use of personal data with consumer privacy rights. Meanwhile, we must rethink the legal mechanisms — namely, Section 230 of the Communications Decency Act — the industry has employed to shield itself from the content moderation debate.
Though the Trump presidency was marked by an anti-regulatory approach, the FCC under Biden could theoretically pursue a sweeping regulatory agenda that includes restoring net neutrality, expanding internet access, and retooling Section 230, though the latter may not be a major priority for his administration. Biden’s inauguration certainly won’t fix the massive polarization of social media, but it’s possible the internet could become more stable after he takes office.