
Facebook and Twitter’s midterm task: fighting off online efforts to suppress the vote

The companies want to get out the vote, and to stop those working to suppress it.

Sen. Chris Coons (D-DE) displays a “Miners for Trump” event Facebook page set up by Russian operatives during the 2016 election.
Drew Angerer/Getty Images
Emily Stewart covers business and economics for Vox and writes the newsletter The Big Squeeze, examining the ways ordinary people are being squeezed under capitalism. Before joining Vox, she worked for TheStreet.

Bad actors on Twitter and Facebook are doing their best to use the platforms to discourage voters from heading to the polls in the 2018 midterm elections. The companies are trying to stop them.

In a report on Friday, Tony Romm at the Washington Post laid out the measures Twitter and Facebook are taking to keep false information about the elections from spreading: They’re deleting and suppressing posts containing lies about how to mail in ballots, doctored pictures showing long lines, and claims that people can vote online.

State election officials can flag misinformation to Facebook for potential removal, and the company is also using its machine-learning tools to identify false content, such as posts with the wrong election date. Twitter is partnering with the National Association of Secretaries of State and National Association of State Election Directors; both parties can send disinformation reports to Twitter for review.

The Post’s story homes in on one specific campaign that Twitter removed: a false rumor that US Immigration and Customs Enforcement officials would be at polling stations to check citizenship. It appears to be an effort to intimidate immigrants and keep those who can legally vote from doing so. ICE went so far as to tweet a clarification that it does not monitor polling locations.

But patrolling for misinformation is complicated for Facebook and Twitter, both of which have emphasized that they don’t see themselves as arbiters of truth and have historically shied away from taking too much responsibility for the content they host. Romm lays out the example of a different ICE tweet that cleared the bar for acceptable content:

But both tech companies are proceeding cautiously, trying to find the right balance between combating perceived voter suppression and preserving free expression. “STOP VOTER FRAUD WEAR A ICE HAT ON ELECTION DAY,” suggested another tweet that was still viewable on Twitter as of Friday.

Reuters reported separately on Friday that Twitter has deleted more than 10,000 automated accounts that discouraged people from voting after they were flagged by the Democrats. One of the tweets discouraged Democratic men from voting because it would “drown out the voice of women,” the report said, citing two sources familiar with a flagging operation that the party set up to root out fake content.

A Twitter spokesperson said in an email to Vox that the company has established “open lines of communication and direct, easy escalation paths” for state election officials, the Department of Homeland Security, and both parties. “Our singular goal is to enforce our policies vigorously and protect conversational health on our service,” the spokesperson said. “We removed a series of accounts for engaging in attempts to share disinformation in an automated fashion — a violation of our policies. We stopped this quickly and at its source.”

Facebook and Twitter have a big task in trying to police their platforms

Facebook and Twitter have come under heavy scrutiny for the roles their platforms play in spreading fake news: Disinformation and trolling on Facebook may have swayed parts of the 2016 election, for instance. Both companies have taken some steps to remedy the situation — neither wants a repeat.

Facebook has made multiple announcements in recent months about taking down fake pages and accounts run out of Iran, Russia, and the US, and it’s publicized its election security efforts. In October, Facebook said it was “expressly banning misrepresentations about how to vote,” including claims that people can vote with an online app or misstatements about whether a vote will be counted.

Twitter has been purging fake accounts and has also outlined its efforts to ensure election integrity.

But their efforts have been imperfect: The New York Times in October reported that ads were showing up in a US House race in Virginia calling the Democratic candidate an “evil socialist” and showing her next to Nazi imagery, and the disclaimer on the ads did not identify who was behind them. Instead, they read: “Paid for by a freedom loving American Citizen exercising my natural law right, protected by the 1st Amendment and protected by the 2nd Amendment.”

Vice reported that it was able to place ads on Facebook in the names of Vice President Mike Pence, Democratic National Committee Chair Tom Perez, and ISIS.

To be sure, Facebook and Twitter have a complex task in fighting fake information — bad actors are developing new tactics all the time, and they host enormous amounts of content. They’re making strides in the 2018 midterms, and they’ll likely have an even bigger battle heading into 2020.
