

Twitter’s answer to election misinformation: Make it harder to retweet

It’s a way to slow down the spread of false information and buy fact-checkers time, experts say.

Photo: Twitter CEO Jack Dorsey speaking at a university in India in 2018. (Amal KS/Hindustan Times via Getty Images)
By Shirin Ghaffary

Twitter announced on Friday, less than 30 days ahead of the US election, that it's enacting a series of significant changes to make it harder to spread election misinformation on its platform. It's one of the most aggressive sets of actions any social media company has taken yet to curb misinformation.

The changes include prompting people not to retweet without adding their own commentary, turning off automatic recommendations of other people's tweets, and adding more context to the Trending section. Twitter will also start putting more warning labels on misleading tweets from US politicians and accounts with more than 100,000 followers, and it will block users from "liking" or replying to those tweets. And if a politician prematurely declares victory before the results are verified by independent sources, Twitter will label the tweet and direct users to its voter information page.

Taken as a whole, the moves represent the sort of significant systemic change that some misinformation experts say is necessary to slow the spread of viral lies on the platform, especially lies about the election process and results. Facebook has also tried to limit voting misinformation, but its most recent step, a ban on political ads after the election, has received less praise than Twitter's new policies. The true test will be whether Twitter and Facebook can execute on their promises, and whether changes rolled out just a few weeks before the election will actually be effective.

“As always, the big question for both platforms is around enforcement,” wrote Evelyn Douek, a researcher at Harvard Law School studying the regulation of online speech, in a message to Recode. “Will they be able to work quickly enough on November 3 and in the days following? So far, signs aren’t promising.”

Twitter already has a policy of adding labels to misleading content that “may suppress participation or mislead people” about how to vote. But in recent cases when President Trump has tweeted misleading information about voting, it’s taken the platform several hours to add such labels. Facebook has similarly been criticized for its response time.

President Trump has already criticized Twitter for its new policy, with a campaign spokesperson telling the Washington Post that the company was "attempting to silence voters and elected officials to influence our election." The move also comes at a time when Trump and Republican lawmakers have threatened to repeal Section 230, a critical law that shields internet platforms like Twitter from legal liability, over allegations of anti-conservative bias that remain unproven.

Douek said that platforms “need to be moving much quicker and more comprehensively on actually applying their rules.” But, she added, if “introducing more friction is the only way to keep up with the content, then that’s what they should do.”

The "friction" Douek is referring to is the idea of slowing down the spread of misinformation on social media to give fact-checkers more time to correct it, an approach many misinformation experts have long advocated. Overall, misinformation experts, including Douek, lauded Twitter for introducing friction by nudging users to think twice before sharing misleading content.

Twitter's changes also concentrate its fact-checking efforts on users with real influence: "blue checkmark" public figures, politicians, and power users with more than 100,000 followers. Experts say focusing on those accounts is far more effective at curbing misinformation than policing users with smaller audiences.

“I think there are a lot of positive things in these policy changes,” said Renee DiResta, a researcher on misinformation ecosystems at the Stanford Internet Observatory. “Of course, you’re never going to fix misinformation online or get rid of it all — people have always been wrong on the internet since the internet appeared. But this is more of a matter of, can you mitigate the directly harmful challenges associated with virality.”