Twitter is considering warning users when politicians post misleading tweets

Leaked design plans reveal that the company is thinking about putting bright red and orange labels on false tweets by politicians and public figures.

[Photo: Twitter CEO Jack Dorsey. Amal KS/Hindustan Times via Getty Images]
Shirin Ghaffary is a senior Vox correspondent covering the social media industry. Previously, Ghaffary worked at BuzzFeed News, the San Francisco Chronicle, and TechCrunch.

Twitter is experimenting with putting bright orange and red labels underneath false statements and misinformation posted by politicians and public figures. According to a new report, the company included tweets from Sen. Bernie Sanders and Republican House Minority Leader Kevin McCarthy in its design mockups.

If implemented, the designs would be a significant expansion of the company’s policies for moderating specific kinds of misinformation, such as around anti-vaccination conspiracies and false information about voting — and more recently, deceptively edited videos.

Twitter is considering adding the warning labels, which appear roughly as big as a tweet itself, as one component of a bigger set of policies around combating misinformation on the platform. In a mockup of the feature, which NBC News obtained and first reported, “harmfully misleading” misinformation would be fact-checked directly underneath the tweet by fact-checkers, journalists, and potentially other users participating in a points-based community moderation system similar to Wikipedia. It’s unclear how Twitter would determine which posts to flag as misleading or exactly how the points-based system for community moderators would work.

A spokesperson for Twitter told Recode that the specific mockups NBC reported on are just one example of several ideas in the early research stage, and they have not been approved for rollout.

“We’re exploring a number of ways to address misinformation and provide more context for Tweets on Twitter. This is a design mockup for one option that would involve community feedback. Misinformation is a critical issue and we will be testing many different ways to address it.”

Regardless of whether Twitter ends up implementing these warning labels, the mockups are the latest example of how social media companies are trying to combat a torrent of online misinformation, particularly in the run-up to the 2020 US presidential election. Facebook already labels some content as false using third-party fact-checkers. And beginning March 5, Twitter plans to start removing or labeling some “manipulated media,” which would include deceptively edited videos like the one that went viral last summer after being tweaked to make House Speaker Nancy Pelosi appear intoxicated.

On Thursday, Twitter confirmed that a widely circulated, deceptively edited video of former New York City Mayor Mike Bloomberg onstage at the recent Democratic presidential debate would be labeled as manipulated media under those new rules.

While NBC News reported that these design mockups were a “possible iteration” of the previously announced manipulated media rules, a Twitter spokesperson disputed that these designs are tied to the March 5 rollout.

“The mockups that you see are completely early-stage research, and it’s just one of a variety of options,” a spokesperson for Twitter told Recode. “There’s no timeline. But we’re obviously always trying to get ahead of what we’re seeing and not leave any stone unturned.”

The leaked designs give some examples of cases where Twitter might apply the label, including one attached to a claim Bernie Sanders tweeted about how 40 percent of guns in the US are sold without background checks (that percentage is from an outdated study; more recent estimates say it’s closer to 22 percent).

Another debunks the false conspiracy theory that the novel coronavirus was a man-made virus. (While the exact origins of the novel coronavirus are unknown, the Centers for Disease Control says it seems to have emerged from an animal source.)

And the third is a tweet by House GOP Leader Kevin McCarthy claiming that the US intelligence community secretly removed whistleblower rules before the Trump-Ukraine whistleblower came forward (there is no known evidence to support that claim).

Joan Donovan, who studies online media disinformation at the Harvard Kennedy School’s Shorenstein Center, said she found the new plans interesting, but that they also raised questions. Donovan cautioned that a community moderation system could be exploited by “highly motivated and coordinated groups” who could “get another battleground” with the misleading label feature.

Donovan also questioned whether Twitter would actually enforce the fact-check policy in practice, particularly against prominent people, and asked how many times a prominent figure would be able to tweet a misleading statement before being banned. “The problem all along has been a failure to moderate elite influencers and politicians,” she said.

A spokesperson for Twitter declined to answer questions about how Twitter planned to enforce such a policy and how many misleading posts it would take to get banned, saying that the mockups were tentative.

Although Twitter says it is far from actually introducing the labels on its platform, the leaked plans offer a fascinating insight into how one of the world’s most powerful platforms for political speech is thinking about better monitoring the half-truths and outright lies that so frequently spread on its service.
