
#KamalaHarrisDestroyed debate signals how much we still don’t understand about social media manipulation

2020 is going to be a doozy online.

Kamala Harris, Andrew Yang, and Tulsi Gabbard on stage during the second round of Democratic presidential debates.
A clash between Kamala Harris and Tulsi Gabbard at the second round of Democratic debates ignited some telling controversies online.
Jim Watson/AFP/Getty Images
Emily Stewart covered business and economics for Vox and wrote the newsletter The Big Squeeze, examining the ways ordinary people are being squeezed under capitalism. Before joining Vox, she worked for TheStreet.

There’s still a lot we don’t understand about misinformation, social media manipulation, and online election interference after the 2016 election. In 2019, that’s becoming increasingly apparent.

Case in point: the second round of Democratic debates, which has prompted widespread but unfounded speculation that bots were influencing which candidates trended on Twitter and in Google search results.

After the second evening of debates on Wednesday, a conspicuous hashtag referring to candidate Sen. Kamala Harris (D-CA), #KamalaHarrisDestroyed, picked up steam on Twitter, and Rep. Tulsi Gabbard (D-HI) was the most-searched candidate on Google.

Harris and Gabbard clashed onstage on Wednesday evening when the Hawaii congresswoman confronted Harris, who previously served as California’s attorney general, over her criminal justice record. “She put over 1,500 people in jail for marijuana violations and then laughed about it when she was asked if she ever smoked marijuana,” Gabbard said, referring to a February interview in which Harris admitted to smoking pot.

The exchange was a notable one — especially on the second night of debates, when the discussion often seemed muddled and confused and no candidate really stood out. But to many observers who are wary after the role social media manipulation played in the 2016 election, it seemed odd that the debate moment would have that much of an impact. Gabbard, according to a RealClearPolitics average, only has about 1 percent support in the polls. And while it wasn’t Harris’s best night, it wasn’t the death knell of her campaign, either.

Pundits, strategists, and some of Harris’s allies noted that Gabbard has a record of being unusually friendly to Russia, and suggested that Russia might be pushing propaganda to promote her bid for the Oval Office. After the debate, Harris’s national press secretary Ian Sams tweeted out an NBC News story from February about how the “Russian propaganda machine” is promoting Gabbard.

Did #KamalaHarrisDestroyed just go viral naturally, or did Russian bots have something to do with it? Were viewers really so interested in Gabbard on Wednesday that they flocked to Google to learn more about her? The answer isn’t clear, and that’s part of the problem: Despite everything we learned in 2016, there’s still a lot we don’t understand.

“There’s an increased awareness that social media manipulation exists and is a thing, but that also has the effect of making people look for it everywhere,” Renee DiResta, a 2019 Mozilla fellow in Media, Misinformation, and Trust and an expert in social media manipulation, told Recode.

The #KamalaHarrisDestroyed hashtag debate is about technology and politics

In a post-debate analysis, Maureen Linke and Eliza Collins at the Wall Street Journal found that hundreds of social media accounts with “bot-like traits” promoted information and content that sought to inflame racial divisions during the debates. They found that certain hashtags originated with conservative activists, but then it seemed like bots were helping to spread them.

For example, on Tuesday, a user named Susannah Faulkner first shared the hashtag #DemDebateSoWhite, and then Ali Alexander, a Republican operative who sparked a racist birther-like campaign about Harris during the first round of debates, retweeted it. And on Wednesday, conservative commentator Terrence K. Williams started the #KamalaHarrisDestroyed hashtag after Harris’s onstage exchange with Gabbard.

In both instances, the Journal, citing data from analytics company Storyful, found that a high number of the accounts that interacted with and spread the tweets and hashtags had “bot-like characteristics.”
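The Journal doesn’t detail which criteria Storyful used to flag accounts as “bot-like.” As a rough illustration only, researchers commonly combine heuristics like posting volume, account age, and profile completeness into a crude score; the thresholds below are illustrative assumptions, not Storyful’s actual method:

```python
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float    # average posting volume
    account_age_days: int    # days since account creation
    followers: int
    following: int
    default_profile_image: bool  # never personalized the profile

def bot_likeness(acct: Account) -> float:
    """Crude 0-1 score from a few commonly cited heuristics.

    Thresholds are illustrative guesses, not any analytics
    company's real criteria.
    """
    signals = [
        acct.tweets_per_day > 72,      # roughly one tweet every 20 minutes, around the clock
        acct.account_age_days < 30,    # very new account
        acct.default_profile_image,    # default avatar, no customization
        # follows far more accounts than follow it back
        acct.following > 0 and acct.followers / acct.following < 0.01,
    ]
    return sum(signals) / len(signals)

suspect = Account(tweets_per_day=300, account_age_days=10,
                  followers=5, following=2000, default_profile_image=True)
print(bot_likeness(suspect))  # 1.0 — trips every heuristic
```

Real analyses are far more sophisticated (network structure, timing patterns, content similarity), which is part of why outside observers disagree about how much genuine bot activity any given hashtag attracted.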

Twitter told Recode that its initial investigations into the matter did not find any significant evidence of bot activity amplifying hashtags around the debates, including the #KamalaHarrisDestroyed hashtag. Ahead of the first round of debates in June, the social media company laid out its plans for 2020 debates in a blog post and said that its rules and algorithms would combat attempts to manipulate content.

While Twitter and the other platforms certainly have improved their practices post-2016, they’re still not perfect. Rachel Cohen, communications director for Sen. Mark Warner (D-VA), who has been one of the leading critical voices on social media manipulation and Russian propaganda in the Senate, pointed to Facebook’s flawed efforts to remove inauthentic accounts as an example.

“They don’t catch everything,” Cohen told Recode. “There’s a lot they don’t catch, and there’s an incentive for them not to. Twitter does even less than them, frankly.” (Social media companies thrive on content that goes viral and engages users; bots can help spread the kind of content these platforms are engineered to promote.)

Harris’s campaign declined to comment on the matter and instead referred Recode to the Journal’s story about manipulation during the debates.

Some people held up the fact that Gabbard was the most-googled candidate during the second debate as further evidence of some sort of bot- or Russia-driven campaign.

But there’s no evidence to support that claim. It makes sense that people would see a relatively unknown candidate like Gabbard on their televisions on Wednesday, wonder who she was, and search her name to find out (as they did during the June debate). Author and spiritual leader Marianne Williamson was the most-googled candidate on Tuesday, perhaps for a similar reason.

DiResta pointed out that Google would likely want to make sure that its trends results were accurate. “Google is publishing the results themselves, and they are, one would hope, fully aware that there’s incentive to game results to get a good news headline,” she said.

But there are also more common ways to manipulate certain metrics around debates, such as post-debate internet polls, where bots and internet communities can mount coordinated campaigns to make a certain candidate win.

“The reality is, [Russian media outlet] RT and Russian propaganda is better at SEO,” Cohen said. “The googling is likely a much more organic tool.”

If Google really wanted to prove its results aren’t somehow manipulated, it could be more transparent about how it arrives at its search trends data. Google representatives did not return a request for comment.

It’s also worth noting that Gabbard sued Google after the June debate, claiming that she had been censored by the company when it temporarily suspended her campaign’s advertising account because of unusual activity. Gabbard’s campaign dramatically increased spending around the first debates to try to capitalize on search interest in her, and that triggered Google’s systems to temporarily flag her account.

2020 is going to be a doozy online

This week’s rampant speculation over potential foreign interference related to Harris and Gabbard signals a conversation we’re going to keep having in American politics: Have platforms and the government done enough to prepare for and prevent social media manipulation and misinformation campaigns, and how are we ever going to know?

Companies such as Facebook, Google, and Twitter have made efforts to improve their platforms to avoid a repeat of the 2016 election. They have rolled out ad transparency tools — though such tools aren’t generally as useful as one would hope — and devoted more resources to trying to prevent election interference.

But Russia is expected to once again try to manipulate US elections: During former special counsel Robert Mueller’s testimony before Congress in July, he made abundantly clear how concerned he is about the Russian threat. “They’re doing it as we sit here,” Mueller said, “and they expect to do it during the next campaign.”

Sometimes, the goal of these campaigns is to promote a specific candidate or take down another; in 2016, for example, Russia had a clear preference for Donald Trump and against Hillary Clinton, but it also worked against Sen. Marco Rubio (R-FL) and promoted Green Party candidate Jill Stein. Other times, the goal is simply to sow partisan division. For example, there is evidence Russian bots stoked the flames around the NFL kneeling controversy.

The controversial messages can start from Russia or other bad actors, but sometimes they begin with American accounts and then are amplified by bots. Based on the Journal’s analysis, that’s what might have happened with the #KamalaHarrisDestroyed tweets.

But it’s not just the misinformation itself that sows division, it’s also the debate about it. People are confused about what social media manipulation is, how it works, and whether it’s happening. And they’ve also got their own political motivations to believe whether or not it exists. Harris’s camp has an incentive to claim that a negative hashtag about her is Russian propaganda. Her opponents have an incentive to brush it off. And the more the debate rages over what is and isn’t fake news, the harder it becomes for voters to determine the truth.

“The more hullabaloo we have around whether the conversation we’re being exposed to is real or inauthentic, the more people say, ‘I’m not going to trust these conversations are real people and not actors, so I’m going to retreat into my basement where I know my friends are going to tell me the news I want to hear,’” one Democratic strategist told Recode.

The platforms can and should do more to help on that front; they can improve their efforts to make sure there’s an actual person behind every single account so that users aren’t forced to wonder whether their interactions are real. They’ve taken some strides in that direction with ad transparency and better bot detection, but there is still a long way to go.

These concerns are especially strong within the Democratic Party: since the 2016 election, Democrats have been far more likely than Republicans to view Russia as a threat. According to a recent Pew Research Center survey, 65 percent of Democrats say Russia’s power and influence is a major threat to the United States’ well-being, compared to 35 percent of Republicans. (Republicans and Democrats share a similar level of concern about cyberattacks from other countries.)

Democrats, still stinging from 2016, are already on high alert for any evidence of Russian interference or anything else that looks a bit fishy online. Those anxieties are likely to get worse, not better. Election Day is still more than 450 days away.
