Washington is clashing with Silicon Valley once again.
U.S. lawmakers investigating Russia’s interference in the 2016 presidential election have focused their attention on some of the world’s most influential internet companies, including Facebook, Google and Twitter, where Kremlin-backed agents sought to spread misinformation and provoke political discord.
These tech giants are under fire for failing to stop — or even notice — the ways their services were used to those ends. Now they’re scrambling to update their policies to ensure that Russia, or any other foreign government, can’t weaponize the web again.
So how, exactly, did this even happen? Here’s a rundown:
Who’s investigating, and why?
For one thing, there isn’t just one investigation into Russia’s interference in the 2016 election.
The primary probe has unfolded at the Justice Department under the watch of Robert Mueller. A former FBI director, Mueller began serving as special counsel this May at the request of Deputy Attorney General Rod Rosenstein — an appointment that came after Jeff Sessions, the president’s attorney general, recused himself on matters relating to the Russia investigation.
Mueller’s mandate is broad: It focuses on determining to what extent, if any, Russia interfered in the 2016 election, from the Kremlin’s contacts with key Trump officials to the ways in which Russian forces may have spread misinformation on social media.
As that official inquiry proceeds, though, Congress is forging ahead with its own investigations. Leading the charge is the Senate Intelligence Committee, which has heard testimony from the likes of ousted FBI Director James Comey earlier this year. The panel is helmed by Sen. Richard Burr, a Republican lawmaker from North Carolina, and Sen. Mark Warner, a Democrat from Virginia.
Their counterparts in the House — fittingly, the House Intelligence Committee — are similarly scrutinizing Russia’s role in the election. Because its chairman has recused himself from the investigation, the top Republican on the probe is Rep. Mike Conaway of Texas; the leading Democrat is Rep. Adam Schiff of California.
These panels have a number of special, powerful privileges, not least access to classified documents. A third congressional committee studying Russian interference — the Senate Judiciary Committee — has played a less visible role. It is chaired by Sen. Chuck Grassley, a Republican from Iowa; Sen. Dianne Feinstein, a Democrat from California, is its ranking member.
What have tech companies found?
Facebook said in September that it had discovered 470 profiles explicitly tied to Russian agents. Those profiles purchased approximately 3,000 advertisements in the run-up to Election Day, ads that were viewed by about 10 million U.S. users before and after Trump’s victory.
Many of those ads — none of which have been made public — sought to stoke racial, religious or other social and political tensions. Sources have said the Russian-backed ads even took both sides of contentious issues, like Black Lives Matter or gun control, in a bid to intensify public debate and foment discord. Sources speaking with Recode, along with a series of reports from other outlets like CNN, have suggested these ads and other forms of Russia-supplied content explicitly targeted crucial election swing states.
Additionally, Facebook found another 2,200 ads of interest that did not violate its policies.
Initially, the social giant provided the full information on Russian-backed content only to Mueller’s team at the DOJ, frustrating congressional investigators who felt they had been circumvented.
By Oct. 1, though, Facebook had delivered copies of the 3,000 ads along with other data to the House and Senate Intelligence Committees as well as the Senate Judiciary Committee. Later in October, the company said some of those ads appeared on Instagram. And it revealed that Kremlin-tied trolls also attempted to communicate with some Facebook users through its chat app, Messenger.
Twitter also reviewed its platform for Russian interference, thanks in no small part to threat data shared by Facebook. And it found 200 accounts tied in some way to the Russia-backed profiles that Facebook previously had flagged.
Twitter took that information to House and Senate investigators in a briefing at the end of September led by Colin Crowell, Twitter’s vice president of global public policy. But the company’s efforts initially drew sharp criticism from Warner and Schiff, who felt Twitter should have done a more exhaustive search of its sales records for potential Russian meddling. (The two have since tempered their tone.)
Nevertheless, Twitter also turned over to the committees the text of ads — in the form of promoted tweets — purchased by the Russian government-backed media network RT ahead of the election. The network spent $274,100 on Twitter ads last year, according to the tech company.
But Twitter has not stopped RT from continuing to advertise on its site. (Nor has Facebook.) The U.S. government’s top intelligence agencies have previously flagged RT as a Kremlin propaganda arm.
Google has not revealed much about its ongoing internal investigation. For now, though, sources told Recode the search giant has found about $4,700 in search and display ads purchased by Kremlin-backed sources. Google has located an additional $53,000 in ads that are connected to Russia, through markers like a local billing address, but may not be explicitly tied to the country’s government. It is unclear what, if anything, Google may have found on YouTube.
Other tech giants have been less forthcoming about what may have happened on their websites ahead of the election — or whether they’ve even searched for potential misuse. Oath (formerly Yahoo) and Reddit both declined to answer detailed questions about their sales records and user accounts. Reddit, in particular, served as a major source for some of the conspiracy-minded, hate-tinged, alt-right content that proliferated on social media in advance of the election.
Snap, however, did query its data, and a spokeswoman told Recode it found no Russian-bought ads on its app. Microsoft opened its own investigation, focused on Bing and its other platforms and services, in October.
How did Russian-tied agents do this on Facebook, exactly?
Facebook sells virtually all of its advertising with self-serve software programs, which means that anyone with a couple of dollars can buy a targeted Facebook ad — without any help or oversight from a company employee.
In the case of the election, the owners of numerous Pages with Russian ties bought ads on Facebook without anyone noticing. Facebook made almost $27 billion in advertising revenue last year, so $100,000 worth of ads, roughly four ten-thousandths of a percent of that total and spread out among hundreds of buyers, wouldn’t raise any flags unless Facebook knew what it was looking for.
This automated sales process is basically the same at other advertising companies like Google and Twitter; it helps the companies — which have hundreds of millions or billions of users — scale more quickly. The problem is that it creates an opportunity for abuse, including a situation earlier this year in which ProPublica was able to target an ad campaign to “Jew haters.”
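To make the mechanics concrete, here is a rough sketch of what a self-serve ad buy can look like under the hood. The endpoint, field names and targeting options below are hypothetical, not Facebook’s, Google’s or Twitter’s actual APIs; the point is simply that a paid, targeted campaign can be created entirely through software, with no salesperson or reviewer in the loop.

```python
import requests

# Hypothetical self-serve ad endpoint (not a real Facebook, Google or Twitter API).
AD_API = "https://ads.example.com/v1/campaigns"

campaign = {
    "name": "Issue awareness push",
    "daily_budget_usd": 50,  # small budgets are accepted automatically
    "creative": {
        "headline": "Take a stand on the issues that matter",
        "image_url": "https://example.com/creative.png",
        "landing_page": "https://example.com/issues",
    },
    "targeting": {
        "countries": ["US"],
        "regions": ["Wisconsin", "Michigan"],      # geographic targeting by state
        "interests": ["politics", "local news"],   # interest-based targeting
        "age_range": [18, 65],
    },
    "payment": {"method": "card_on_file"},
}

# One HTTP request is all it takes; the campaign starts running once payment clears.
response = requests.post(AD_API, json=campaign, timeout=10)
response.raise_for_status()
print("Campaign accepted:", response.json()["campaign_id"])
```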
How have tech companies responded?
Facebook has promised a number of changes to its advertising policies in the wake of this investigation.
It has pledged to hire 1,000 more ad moderators who will review and remove inappropriate ads, and it claims it will roll out a new feature that will allow users to see all of the ads that any organization is promoting on the service. It has also promised to invest more in machine learning and artificial intelligence software to find these kinds of ads automatically.
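Facebook has not detailed how that automated detection would work. As a rough illustration of the general technique, and assuming it resembles standard text classification, the sketch below trains a model on the outcomes of past human reviews and routes high-scoring new ads to moderators; the example ads, labels and threshold are invented for illustration only.

```python
# A minimal text-classification sketch: learn from previously reviewed ads,
# then flag new ads whose risk score is high so a human moderator looks at them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: ad text paired with the outcome of a past human review.
ads = [
    "Buy one pair of shoes, get the second half off this weekend only",
    "They are coming for your rights. Share if you are angry",
    "Fall sale on patio furniture, free shipping nationwide",
    "The other side hates people like you. Fight back now",
]
labels = [0, 1, 0, 1]  # 1 = removed after review, 0 = approved

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(ads, labels)

def needs_human_review(ad_text: str, threshold: float = 0.5) -> bool:
    """Flag an ad for a moderator when the model's risk score crosses the threshold."""
    score = model.predict_proba([ad_text])[0][1]
    return score >= threshold

print(needs_human_review("Stand with us against those who want to divide our country"))
```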
Perhaps most importantly, Facebook said it will now require all advertisers that wish to buy political ads to provide “more thorough documentation.” That should, theoretically, prevent foreign entities from paying for any kind of political message.
Of course, one of the reasons the Russian ads were never detected in the first place is that they weren’t technically political ads promoting any specific candidate — they pushed social issues to create animosity among voters.
Facebook admits this kind of content won’t go away entirely. “Even when we have taken all steps to control abuse, there will be political and social content that will appear on our platform that people will find objectionable, and that we will find objectionable,” Facebook wrote in a blog post this week. “We permit these messages because we share the values of free speech.”
Twitter claims it does a number of things to try to prevent spam and bots from circulating information online, and it promised to “roll out several changes” in how it handles bots and “suspicious tweets.” It has not yet announced any specific changes to its advertising or safety policies since we learned about Russia’s use of the platform during the election. The company did say it “supports making political advertising more transparent,” though, again, it hasn’t said what that means in practical terms.
Google has not finished its internal review, so it has not announced any changes to its practices.
What is the U.S. government doing to address the problem?
Federal law is clear: Foreign nationals cannot donate to candidates or purchase political ads — “electioneering communications,” as the government calls them. The term “foreign nationals,” of course, includes government officials and agents, in Russia or elsewhere.
But the 2016 presidential election proved that foreign malefactors can circumvent U.S. law, no matter what it says. Automated ad purchasing systems — plus limited rules around political ad disclosure — can make it easy for foreign political dollars to slip through the cracks on social media sites.
Lawmakers like Burr and Warner remain as concerned as ever that these companies might again serve as weak points for future election meddling, an alarm they sounded at a press conference on Wednesday. Warner’s own state of Virginia has major elections next month.
To that end, there are two efforts under consideration in Washington, D.C., to tighten political advertising rules.
The first is an attempt by lawmakers on Capitol Hill to impose new regulations on digital ads. It’s called the Honest Ads Act, and it’s the brainchild of Warner and fellow Democratic Sen. Amy Klobuchar. They introduced the measure this October with the support of Republican Sen. John McCain.
Under the plan, tech giants, ad networks and other large platforms — including Facebook, Google and Twitter — would have to save copies of all ads running on their sites and make them available for public inspection. They would also have to provide key information about the audiences those ads targeted. The requirements mirror a similar, much older rule that already governs political ads in newspapers and on TV networks. Warner and Klobuchar seek to require disclosure of more information about the origins of those ads, too.
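The bill’s exact disclosure fields aren’t enumerated here, but a public ad file of the kind described above might boil down to records like the following sketch. The field names are assumptions based on that summary: a copy of the ad, who paid for it, how much was spent, and a description of the targeted audience.

```python
# Illustrative data model for a publicly inspectable political ad archive.
# Field names are assumptions, not language from the Honest Ads Act itself.
from dataclasses import dataclass
from datetime import date

@dataclass
class PoliticalAdRecord:
    ad_id: str
    platform: str              # e.g. "Facebook", "Google" or "Twitter"
    advertiser_name: str       # the organization whose ad ran
    payer: str                 # who actually paid for the placement
    creative_text: str         # a copy of the ad as it appeared
    spend_usd: float
    first_shown: date
    last_shown: date
    impressions: int
    audience_description: str  # plain-language summary of the targeted audience

# The "public file" here is just an append-only list anyone could inspect.
public_ad_file: list[PoliticalAdRecord] = []

def publish(record: PoliticalAdRecord) -> None:
    """Add a record to the publicly inspectable ad archive."""
    public_ad_file.append(record)

publish(PoliticalAdRecord(
    ad_id="ad-0001",
    platform="ExampleSocial",
    advertiser_name="Committee for Illustration",
    payer="Committee for Illustration",
    creative_text="Vote yes on the example measure",
    spend_usd=1200.0,
    first_shown=date(2018, 10, 1),
    last_shown=date(2018, 11, 6),
    impressions=250_000,
    audience_description="Adults 18+ in two example states interested in local politics",
))
```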
At the same time, the country’s voting-and-election regulator, the Federal Election Commission, has kicked off a renewed public debate over the information that advertisers should disclose about themselves and their campaigns on social media.
The battle actually dates back to 2011, when Facebook sought an exemption from FEC rules that require political advertisers to disclose who paid for their efforts.
At the time, Facebook argued ads on social media sites would be so small that disclosure in the text of the ad itself would be impossible. The social giant felt the regulatory burden, if it applied to ads on its platform, would stunt digital advertising altogether. And Facebook cited a similar, earlier petition from Google for permission to skirt the FEC regulations.
As with most matters at the FEC, its commissioners never came to a decision. But as a number of campaign-finance watchdogs have noted — and Bloomberg, among others, has reported — Facebook allowed ads without significant disclosure on its platform anyway.
The FEC now has a chance to write more precise, tougher rules of the road. But once again, whether it does may depend on whether the partisan-hobbled commission can overcome its tendency toward gridlock. For now, it is accepting public comments until Nov. 9.
What happens next?
For Facebook, Google and Twitter, their next major challenge comes Nov. 1, when they’re set to dispatch senior executives to testify before the House Intelligence Committee and the Senate Intelligence Committee.
The hearing is set to be a parade of lawyers. Facebook is sending its general counsel, Colin Stretch; Google is tasking its GC, Kent Walker; and Twitter is sending its acting general counsel, Sean Edgett, the three companies confirmed to Recode.
Meanwhile, all three have started lobbying aggressively on Capitol Hill, as they face the prospect of new political ad regulations.
This article originally appeared on Recode.net.