Google and Twitter told the U.S. Congress on Thursday that they did not spot any attempts by Russian agents to spread disinformation on their sites when voters headed to the polls in Virginia and New Jersey last year.
Facebook, on the other hand, sidestepped the matter entirely. (Update: Facebook later told Recode that it isn’t aware of such abuse on its platform, either.)
The admissions — published Thursday — came in response to another round of questions from the Senate Intelligence Committee, which grilled all three tech giants at a hearing last year as lawmakers probe the extent to which Russian-aligned trolls sowed social and political unrest during the 2016 presidential race.
Specifically, Democratic Sen. Kamala Harris asked the companies if they had “seen any evidence of state-sponsored information operations associated with American elections in 2017, including the gubernatorial elections in Virginia and New Jersey.”
In response, Twitter said it is “not aware of any specific state-sponsored attempts to interfere in any American elections in 2017, including the Virginia and New Jersey gubernatorial elections.”
Google, meanwhile, said similarly that it had “not specifically detected any abuse of our platforms in connection with the 2017 state elections.”
Facebook, however, answered the question — without actually answering it. “We have learned from the 2016 election cycle and from elections worldwide this last year,” the company began in its short reply.
When asked for clarity, though, a Facebook spokesperson later said the social giant similarly is not aware of any Russian-driven disinformation campaigns on its platform during the 2017 races.
All three companies’ replies to Congress may offer only limited consolation to lawmakers who remain worried that the tech industry is unprepared to combat state-sponsored propaganda as an even larger election looms in November 2018. To that end, lawmakers like Democratic Sen. Mark Warner, who sits on the Intelligence Committee, have sought to address the issue through regulation, specifically by targeting the political ads that appear on major social media sites.
Facebook has said that more than 126 million U.S. users saw some form of Russian propaganda over the course of the 2016 election. That includes ads purchased by trolls tied to the Kremlin as well as organic posts, like photos and status updates, that appeared in users’ feeds. Similar content appeared on Instagram, reaching an additional 20 million U.S. users.
And Russian trolls sought to steer Facebook users toward events, even protests, around contentious issues like immigration. In its response to Congress, published Thursday, Facebook elaborated that Kremlin-aligned agents created 129 events on 13 of its pages. Roughly 338,300 unique accounts viewed these events, while 25,800 accounts indicated they were interested and about 62,500 said they would attend. “We do not have data about the realization of these events,” Facebook explained.
Google, meanwhile, previously informed Congress that it had discovered that Russian agents spent about $4,700 on ads and launched 18 channels on YouTube, posting more than 1,100 videos that had been viewed about 309,000 times.
And Twitter told lawmakers at first that it found 2,752 accounts tied to the Russia-aligned Internet Research Agency. Last week, however, the company updated that estimate, noting that Russian trolls had more than 3,000 accounts — while Russian-based bots talking about election-related issues numbered more than 50,000.
Included in Twitter’s latest response to Congress is more information about the company’s bot problem. Twitter clarified that it is “detecting and blocking approximately 450,000 suspicious logins each day” — accounts trying to log on through some form of automation. The company also said it “identified and challenged an average of four million suspicious accounts per week” last September, meaning that Twitter might require a user to confirm a phone number or complete some other check to prove he or she is not a bot.
Twitter maintained that bots make up less than 5 percent of the company’s total user base, though outside groups have pegged that number as high as 15 percent.
Facebook, Twitter and Google have each promised improvements in the wake of the 2016 presidential election. All three tech companies have committed to building new dashboards that will show information about who buys some campaign advertisements, for example. Facebook also pledged to hire 1,000 more content moderators to review ads.
This article originally appeared on Recode.net.