Facebook has found another way to try to curb the spread of fake news: hunting down and deleting spam accounts that aren’t run by actual humans.
Then on Friday, Facebook posted again about its efforts to fight spam accounts, this time in the form of automated Facebook “Likes” and comments generated by bots.
Facebook wouldn’t say how many “inauthentic” “Likes” and comments it had removed, but some accounts with more than 10,000 “Likes” could see their totals fall by a couple of percent, the post said.
These accounts are different from the batch Facebook removed in France; the company wrote that the fake “Likers” and commenters are located in Bangladesh, Indonesia and Saudi Arabia, among other countries.
But in both instances, the thinking is pretty much the same: Facebook wants to keep spammers and bots from influencing the spread of news, fake or not, on the service.
“This effort complements other initiatives we have previously announced that are designed to reduce the distribution of misinformation, spam or false news on Facebook,” the company wrote in a post Wednesday. Its post on Friday also mentioned its efforts to stop the spread of “inauthentic material.”
Facebook is using algorithms to try to detect these fraudsters, and as with almost all of its other algorithms, the company isn’t sharing much about how they work.
What it will say, though, is that it’s using patterns to detect bot behavior. “For example, our systems may detect repeated posting of the same content, or an increase in messages sent,” Facebook wrote.
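Facebook hasn’t disclosed how its detection actually works, but the two signals it names — repeated posting of the same content and a spike in messages sent — can be illustrated with a toy heuristic. This is a minimal sketch with made-up function names and thresholds, not Facebook’s system:

```python
from collections import Counter

def looks_like_bot(posts, messages_per_day, dup_threshold=5, spike_factor=10):
    """Hypothetical heuristic: flag an account that repeats the same
    content many times, or whose daily message volume suddenly spikes.
    Thresholds are illustrative, not real."""
    # Signal 1: repeated posting of identical content.
    if posts:
        _, top_count = Counter(posts).most_common(1)[0]
        if top_count >= dup_threshold:
            return True
    # Signal 2: an increase in messages sent vs. the account's prior average.
    if len(messages_per_day) >= 2:
        baseline = sum(messages_per_day[:-1]) / (len(messages_per_day) - 1)
        if baseline > 0 and messages_per_day[-1] >= spike_factor * baseline:
            return True
    return False
```

A real system would of course combine many more signals and score them statistically rather than applying hard cutoffs, which is presumably part of what Facebook declines to detail.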
The company is not relying only on bots to fight fake news. It has also enlisted humans, including fact-checking editors and its own users. The company has started to flag fake news on the service and recently rolled out a “tips” sheet for users to spot fake news and report it to the company.
This article originally appeared on Recode.net.