Twitter said today it had fixed a “bug” in its platform that could have allowed advertisers to target users with racial epithets and terms like “Nazi.”
The change follows a report by the Daily Beast — which found that potential ad campaigns using those derogatory terms could have reached millions on the site — and a broader controversy this week about inappropriate algorithmic ad targeting on big internet platforms.
“We determined these few campaigns were able to go through because of a bug that we have now fixed,” a spokeswoman said in a statement. “Twitter prohibits and prevents ad campaigns involving offensive or inappropriate content, and we will continue to strongly enforce our policies.”
Earlier this week, Facebook faced its own barrage of criticism after ProPublica discovered the social giant allowed advertisers to target users based on categories like “Ku-Klux-Klan” and “Jew hater.” It has since similarly implemented changes to its targeting platform.
And Google for a time also appeared to allow ad campaigns based on racist or otherwise hate-inspired terms, BuzzFeed found, prompting the search giant to do its own fine-tuning last week.
This has spurred a new round of debate over whether these giant internet companies — which many feel already have too much control — should add more proactive human oversight to their algorithms, especially to filter inappropriate and hateful speech.
On one hand, more human involvement, earlier, could prevent embarrassing discoveries like these, and potentially worse outcomes. On the other, trying to form — and police — a line of what’s appropriate is highly imperfect, and puts even more control in the hands of the platform companies, which tend to prefer to hide behind their algorithms.
This article originally appeared on Recode.net.