
Google is expanding its AdSense hate-speech policy

More pages could be in hot water for disparaging content.

Google’s top business executive Philipp Schindler | Asa Mathat

Google is expanding its hate-speech policy for publishers that use the company’s ad network, in response to concerns about ads funding inappropriate content online, including complaints from advertisers whose messages were appearing alongside such content on YouTube.

The changes fulfill plans the company announced in March in response to the YouTube controversy, and follow earlier criticism of Google’s ad network for supporting fake news and other misrepresentative content.

The policy additions are meant to address a more divisive and toxic online environment, where an increasing amount of content “is frankly right at the edge of what we consider traditionally to be hate speech,” Rick Summers, who oversees the development and implementation of Google policies impacting publishers, told Recode.

“We have decided that this is not in the interest of our advertisers,” said the global product policy principal.

The policy has now been expanded to cover more groups, such as immigrants and refugees, and to apply to discriminatory pages that previously would not have been covered.

Previously, the policy was narrower: it addressed speech that was threatening or harassing toward defined groups, including ethnic and religious groups, as well as LGBT groups and individuals.

Now, the language of the policy also addresses pages that deny the Holocaust or advocate for excluding certain groups, such as an ad for a room where the poster says they will not rent to Chinese people. This type of content, while discriminatory, was not previously covered by the policy.

And the definition of protected groups and individuals has been expanded to include those who share any “other characteristic that is associated with systemic discrimination or marginalization.”

That expanded definition means that harassing and disparaging speech against groups like immigrants and refugees will be in violation of the advertising policy.

“That status is used as proxy for attacking people in what we’ve called protected groups,” said Summers. An example would be disparaging content that refers to “refugees” when the true subject is Muslims.

The revamped policy will also apply to specific pages containing content that violates it, meaning that ads will not necessarily be removed from an entire site or account.

This could hypothetically mean that an article on Breitbart that uses a derogatory term for transgender people won’t get any ad money, but the site will still receive ads on other pages. Google declined to comment on whether Breitbart would be affected.

The changes are global, but will take time to implement, and won’t necessarily be immediately noticeable, a Google spokesperson said.

In March, Google’s top business executive, Philipp Schindler, wrote a blog post outlining how the company was “taking a tougher stance on hateful, offensive and derogatory content” to more effectively remove ads from inappropriate content.

Google parent company Alphabet reports its first-quarter earnings on Thursday, but the advertiser pullout from YouTube is unlikely to affect the period’s profit, as the incident happened toward the end of that quarter.


This article originally appeared on Recode.net.
