Google’s European head apologized Monday to anyone whose ads have appeared next to questionable content on YouTube. He said the company would look into simplifying tools for advertisers as part of a solution.
“We’re sorry to anybody that’s been affected,” Matt Brittin, head of Google’s operations in Europe, told an audience during an advertising conference in London.
Last week, a number of British advertisers pulled ads from Google and its giant video site over concerns that their ads could appear next to YouTube videos containing hate speech and extremist messages.
The ads in question were placed via software that allows advertisers to automatically buy ads on YouTube targeted to specific demographic groups.
The Guardian, which has removed its ads, reported discovering that some had appeared next to inappropriate content such as “videos of American white nationalists, a hate preacher banned in the U.K. and a controversial Islamist preacher.”
The British government and media buying firm Havas, which represents clients including O2, EDF and Royal Mail, have also pulled ads, and more advertisers announced over the weekend that they were suspending their campaigns.
Havas said Google had been “unable to provide specific reassurances, policy and guarantees that their video or display content is classified either quickly enough or with the correct filters.”
“The decision of our U.K. team to pause activity with our partner Google is a temporary move made on behalf of our U.K. clients and their specific needs. The Havas Group will not be undertaking such measures on a global basis. We are working with Google to resolve the issues so that we can return to using this valuable platform in the U.K.,” Havas told Recode in a statement.
The recent controversy comes on the heels of an incident last month in which Google removed a popular YouTube creator from its preferred advertising program over hate speech in the creator’s videos. The Wall Street Journal found that Felix Kjellberg, known as PewDiePie, had posted several anti-Semitic videos.
Brittin said Google was reviewing its ad policies and controls, a process that involved questions about how to properly define hate speech. He said the company may simplify the tools it provides to advertisers as part of its solution.
“I think that if the tools are there but they’re too complex, that’s our problem,” he said.
Currently, Google reviews content flagged as inappropriate either by users or by technology, including machine learning algorithms, he said.
Reporters at the conference asked whether Google planned to take a more proactive approach by having employees actively identify problematic content not caught by these filters. It remains unclear whether that change will be part of Google’s solution.
While Brittin said Google takes responsibility for recent brand safety issues reported by its advertisers, he also said the problem was an issue for the entire ads industry.
“It’s not just a Google/YouTube issue, though we’re in the spotlight,” he said.
This article originally appeared on Recode.net.