A top Google executive says the company’s YouTube ad controversy — where big brands have discovered that some of their ads have run next to videos promoting hate and terror — is overblown.
But he says Google is making progress at fixing it anyway.
“It has always been a small problem,” with “very very very small numbers” of ads running against videos that aren’t “brand safe,” said Google’s chief business officer, Philipp Schindler. “And over the last few weeks, someone has decided to put a bit more of a spotlight on the problem.”
Google has been scrambling to react over the past few weeks, as newspapers like the Times of London, the Guardian and the Wall Street Journal pointed out ads running next to videos from hate groups and other extremists. Those reports prompted big brands like AT&T and Verizon to pull their ads from YouTube.
Now, Schindler says, improved software has been able to track down 5x more videos that YouTube wants to keep away from advertisers.
He says YouTube has also improved its response time when someone flags an inappropriate video, and is improving its user interface to make it easier for advertisers to steer clear of dodgy videos. Schindler says YouTube will also start letting outside companies like DoubleVerify and comScore audit its efforts to keep ads away from controversial clips.
But in an interview on Sunday, Schindler also described a tricky line for the company to walk: It wants to reassure advertisers who want to know it’s doing something about the problem. But it doesn’t want to say that it has a widespread problem, either — because it doesn’t think it has a widespread problem.
Here’s an edited transcript of my chat with Schindler. At a couple points in our conversation, Schindler and a Google rep discussed whether he could discuss something on the record. In those cases, a Google rep followed up with statements after the interview. I’ve noted those below.
Peter Kafka: You said you’ve increased your detection by 5x. How big was the problem to begin with?
Philipp Schindler: If you look at it from an advertiser perspective, the error rates we’re talking about — I’m careful in saying this, because I don’t want to take away from the importance of the problem and that we need to get it right — but the numbers are tiny, tiny.
[Update from Google rep: “When we spoke with many of our top brand advertisers, it was clear that the videos they had flagged received less than 1/1000th of a percent of the advertisers’ total impressions. Of course, when we find that ads mistakenly ran against content that doesn’t comply with our policies, we immediately remove those ads.”]
But it’s enough of a problem that we’re talking about it now.
It should always be smaller. It’s our responsibility to make it smaller. Let’s not take away from that. But remember, we’ve had that problem, at scale, for a long time. The whole industry [has], even traditional. The problem comes from the fact that somebody is aggressively putting it onto the front page.
Do you think someone is actively campaigning against Google and YouTube?
That’s not how I would say it. There’s a lot of spotlight on the problem at the moment. And advertisers just don’t like something like this to be dragged out into the public. And they’re unhappy with that, and I can fully understand that they’re unhappy with that.
They’re unhappy with two things. Let’s be honest:
Number one, that the mistake even happens. That’s what we have to get better at. Again, as before, we cannot promise a perfect system. [But] whenever it happens, it’s bad, and it shouldn’t happen.
The second piece is, apart from the mistake happening, that there’s so much focus being put on it publicly. They obviously don’t appreciate that.
What’s changed between now and a year ago? Is there more hate speech on YouTube, or are more people talking about it?
The first thing that changed is that more public attention has been put on what is, percentage-wise, a pretty small problem. Again, not to minimize it.
The second thing that has changed is that the problem has become a bit more multifaceted. It’s relatively easy to [block] clear “hate” — clear, specific words, that are very clearly triggering something. A lot of things historically have been very black or white. And things are becoming more gray-ish. A lot more shades of gray.
Take the N word. If you would just block [videos] when people refer to the black community with the N word, you would take out a pretty significant percentage of all rap videos. You would probably take out a lot of pro-black activist groups. But obviously you want to take it out when somebody says “we hate all N words.”
The problem is now, the machines have to start understanding context in a much different way.
YouTube has always had issues with advertisers being uneasy about the content. A few years ago, you addressed this by creating Google Preferred, where advertisers could buy safe stuff, at a premium — which meant you were saying that everything else was riskier. Why not just sell the cleared stuff?
The focus on Google Preferred historically hasn’t been brand safety. It’s “what is the most engaging content that users are using.”
But it was also telling advertisers that they knew what they were getting — videos from people they’ve heard of, like AwesomenessTV or Smosh.
Trying to do what you’re basically suggesting — whitelisting everything — [would be difficult for] a couple reasons:
Think about the scale of the problem that we’re dealing with here. The last thing that you want, if you lean back a little bit here — if you asked the whole digital world, independently of YouTube, to whitelist and review everything going forward, at the scale we’re running at. What you would do is fundamentally disconnect advertisers, brands, companies, from the ability to interact with their audiences. It’s not a world we want. That’s not a world you want.
Think about the problem, in a world where, more or less over time, every interaction will be some sort of digital interaction, and brands and companies want to participate in real-time conversations with their consumers ...
But that is how traditional media works. The New York Times knows exactly what’s on a New York Times page. NBC knows what NBC is broadcasting. There’s no question about it, and that’s now a selling point for them: This is brand-safe stuff. We know what we’re doing. This seems like it’s a vulnerability for you.
[A Google rep notes that advertisers are now, by default, set up to run only on safe content.]
Have any advertisers who announced they were halting their YouTube ads come back?
[Update from Google rep: “Many advertisers never pulled out and many have decided to come back based on the actions we’ve taken over the past week. Our customers’ confidentiality is paramount to us, and it wouldn’t be appropriate for us to comment on their behalf.”]
Last month, you also said Google would review the standards for the content it allowed on YouTube, and might change its guidelines. What’s the latest on that?
We’re carefully evaluating this. That’s where we are. I can’t say any more about it at the moment.
This article originally appeared on Recode.net.