In the early 2000s, Google wanted its search engine to be three things: comprehensive, relevant and fast.
“Then in the mid-2000s, when social networks and behavioral advertising came into play, there was this change in the principles,” former Google lawyer Nicole Wong said on the latest episode of Recode Decode. “The dynamics were around personalization, which is not relevance. ‘What’s more stuff that you like?’”
Wong, who after Google worked on Twitter’s legal team — and then was called to serve as the deputy chief technology officer of the United States in 2013 — told Recode’s Kara Swisher that now might be a good time to think about a “slow food movement for the internet.”
“What if we change the pillars again?” she asked. “What if we say, ‘That’s not the internet we want to live with’? What if the pillars were accuracy, authenticity and context?”
She doesn’t offer the suggestion up carelessly: There would be consequences to slowing down how a tech company grows.
“Maybe that means that things like Black Lives Matter or Tahrir Square have a little bit more trouble getting off the ground quickly,” Wong said. “Maybe the Ferguson thing, you don’t hear about that as fast as you do now.”
Below, we’ve shared an edited transcript of Kara’s conversation with Nicole. You can read a full transcript of the interview here.
Making the government more technologically literate
Why did you want to [work at the White House]? Policy, right?
They had actually caught me right after my grandfather had passed. And he had instilled in us the sense that like, “You have every right to be here, and you have a right to participate in your community and your government. And if you get the opportunity, it’s a responsibility, and you’re privileged to do so.” Knowing that, when the call came in from the White House, there was no way I couldn’t walk that walk, right?
So, the opportunity to serve, and candidly, and I don’t know if you know Todd Park well ...
Yes, I do.
He’s phenomenal. The vision that he had for bringing all of government services up to another level, to be able to participate at such a high impact, that was huge, and it wasn’t something I could turn down.
Right, and you focused on what there?
That’s kind of big.
Huge, and like when you go into government, they tell you ...
“Fix the government, Nicole.”
Right? “Don’t break the internet.”
They tell you like, “Focus on three things. You gotta have three priorities, ’cause every day’s an emergency, and you’re gonna get distracted if you don’t.” So, going in April, I was getting ready to move, and I was like, “Where are the kids going to go to school?” And I was like, “Okay, so I’ll do maybe internet governance and free expression, which is sort of where my passions lie, and maybe I’ll do some privacy.” We were planning to move in July of that summer. And June 9th, if you remember, is when the Edward Snowden disclosures started.
And they were like, “How soon are you getting here?”
So, I started the next week. And honestly, the first half of the time I was there was spent on privacy, surveillance issues and most importantly, I think, the work that I’m most proud of was the public policy implications of big data.
Right, right, which has been the story ever since.
And also, it was interesting, I was writing a column today for the New York Times that I write for, and one of the things that I left out was the damage the Snowden thing did to the relationship between the government and Silicon Valley, which had been relatively cooperative until then. And then it was broken rather badly ...
I think that’s right. Like the notion that I would go and talk about internet governance or free expression at a time when the Snowden disclosures were happening, that was done.
Yeah, and they’d been spying on you.
Like you weren’t going to talk about anything else.
“You’ve been spying on us.”
Which the companies had cooperated with the government on, though they said they were not.
It just went on. It seems like with a lot of what’s happened in the Russia thing, you can draw a very bright line to the fact that they were not cooperating in the way they used to, which is interesting. But that’s a topic for another day. We may get into that. So, you did that, and what was the experience like? You’re in the middle of that, that’s all there is. That was all there was.
Yeah, yeah. No, it’s all-consuming. It was fascinating.
And there’s no other experience like it.
What did you take away from it?
The most important lesson and the one that I continue to try and work on is, I think that we have not done a good job of filling the ranks in government and in the public sector with technologically capable, savvy people.
Yes. Or they’re in a backseat position, or they’re brought in like the help, like the air conditioning repair.
They think that like you’re there to fix the email servers, right?
As opposed to thinking about forward-thinking policy and the ramifications of what it means to use all this technology in our world. And I think we are doing ourselves a disservice by not doing that. I think that there are big and small things that I’ve been talking with people about. Like what should be the tech agenda for social impact? Part of it is government. Part of it is like, “Just make shit work,” right?
Right, right, right.
Like if you can order a chocolate cake on Amazon and have it delivered that day, then you ought to be able to get your Social Security benefits just as easily, right?
So there’s that. Then there’s the broader vision, which is like, “What’s our moon shot?” Knowing what all of our capabilities are, how do we power the next ...
Why is that, from your experience in government? I have some thoughts about it, but I mean, they just don’t think about it. Many of the people who go into it are not technologically literate. They operate in a very old system that resists that kind of change. And also, they’re fearful of it and wary.
Well, because they don’t understand it very well.
Right, so there’s some wariness. I think that’s going to change over time. I think that the growing ranks of those at the staff level and at the congressional level are much more savvy than they used to be. Not all the questions in congressional testimony are great, but they’re way better than they were five years ago.
Okay, if you say so. Oh, the Zuckerberg ones, I was literally like, I was screaming at the TV set.
They’re like they’re reading a Wikipedia page? Yes.
But they’re way better than they once were. And I think we’re getting better at educating them. I think that we still face a lot of competition. If you’re coming out of school with a CS degree, are you going to a private company or are you going into the public sector?
Right, you’re not going ...
And there’s a money thing, right?
We haven’t instilled in people the sense that they should serve. Like, that this is our government, and it’s only as good as we are.
Tech leaders testifying in Congress
These hearings, I think, are actually more important because they’re actually ... there are serious politicians who actually know a few things about things.
I think that’s true, but I think it’s important to think about what are hearings good for and what are they not good for?
And I’ve done five, right? Hearings are super good for putting executives’ or companies’ feet to the fire, right?
Yes, and it’s always a good thing.
It’s always a good ... and educating the policymakers and their staff as well as the public about a thing, right?
Right. “Look at this.”
And so, like educating on Russian disinformation campaigns, that’s good for all of us, and that’s an important reason to have them there.
It’s good for, again, holding them to a schedule. So, my assumption is, if Jack and Sheryl are showing up, they have some good news to report, right? Like they’ve made progress since the last hearing that they did.
Right, so what they want to do is say, “Here’s what we did wrong, and here’s what we’re doing to fix it.”
“And here’s what we’ve done to fix it.”
And that’s also a really good ... knowing that they’ll be held accountable, that’s a good function of a congressional hearing. What I think it sometimes gets used as is a platform for assigning blame, a platform for political grandstanding. That’s un-useful, and anyone who thinks someone’s going to go and be like, “We found the silver bullet,” that’s so not going to happen, because this problem is so complex and so beyond just what the tech companies can do, right? Any expectation that that’s going to happen, we should kill that now.
Right? To me, the optimal result would be that you find some agreement about, “What’s the easy stuff?” Alex Jones, whatever it is. Find the easy stuff and decide we have agreement on how we’re going to handle that easy stuff, whether it’s by legislation or something else. And then lean into the hard questions, because there are lots of hard questions, and figure out what we can make progress on, even if imperfect. What can we not ... where this is not a tech-company solution, it’s a different solution.
So Mark was the first big one. How do you assess that encounter? I think it’s just the beginning, this is gonna go ... I was like, “Strap on your wooden chairs, people.”
That he was gonna have to do it many more times?
All of them, all of them, on everything. And by the way, it’s not just Russia, it’s gonna go to AI, it’s gonna go to IoT, it’s gonna go to ... everything, everything. Cars.
Oh yeah. For sure. But whether it’ll be a CEO or not I think is up for debate. And here’s the thing, you don’t create solutions in a hearing. And so all the hard work and all the ... you can use the hearing to get a commitment that something will be done, but you can’t actually devise the solution.
My point is that these companies have operated largely unfettered for a long, long time. And they would say not, but I don’t know. If you were a broadcast company or a media company you’d be like, “Hey, get on the legal train that we’ve been on for years.” So, how did you assess the Mark hearings?
How he did?
I thought he did well, only ’cause they were terrible. That’s really pretty much a low bar.
Certainly the first day. He did worse the second day and their questions were better the second day.
Yes, yeah. For some reason the House, yes, I agree.
Which like, 10 hours in that seat, that sucks for anybody, right?
So I thought he did fine, and I thought ... the thing that I think really kept sort of bothering me during that was, I think on the second day, and maybe it was ’cause he was a little bit more down, he kept referring to AI as the solution. Like, “Oh, we’re gonna start handling this using more AI,” and this notion that we can resolve content and disinformation problems just by throwing some AI up at it, yeah it can help, but it’s not gonna solve the problem.
I think that if you were not well-informed about how AI works, just how machine learning works, you thought that was supposed to be a silver bullet and it’s not.
It’s not. Absolutely.
And it could go really wrong, right? If we do it poorly, we will replicate all the mistakes we’re currently making.
Absolutely. My issue with them was the water-under-the-bridge attitude: “Let’s put the water under the bridge and focus on solutions.” I’m like, “Let’s reflect on the problems.” I think there’s something very good in thinking about why you went wrong.
Yeah. Although, I also remember at the beginning, Mark I think said, “We accept responsibility for ...”
“We have a broader responsibility.”
Yeah, but there was also, it was as if he accepted the outcome of elections both here and in Europe as Facebook had ... that was Facebook’s issue, and it’s not. Let’s not, and you and I may end up disagreeing over this, the way this election turned out is not because of tech or Russian disinformation. We had 63 million people vote for a man who was blatantly misogynist, racist ...
Right, agreed, agreed.
... anti-Semitic, intolerant, right? All of those things, and he wasn’t hiding it from us, and 63 million people didn’t find that disqualifying.
Content moderation and “slow food”
Content moderation, which is a nice way of saying censorship, or possibly not? Or First Amendment. You’re a First Amendment lawyer. How do you look at this incredibly fraught situation?
Super complicated. No silver bullet.
And I think I’m somewhat frustrated by the level of conversation in each of the countries that is trying to wrestle with it because they are all dealing with it as if it were not global. Right?
So, Facebook’s in this really awkward position where it’s trying to have a global platform and one set of rules imposed consistently. And the fact of the matter is that every understanding of content is incredibly nuanced from a perspective of what it is in the culture, what it is in the political system, how the legal environment handles a content problem. And so I know what they’re trying to do, and I understand that that’s the only way to scale it, I just think it’s really hard.
And I will say that as someone who gets to say that in hindsight, ’cause I’m not that decider anymore. And in the days when I did it, it was millions, not billions of users, right? It was hundreds of ... I don’t remember, it was like ... in the tens or scores of hours per minute on YouTube, not in the hundreds of hours of content on YouTube. And so, I actually had the time to say ... my folks would level up something for me to see, and I would get a day to sort of think about it and get some more information about like, “Well, what does this mean in India? What are the ramifications?” and to touch base with people in India to say, “Should I do this or that?” They appear not to have that latitude anymore, and what I’m hearing is that they have four or five seconds per piece of controversial content to make a decision.
You are gonna get so many mistakes doing that.
It’s the life they chose, Nicole.
It is the life they chose.
And the billions they accepted for doing that job.
So the question is, do we wanna slow that down? Is this the moment where we have kind of like a slow food movement for the Internet?
Oh, that’s a great idea.
... and just slow everything down.
So how does that work?
Well, so, here’s ... I was thinking about, and I’m not sure I’m gonna directly answer that question, but when I first started at Google, I remember having conversations around the pillars of design for search. I don’t think they called it exactly that, but it was like the principles on which you design search. And it might have been Matt Cutts who said there’s comprehensiveness, we want all the information we can get; there’s relevance, meaning we deliver the right response when someone asks a question; and speed. Those were the three pillars of search.
And then in the mid-2000s, when social networks and behavioral advertising came into play, there was this change in the principles that ... we just weren’t as concerned about search anymore, instead we were focusing on this other part of the platform. And the dynamics were around personalization, which is not relevance, right?
Right. It’s what you wanna see.
Not what answers your question, but what’s more stuff that you like? Personalization, engagement ... what keeps you here, which today we now know very clearly. It’s the most outrageous thing you can find. And speed. Right, so speed’s still there, but the first two have changed, and that has, I think, propelled this crazy environment that we’re in now.
You’re absolutely right. That’s an incredibly intelligent way of putting it.
So what if we change the pillars again? What if now, with everything that we’ve learned in the last two years, we say, “That’s not the internet we want to live with”? So this is just personal for me, like, what if the pillars were accuracy, authenticity and context? And maybe that slows it down. Right? So maybe that means that things like Black Lives Matter or Tahrir Square have a little bit more trouble getting off the ground quickly. Right? Maybe the Ferguson thing, you don’t hear about that as fast as you do now, and as quickly among your ...
Which some would say is a terrible thing.
Right? So some things are gonna, there’s gonna be cost to refocusing those principles, but maybe that’s a different world that we actually ought to be trying to build.
Yeah. Yeah. Do you think they’re thinking about it like that?
I have no idea. I hope they are.
The danger of blunt instruments
Jack will be in front of the House members who will only talk about Diamond and Silk and everything, like being pushed down. Laura Ingraham talked about nationalizing Google and Facebook. I know, you’re rolling your eyes, I’m rolling my eyes, too. But these things are being talked about by people who could have serious impact. So, I’m not just gonna roll my eyes, I’m troubled by this.
I’m totally troubled, and here’s what I’ve ...
Can you just ... Google and Facebook do not discriminate like that.
They just don’t. PageRank, right? From what I can understand.
Exactly. It’s algorithmically-based and it is not about, like, “Hey, I like this political decision better than that political decision.” No one’s got time.
Right. Right, right. So how do you get rid of that if you’re the tech companies, besides just saying it over and over again, without saying, “You’re an idiot, stop saying that.”
Yeah, yeah. I don’t know, and in this environment, and because they are put on their back foot I think it’s gonna be super hard. And what that might mean is that folks like me or you or others outside of that environment say, “Hey, that’s not actually the thing that we think is happening, nor are we worried about that.”
Do you feel there’s an actual risk when you have Orrin Hatch all of a sudden, who didn’t talk about antitrust, [now] he’s talking about antitrust, or maybe the president tweeting that Google’s trying to skew his search results?
I do worry about it, but I also think that we have to have an honest conversation about what they are looking for. Right? Because some of the solutions that I see get bandied about by folks who are not as sophisticated about understanding what’s happening. Like, “Well, there should only be verified users on these systems.” Right? Or, “We should have these really blunt instruments ... we just don’t allow that type of content at all.” Verified users, large blunt instruments of censorship, those are authoritarian government tools. Right?
And so at some point, both we and these companies are gonna need to stand up and say, “The things that you are asking us to build in service for this democracy are tools that will be used in China, in Russia, in Turkey, in Saudi Arabia,” right? “Appreciate the fact that we are the global platform, and that what we build, everybody will have the right to demand.”
So you talk about this documentary called “The Cleaners.” Explain that, and then in the next section we’re gonna talk about techlash and diversity and social issues.
There’s a new documentary by two directors from Germany who wanted to explore what this content moderation industry is. And it was born, I think, of research done by a professor down at UCLA named Sarah Roberts. So she is the one who uncovered that there are thousands of contract workers in the Philippines cleaning up all of these social media platforms, right? And making calls that, if we were here in the United States, might not come out quite the same way.
So, this documentary, they actually go back and interview a bunch of contractors about what they understand to be the rules, the decisions they’ve had to make on terrorist content, child pornography, self-harm content and all this stuff. It’s fascinating to hear from the contractor’s perspective what they think their obligations are.
And what do they think?
They are trying really hard to follow the rules, and they have seconds to make these decisions on thousands of pieces of content in an hour. Right? And the interesting overlay to me was that this is a very Catholic country, and they bring a lot of their person and their identity to work with them when making decisions about this content.
So when you think about the complexity of these takedowns, it is actually really hard to create rules that, when an American user posts something that’s visible in Turkey and reviewed by a Filipino contract worker ... like, what is that? Right? Who’s winning in that scenario?
Okay, we should just shut down Facebook ... right? And Twitter. Let’s just do that.
It’s really, really hard.
Let’s just go back. I had Jaron Lanier in, and he was like, “There’s never been a human experiment on this level, of people talking to each other in this fashion.” But when you think about it, the idea of “The Cleaners” is a really interesting one because it’s sort of that back room ... like, you don’t wanna see how it’s made. You know, my son just got a job, he’s a chef, and he’s young, but he cooks and he’s in a restaurant and he was like, “Mom, you don’t wanna eat there.”
How the sausage gets made.
No, I was like, “What are you talking about?” It was over a small pickle incident, but it wasn’t that bad. I was like, “Oh, I’d eat there, it’s no problem.” But it sort of was like, when you see how things get made, it’s really ... and they are trying to rely on AI when they don’t even know what that means, I think.
Yeah, and I think getting that right is gonna be so important. So, two thoughts on that. One is, when you understand the complexity of this, I think that it is super hard to hold the tech companies fully responsible and insist that they make no mistakes, which I feel like some of the rhetoric is definitely ... like, you never get to make a mistake about that piece of content, and that paralyzes a company that’s trying to do the right thing.
Yeah, they’re still back on that My Lai, that mass ... Vietnamese picture of the little girl running.
Exactly. And so that’s one thought out of that. The second thought is on the AI piece, which is ... I do worry, and this might just be rhetorical, I worry about leaders who are saying, “We’re gonna have AI fix that,” and they may be like, “As protection, this is my flak jacket ... the machines will fix it.”
My experience is that you need to see the content. You need to make the moral call on the Rohingyas or the child porn or whatever, because if you don’t you have delegated your morality to a machine and that is wrong. We shouldn’t do that.
This article originally appeared on Recode.net.