

Don’t break up Facebook — replace Mark Zuckerberg, says former security boss Alex Stamos

Zuckerberg is passionate about Facebook’s products, but he has too much power and needs to give some of it up, Stamos says.


Facebook’s former chief security officer Alex Stamos talks with Recode’s Kara Swisher at the Collision conference in Toronto on May 21, 2019.
Stephen McCarthy/Sportsfile via Getty Images

Alex Stamos, the former chief security officer at Facebook who left the company last year for a role at Stanford University, isn’t convinced that breaking up Facebook will actually solve the problems it has created.

“You cannot solve climate change by breaking up ExxonMobil and making 10 ExxonMobils, you have to address the underlying issues,” Stamos said on the latest episode of Recode Decode with Kara Swisher. “I think there’s a lot of excitement for antitrust because it feels good to be like, ‘I hate this company, so let’s break it up.’ Having three companies that have the same fundamental problems doesn’t make it any better.”

Instead, he told Recode’s Kara Swisher at the Collision conference in Toronto, Facebook should model its future on the “internal revolution” at Microsoft that began in 2002, in the aftermath of the antitrust case United States v. Microsoft. And part of that revolution should be the replacement of CEO and founder Mark Zuckerberg (who currently has an untouchable majority of voting shares).

“There is a legitimate argument that he has too much power,” Stamos said. “He needs to give up some of that power. If I was him, I would go hire a new CEO for the company. He’s already acting as the chief product officer with Chris Cox gone, that’s where his passion is. He should hire a CEO that can help signal both internally and externally that the culture has to change.

“My recommendation would be Brad Smith from Microsoft. Some adult who has gone through this before at another company.”

You can listen to Recode Decode wherever you get your podcasts, including Apple Podcasts, Spotify, Google Podcasts, Pocket Casts, and Overcast. On the new episode, you’ll also hear a live interview with Twitter co-founder Ev Williams, who’s now the CEO of Medium and a partner at Obvious Ventures.

Below, we’ve shared a lightly edited full transcript of Kara’s conversation with Alex.

Kara Swisher: So, Alex and I know each other really well. We like to argue with each other about a lot of things, and we were most recently up in Napa Valley at this stunning resort, and all we did ... we were in a wine cellar and we started screaming at each other, essentially. We’re hoping to replicate that here, a little bit. So, let’s talk about some ...

Alex Stamos: There’s a lot less wine ...

I’m going to give you ... I’m going to go off from our discussion. One of Alex’s contentions, and he’s been one of the more forthright people in tech about what happened at Facebook and other places, is that it’s misunderstood. One of your arguments is that Facebook is misunderstood. Is that correct?

What I think is ... I think there’s a lot of directionally correct criticism, where the details are wrong.

Directionally correct, they ruin democracy, and?

No. Well, so, no. I don’t think Facebook has ruined democracy. I think there’s a couple things going on here. One, there’s a whole class of tech criticism that is actually criticism of other people. Right? The saying “Hell is other people”? Facebook is “other people.” When you talk about anti-vaxxers and crazy parents today recommending bleach ... this is the collective decisions of millions and billions of people, when you give them a freedom they never had before.

Now, that doesn’t mean that the company doesn’t have responsibility, but I think one of the problems is we’re not teasing apart what the companies are doing actively and what kind of societal issues have been unleashed by the fact that we have gotten rid of the information gatekeepers. I think that’s one of the core disagreements between the Valley overall and the media, is sometimes for those of us in tech ... it feels a bit like there’s a lot of media people who want to go back to the world where 38 middle-aged white guys decide what is the political ...

No, come on. That’s bullshit. That’s not true. That’s not the case.

Okay. Well, I mean, you ...

What you’re essentially arguing is that “Facebook doesn’t kill people, people kill people.” Right, or not? It’s humanity, essentially.

What I’m saying is that people will utilize speech sometimes to do really good things, and a lot of times to do bad things. We’ve got to think about what responsibility we want the companies to have, because when we give them responsibility, we also give them power. That’s the other thing that I think I disagree, actually, with some of my friends on Facebook. I think, in a lot of ways, Facebook is too powerful. I’m very afraid of this moment when we are assigning responsibility to half-trillion to trillion dollar companies, with no democratic accountability and no limit to that power, and then asking them to fix societal-wide issues.

Let’s get first to the beginning of this. I want to talk about some of your fixes. I think some of them are ... we do agree on some of them. The misunderstood part is, directionally, it was correct. So, in what direction was it correct and in what direction has it been incorrect?

Okay, so a great example of something that I’m still really active on ... a little pitch. My colleagues and I at Stanford are releasing a report on June 6th about what to do after the Mueller report on election security, and one of our big recommendations is both for regulation from Congress and self-regulation from the tech companies to really expand the definition of political advertising and to reduce the ability to use targeted ads against individuals. So, the direction of allowing people to use hyper-targeting to manipulate societies, absolutely a correct issue ...


A correct criticism, an issue with tech, so it needs to be fixed.

So, direct targeting of people, this ability to target people in unprecedented ways and presumably manipulate them.

Right, right. Whether or not it’s Russians, or not, it’s just a bad thing. It is a bad thing that we are now ... the truth is that this has been a problem for a while. The 2012 election, where Obama’s team totally kicked the butt of the Republicans online, that is probably the first US election that was “thrown by Facebook,” that was at least significantly affected by online advertising. But most of the people in the media were okay with the outcome, so they didn’t get pissed about it.

Or they didn’t know it.

Or they didn’t know about it. I don’t know, it was pretty well-covered. I saw the Obama tech team give a big interview at an Amazon Web Services conference where they talked about how much data they were pulling from Facebook and how finally they were targeting the ads. They did nothing... Everything that is complained about in Cambridge Analytica really happened in 2012.

There’s a difference between an Obama team and a group of Russians in Saint Petersburg manipulating an election, I think.

Right. Well, yes and no. I would say ...

Yes and no?

There’s a difference there, but the problem is that the actual most effective use of Facebook to manipulate people in 2016 was almost certainly the core Trump campaign and a variety of RNC-related groups. It wasn’t the Russians themselves. The biggest thing the Russians did was the hack and leak campaign, for which the actual conduit was the mass media. The fact that the media covered the Podesta leaks and the DNC leaks and changed the way it covered Hillary around her emails, based upon the Russian hacking.

If you look at, just from a size perspective, both from an organic and an advertising content, the Internet Research Agency output pales in comparison to the amount of money spent, and the quality of the targeting, by the Trump campaign itself and by a variety of Republican groups.

All right, so targeted advertising is directionally a problem. What about the involvement of the Russians? I’ve just been spending a lot of time in Washington, and those intelligence people think it’s increased exponentially.

Oh, yeah.

Not just at Facebook, but on our electric grids, telephone systems, all kinds of things.

Right. We are entering a very, very, very scary period. For a couple reasons. One, the Russian playbook for 2016 is out there. It is not that technically difficult to hack the email of a couple of grandfathers and release the contents. Nor is it that difficult to build a team of people who are basically edge-lords, pushing memes all day, and then doing a little bit of targeted advertising.

So what I expect is that the Russian playbook is going to be executed inside of the United States by domestic groups, in which case some of it, other than the hacking, is not illegal. All the Internet Research Agency stuff, if done by a group hired by an American billionaire, and they’re careful around existing law ...

A Russian billionaire.

No, an American. If you had an American like the Koch brothers or Sheldon Adelson or George Soros, pick your billionaire. Reid Hoffman got kind of caught paying for a company to do some of this stuff on the Democratic side. My real fear is that in 2020, it is going to be the battle of the billionaires, of secret groups working for people aligned on both sides, who are trying to manipulate us at scale, online.

One of the hard parts is that it’s very difficult to draw the line about what’s acceptable political speech, especially in the United States, where Citizens United gives these guys almost no legal barriers at all. How do we tell the companies that we want them to stop it? Because it’s easy to say we want to stop Russians. It’s an easy rule to write. It’s a lot harder to say we want to stop some kind of super PAC that is secretly manipulating people at scale. Especially if what they’re doing might violate some of Facebook’s terms of service but don’t violate federal law.

Because of Citizens United.

Because they’re citizens, yes.

What would be the solution? You have the attack of the billionaires on the election, you have influence by the Russians, who will continue to do it because it works, presumably. Other countries, China, Iran, others.

Right. I think we’ll see less China in the West. The Taiwanese election in January of 2020, the PRC is going to be all over that.

Right, and in India, which is happening now.

In India, the election is over, the counting is happening right now. India is a fascinating issue, because most of the disinformation is driven by domestic actors. And they’re doing it on WhatsApp. WhatsApp is the exception that shows some of the dumb criticism of Facebook and algorithms is poorly founded, because WhatsApp has no algorithmic ranking. It has a huge amount of privacy for people, and yet there’s a disinformation problem, because the problem is people. In this case, in India, you have the ability to enlist hundreds of thousands of people to push propaganda on behalf of your political party.

All right, so the problem’s people, which you’ve said again, but they’ve been armed with tools that are unprecedented and possibly not controlled. Would you argumentatively say, I have said this, I think you know this, that I felt like these companies have weaponized everything and amplified it at the same time. It’s like going from a gun that shoots six bullets to a semi-automatic machine gun. How do you look at it? That’s how I look at it.

I like your use of the term “amplification,” because I think one of the things we start to forget is that these companies are actually many products at once. If you take just the Facebook product, the big blue app, you can decompose it into 10 or 12 different applications that have different levels of amplification. At the top of that inverted pyramid (I use an inverted pyramid on a chart for this) are advertising and recommendation engines. I think that’s where you start, because that’s where the most risk is. There’s also, I think, the least amount of free expression concern when you block somebody’s access to advertising.

If we’re going to start from a regulatory, but self-regulatory, federal regulatory perspective, you start at the top there of regulating political ads. We have a bunch of detailed recommendations in this report that’s coming out, but we’re going to talk about how we would expand the Honest Ads Act, the kind of transparency requirements. There’s been a bunch of transparency changes, but they’re totally voluntary, so these companies can just drop them at ...

So a bill, this is Amy Klobuchar’s bill...


To apply both here and elsewhere in the world, not just in the United States.

Yes. The real game in town seems to be other countries. It seems to be ... after the Christchurch shooting, the most interesting regulatory moves have all been in the non-US anglophone countries. Right? Australia, New Zealand, Canada. Here in Canada, there’s lots of discussion about regulation. Because of the lack of a First Amendment, parliamentary systems where you have an implied constitution but no constitutional right to free speech, these countries can move much more quickly than the United States can.

My proposal would be, I think the US should lead on this, so we should have a US ... the US should lead on regulating ads, we should have a US federal privacy law. Our reluctance to play in the space has opened the door for other countries to regulate in a way that is, in some cases, not helpful. In this case, maybe we could set a standard that becomes an international standard.

Would it? Because they don’t have the same ...

I think so. I think what would happen is, if the US came up with a broader definition of political ads and then came up with requirements around who you have to be to run them, what kind of transparency is provided, and how much micro-targeting can happen, I do think that would encourage a number of other countries to adopt the same standards. If only because it is much more likely that they will be enforced.

But the US just decided not to join the online extremism proposal from Jacinda Ardern.

Yes. That’s a problem. I think that’s a different issue than online ads. I think on the content moderation extremism, the United States is out of the game. Partially because of the First Amendment, partially because the Trump administration is never going to do anything that undercuts their ... It’s pretty clear that one of their 2020 strategies is to build a huge amount of cultural dislocation among their supporters based upon saying that internet companies are suppressing them.


Conservatives control Fox News, they have a huge online ecosystem.

So the Russian strategy, I like to call it.

I’m not sure that’s a Russian strategy, but they ...

Well, it’s creating discord, dislocation, feelings ...

Yeah, and creating the idea that you’re a suppressed minority, even if you control two branches of government and a huge media system. So there’s no way they’re going to sign up for Christchurch or anything else that could possibly undercut the argument that Trump supporters are this suppressed group. That there’s any kind of moderation that means anything.

All right, so privacy bills, something with fines. What about fines?

On privacy bills?

No, fines by government agencies all around the world.

One of the interesting problems we have in the US is that we don’t have a competent privacy regulator that translates rules into ... One, we don’t really have privacy laws. The FTC comes up with ... they move the goalposts, and they say you have to follow these goalposts, and there’s not a lot of interpretation to what that actually means, technically.

This is a problem you see a lot in Europe, because European GDPR is being interpreted by 28 different data protection authorities, so you have 28 goalposts you’re trying to shoot through. I think in the US we can do a better job, if we had an organization that could fine without going to court.

That’s what some of these other countries, like Ireland, do have: organizations that can do things before they go nuclear. Right now, the FTC’s options are that they threaten to go nuclear, and then that allows them to have a negotiation, and if that doesn’t work out, then they have to go to court and it’s a five-year fight. I think we do need something ...

And then antitrust.

Antitrust. I think there are legitimate antitrust arguments for breaking up Facebook and breaking up YouTube from Google. Those arguments, legitimate arguments are the ones on competition policy. If you want to argue that these companies have reduced competition through their incredible ability to predict the future, to use their cash reserves to buy companies and then take out competition, I think that’s a legitimate argument.

Breaking up the companies does not solve the fundamental issues. You cannot solve climate change by breaking up ExxonMobil and making 10 ExxonMobils, you have to address the underlying issues. I think there’s a lot of excitement for antitrust because it feels good to be like, “I hate this company so let’s break it up.” Having three companies that have the same fundamental problems doesn’t make it any better.

What is the solution? You’ve had a couple that are interesting.

Right. If Mark Zuckerberg called me, which he doesn’t, and asked my advice on this, I would say a couple things. I think Facebook needs to have an internal revolution on the culture of how products are built. There’s actually a model for this, which was Microsoft 2002, in which Microsoft was facing the same level of pushback on core information security issues and they completely changed how the product works.

He, directionally, has some ideas on that, but I think it’s hard without making significant leadership changes to do that. So, if I was Mark, my suggestion would be, especially because the antitrust stuff is very personalized on him, he should hire ...

It’s personalized on him because he controls the company.

Because he controls the company. Right, because there is a legitimate argument that he has too much power. He needs to give up some of that power. If I was him, I would go hire a new CEO for the company. He’s already acting as the chief product officer with Chris Cox gone, that’s where his passion is. He should hire a CEO that can help signal both internally and externally that the culture has to change.

My recommendation, I’m not a recruiter, but if I can get 20 percent of this guy’s salary in the first year, that would be fantastic. My recommendation would be Brad Smith from Microsoft. Some adult who has gone through this before at another company. The second big thing that I would do ...

So, change the management structure.

Change the management structure, yeah, so that the technologist who owns the product is not at the top. I think it is important to have a manager up top.

Should you go a step further and not allow tech companies to have this kind of stock where they have complete kingship over it?

This is where I disagree, because I worked at Yahoo, a tech company that had an activist investor that cared mostly about what Wall Street thinks.

But that came at the end.

Yahoo was dying for 10 years. There were a lot of problems at Yahoo. I actually think the companies are too beholden to Wall Street. The second big thing that I would change is that they need to get rid of stock compensation at big public companies. Eighty-something percent of my compensation at Facebook was set by Wall Street. This is a crazy world if the CEO comes to you and says, “You’ve done a really poor job at making our products good for the world, I am very upset with you. Congratulations, you got a raise because our stock went up.”

Using RSUs as the main form of compensation for executives takes away any kind of discretion from the CEO to motivate people with money. I think that’s a fundamental issue in Silicon Valley. For startups, stock makes sense, because people are taking a risk. If you work for a huge profitable company, they already have to fully expense it. There’s no tax benefit, you’re paying full marginal tax rates on stock compensation. You should be bonusing people, and you can pay them just as much, but the bonuses should be based on a basket of metrics that measure whether you’re doing good long-term things for the world, not based upon whether Wall Street liked your recent numbers, because Wall Street doesn’t give a crap about whether the tech’s good.

It would give you incentives to do the worst thing, presumably.

Yeah. Right. What kind of mixed message is it to say that we’re pivoting the company to care about long-term issues, yet most of your compensation is based upon what happens right after the quarterly numbers are released? Facebook stock is way back up.

It is.

It’s not because any fundamental things were fixed, it’s because they’re making more money than ever. How do you manage a team when you can’t control what you’re paying people?

Do you think — we just have a few more minutes — do you think they are actually committed to fixing things at Facebook, or is it just a lot of talk?

I do think Mark is serious, that he really cares about what people remember him for and what this company does. I think one of the fundamental issues is that the metrics the company has been measuring to manage tens of thousands of people that make decisions have been the wrong ones. That is a hard thing to change, that’s why I’m saying changing out leadership on the product side might be necessary to help also signal that. The things we used to measure around engagement, around time use, time spent, all that kind of stuff, that’s not what we care about anymore.

All right, last question. If you were running Facebook ... Do they call you? Do they talk to you?

I talk to lower-down people who are actually working on these problems, yeah.

But they don’t ...

At the top? We’re not close, no.

You’re not close. If you were running Facebook right now, in the product area, we’ve got only two seconds to do this, give me three or four things you think have to be done to make the product healthier for humanity.

I agree with some of the directional stuff of moving the company towards smaller groups, moving it towards ephemerality, moving it towards encryption. End-to-end encryption gives you the ability to mathematically guarantee people’s privacy. That allows you to put data out of your reach, and out of the reach of your advertising team and anybody who wants to use it.

The thing that has to happen with that is that they have to do some fundamental rethinking of how they do safety in this situation, which, the problem is, there’s a future in which Facebook encrypts everything, everything moves to small groups. A lot of these problems go away from the press, because the press doesn’t see it anymore, but they actually exist and the societal impact is still bad. I think that is a very seductive future for the company, and the company has to resist that. They have to work on ...

He’s doing that already.

I believe in encryption and privacy, but we have to balance the privacy responsibility with the safety responsibility. There is a lot of good technical work we can do to do better on both of those. I’d like to see them spending the next year doing that.

Anything else?

No, great to be here, thanks for ...

No more “I’m Sorry” tours, right? You wouldn’t stop, say ...

Well, the other big thing is, I think they just need to be honest about what they can and can’t do. One of the big problems of the company is that they make these content decisions based upon external pressure, trying to react to immediate issues, and they don’t base them upon a constitution of, “Why does Facebook do certain things?” The truth is, there should be limits to the company’s power. I think it would be better for the company to say there are certain things we are not going to do, even if we get yelled at a lot by the New York Times, and explain to the media ...

Or Ted Cruz.

Or Ted Cruz, right, exactly.

Or Donald Trump.

What’s happened is they’ve vacillated back and forth and, to use a recent metaphor, there’s a little bit of a James Harden action going on here, of working the refs. They have indicated, as the refs, that they can be worked. They will give you the call if you flop, so everybody’s flopping now. That’s not good for democracy, for these decisions to be made in a secret conference room, based upon external pressure.

It needs to be made with a public discussion based upon really core fundamental values, of what they want to do. I don’t understand what those are. I worked there. I don’t understand what is the goal of Facebook’s content moderation. Is it to keep people safe? Is it to help make the product better for society? Is it to help community? They haven’t explained what is the fundamental goal, and they haven’t said this is the limit that we will not go past. That is a critical thing that they need.

I’m going to give you a pro tip: Maybe they haven’t thought about it.

Yeah? It’s possible.

Anyway, thank you, Alex Stamos.

