
Full transcript: LinkedIn co-founder and Greylock partner Reid Hoffman on Recode Decode

His Decency Pledge is just a first step to improving online interactions.

Reid Hoffman | Kimberly White/Getty Images for The New York Times

On this bonus episode of Recode Decode, host Kara Swisher interviewed Reid Hoffman onstage at the Never Is Now event in San Francisco. The two discussed, among other things, how social media biggies like Facebook, Twitter and Reddit could improve their platforms and whether augmented reality can trigger empathy.

You can read some of the highlights here, or listen to the entire interview in the audio player below. We’ve also provided a lightly edited complete transcript of their conversation.

If you like this, be sure to subscribe to Recode Decode on Apple Podcasts, Spotify, Pocket Casts, Overcast or wherever you listen to podcasts.


Kara Swisher: Recode Radio presents Recode Decode, coming to you from the Vox Media podcast network.

Hi, I’m Kara Swisher, executive editor at Recode. You may know me as the author of a sequel to Reid Hoffman’s “Decency Pledge,” which I call the “Try Not to Be Awful for One Day, Dudes, Pledge.” But in my spare time, I talk tech and you’re listening to Recode Decode, a podcast about tech and media’s key players, big ideas and how they’re changing the world we live in. You’ll find more episodes of Recode Decode on Apple Podcasts, Spotify, Google Play Music, or wherever you listen to your podcasts, or just visit Recode.net/podcasts.

Today we’re going to play an interview I conducted with Reid Hoffman, the co-founder of LinkedIn and general partner at Greylock Partners. He also hosts his own podcast, Masters of Scale. We spoke at Never Is Now, an event organized by the Anti-Defamation League in San Francisco in mid-November. Let’s take a listen.

So, Reid and I have literally done this 426 times. It’s like a little act we have. We’re going to do it here for you today, and we’re going to hope that you get maximum information about what’s going on. It’s not like pro-wrestling, I’m not going to actually really hit him, but I’m going to hit him pretty hard on some issues because I’ve been really furious lately about Silicon Valley and its lack of responsibility, and its tarnishing of the American democratic system.

Reid Hoffman: And that’s the kind statement, by the way.

Yeah. What have they done? I think I want to start with that. You and I have talked about this. I’m not going to blame you, because LinkedIn is not the font of the social media cesspool these days.

Or ever, actually. Not just these days.

I do get a lot of emails, but it is just vaguely annoying and not ruinous to our American system. So let’s talk about that, because you’re sort of seen as the ... I don’t want to say Godfather, but you really are the one person everybody likes in Silicon Valley, and nobody likes each other here. Let’s talk about the idea of what the responsibility of Silicon Valley is in this system.

So, I’m going to break the answer into two parts. The first part is essentially a light plea for some understanding and compassion, which is, a bunch of these folks that built these systems didn’t actually think about hostile attacks on them, didn’t actually think about what Russians might do in order to hack in, or try in their own minds to figure out, okay, how are we as democratic and inclusive as possible, and how do we build the algorithms that way? I don’t think there is any malintent. I think that’s an important thing, and I know all the folks, so I think I can assert that with some vigor.

However, as you begin to get big and as you begin to be having a very strong influence, your responsibility ... With power comes responsibility. With great power comes great responsibility.

The Spider-Man ethos of Silicon Valley.

Actually, simple and super important. Even if you kind of say, lots of things contributed to negative results, Fox News, etc., etc. There was a contribution, both to the kind of malformed political system, the turbulence we’re having, and also of course one of our topics will be, kind of like internet trolls and hate speech. I think what’s important is to start thinking proactively about what to do. I think one could argue that they’re doing it a little too slowly, although as long as it’s resolute and you’re fixing it, I think that’s important.

I think one of the things ... And so people are talking about, can AI technology help with this stuff? How can you do multiple standards across multiple countries? What is the degree to which you say you should block speech? And other things.

One thing I would say, as a way of adding to this discussion, is an idea that I’ve been kicking around for the last couple of months, which is, well, what if we could generate a way of saying, give us a report, along with whatever reporting, diversity metrics, other kinds of things, about what you’re trying to do to promote positive speech? What’s the way that you’re trying to create these engagement loops? As opposed to saying, “Whatever’s engaging, that’s what we’re going to do.” So if it’s a burning person falling out of a building, well, that’s what it’s all going to be. It’s like, no. How is it you’re actually creating positive engagement loops, and what are the things that you’re doing on that?

You’re appealing to people’s better natures.

Exactly.

Presumably.

It isn’t just block really terrible stuff, it’s promote good stuff.

I get that. I’ve seen Mark’s 9,000-word essay on this issue but ... I stayed up all night reading it. Riveting. But let us begin. You said, “They didn’t mean to do it.” Let’s pick on Facebook because you were there in the early days, and we’ll get to the cesspool that is Twitter in a minute, but the early days, when this was happening they didn’t mean it. It was move fast ...

Move fast and break things.

Break things.

Yes.

I think they’ve broken a lot of things so let’s talk about that. What in that ethos didn’t anticipate any of human nature?

To be precise, the break things that Mark and other folks were referring to was wrong code, and that sort of thing. Part of that’s because what they realized was the company that was going to be able to set the social network, social media standards in these things, was the one that moved fastest to grow, and it’s the nature of the society that we live in that has competition between companies, including between these tech companies, and so he was articulating a competitive strategy, which was how to win, and he turned out to be right.

He got to be the biggest.

To be the biggest, and turned out to be right.

But in those early days, was there discussion about where it could lead? Or just not at all?

To some degree in the early days with technologists, what it is is a little bit like a Rorschach test. If you’re an optimist, you see optimism. If you’re a pessimist, you see pessimism, and so forth. Zuck’s an optimist, so the kinds of things in the early days ... Talking about Zuck, look, we can connect people and they can share experiences. They can get to know people, not just ... They can have a greater neighborhood than the neighborhood they’re in. Even revivify their neighborhood. All of these things bringing in the human fabric of life, of seeing other people as people. Seeing pictures of them on holiday, or with their kids and that kind of stuff, that was the stuff that was being discussed. It’s all the positive use cases.

So cat videos to the end. They just thought that. Cat videos.

Yeah, they thought cat videos. Great. Adds a little joy to your life, right?

But there was nobody in the room that went, “Ah, skinning a cat videos, maybe.”

No.

I’m saying, but who was in the room doing it? Was there anybody thinking that?

I think all of that has been retroactive. Has been, “Oh shit, people are putting up skinning cat videos, we should do something about that.” Right?

Right.

A little bit of the reason why I opened this set of commentary with a ... Look, it’s not unreasonable for them to have said, “Right now we’re just this little tiny startup, starting at colleges, going to locations, and we’re trying to weave our way to a place where companies like Google, and Yahoo at the time, were trying to build versions and then squash us, so when we’re trying to work through that we didn’t think this was the top priority.”

But it is equally important now that we say, “Now that you have a massive number of daily active users —” I don’t know what the current number is, 1.5 billion or something “— now that you have that, you now have responsibility and you need to lean into that and you need to figure out how do you help solve this problem.”

I think they ... You know, look, people are getting grumpy with them for not having done it more proactively sooner, but I think the fact that they are responding to it, I think that speaks to good character.

I want to get to the solutions of that but I want to talk a little bit more about the ethos, because I think it goes hand in glove with this free speech ... Two things, one, it’s a lack of diversity in Silicon Valley, so most of these people never had this happen to them, essentially, and they’re surprised by it.

Fair.

I’m like, my middle name is Bitch, so that’s what it is on social media, or something like that. So women understand it, marginalized groups understand it. People who are subject to anti-Semitism, everyone gets it except for the people designing it. That’s one part.

The second part is this free speech movement. I want you to talk about each of those. It’s like the lack of diversity, and then also this idea that people’s ... There is an ethos to Reddit, you always end up arguing with them that people should be able to say what they want. You argued this earlier, let’s start with that one. You were giving me a story backstage about being on the firing line.

So I was reflecting to Kara that my very first substantive television experience was being on William F. Buckley Jr.’s “Firing Line,” which was a little strange, for those of you who remember it. The reason I was on was because they were hunting around for a technologist who would defend the thesis that the government can and should regulate some speech online. I said, “I’ll defend that position.”

Because one of the things I find people being fairly idiotic about is, they don’t realize we regulate speech all the time, and the simplest argument is truth in advertising. It’s like, no, we’re actually in fact ... Well, then you say, “We’re going to hold people accountable for falsities in advertising.” That’s a regulation of speech.

The real question is where do you set the line in order to have the broadest possible range for good, intellectual and political diversity in discussion, and when does that cross the line for bad things. Obviously, for example, speech that incites violence is always considered to be beyond the line too. This particular thing, everybody wanted to argue about regulating obscenity, which is obviously a tar pit and one that you should stay out of, but one does regulate speech.

I think one of the key things about this is to say, what are the ways that we can improve what, in the current parlance, are terms of service? Because we’re all private businesses and you can say, “Look, if this is your business, you can take your business elsewhere.” That happens in hotels, happens in online stuff, and say, “Look ...” Articulate something around that. We are actually in fact ... This is the way that we articulate our opposition to hate speech and hatred. We enable discussion but we don’t enable, essentially, oppression or violence — which of course they all do.

You might even change that somewhat, because you can when you’re huge, by country, or by country group. And then the other thing I think is important when you look at the stuff is to say ... One of the things I think, the underlying thing that causes some weird navigation for these companies, is the DMCA, because it’s super important when you say, we’re going to create a broad medium in which lots and lots of people can participate. Which I think can have some great positive outcomes, you need to shape it the right way, that you then not need to be easily sued for copyright material, and that you have a proactive obligation for stopping that copyright material, and that kind of thing.

That’s the reason they all adhere to the DMCA very strictly, to make that happen, but then you need to make sure the DMCA gives them some exceptions to say, we are doing some editorial along, for example, anti-hate speech reasons, or that kind of things, and we could do that.

[ad]

The ethos in the Valley, do you think it has shifted? When the recent Russian thing happened, I don’t think you were surprised, and neither am I, because they let other things go. The bullying went on forever. The hate speech went on and on and on. The abuse of the platforms by the alt-right went on and on and on. Real abuse. Just backstage I was showing Jonathan, when you search for ADL on YouTube, you get white supremacist stuff in the first 10 results, and you finally get Jonathan on MSNBC at No. 10. You don’t get anything about the ADL, at all.

I know this sounds crazy, but YouTube is owned by this big company called Google that is really good at search.

Yes, so I hear.

The thing is, when we went over to Google, as you noticed, and you search ADL, all the correct things came up. The white supremacist stuff, you didn’t even see it. I didn’t even find white supremacist stuff right away. Same thing if you do it on Twitter: If you search ADL, white supremacists. Facebook, I have not done that one yet, but I’m just saying, this is this idea of not messing with what’s happening on the system.

I think that’s changing, hopefully changing fast and aggressively enough. I do think that your earlier thing, which is the fact that people who are building this, many of the people who argue that Silicon Valley is a pure meritocracy, haven’t actually experienced, like myself, I haven’t experienced the kind of things you can end up with in hate speech.

I’ll be doing that tonight to you.

I’m delighted. I think that’s part of it. Now, that’s again part of the reason why I was thinking about what are the kinds of things that’s important for the industry to do? I was like, well actually in fact to show a serious commitment should be the same kind of thing that we’re doing with diversity hiring, which is to say, “Here is our published report. Here’s the things that we’re saying. Here’s the things that we’re doing.”

I don’t think the things we’re doing should simply be, “We blocked X-thousand pieces of bad content.” Great, that’s fine, but also what are you doing proactively? What are you doing to try and create more compassion, more interaction, more mutual understanding? That kind of thing, and what are the things ... I actually think creating a simple report structure, that could then be part of how companies report is actually, I think, a good idea.

(applause) Yes, absolutely. One of the things with the numbers, though: I got something from Twitter, we blocked this many, 150,000 nests or whatever. I’m like, what’s the goal? So? And? What does that mean? Where does it fit in a system? When you get to that they’re like, “We blocked them.” I’m like, “Yeah, but what’s the goal?” Where are the actual metrics that they want to get to, and what is the goal in doing it?

When they do these things, like you had something, which I have a problem with you with, with the decency pledge. I think it’s a great, nice thing to do but it’s not ...

It’s a first step.

I know, but to say, “We should be decent,” shouldn’t be sort of ...

It’s the first step.

Right, I get it. Having to say ... It often becomes mutually exclusive, where it’s, we want to promote happy speech, like good speech, because it’s better for our businesses, it’s better for humanity, everything else. But it’s always mutually exclusive. Why not stop ... Let’s talk about solutions to stop this speech, and where do companies take a stand on it? They seem to not want to be the arbiters of certain things, and maybe they have to? Maybe it’s because they’re in charge, really.

Well, so part of the ... I think there’s a couple of parts to the challenge. There’s easy hate speech that incites violence. That’s just a, you should have zero tolerance, completely get rid of it. Then there’s other kinds of hate speech that’s essentially aimed at suppressing people, and that you should also get rid of. Now it gets blurrier, because of your ability to fully track that, and distinguish between that and political discourse, and you know, like when people post something about, here I’m trying to argue why hate speech is a problem and I’m posting this thing. Then you look at my thing and you’d block it as hate speech. All of that sort of stuff.

That was the reason I was going to kind of a report mechanism, as I agree with you, with a stated goal saying, these are the things we’re trying to get to, and this is how we’re iterating in that direction, is I think, a good tool to think about in this arena. I don’t think it’s possible to get to ... Human beings are messy. People do say insulting and inflammatory things, and that will happen within a democracy and politics and so forth.

However, what we should try to do is say ... And this is part of the reason I was saying these are private businesses, they can say, “Look if you want to do this kind of, for example, racist hate speech, do it somewhere else. You can create your own social network for that.”

Right, so why don’t they do that?

I think ...

Let me make a business argument: It’s like broken glass in the suburbs. You cannot go on Twitter now and not feel bad immediately. It’s a really interesting thing. I think their business is — and that’s just reading Donald Trump’s tweets — but it’s really interesting because it ruins their business. It becomes a thing where you don’t feel good about it, you don’t feel ...

Do you think about it that way? I mean, you’re on a lot of these boards, they consult with you. Do they think about, wow, this could really hurt.

I’m not an insider in these conversations at Facebook or Twitter. I’m on neither board, just to be clear. I am friends with ...

You can include Reddit in there, and some of the others.

I’m not on the Reddit board. I know Steve, and one of the things that’s interesting, Steve ... My own preference in these areas tends to be real identity as part of the network platform, and Steve’s actually taught me a bunch about, well, there are places where anonymity is useful in terms of being able to share certain types of experiences and so forth. Those are really precious, and that happens on Reddit.

I was like, that’s interesting. I’m such a proponent of real identities that I’m kind of more of a student there. I think the short answer is, now what’s happening is people are realizing that the previous thing was, how do we establish this new medium, and now it’s, how do we make it healthy? I know enough of the folks to know that they’re at least thinking about it intensely. Are they moving as fast as we’d like? Maybe not. I don’t know, I’m not an insider on it.

Let’s talk about making it healthy, because right now ... Let me ask you just a basic question. Do you believe social media deserves the kind of attacks they’re getting for amplifying the politics of destruction that are going on, the divisiveness in our country?

Yes, as part of a dynamic process for improving it.

I’m sorry. I don’t even understand three of those words.

Look, the short answer is, there’s a lot of anti-technologist folks who go, “Look at that bad technology thing. That should be stopped.” That’s not my approach. I actually am enough of a techno-optimist that I think you can shape it in a way that’s very positive.

The reason I’m positive on the attacks is I want to create a dynamic of, “Oh shit, this is really important.” Okay, we’re going to do something about it. We’re going to start making product managers whose metrics are the right metrics in this. We’re going to start reporting it to the world, about what we’re thinking about doing, and what we’re doing, and who we are, and what we’re about. I think that will be the end result of the criticism.

Can they shift their business plans? Because their business plans are about engagement, really. A lot of people have been talking about this issue of the addiction of it, the engagement of it. Someone called it the slot machine of attention. What causes attention and pulls people in is negative emotions, essentially. That’s the best way to get people in.

Or designing. They have dozens and dozens of people at Facebook making you push a button. That kind of thing. Why did you push that button? Is there a rethinking, an overall rethinking of their business plans? Is it possible, given how much money they make from all this?

Well, I think that’s again something where you say, what are the other metrics that should be added in and maximized too? Obviously they will focus on maximizing attention metrics for advertising. By the way, this whole discourse was 30 years ago with television. It’s 40 ... you know Marshall McLuhan and Neil Postman, and so forth. It’s not new to say ...

It’s not the same. It’s amplified to the extreme at this point. Not everybody had a television network.

But we went to a highly televised society.

Right.

It also influenced political elections.

Absolutely.

The Kennedy-Nixon debates. I don’t mean to state the parallels to discount the importance of working on it. I mean to simply say that we can figure out a good way of shifting the medium in a real but kind of focused change because, among other things, we all get addicted in various weird ways, but our kids growing up learn the antibodies. Just like with television.

I don’t think it’s as big an issue as the drum roll of “this is the end of the universe.” I do think that it’s an important thing to make modifications to.

Okay, so talk about some of those modifications, because then I want to talk about where technology’s going because AR and VR will also, could either amplify it, or make people more empathetic, and we’ll talk about that.

So talk about some of the solutions, in the here and now, what they should be doing. They’re under siege over the Russia stuff. They made their systems porous, they do still think of themselves as benign platforms. I know, Reid, you don’t think this but I hear it all the time: “We’re doing good for the world.” Same with AI: “It’s all good,” and they don’t have a sense of ... They’re sort of offended when you say, “Maybe you have some responsibility in this.” Or they feel hurt, mostly it’s hurt, sort of like delicate flower mentality.

You have to go ... Do you imagine they understand this now, and how do you get ... What are some of the solutions you think Facebook should do, Twitter should do, Reddit should do? I’m just picking three of the most important ones.

I think the reason that they respond that way is because they go ... You hear the, “You terrible person, you’re responsible for this terrible, shitty election result.” And they go ...

Also, you’re billionaires.

Yes. Probably that, too.

Stop. Suck it up.

And that kind of attack does engage natural defensive reactions. If I was attacked that way, I’d have a defensive reaction at first blush as well. I think the right way to actually have the conversation is to say, “Look, you’re super powerful. You have responsibilities for increasing the health of society. There are good things that you’re doing. We think that there is good in having people more in touch with extended family and friends, and what’s going on with their lives and so forth, and it helps rebuild communities. We love all that stuff, it’s awesome. But you also need to figure out what to do about the fact that the natural thing is to see the burning buildings and other negative influences.”

Let’s go to each of the platforms. What is the immediate thing they could do. Start with Facebook, two immediate things they could do?

Well, so like I’m not an insider.

You as a user of it.

Well, so I think ... Me, as a user, I don’t think they’re going to do this, but I would actually love to see kind of like counterpoints: Here’s the bubble you’re in, and here are some highly published things that are outside the bubble. Just to get a chance to see them, because I’d love to see more building of bridges across diversity and discourse lines in various ways, and I’d like to see something along that line.

I’m not saying I wouldn’t see it. I just think it’s such an oddly shaped feature, I don’t know if that would be the thing to do. I think the kinds of things they will be doing is saying, “Okay, how do we ...”

Here’s something I think they could do, which is, okay, if you take, say, vices, like anger ... One of the things I say, partially as a joke, when I’m talking to MBA students is that I invest in one or more of the seven deadly sins as part of investing in the consumer internet.

Your mom must be proud of that.

Yes, very. And so people ... Because people respond to negative emotions like this, and they respond to positive emotions like this, or some equivalent of that. Well, what I’d really love to see is, say, which positive emotions and which kinds of amplifications of that can we do to create an overall mix. Study the results of things we have and say, look, we’re just putting our pinky on the scale to rebalance towards more of those because that’s more of the society we want to be, that’s more of the community we want. That would be the kind of thing I would actually really like them to do.

All right, for me, for Facebook, it’s that they get human editors and stop firing them, because that would be good.

You know their volume is huge, right?

They could still. They didn’t have to fire all of them. That kind of thing, they have some human element to this because AI is certainly not going to ...

I get it. I get there are billions and billions of transactions. But you know, LinkedIn agrees, because we do actually have this.

Yes, exactly. You can do it. There’s a human element, that’s one thing that they really ... And to stop pretending they’re not a media company because they’re a media company no matter how you slice it.

[ad]

Twitter.

Harder.

Big long sigh. I don’t mean the business plan, because that’s a whole nother ball of wax.

It would be funny if you could do something that started with the President’s account. Right? Since, as a role model, it may be ...

Don’t you wish Jack would wake up one morning and go, “You know what? He’s off.” Just, “I’m going to take the ...” It would be interesting.

Yes.

He’s crossed the line many times.

In my own politics I agree with that. It is important ...

It’s not politics. Actually, different people are judged at different levels, whether they get kicked off Twitter.

Maybe you should put a not safe for work brand on it, or something.

What would you do at Twitter? Two things.

Well, I think what I would do is ... I think the primary thing that’s interesting is that I think that Twitter is hacked a bunch to hack the media because journalists pay a lot more attention to Twitter. It’s the medium that most are actually on, and that’s part of the indirect thing, so I’d try to figure out how to essentially slow down or stop that.

I think I’d get much more aggressive on the anti-bot stuff, and try to figure out how to, for example, is there a possibility that the tech companies can work together on identifying bots and say, “We’re just going to get rid of those.”

Yep.

I think also, a version of what I was saying, which is say, look, the natural thing ... It’s the same thing on television, you show the fire, you show the explosion, you show the catastrophe because we as humans respond to that more en masse. You say, look, how do you promote more positive connection? How do you promote more, this was a great story of heroism, or of compassion ...

And then promote it further.

And promote it further, put balance on the scale.

Do you think they will do that? How do you assess? Because they’re influential even if they’re batshit crazy over there. They are!

I think that — and this is in part an answer to your earlier questions — one of the reasons why I actually appreciate the criticism that the industry is getting is that I think, with a continued presence of that criticism, they will realize that their better outcome is to do that over time. They may be already doing it. Like I say, I’m not an insider in the conversations. I may be saying something that’s already in motion.

To me, immediately for Twitter: rules that are consistent, not haphazard or that seem haphazard, and probably a full-time CEO. I’m sorry, like, you know what I mean. Someone who’s just dedicated, given the importance that it has or the attention that it gets. As to the journalists watching it, there are a lot more than journalists, everybody’s on it. Every reaction to Charlottesville was on Twitter. Everybody reacted, which I think is interesting, and when you have all these people paying attention, and they are, it has an outsized responsibility, I think, in a lot of ways.

And the bots, I think you’re 100 percent right. That’s a really ... A friend of mine, a New York Times reporter, was arguing with a bot. I finally had to text him and say, “You’re arguing with a bot. Stop! Stand down. It’s someone in Russia, I don’t know who it is, just stop. Who knows? Stop, don’t argue with the bot.”

Might be a cyborg.

So Reddit, last one, and then we’re going to talk about the future. They’ve done a lot more.

Sorry?

Seem to have done a lot more.

Yes. No, I think Steve is actually attentive to these issues because he’s actually one of the folks in the last few months that I’ve actually had the opportunity to sit down and talk with about it.

Steve Huffman, the CEO.

And like I said, I was initially like this anonymity thing, this has to go. He’s like, “No, no. It’s really important for these things.” I was like, “Okay. I hear you and I get it. Sharing experiences about I’m not out and I want to talk to other people about it, that can be super important things to have.”

It’s the general tool that I was thinking about for the whole industry, so again it’s not a new thing for Reddit, but again it’s kind of like, here is the report card about how we’re trying to make people more empathetic, more compassionate in terms of their own actions, and seeing diverse points of view and so forth. Here’s our report card, and here’s how we’re improving it as we go. That’s roughly what I’d like to see, and then that would probably be implemented in a different way. For Reddit it could be, for example, the ratio of positive-sentiment conversations to negative-sentiment conversations. It could be something like that.

Yeah.

They would have to decide, themselves, this is who we are, this is what we’re about and these are the metrics that we’re tracking, and we’re being open about these metrics so that you guys can help hold us accountable, and then the discussion turns to, are we having the right impact on society.

And presumably having a more diverse group of people working there might be nice.

Yes. And that’s part of the reason I look at it as: Every tech company of any size should be doing the diversity measurements and reports, and saying, look, these are our goals. This is what we’re improving.

One quick question on diversity, because it’s not why we’re here today but it does give you points of view. Why has it continued to be so astonishingly white straight guy? Boring white straight guy (laughter).

Boring with a capital B, yes.

Not you, you’re riveting.

Or riveted. So I think part of it ... but I think that’s one of the reasons we have to put our hands on the balance and shift it. What happens is, it starts with some ideas from people who are kind of naturally friends. They hire the people they know. They get in this world where they’re in a tornado, which I call blitzscaling, and they tend to hire the people they know in terms of doing that. That tends to create ...

They just think, culture is everything, we don’t need this diversity. No, in actual fact you need to create that culture from the beginning. So I think that the kind of thing is to make sure that you have that kind of playbook. I think one of the good things that is coming about is most of the VCs that I know and like working with — not all of them, the species has some problems — are now asking, what are you doing in diversity in your company? What are you doing in diversity on your board?

Making it part of the investment?

Yes, and making it part of the conversation from the very beginning. It doesn’t mean that you have to, by month three you have to have done something, but it’s just like, for example, when we do partner recruiting, every time we talk about it, we talk about how are we expanding our diversity in terms of doing that. That’s always present in the conversation. I think that, at least, puts you on the right path.

Do you think it’s gotten to ... You know, a lot of ... I’ve always noticed, when we talk about these things, when there’s, especially boards, where you can get plenty of diverse choices, that they only use the word standards when it comes to women and people of color. Only.

Really?

Really, they don’t. The word standards never comes up in other ... You remember when Twitter had the same 10 white guys, the same names, they all had the same names, and they of course drove it right into a wall. I was like, where were the standards for these idiots? You know what I mean? It was interesting, but they often say, “Well, we have standards.” And stuff like that.

Oh, so they use standards as a disqualifier.

Yes. Yes.

Oh god.

Because we don’t want to just have a woman. I’m like, why not?

Well look, it’s idiotsville to say there are not women to meet the standards.

Right, exactly. I think what’s interesting, and I’ve always said this, is that they think it’s a meritocracy, which they go on and on about. It’s a mirror-tocracy.

That’s a good line. I may borrow that line, if that’s okay.

You may borrow it.

Last thing we’re going to talk about is the future. So a lot of the technologies that are coming out really are immersive. If you think it’s already immersive, the stuff that’s coming out, especially around VR and AR — and I’m going to move those together because I think they kind of are together. Someone was talking about the idea of having someone in the alt-right watch what it’s like to be persecuted, or be on the other end of it, or a cop with a head thing on experiencing the feelings of a poor black kid being pursued, something like that.

I don’t think you can VR your way into empathy in any way, and you can’t have the life experiences of that person so you can feel what it’s like all your life, anyone who’s been persecuted in any way. But do you imagine that there are some ways we can use VR, AR, a lot of these new technologies coming, or until we get to the shot, where we have the empathy shot?

I think we can certainly do it within at least the corporate context because, for example, one of the things that — and I may not have this exactly right — but one of the examples I heard that was really interesting is that part of the New York Police Department training, in induction and early training, is they have two of the white officers kind of pushing two of the black officers up against the wall with guns. They say, “What’s that?” “That’s an arrest.” And then they reverse it, and they say, “It’s a robbery.” And they use that in order to try to explode these biases and really get a sense of understanding what’s going on. See people for who they are. Understand that you have biases that you need to correct against.

I could see, actually, both VR and AR being very helpful in that kind of induction. Like, for example, right now anti-sexual harassment training is super important, but you could imagine it’d be a whole lot better if it actually hit you more emotionally, and you understood what the other side felt like, because part of the problem with white, straight guys is they usually go, “Wow, is that insulting?” You’re like, “Yes!” And so getting ...

Sometimes every man in Silicon Valley feels like my 15-year-old son, but go ahead.

Well, we’re all trying to grow up at some point.

Yeah, hurry up.

Fair enough. But part of, I think, the hope in these technologies is more of a visceral experience, because once you begin to realize how it’s heard, how it actually feels like disrespect, how it actually feels like assault, and that’s actually how it feels, then that’s the beginning by which you can begin to understand that stuff. For me, my growing up from being a teenager was at Stanford. They had all kinds of classes on this, and I went to some of these classes and it was literally eye-opening. I was like, “Oh God, people do that?” And of course you hear it ...

That was actually important. I think we need that broader within society, and one of the things about technology is that it has a lot of scale.

Is there any technology you think that could solve this problem? You’re the big technologist.

Solve, within the messiness of human beings, no. Improve, yes.

Unless AI and the machines take over and just kill us all, right?

I’m not sure that’s an improvement.

It could be.

I think, very unlikely. I think part of the whole thing within AI in tech is to try and say, what is the design goal that’s essentially symbiotic? For example, one of the things that’s interesting when you begin thinking about, for example, AI tutors or AI work, is how does that try to improve compassion and empathy? Because we know that’s our better selves, and that’s the thing we want the whole society to head towards.

What we need to do is, as we create these high-powered technologies, figure out how our interaction with them helps us bring out our better selves. I think that should be part of the design goal for how we’re looking at it, and that’s part of the reason why I started thinking about, okay, what should the companies be reporting on? It was like, okay, do I actually have a good theory of improving compassion and empathy, and how am I tracking against that? That would actually, in fact, be a good contribution.

All right. I wish Reid Hoffman was running every Silicon Valley company. Unfortunately he’s not. Thank you so much.

Thank you.


This article originally appeared on Recode.net.