
Full transcript: Dave Patterson and John Hennessy on Recode Decode

The pair won the 2017 Turing Award for revolutionizing computer processing by developing RISC.


A man looks at his laptop screen; the laptop lid is covered in stickers, including for Facebook and Google. Justin Sullivan / Getty

On this episode of Recode Decode, hosted by Kara Swisher, Alphabet chairman John Hennessy and Google distinguished engineer Dave Patterson talk about winning the 2017 Turing Award, a prestigious achievement in computer science. In the 1980s, Hennessy and Patterson developed a revolutionary new type of computer processor called RISC, which allowed computers to run faster and more efficiently — a breakthrough that became especially important in the current era of mobile devices.

You can read a write-up of the interview here or listen to the whole thing in the audio player above. Below, we’ve also provided a lightly edited complete transcript of their conversation.

If you like this, be sure to subscribe to Recode Decode on Apple Podcasts, Spotify, Pocket Casts, Overcast or wherever you listen to podcasts.

Kara Swisher: Hi, I’m Kara Swisher, executive editor of Recode. You may know me as the winner of the Swisher Award for Excellence in Podcasting, but in my spare time I talk tech, and you’re listening to Recode Decode for the Vox Media podcast network.

Today in the red chair we’ve got some very big brains. I’m a little bit nervous. John Hennessy and Dave Patterson, the winners of the 2017 Turing Award, which is essentially the Nobel Prize for computer science. They can tell me if that’s different. They won the award for developing RISC, a technology that revolutionized computer processing. John is also the former president of Stanford University, he was on the stage at AllThingsD many years ago, and the new chairman of Google’s parent company, Alphabet, which is a big job. We’ve lots to talk about there, I guess. And Dave is a former professor of computer science at the University of California at Berkeley and today is a distinguished engineer at Google, also. John and Dave, welcome to Recode Decode.

Dave Patterson: Thanks, Kara.

John Hennessy: Thanks.

So, I do, I am nervous about interviewing you because usually I can make jokes and I do know more than many of the people who have been in this.

DP: We’ll make jokes if you’d like.

All right. Okay, so why don’t we start by talking about your backgrounds. Because you asked, when you got here, if we have a really geek audience. We do, but we try to be discernible to lots of bigger — we’re trying to go for a bigger audience. And I was joking with these two that I had Anthony Scaramucci recently, which they can’t believe they’re in the same association with him, and neither can I. But here we are!

So, let’s talk a little bit about your backgrounds. Why don’t we start with you, Dave, and then we’ll talk about the book, the groundbreaking book you wrote. I ran into it today, I was at a cybersecurity thing and everybody talked about your book from, what, 20-some years ago? But start with you, your background.

DP: I’m the first of my family to graduate from college, got all my degrees at UCLA and spent all of my life at UC Berkeley, so all’s I know is giant, large, public universities.

Mm-hmm, and ...?

DP: And? Okay.

I know you have more of a bio than that!

DP: Okay, I ended up working in computer hardware. You know, Berkeley wanted to expand to computer hardware, and so they hired me and some other guys 40 years ago.

And why was that? Why did you decide to do that?

DP: Well, Berkeley was trying to, you know, grow its computer science department. They were really great in what’s called computer science theory and in programming languages, and they wanted to branch out into other areas, and the first one was hardware, and then later they branched out into AI.

And why did you, why were you interested in that, what was it, some 40 years ago?

DP: You know, Berkeley’s a great place. I wanted to try being a professor. I wanted to see, I was the oldest of my family. I have three younger brothers and sisters, and kind of, we’d sit around the dining room table doing homework and I’d end up teaching the material. So I enjoyed teaching and I wanted to see if I could both do teaching and research at a place like Berkeley.

And why computer hardware?

DP: Well, that’s actually the interesting question. What I did for my dissertation was kind of half software and half hardware, so when I went on the interview market, there were places that considered me a software person and places that considered me a hardware person. Berkeley wanted to get into hardware, so I said, “Okay, I’ll work on that.”

And what, tell me what that was at the time, when you were starting there.

DP: Well, as John and I will probably say, you know the center of the computing universe was the East Coast. The two main places were IBM in New York and Digital Equipment Corporation, DEC, in Boston. So when John and I wanted to go try to influence the computer industry, we had to get in an airplane and fly, to go there.

I covered the decline of DEC, just as it was declining, but it was still a force for a long time when I first took ...

DP: Yeah, I’d say the most shocking thing in my technical career was when this tremendous engineering organization got bought by a third-rate PC clone company.

Yeah, who was it? Who bought it?

JH: Compaq.

Compaq! That’s right.

DP: And it’s just like, that’s not the way the world’s supposed to work.

Yeah, a lot of things got bought like that. John, talk about your background.

JH: So I grew up on the East Coast, got my college degrees there, had the good fortune to stumble on my PhD thesis, which involved programming microprocessors. In those days, people can’t think back that far, but microprocessors were used for laboratory control.


JH: There were no personal computers, nothing like that. So I got involved in building a programming language to program microprocessors for real-time control applications. Turned into an interesting area, started interviewing. Stanford was the 14th school I interviewed at.


JH: And so I interviewed everywhere, from Wisconsin and Iowa and Illinois, all the way out to ...

DP: Did you interview at Colorado State?

JH: I did! Because Colorado State had a big group working on real-time control, and so it happened to be a hot area. They were one of the first places I interviewed.

DP: I went there in January, when all of us interviewed at the same time.

JH: So the good news is, I interviewed at Stanford in March and we were having a drought, so the weather was beautiful. It was sunny. I flew back to the East Coast. It was sleeting at JFK when we landed. I looked at my wife and said, “If I get that job in California,” and she said, “We’re going.”

“We’re going!” Right, right.

JH: And so I came. So I came and, um...

DP: Well, I can tell you, I should tell you my Berkeley story. I’m there because of my wife. She grew up in Northern California and was a kid when I met her in high school, and I interviewed at a bunch of places but hadn’t heard from Berkeley, and she says, “You’ve got to call Berkeley to find out.” So, she made me call, as a graduate student, the chair of the department at Berkeley.

Yeah. Good for your wife!

DP: As a grad student it was just humiliating and he said, “Okay, Dave, I’ve got your thing here. Well, you’re in the top 10, but not the top five.” As a grad student, I thought, “Oh, phew, that wasn’t as bad as I thought!” But, it turns out he said that to anybody who called.

Oh, perfect! So you ...

DP: But he took up my resume and said, “Huh!” You know, I think they made an offer to somebody else who turned them down, and he says, “Huh, this guy,” and then he handed it to somebody who was coming to Southern California and we hit it off, and so I got a job there.

See, that’s how life works.

DP: If my wife hadn’t forced me to call ...

You wouldn’t be there.

DP: I wouldn’t be there.

JH: Colorado State.

DP: Yeah, Colorado State.

You would have been at Colorado State! You would have stopped at Colorado. So, you decided to come out to Stanford ...

JH: Yeah, so I came out to Stanford, I mean, people ...

Was it a big place to come at the time? Because now, obviously ...

JH: Yeah, it was very strong. It was a top computer science department, but again, like Dave mentioned ...

It was on the East Coast.

JH: It was kind of more theory-oriented and a strong AI group as well. The Valley was almost nothing. I mean, there was very ... Intel was there.

This was when?

JH: Uh, ’77. Intel was there, but they primarily made memory chips. That was their big business. It wasn’t yet the microprocessor boom that would occur later. HP made laboratory computers, but there were essentially no major computer companies in the Valley at that time. There were still lots of farms, and where the Googleplex is today was a family farm.

Yes, I remember. That was a long time ago, but then there were all kinds of groves and fruits and ...

JH: Groves, there was an orchard still on El Camino and Sunnyvale.

Right, exactly. So coming here was a risk for both of you, correct? Speaking of “RISC”...

DP: Yeah, you know, my wife made this decision, too, because we had two kids and our siblings had houses and we didn’t, and she said, “Well look, if you go to Berkeley, can you change your mind and go into industry?” and I said, “Yes,” and she said, “Well, if we go into the industry, can you go to Berkeley?” I said, “No.” She said, “Okay, we’ll be poor but proud.”

So you come here and you were both going into, since you were saying, there was an industry here. There was Intel, there was a couple of companies, but nothing substantive.

JH: Nothing substantive, and you know, it was the early days. Microprocessors were just growing up, they were just beginning to be thought of as computers, and there were these development systems you could buy to develop hardware, to develop microprocessors, primarily for laboratory control still.

But the field was changing, and it was clear, I think, if you looked at it, that within a few years you were going to be able to build a real computer on a single chip, and that was an interesting question because I think it was the question that Dave and I both asked, which is, “How should these computers be designed?” Should we keep copying mini-computers, which is what they had been doing, or should we rethink how the computers should be designed, given this fairly dramatic change in the underlying implementation of technology?

Absolutely, and explain mini-computers. I get it. There were the large systems ...

JH: Big, big, basically racks of hardware designed using a technique called bit-slice, so one chip might implement four bits of the ALU and another chip the next four bits, and they, you know, sold for $100,000 to $1 million.

Right, and this was DEC’s business.

JH: DEC’s key space, right, the VAX-11/780, their big machine that was a big success. You know, it sold for $250,000 to $500,000. Today, maybe 1/10th as fast as the slowest laptop you would buy.

Right, right. So the concept was around this and not anything else. So where did you two meet, then? You were here, at competing universities?

DP: Yeah, we were both, in fact. And people ask where the story of RISC came from, once we hit upon the ideas of this different way to design computers, which is explainable, but ...

Go right ahead.

DP: Well, okay, well, let’s do that. So when software talks to hardware, there’s a vocabulary. You talk to it. In the mini-computer and mainframe eras, the prevailing wisdom was that you’d want these very rich vocabularies. You know, five-dollar words, polysyllabic words, and that was the right way to do it. And John and I’s idea was, “Well, in this fast-changing microprocessor, let’s do the opposite. Let’s have a very small, very simple vocabulary, monosyllabic words.”

And then the question was going to be, “How fast could we execute those words?” You can think of it as reading the words. How fast could computers read those words? Well, they have to read more words if they’re simpler, but the question was, “How many more words would they have to read?” and, “How fast could you read them?” And it turned out that with RISC, which was the reduced vocabulary, we had to read about 20 percent more words, but we could read them four times faster. So it was like a factor of three win.
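The arithmetic behind that “factor of three win” can be sketched in a few lines of Python. The 20 percent and four-times figures below are Patterson’s rough approximations from the conversation, not precise measurements:

```python
# Back-of-the-envelope for the RISC trade-off Patterson describes:
# a RISC program executes about 20 percent more instructions ("words"),
# but the hardware can execute each simple instruction about 4x faster.
instruction_ratio = 1.2   # RISC reads ~20% more words
per_word_speedup = 4.0    # ...but reads each word ~4x faster

net_speedup = per_word_speedup / instruction_ratio
print(f"net speedup: {net_speedup:.1f}x")  # roughly 3.3x, the "factor of three" win
```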

So now, talk about the implication of this. You got together and you wrote a book together. First, you were making these innovations, so talk about that process of how you worked together.

JH: Well, we started, we were running research groups, which people think, “Okay, Berkeley and Stanford are competing.”

Competing, right.

JH: But the truth is, we were both on the same side of the line, and there were a lot of people who were naysayers who didn’t believe our technology.

What was the naysayer argument?

JH: The naysayer argument varied from academic ... I think the one that was repeated most often was, “These are academic projects. When you scale them up to be real computers, all the advantages that these papers have written about will go away.”

DP: We were cherry-picking. We were just taking the easy part of the problem and exaggerating the benefits.

And it couldn’t be ...

JH: It couldn’t be transferred to industry and scaled up to be a real computer. When you put it in virtual memory or you put it in floating point, all the advantages...

DP: There’s also a philosophical argument that led to a lot of anger, which was the belief that with these bigger, richer vocabularies, the hardware would be closer to the software. So maybe all the problems we were having with software, with projects failing and filled with bugs, was because the hardware wasn’t very good, and if we just had a richer vocabulary — a bigger, richer vocabulary — software would be easier. And then these two idiots come along and say the opposite of that, and not only is that not going to help, you’re going the wrong way. You shouldn’t go backwards. So this got people angry.

Well, how angry?

DP: These were dangerous ideas that were going to destroy the computing industry.

JH: Yeah.

Why would it destroy it, though? Explain. Give people a sense of what that ...

JH: It was everything varying from, “You guys are just crazy. You’re just academics. You don’t know what you’re doing,” to, “If you start a company and develop this technology, you’re going to undermine the large computer companies.”

Which were selling these big systems.

JH: Big systems, right, and one of the reasons I think in the end that the technology was not adopted quickly was that it did pose a threat to their existing product line.

Of course it undermined them, that’s exactly what it did.

JH: We see this all the time, Kara. Companies, rather than kind of endanger their own product line, will let a startup come along and wipe them out because they’re just too nervous about the established product line.

I think that’s the expression that, I think it was Disney, when they were getting into online stuff. The CEO at the time, I think it was Bob Iger, said, “If we’re going to have our lunch eaten, we might as well eat it ourselves,” which was an unusual attitude.

DP: Yeah, I say, “Shoot yourself in the foot rather than have somebody shoot you in the gut,” right?

Yeah, that’s true. That’s a good point. Um, I don’t like any shooting.

DP: More graphic than —

JH: Yeah, I don’t want the shooting!

So when you were doing, when you got together, you were not competing even though people think you were.

DP: No, we were kind of competing.

JH: We were competing in some ways.

You were the Californians, though. You were essentially, California was where this was going on.

DP: Yeah, but we, you know, John and I, I think, are both kind of natural collaborators, and we could have, we could have decided, “Mine was the right way, his was the wrong way,” but fortunately, we were young but wise enough that it’s like, “We need more people on our side. There’s a lot of people out there who don’t think this is a good thing to do, we, you know, let’s take the, we’ll be on the same team.”

JH: There was another issue involved in selling our story about this technology, and that was we didn’t really have a firm, scientific, quantitative explanation of why we could run programs so much faster, and that made it harder to convince people. We had data, but we couldn’t give the scientific explanation why this is true.

And figuring out that explanation and getting it right, that was the beginning of really our book effort, because we saw, “Wow, there’s a much better way to design computers,” based on principles rather than on what Dave and I would call the supermarket textbook of computer architecture: “Here’s one from Column A, here’s one from Column B, here’s one from Column C.” No attempt to compare them or see what the trade-offs were.

So you write this book together, is that right? And what was, you were trying to change the idea of computer architecture, of how ...

DP: We were following in the book the same ideas we followed in our work, and we put in the title of the book “quantitative approach.” We thought you should be able to run experiments before you build anything, compare two different ways to do it, and get a number to say which one’s better, and that’s how we were doing it. We just got increasingly frustrated with the textbooks, which were still from, like, the supermarket catalog era of describing architecture.

So the actual triggering event was I could see that I was probably going to become chair of computer science in Berkeley, and we were both so naïve about administration that I thought my life would be over. I said, “Oh, my God, my life is going to end. We have to write the book right now.”

Why was that? Because you had to run this big department?

DP: Yeah, and we thought our research careers were over, all our time would go away. This is a classic kind of a faculty attitude, right, is that you’re useful ...

JH: It’s a chance to get something done before you condemn yourself.

So talk about the impact of it, because it was an enormously impactful book for, I talk to so many people, they talk, it’s like, I’m trying to think of ... an equivalent in journalism would be Strunk and White’s “Elements of Style.”

JH: Yeah, “Elements of Style.” That’s very generous.

DP: I would settle for that.

JH: So I think we tried to capture this approach. I think what, probably one of the things that surprised us is in the first year we sold as many copies to practicing engineers as we did to classroom settings, and that was a real change. In fact, Microsoft ...

To try to change their attitude.

JH: Yeah, change their attitude. Microsoft actually put it in their company store, so at the same time you ordered a pen or a pad of paper, you could get a copy of our book, and that showed that there was really a different way of thinking about it. And over the years, the book’s been translated into more than a dozen languages and used around the world, and for us it’s been a remarkable opportunity to teach students all around the world.

DP: And fortunately for me, John, despite having this little teeny job as president, would every five years or so work on the next edition. I think he did three of the six editions while he was president. And had he said the entirely reasonable thing, “I’ve got a day job, I can’t do this anymore” ...

Running Stanford, that little thing.

DP: That would have been the end of the book.

All right, we’re going to talk when we get back about developing RISC, this technology that revolutionized computer processing and still continues to. We’re here with John Hennessy and Dave Patterson, winners of the 2017 Turing Award. We’ll also explain what that is. We know who Alan Turing is, well, we’ll explain that anyway, when we get back.


We’re back with John Hennessy and Dave Patterson, the winners of the 2017 Turing Award. They’re also some pretty smart professors, from what I can understand, and actually we’ve been talking about their background and the book they wrote that was very impactful to how people rethought computer architecture, which was how ... 19 ...?

DP: 1990.

1990. So talk about developing RISC, the technology that sort of revolutionized computer processing, and then we’ll go to where it is today.

DP: Well, it got started at Berkeley, actually, in a series of graduate courses. I had done a sabbatical at DEC, where they were doing this conventional wisdom, as I mentioned earlier, about the really rich vocabularies, and it led to a bunch of bugs. In my sabbatical, I was trying to help them with the bugs that they had in their equipment, and so when I got back, you know, the microprocessor guys, as we said earlier, weren’t really experts in computers. So they were just going to imitate what the big guys did, and so I wrote this paper that said, “Okay, if the microprocessor people imitate the big guys, there’s going to be a lot of bugs and we’re going to have to figure out a way to repair it.” The paper was rejected, and the rejection was, “This is a stupid way to design microprocessors.”

Well, that kind of ... if you’re going to do it that way, it’s going to have bugs and it is stupid, so there must be a better way. So we started out with a series of four graduate courses where we kind of investigated the ideas and eventually built chips out of it, remarkably enough.

So the graduate courses were in order to figure out what to do.

DP: Well, one of the things, yeah, one of the things I did when I ... it’s unusual for an assistant professor to take a sabbatical. It was fortunate, but unusual, and so it gave me a chance to think about what can you do well in the university and what not so well? And academics don’t really have any deadlines, except for courses! Courses are absolutely going to start and stop, so I thought, “Why don’t I tie the research to the courses and then we’d have deadlines and be able to make steady progress?”

And so that’s why, that was the trick or that was the idea, that we were trying to do. And then in, I think in the first quarter or second quarter, John, we were both funded by DARPA, and that’s where the RISC name comes from. DARPA at the time funded high-risk, high-reward research, so we thought if we called it RISC, they had to fund it.

Explain what it actually stands for, John, and then, so this is how you named it?

DP: That’s where the name came from.

Yeah, all right.

JH: Reduced Instruction Set Computer. I think the notion of trying to target the instruction set for fast implementation, for efficient implementation, is probably the right word, Kara, because I think today we care as much about energy as we care about execution speed, and I think that was the key thing.

Lots of things were changing. It was a time when a lot of the computer industry was changing. We were moving away from writing in assembly language — remember, UNIX was just coming of age, the first operating system written in a high-level language as opposed to assembly language — and of course, that influenced our thinking as well. And I began the same way Dave did, with a brainstorming class of graduate students, just to say ...

What should we do if we ...?

JH: Exactly. Clean slate, clean slate.

How hard is that, though, when you, you know, you’re taught in a certain way? In any discipline, in any academic [setting], you have a class ...

DP: You know, we were young.

JH: Graduate students are completely open. They don’t have all the inhibitions we might have.

DP: They don’t have a history of failures, right? They don’t know all the times it didn’t work, and we were young and optimistic. We thought if our ideas were solid, why not, right?

And so what happened with this? You did these graduate programs, coming up with the green field approach, or clean slate, or however you want to phrase it. What were you going to ... what did you think it would lead to, the new processing?

JH: I actually thought we would publish our papers, people would read them. The data was pretty good. They’d say, “Ah, we should do this.” And that didn’t happen.

DP: That didn’t happen. One of the things that happened is, because it was so controversial, there were a series of debates that John and I participated in, from coast to coast. And I think, I think I remember John saying at the time, by the third debate, I think people thought, “Okay, there’s some ideas here.” Maybe because we, John actually wrote the paper that had the scientific explanation. I think maybe by then we had it? Maybe not.

JH: No, I think even later.

DP: It was even later than that. Okay.

JH: So I think one of the things that happened, for example, was ...

So you start a debate.

JH: Digital Equipment Corporation actually had a West Coast lab at that point. Some of those people had worked on our project and picked up the ideas, but they, in turn, couldn’t get the East Coast guys to accept the ideas. So in the end what happened was a famous computer pioneer came to see me and said, “You have to start a company.”

Who was this?

JH: Gordon Bell.

Yeah, okay, that’s what I thought.

JH: One of the early guys at DEC.

Yeah, I know who ...

JH: Came to me and said, “You’ve got to start a company because otherwise these ideas are not going to get out there.” And I talked to a couple colleagues and we decided to do it, somewhat reluctantly, of course. It wasn’t something I had ...

Why reluctantly? You forget that everybody wasn’t doing that.

JH: Everybody wasn’t doing it! That’s the primary reason, and you know, I knew that it was going to take a lot of time. I wasn’t, you know, would I go back to the university? Would I stay at the company? It wasn’t exactly clear. So that’s how we got started.

And talk about the impact then, because it was ...

JH: People didn’t believe it at the beginning, I mean, just building on what Dave said about this contrarian viewpoint. I was on one panel and there was an antagonist on the panel, an opposing viewpoint, and somebody said, “Well, Hennessy just got a million dollars from the venture capitalists to go build this company. What should he do?” and without blinking an eye he says, “Take the money and go to South America.”

Oh my God.

JH: So it turns out, I didn’t do that. It worked out.

You know, oddly enough, I had Michael Dell on the stage and he said that about, I think, Apple many years ago, like they should take the money and give it back to shareholders or something.

DP: That’s one of those quotes you never ...

... you never take back. I mean, Bill Gates had one like, “640K is enough for anybody.”

JH: Yeah, that’s right, there’s a few like that.

Talk about the implications of once it became clear that this was the way things were going.

DP: It has kind of an interesting trajectory. For maybe 15 years, anybody who used RISC had the fastest computer in the world, but then, you know, the really good engineers at Intel figured out that they could actually translate their rich vocabulary into the simple vocabulary, in hardware, and then any of the RISC ideas they could use, and then they had a lot more money so they had bigger engineering teams and really good technology.

So eventually, Intel kind of used the RISC ideas against the rest of the RISC companies and took over the marketplace in the PC era. And you know, PCs did really well, but starting in 2007 with the iPhone, which I guess is the beginning of the post-PC era, well, suddenly there’s this place, as John said earlier, where they cared about efficiency, and which is similar to kind of what we cared about early on ...
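In principle, the translation Patterson describes looks something like the toy sketch below: one rich, memory-to-memory instruction gets decomposed in hardware into simple load/add/store micro-operations that the RISC-style core can run fast. The instruction names here are invented for illustration; real x86 decoding is far more involved:

```python
# Toy sketch of a decoder that translates one "rich vocabulary" instruction
# into a sequence of simple, RISC-style micro-operations.
def decode(instruction):
    op, dest, src = instruction
    if op == "ADD_MEM":  # hypothetical rich op: mem[dest] += mem[src]
        return [
            ("LOAD", "r1", src),    # fetch the source operand from memory
            ("LOAD", "r2", dest),   # fetch the destination operand
            ("ADD", "r2", "r1"),    # simple register-register add
            ("STORE", dest, "r2"),  # write the result back to memory
        ]
    return [instruction]            # simple instructions pass through as-is

micro_ops = decode(("ADD_MEM", "0x100", "0x104"))
print(len(micro_ops))  # the one rich instruction became four simple ones
```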

Which is the true concept that you were talking about.

JH: Right. In the early days, it was transistors and silicon area. Today it’s still silicon area, because if you look at the internet of things, you’ve got computers all over the place that have to sell for a dollar or two. So it really does matter how big the chip is.

Right, and so talk about the shift when mobile came, because I would say the iPhone really...

JH: Mobile made it.

DP: Because then the ARM processor that you’ve heard of, well, the R in ARM stands for RISC, so it’s Advanced RISC Machines. It got increasingly popular, and as part of my retirement, I went around and gave a lot of talks, and so I collected a bunch of data. Basically, probably this year there’ll be 20 billion microprocessors sold, and 99 percent of those will ...

99 percent are RISC, yeah.

DP: Yeah, will be RISC, and so yeah, it’s everywhere.

Yeah, yeah. I want to hear from you about the mobile, what it ...

JH: Well, I think that mobile really drove it, because all of a sudden you cared about what the processor cost, but you also cared a lot about energy efficiency, and that’s one gap the so-called CISC approach, right, the traditional approach, has never been able to close, so it consumed more energy.

It didn’t matter so much on a desktop machine. Maybe you need a fan, you know, instead of not having one, but it wasn’t a big differentiator. But in the mobile space, power’s everything. You really do have to worry about energy, and as we move into this next generation ...

So it forces a ...

JH: Yeah, forces efficiency. Forces it. And I think as we move into this next era where we’re talking about devices that may have processors in them that may last for 10 years with a single battery, power’s going to matter a lot. RISC is important.

Yeah, I want to get into that in our next section, but you two created this, and are you billionaires? Is this...?

DP: I have a salary. John’s invested better than I have.

Well, one of your ex-students, who has a lot of money, has wondered about that. Did you take advantage of that, do you think, of these ...?

DP: Me? This is Dave. I pretty much stayed a professor. I believe ...

Why didn’t you move into industry?

DP: I had, you know, I think when I was young I had this strong belief in the public university, teaching, you know, we were fulfilling the American Dream, and I just had this little speech I gave when somebody asked about a startup: “I’m going to be a professor.”

It wouldn’t have been that bad if I’d taken a year off and done a company. But when I was young I was kind of strong-willed and, “I’m an academic and that’s what I’m going to do.”

JH: Yeah, I’m several companies in. I mean, starting with Jim Clark and Silicon Graphics, which I was a consultant to, and then MIPS, and then I started a company, Atheros, that built the early Wi-Fi chips.

Which one was it?

JH: Atheros.

Oh, yeah, yeah. I remember them.

JH: They built Wi-Fi chips early on. So I’ve done the entrepreneurship thing a few times, and then I joined the Google board in 2004.

DP: I understand that’s not a voluntary position.

Yeah, that was a good move for you, although that was later. That was later.

JH: Yeah, it was just before they went public, about six months before they went public.

Right. That’s right. I met them in ’98, so ’99.

JH: Right, I met them at Stanford when they were there.

Oh, that’s right. They were there, of course. So talk a little bit about that, these companies. But you both are academics, because you did stay in academia, really, more than anything else.

JH: Came back to what I love.

When we’re thinking about students, when you think about how you train these students today, because one of your things was ... you did RISC based on a class, essentially. Talk about how students should be trained today. And in our next section I want to get to where things are going and who’s designing these systems, and how they should be designed. Talk about the training of the students.

DP: Well, it’s a fantastic time to be a student in computer science. You have amazing computers at your fingertips. On the, particularly the things that John and I do, it’s much easier to build hardware than it was earlier. There’s these things called field-programmable gate arrays, which are kind of programmable hardware, so you can prototype your ideas and change them every day and connect them to the internet and it’s kind of a, it’s a real computer, but modifiable, so it’s ... I think students getting their hands dirty, I got into computer science because I was a math major at UCLA and a math class was canceled and I took this software class and I was hooked, right?

Math class was just canceled?

DP: Yeah, the class I needed was canceled and there was this two-unit computer class. I hadn’t thought about computers at all.

You hadn’t done computers before?

DP: No, never thought about it. Don’t know why, but took it and I loved it. The ideas in your mind come alive on that screen and that was just exciting, and so I think we want to give students that opportunity. Programming can do that, building hardware can do that, but building things and seeing your ideas come alive is something, you know, in cyberspace we can do in the curriculum that ... you can’t do that in civil engineering, probably.

No, not at all. You just can’t build bridges all over the place. Well you can, but it’s hard.

DP: So it’s this incredibly exciting, stimulating opportunity that we can do as educators.
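Patterson’s point about field-programmable gate arrays being “kind of programmable hardware” comes down to the fact that their basic logic cells are small lookup tables. As a rough illustration, here is plain Python standing in for the actual hardware, showing how loading different truth-table bits turns the same cell into different gates:

```python
# Illustrative sketch only, not real hardware: an FPGA's basic logic cell
# is essentially a small lookup table (LUT). "Reprogramming" the chip just
# means loading different truth-table bits, which is why a design can be
# prototyped and changed every day, as Patterson describes.

class LUT2:
    """A 2-input lookup table: 4 configuration bits define any 2-input gate."""

    def __init__(self, config_bits):
        # config_bits[i] is the output for inputs (a, b), where i = a*2 + b
        assert len(config_bits) == 4
        self.bits = config_bits

    def __call__(self, a, b):
        return self.bits[a * 2 + b]

# Configure the cell as an AND gate...
and_gate = LUT2([0, 0, 0, 1])
# ...then "reprogram" the same kind of cell as XOR by loading different bits.
xor_gate = LUT2([0, 1, 1, 0])

for a in (0, 1):
    for b in (0, 1):
        print(a, b, and_gate(a, b), xor_gate(a, b))
```

Real FPGAs wire thousands of such cells together through a configurable routing fabric, but the core idea is the same: the hardware's behavior is data.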

JH: Yeah, I think Dave’s right. I mean, computing is about building things. I think we teach principles, right? We teach students how to use abstraction so that we can build really complex software systems. The scale of the software systems we build now is phenomenal. If you tried to do that 30 years ago, we didn’t have the tools to do it.

So we try to teach them principles of abstraction, organization, so that they can do that, how to test a large piece of software, because certainly a lot of software that gets released is buggy. We teach them principles of security so that they understand issues of security and privacy, which has become certainly vastly more important in the last few years.

Yeah, we’ll be talking about that ... And what are the challenges now facing teaching, from your perspective?

DP: The number, the popularity. Popularity, it’s, well, at Stanford now it’s the No. 1 major, right?

JH: It’s the No. 1 major for women, even right now, which just happened this past year, which is amazing, amazing.

DP: Yeah, so, at Berkeley the classes, you know, we have, I didn’t know we could handle four-digit class sizes. I didn’t know that the system would work, but we have introductory courses in computer science with more than a thousand students. So students are voting with their feet, and this is happening at campuses across the country. It’s not just, not just ...

Yeah, but not enough. But there’s not enough.

DP: Well, universities are trying to figure out how to scale to everybody, right?

JH: We’ve got to scale, we’ve got to figure out how to hire faculty. And it’s not just at the big-name places, it’s the entire hierarchy that’s got to figure out how to build people.

Right. When we get back we’re going to talk about that and more, especially about diversity and trying to figure out who’s going to be designing the future, because I don’t know if you’re going to continue, but maybe you will, and where it’s going, when we get back from a word from our sponsors, and then we’ll be back with Alphabet chairman John Hennessy and Google Distinguished Engineer — oh, you’re Distinguished Engineer — Dave Patterson, after this.


We’re back with Alphabet chairman John Hennessy, who’s also a kind of a good academic, apparently, and Google Distinguished Engineer Dave Patterson, who apparently teaches people some things. They won the Turing Prize. What is the winning of the Turing Prize? You’ve had massive, what is it, just a banquet? What happens?

DP: You mean, what happens?

Yeah, what is it? Explain what it is.

DP: Well, right here in San Francisco, I think at the Palace Hotel, on June 23rd there’ll be a ceremony where they’ll have us come on the stage, show a video, and hand us a check.

Yeah good, good.

JH: Yeah, and we, it’s tradition that the Turing Award winners prepare a lecture, talking about the state of the field, where it’s going, what’s happening.

All right, give me a little preview, both of you. What’s the state of the field that you’re going to lecture on?

DP: Okay, well, we’ve collaborated on, we’re going to share the talk, since we co-author things. The title is, “A New Golden Age for Computer Architecture,” and the four things that we think are part of this golden age are what are called today’s domain-specific architectures, which are like Google’s TPU, you know what that is? Hardware for deep learning, the hardware for machine learning.

Security, you know, security is embarrassing. We think hardware people need to rise to the challenge and do something about it. Then there’s, you know, I talked earlier about these vocabularies, this idea of an open vocabulary. There’s something called RISC-V, which is an attempt to be like the Linux of microprocessors. It’s an open thing that anyone can build. Then, finally, there’s a thing called agile hardware development, making it a lot easier to build. So those, we think those four things are going to lead to another golden age in computer architecture.

And, wait, when you say “golden age,” John, what does that mean? It’s been pretty golden for the last 20 years.

JH: It was, for quite some time. The last few years there’s been a slowdown. I mean, when you talk about the end of Moore’s Law, right, really the slowdown of Moore’s Law.

The doubling? Is it doubling?

JH: Yeah, it’s a doubling every few years, and that’s kind of leveled off.

DP: Now it’s doubling every —

JH: — seven years or eight years or 10 years.

So, too long. Not enough.

JH: Yeah. And then there’s another problem that we call the failure of Dennard scaling. So Dennard was the guy who invented DRAMs, the one-transistor DRAM. He made an observation that as you got more transistors, the power didn’t go up. So you could actually do more computing for the same amount of energy, and that actually broke down, and so now the problem is, I mean, you look at a modern microprocessor from Intel. It slows its clock right down, it shuts itself off because otherwise it’s going to burn up.

So that’s a challenge that we have to face as well, and I think the way to solve these problems is to rethink the way you design computers, which is why Dave and I think, once again, it’s a new golden age.
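To put rough numbers on the slowdown Hennessy and Patterson describe, compare the old and new doubling periods they mention (the figures below are just the illustrative ones from the conversation, not measurements):

```python
# Back-of-the-envelope sketch of the Moore's Law slowdown discussed above.

def transistor_growth(years, doubling_period):
    """Relative transistor count after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Classic Moore's Law pace vs. the slowed pace mentioned in the interview:
print(transistor_growth(20, 2))   # doubling every 2 years: 1024x in 20 years
print(transistor_growth(20, 10))  # doubling every 10 years: only 4x

# Dennard scaling was the companion observation: dynamic power is roughly
# capacitance * voltage^2 * frequency, and all three used to shrink together,
# so power density stayed constant as transistor counts grew. Once voltage
# stopped scaling, power grew with transistor count, which is why modern
# chips throttle their clocks to avoid overheating, as Hennessy says.
```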

So where do you imagine that rethinking happening? Are there any directions that you’re ...?

DP: You mean, where in the world?

Yeah, how does it happen, how does it occur? And where in the world, because it may not be here.

DP: Well, that’s why, I mean, we’re researchers, right? And we think when it’s unclear what to do, those are great times for researchers. When there’s new challenges and, you know, Intel doesn’t know what to do, ARM doesn’t know what to do, that’s a fantastic time to be a researcher in computer architecture because good ideas can win, right? When it’s pretty, I think, maybe 10 or 12 years ago it got kind of dull because any idea you had, Intel would still go ahead and they knew how to make a lot of money, just it’d be faster this year. Now it’s really unclear.

JH: Rise of AI, I mean, that’s the rise of the machine learning.

Well, talk about that a little bit.

JH: That’s a big piece of it because they’re incredibly computationally intensive tasks, right, and that was one of the stumbling blocks we had to overcome. In order to get machine learning to work, we had to throw a thousand times more hardware power than we thought we had to throw at the problem.

And all of a sudden you’ve got these machines doing these comparatively special-purpose tasks, but very different than traditional, general-purpose computers. So you can rethink, “How do you design a machine to do that function very fast?” Virtual reality, augmented reality, you can think about all these intensive ...
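Hennessy’s point about needing “a thousand times more hardware power” follows from simple operation counting: even one fully connected layer in a neural network is a big matrix multiply. The layer sizes below are made-up illustrative numbers, not from any particular model:

```python
# Why machine learning is so computationally intensive: counting the
# multiply-accumulate operations in a single dense (fully connected) layer.

def dense_layer_madds(batch, inputs, outputs):
    """Multiply-accumulate operations for one dense layer over a batch."""
    return batch * inputs * outputs

# One modest layer, run over a batch of 64 examples (illustrative sizes):
ops = dense_layer_madds(batch=64, inputs=4096, outputs=4096)
print(f"{ops:,} multiply-adds")  # 1,073,741,824 — over a billion for one pass

# Stack dozens of layers and repeat over billions of training steps, and the
# appeal of special-purpose hardware built around matrix units, like the TPU
# discussed below, becomes clear.
```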

Do you imagine that we need a breakthrough to get to that? From what I understand, and especially with the massive amounts of data that are pouring in ...

DP: Yeah, well, we need to do things differently, and I think researchers love it when we have to do things differently. Yes, we need ideas as innovative as, you know, maybe the RISC ideas were.

JH: Yeah, it’s a discontinuity.

So is there something you’ve heard recently that’s been like ... I’ve heard all kinds, like living computers ...

DP: Oh, I don’t know that it needs to be that exotic. It’s, you know, silicon transistors are a pretty amazing technology, even though it’s slowing down, and they are going to get a little better, but it’s like we want to build a building in a different way. We don’t necessarily have to get rid of bricks.

I see. Good point.

DP: And you know, in the past it’s always been a bad idea to do special-purpose architectures. That was like, you know, the kiss of death, because you spend all of that energy and then how many are you going to sell, how many people ... But now we have no choice. With this ending of Moore’s Law and Dennard scaling, there’s no other choice. We have to do special-purpose architectures, and so the excitement of machine learning is it’s kind of a narrow but general-purpose technology, and we have to figure out how to build, you know, machines for those.

And the companies critical to this are? Well, Google, Alphabet ...

DP: Nvidia is kind of — Nvidia’s the reigning champion. That’s where people go. Google, you know, I helped write papers about the first-generation TPU and, I think, it was pretty successful. You know, in normal computing, if you’re like twice as fast, you know, you’d kill everybody in the marketplace. We said that the TPU was like 30 to 80 times better, right? Those are kind of amazing numbers, but because it’s a new area and it does that one thing well, you can get these fantastic advantages.

JH: But I think you’ll see all the main ... I think Apple, Amazon, Facebook, Microsoft are all investing in this technology because it appears the range of applicability for deep learning is quite broad on very complex tasks that traditionally computers have not been able to do well.

Well, explain one of those tasks. Give an example for ...

JH: Image recognition is probably the best one. It’s the one we can, now we can have a program which is better at classifying breeds of dogs and cats than anybody but an AKC-certified master, which is absolutely amazing. And self-driving cars, I mean, they really depend on this ability to interpret scenes which are not easy for computers to interpret.

Right, and then learn it again and again.

JH: And then learn it again and again.

Right. Well, how do you make, you know I don’t want to dumb this down because you’re both so highly intelligent, but the idea that it’s dangerous, that these new types of computing ...

DP: Uh, you mean the ...

The Elon argument, the Stephen Hawking ...

DP: The AI itself.


DP: Yeah, it’s not so much the hardware we’re building but the technology itself. Yeah, I think there’s this argument that other fields have done a better job when we get to these cultural issues, like physicists with atomic energy, and biology with, you know, dangerous bugs.

That’s what I’m thinking.

DP: Yeah, I would say, I hang out with a lot of machine-learning people. I know they care desperately about fairness, which is one of the criticisms that you hear about machine learning.

Sure, because they all tend to look the same, sorry, like you guys. Younger versions of you guys.

DP: Yeah, at least at Google there’s women. And so I, you know, one of my colleagues at Berkeley is writing a book about fairness, so they seem to be taking these issues on, but right now there’s big holes in the technology, and if we don’t work on them, bad things can happen.

JH: Well, and I think there’s a concern that humans will be removed from the loop in all these cases, and particularly if the technology were to be used for offensive war fighting or something like that. I mean, there are real dangers that we need to worry about.

You have a technology ... I mean, think about medical technology. It has both good uses and dangerous uses as well. It’s the same thing here. Appropriately used, the technology will be fine. Maliciously used, it’ll be dangerous.

Right. Does Silicon Valley understand the malicious uses as well? We just got off a week of hearings of Facebook where Mark Zuckerberg was essentially lauded because he was able to wear a suit and not sweat, you know, pretty much, but he didn’t say much, and there’s a lot of questions about the responsibility of tech companies. I’m not using just Mark, because it’s a general attitude in Silicon Valley.

DP: Well, if I can speak as a professor, I just ... the fact that they gave somebody doing an app access to tens of millions of people’s data, that’s kind of irresponsible, right?

That’s bad management. That’s what I called it.

DP: Yeah, I mean, did they not realize that?

Well, that’s what I’m talking about.

DP: You know, were they, is the reason they didn’t realize they were making ... There was just a real failure there, and it’s a black mark for everybody.

JH: I think the real danger here is a breakdown in trust, because we trust companies. We give them our data, we give them, they have our email. We trust them. We trust Google to do a search properly. If we lose that trust element, then the tech sector will be abandoned by people, and whether it’s information security, it’s accuracy of data, it’s accuracy of news feeds, all those things.

It’s use of data, especially with these new computer architectures, which will be much more embedded and much smarter, correct?

JH: Correct.

Do you think the government understands that correctly? I mean, you got your first funding from DARPA. Is there still that commitment from government to really understand and discern it, or are you worried about ...

JH: I think certainly government wants to understand the technology and how to use it. I think the problem becomes when they want to legislate, they have a hard time writing legislation that keeps up with technology. Look at our copyright law. It’s stuck in the 1700s, and we haven’t been able to make the vital changes, and I think that’s what we have to worry about. How do we craft regulations — if we’re going to have regulations — how do we craft regulations that don’t inhibit innovation?

So as we move into this new era of architecture, which I think is very clear, as you’re saying, it seems as if we’re on the cusp of another innovation in computer architecture, who should be responsible for that? Should it be the industry? Who? Is it academia? Is it government?

What struck me last week from the hearings is that the congressmen or the senators kept asking Mark what regulation he’d like, which I thought was fascinating. But again, of course, why would they know any, why would they know anything to do at all, because they hardly understood Terms of Service.

JH: Well, I suppose, given the importance of technology to society, it’s going to have to be all three parties coming together, right? And as difficult, that’s probably a very difficult proposition to take —

— and citizens.

JH: ... for government to work with. And citizens, right? And academia can play a part of bringing in knowledge and expertise without necessarily a bias of one form or another and help chart that.

But it’s not going to be easy to chart, Kara. I think it’s going to be hard. I think most Americans probably haven’t thought about, “Okay, how much privacy am I willing to give up in exchange for what?”


JH: They really haven’t thought about the boundaries, so of course they all use credit cards, and if you don’t think everybody who touches that credit card is collecting information, you’re being naïve.

No, not at all, but I think some of the technologies that are showing up now are quite different from the last 10 years. I mean, some of them, a cellphone is one thing, but self-driving cars, automation, AI, robotics, for example.

DP: Well, yeah. But I think self-driving cars is something that computer scientists have been talking about for a while.

Yes, they have.

DP: We think this is, I mean, this will be, if it really works, this will be something that we brag about forever. I mean, 1.2 million people die every year, there’s incredible billions in damages. If we can cut ...

And energy inefficiency. It goes on, obviously.

DP: If we could cut back, could we save a million lives a year with, you know, advances in technology? We could! And anybody who knows somebody who’s been in one of these terrible accidents knows it changes their lives forever. We could make this, you know, over time a much rarer event, and that would be one of the things we brag about like the internet, right?

Right, so but in that vein — and again, I’m not trying to be a Luddite in this area — do you think they think enough about jobs? The impact on jobs, the impact of ... Do you think Silicon Valley has matured enough? When I interview certain people, like when I had Sundar, and Schroepfer from Facebook, and others, and it was the same year we had Elon: he was talking about Terminator-like kind of outcomes, essentially, and they were talking about sort of the happy, shiny future.

But what I do get a sense of is that nobody really does, I think. I did an interview with Marc Andreessen last year where he talked about how it was like the farming-to-manufacturing shift, that it was a similar thing, and I kept saying, “Well, there was a lot of social unrest. There was a lot of populism, and that took 70 years. This is a very compressed time period.”

Who has the ethical underpinnings? Because some of these technologies are quite culturally changing, social changing, political, all this stuff, and I think a lot of these past elections have been about that, about fear of the future.

JH: Yeah, well, I think you’re right, Kara, and I think you’re going to see disruption to white-collar jobs, not just blue-collar jobs.

Yes, that’s what I mean. Well-paying jobs.

JH: And I think the data that’s out there shows that in the end, it will lead to economic growth and new opportunities, but there will be a disruption just as there was during the Industrial Revolution, and you’re right that it’s going to happen much faster. So we’re going to have to adjust. Many jobs are going to be what they call “de-skilled.” In other words, part of the skillset of the job will be taken over by the computer.

And why shouldn’t it be?

JH: Why shouldn’t it be, right?

Like you’re saying with the cars, why shouldn’t it be safer?

JH: Why shouldn’t it be, right? But then, obviously, drivers are out of jobs in that setting, and how do we recover that? How do we restructure that, how do we re-educate people into new jobs?

DP: You should probably interview Sundar again, because that’s —

I am. I’m going to be.

DP: This is one of his hobbies, or hobby horses, right, is helping with technology and jobs, and there’s ...

Yeah, I had him on an MSNBC show talking about [that], I’m going to bring him back here.

DP: Yeah, he’s, there’s a bunch of programs that I read about, so fortunately, you know, I’m glad I’m working in a place that seems to be taking this seriously. I worry for my grandchildren, you know, about the jobs, stuff like that.

So, I’m going to finish up by asking what would you guys do now if you were, I mean you could do whatever you want, I don’t think age is a hindrance in any way, but if you were starting out right now.

DP: If we were young again?

No, I’m old, too. I mean if you would pick anything and go anywhere right now, change everything, is there one area of computing that you would focus on, or would you own a restaurant? I don’t know.

DP: No, we’re both optimists. I mean, if I was younger and had more energy, this golden age sounds pretty good to me. I think computer architects haven’t been asked to do enough about security, and, you know, for those of us in the industry, it’s humiliating how bad security is.

It is.

DP: It’s, you know, it’s not, I don’t think it’s necessary, and I think hardware, which does things, you know, every nanosecond, we should try to see if hardware can really make a difference. So, yeah, that’s the one I ...

I agree.

DP: That’s the one that I’m particularly interested in, and this, I think I talked about earlier with RISC-V, this open-source instruction set.

In the past, you know, we’ve had to wait for Intel. We have to beg Intel to make a change before we can do anything. Now we don’t have to beg anybody. We can jump in there, come up with ideas, try them, put them online through these field-programmable gate arrays, and see if they work. And not only that, you don’t have to work for Intel or ARM. Anybody in the world can do this. So we could see this potentially rapid acceleration of innovation around security with architecture and software systems. We need to get better at this, and I can imagine this path working. And so yeah, I think that’s a really exciting thing to work on.
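Because RISC-V’s instruction set is openly specified, anyone can write tools for it without a vendor’s permission, which is exactly Patterson’s “don’t have to beg anybody” point. As a minimal sketch, here is the published RV32I base format used to encode the I-type instruction `addi x1, x0, 5`:

```python
# Encoding one RISC-V instruction per the open RV32I base specification.
# I-type layout, from high bits to low: imm[11:0] | rs1 | funct3 | rd | opcode.

def encode_addi(rd, rs1, imm):
    """Encode an ADDI instruction (I-type) as a 32-bit word."""
    opcode = 0b0010011          # OP-IMM opcode
    funct3 = 0b000              # funct3 for ADDI
    return ((imm & 0xFFF) << 20) | (rs1 << 15) | (funct3 << 12) | (rd << 7) | opcode

word = encode_addi(rd=1, rs1=0, imm=5)   # addi x1, x0, 5
print(hex(word))  # 0x500093 — the standard encoding of this instruction
```

Nothing here required signing a license: the bit layout is in the public RISC-V manual, which is what makes the “Linux of microprocessors” analogy apt.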

JH: Yeah, I think it’s an interesting time. Here’s this whole new set of applications, which consume enormous amounts of computer power and produce incredible results. We have to rethink both the hardware and the software systems that we use to build them because they’re both changing. We need to respond to these new kinds of applications and we need to change the way we design the machines, so that opens up opportunities for both software and hardware people.

It’s really focused on co-design, so you’ve got to bring these people together and get them to work together to do something innovative, and that’s always an exciting time when that happens in a field.

So John, you don’t want to bring back Google Glass?

JH: Try again.

Do you know what? It’s still a great concept.

JH: It’s a great concept.

It’s exactly the right concept.

JH: Yeah, it’s a great concept. We need more killer apps besides face recognition.

It’s, you know what? Remember General Magic? There’s a new movie coming out about that. It had an iPhone back then, it just ...

JH: Remember there were a couple tries at PDAs before the iPhone.

Well, General Magic was — and they were, they all worked there, all the people that went on. I think Google Glass is going to make a comeback.

DP: Yeah?

I’ve decided. It’s the right concept, but the idea of something around your face and computing and somehow that’s just interesting to me. I would like you to invent that, please, if you don’t mind.

JH: Okay, I’ll work on it.

DP: He’s Chair, so.

Anyway, thank you so much for coming and congratulations on your award, named for Alan Turing, who was another great engineer and visionary, actually, about where computing was going. We’ve had a great interview with John Hennessy and Dave Patterson. They are really legends in the business and I hope you’ll come back again and tell me where things are going in the future. Thanks for coming on the show.
