
Microsoft Researcher Wins the "Nobel Prize" of Computer Science

The $250,000 Turing prize recognizes Lamport's contributions to distributed systems.

Courtesy: Microsoft Research

The Association for Computing Machinery has named Microsoft Research’s Leslie Lamport the recipient of the 2013 A.M. Turing Award, considered the highest honor in computer science.

The award, which comes with a $250,000 prize, recognizes Lamport’s contributions to distributed computing systems, the underlying software that coordinates the activities of machines across networks. The technology enables everything from massively multiplayer online games like World of Warcraft to peer-to-peer file-sharing networks like BitTorrent to big data research that leverages the processing power of servers around the globe.

In a long career spanning earlier roles at SRI International and Digital Equipment Corp., Lamport, 73, created algorithms and protocols that have improved the performance and reliability of computer systems, ACM said. His contributions are widely used in online security, cloud computing and database systems — in other words, some of the critical foundations of the always-connected, multiscreen, modern computing world.

“His pioneering work in distributed and concurrent algorithms substantially improved consumer and industrial computing systems, ranging from multiprocessor technology used in data centers to multicomputer networks used in aircraft control systems,” said Wen-Hann Wang, managing director of Intel Labs, in a statement.

Intel and Google both contributed money toward the quarter-million-dollar prize.

Among Lamport’s many contributions, one of the most widely implemented is known as the Paxos algorithm, which can be found at work behind the scenes in Google or Bing online searches, among much else. It allows a computer network to continue working in a coherent way even in the face of failures, by transferring leadership roles among machines and halting progress rather than allowing damage to occur to the system.
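To give a flavor of how Paxos keeps a network coherent, here is a minimal sketch of its single-decree core in Python. The class names, the in-process "message passing," and the three-acceptor setup are all assumptions made for illustration; real deployments add leader election, retries, persistence, and actual networking.

```python
# Illustrative single-decree Paxos sketch (not production code).
# Acceptors promise to ignore old proposals; a proposer needs a
# majority (quorum) in each phase, and must re-propose any value
# that a quorum member has already accepted. That rule is what
# keeps the system from "changing its mind" after failures.

class Acceptor:
    def __init__(self):
        self.promised = -1      # highest proposal number promised
        self.accepted_n = -1    # number of the accepted proposal, if any
        self.accepted_v = None  # accepted value, if any

    def prepare(self, n):
        """Phase 1b: promise to ignore proposals numbered below n."""
        if n > self.promised:
            self.promised = n
            return ("promise", self.accepted_n, self.accepted_v)
        return ("reject",)

    def accept(self, n, v):
        """Phase 2b: accept unless a higher-numbered promise was made."""
        if n >= self.promised:
            self.promised = self.accepted_n = n
            self.accepted_v = v
            return ("accepted",)
        return ("reject",)

def propose(acceptors, n, value):
    """Run both phases against a quorum; return the chosen value or None."""
    quorum = len(acceptors) // 2 + 1
    # Phase 1: gather promises from a majority.
    promises = [r for r in (a.prepare(n) for a in acceptors)
                if r[0] == "promise"]
    if len(promises) < quorum:
        return None
    # If any acceptor already accepted a value, we must propose that value.
    prior = max(promises, key=lambda r: r[1])
    if prior[1] >= 0:
        value = prior[2]
    # Phase 2: ask the acceptors to accept.
    acks = [r for r in (a.accept(n, value) for a in acceptors)
            if r[0] == "accepted"]
    return value if len(acks) >= quorum else None

acceptors = [Acceptor() for _ in range(3)]
print(propose(acceptors, 1, "A"))  # "A" is chosen by a quorum
print(propose(acceptors, 2, "B"))  # a later proposal must preserve "A"
```

The second call is the interesting one: even though a new proposer asks for "B", the promises it collects reveal that "A" was already accepted, so it is forced to re-propose "A". That is the sense in which the system halts or defers rather than letting conflicting decisions damage it.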

He is also known for the “logical clock” protocol, which ensures that the steps in a computing process occur in the right order even if they take place across different machines. He himself downplays the importance of this one, however, noting that reaction to the paper tended to break down into two groups: those who found it mundane and those who considered it brilliant.

“I couldn’t argue with the first, I really felt it was trivial, but I wasn’t inclined to argue with the second,” he said.
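Trivial or brilliant, the mechanism itself fits in a few lines. The sketch below shows the two rules of a Lamport logical clock in Python; the `Process` class and the two-process scenario are assumptions made for this example.

```python
# A minimal sketch of Lamport's logical clocks. Rule 1: tick the
# local counter on every event. Rule 2: on receiving a message,
# jump past the sender's timestamp. Together these guarantee that
# if event a causally precedes event b, then clock(a) < clock(b).

class Process:
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self):
        self.clock += 1            # Rule 1: tick on each event
        return self.clock

    def send(self):
        self.clock += 1
        return self.clock          # timestamp carried by the message

    def receive(self, msg_time):
        # Rule 2: advance past the message's timestamp, then tick.
        self.clock = max(self.clock, msg_time) + 1
        return self.clock

p, q = Process("P"), Process("Q")
p.local_event()           # P's clock: 1
t = p.send()              # P's clock: 2; message carries timestamp 2
q.local_event()           # Q's clock: 1
q.receive(t)              # Q's clock: max(1, 2) + 1 = 3
print(p.clock, q.clock)   # prints: 2 3
```

Note that Q's receive event gets timestamp 3 even though Q had only seen one local event: the clock ordering reflects causality (the send happened before the receive), not wall-clock time on any one machine.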

The award is named after Alan Turing, the British mathematician and World War II code breaker who laid out the basic framework for the computer back in 1936 and did seminal work in what would become artificial intelligence.

Past winners include: Douglas Engelbart, the late SRI engineer who developed the mouse along with other advances in human-computer interaction; Vint Cerf, president of the ACM, chief Internet evangelist for Google and one of the architects of the Internet; and Alan Kay, a pioneer in graphical user interfaces.

“It’s certainly a great honor to be in the company of such wonderful Turing Award winners, people I respect and learned an awful lot from like Butler Lampson, Jim Gray and Dijkstra,” Lamport said.

Lamport is a principal researcher at Microsoft Research Silicon Valley, where he is focused on a so-called specification language known as “TLA+” and related tools, used to express a system’s requirements and design before the actual programming stage. The basic idea is that more precise planning leads to better coding.

He made this point in a Wired Opinion piece early last year, stressing that builders don’t construct buildings without blueprints, and programmers shouldn’t write code without specifications.

“You should write the instruction manual before you write programs,” Lamport said. “If you can’t explain clearly what this thing does and how to use it, then the user won’t be able to figure it out and you’ll be producing pretty lousy programs.”

This article originally appeared on