
Google Beats Facebook in Race to Beat Unbeatable Game

Ready? Set? Race to the Singularity!

There’s a machine war in Silicon Valley. And the battlefield is a very, very old board game.

Point: Google.

On Wednesday, DeepMind, the taciturn artificial intelligence arm of the search engine, made a big announcement: Its program has defeated a champion human Go player. It’s a big deal because the complex board game, developed in China millennia ago, is considered the quintessential unsolved problem for machine intelligence.

DeepMind announced its accomplishment in a paper published in the research journal Nature. The paper, authored by 20 DeepMind researchers, “will surely be received as an historical milestone in AI,” Nature senior editor Tanguy Chouard said in a call with reporters.

January 28 cover of Nature

By cracking Go, Google pulls ahead of the other tech giants in the accelerating AI arms race, a contest for progress, recruiting and publicity.

Most notably, ahead of Facebook, which is also working on beating Go. In fact, late on Tuesday, the social network seems to have caught wind of DeepMind’s pending unveiling. So Facebook decided to share its advancements first!

CEO Mark Zuckerberg posted about the game, noting that while his AI scientists haven’t beaten it yet, they are “getting close.” Simultaneously, his scientists updated their earlier research paper on the game. And Yann LeCun, Facebook’s AI chief, posted a very long explanation of the methods, stressing that he has but a “lone” researcher working on it.

In its paper, Google DeepMind unveiled how its trained machine program, called AlphaGo, bested a three-time European Go champ during a secretive match in October. More accurately, the program had trained itself to win using the advanced AI techniques that DeepMind is known for pioneering. Before being scooped up by Google for $400 million two years ago, the company released two papers demonstrating its algorithms teaching themselves to whup classic Atari games with alarming speed.

“Ultimately, we want to apply these techniques to important real-world problems,” DeepMind chief Demis Hassabis said on the call.

In the short term, that means things like making smartphone assistants smarter, he said. In the medium term, Hassabis imagines applications in medical diagnoses and climate modeling.

The long term? “My dream is to use these types of general learning systems to help with science — to help them make faster breakthroughs in scientific endeavors,” he said.

Based in London, DeepMind works adjacent to Google’s sprawling research division but has an independent edict to “solve intelligence.” DeepMind is now some 200 researchers strong and growing fast, according to sources. (Google declined to comment on its personnel size.)

Back in November, Hassabis coyly hinted that DeepMind had beaten Go. In March, his new program will take on celebrated Go player Lee Sedol — the Roger Federer of the game, per Hassabis — in a five-game match in Seoul.

This article originally appeared on Recode.net.