Artificial intelligence just overcame a new hurdle: learning to play Go, a game vastly more complex than chess, well enough to beat one of the greatest human players at his own game. Twice.
AlphaGo, an artificial intelligence system developed by Google DeepMind, is two games into a six-day, five-game match with Lee Sedol, one of the world's top Go players. And so far, AlphaGo has won both games — meaning that if Sedol is going to triumph, he has to win all three remaining games.
Go, a two-player game, is played on a 19-by-19 grid with 361 intersections, with an unlimited supply of white and black game pieces, called stones. Players place stones on the intersections to create "territories" by marking off parts of the board, and can capture their opponent's pieces by surrounding them. The player with the most territory wins.
Although the rules are relatively simple, the number of possible board configurations is astronomically large — there are more ways to arrange the pieces on the board than there are atoms in the observable universe.
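A rough sanity check on that claim: each of the 361 points on the board can be empty, black, or white, which gives an upper bound of 3^361 arrangements (many of those are illegal positions, but the bound captures the scale). That number dwarfs the commonly cited estimate of about 10^80 atoms in the observable universe:

```python
# Upper bound on Go board arrangements: 3 states per point, 361 points.
# (Many arrangements are illegal, but this bounds the scale.)
positions_upper_bound = 3 ** 361

# Commonly cited rough estimate of atoms in the observable universe.
atoms_in_universe = 10 ** 80

print(len(str(positions_upper_bound)))  # 173 digits, i.e. ~1.7 x 10^172
print(positions_upper_bound > atoms_in_universe)  # True
```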
The computer's victory shocked Sedol. But it also astounded experts, who thought that teaching computers to play Go well enough to beat a champion like Sedol would take another decade. AlphaGo did it by studying millions of games, just as Google's algorithms learn to identify photos by looking at millions of similar ones.
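To give a flavor of what "studying millions of games" means, here is a drastically simplified toy sketch — not AlphaGo's actual method, which (per the Nature paper) combines deep neural networks with tree search. The idea is just that a program can learn which move to prefer in a position by observing what was played there in example games. The positions and moves below are hypothetical placeholders:

```python
from collections import Counter

# Toy "training data": each game is a list of (position, move) pairs.
# These labels are illustrative placeholders, not real Go data.
example_games = [
    [("empty_board", "D4"), ("after_D4", "Q16")],
    [("empty_board", "D4"), ("after_D4", "Q4")],
    [("empty_board", "Q16"), ("after_Q16", "D4")],
]

# "Learn" by counting which moves were played in each position.
move_counts = {}  # position -> Counter of observed moves
for game in example_games:
    for position, move in game:
        move_counts.setdefault(position, Counter())[move] += 1

def predict_move(position):
    """Pick the most frequently observed move for a position."""
    return move_counts[position].most_common(1)[0][0]

print(predict_move("empty_board"))  # "D4" was seen twice, "Q16" once
```

A real system replaces the frequency table with a neural network that generalizes to positions it has never seen — which is what made AlphaGo's approach feasible at Go's scale.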
The next match between AlphaGo and Sedol is Friday at 11 pm Eastern time. It's being live-streamed online.
- The Verge explains why AlphaGo's win is such a big deal.
- If you want to know more about how Google did it, this paper from Nature explains how it works.
- The head of Google's AI division now leads its web search team — and that makes perfect sense if you understand how crucial artificial intelligence is to the company, Wired reported.