Google’s computer program AlphaGo defeated its human opponent, South Korean Go champion Lee Sedol, on Wednesday in the first game of a historic five-game match between human and computer.
AlphaGo’s victory in the ancient Chinese board game is a breakthrough for artificial intelligence, showing the program developed by Google DeepMind has mastered one of the most creative and complex games ever devised.
Commentators said the game was close, with both AlphaGo and Lee making some mistakes, and the result remained unpredictable until near the end. Lee’s loss was a shock to South Koreans and Go fans. Two weeks ago, the 33-year-old was confident of a sweeping victory, but he sounded less optimistic a day before the match.
“I was very surprised because I did not think that I would lose the game. A mistake I made at the very beginning lasted until the very last,” said Lee, who has won 18 world championships since becoming a professional Go player at the age of 12. Lee said AlphaGo’s strategy was “excellent” from the beginning.
Yoo Chang-hyuk, another South Korean Go master who commentated on the game, described the result as a big shock and said that Lee appeared shaken at one point. Hundreds of thousands of people watched the game live on TV and YouTube. The remaining four games will be played through Tuesday.
Computers conquered chess in 1997 in a match between IBM’s Deep Blue and chess champion Garry Kasparov, which – according to DeepMind’s CEO Demis Hassabis – leaves Go as “the only game left above chess”.
Top human players rely heavily on intuition to choose among a near-infinite number of board positions in Go, making the game extremely challenging for artificial intelligence.
AI experts had forecast it would take another decade for computers to beat professional Go players. That changed when AlphaGo defeated a European Go champion last year in a closed-door match, a result later published in the journal Nature. Since then, AlphaGo’s performance has steadily improved. “We are very excited about this historic moment. We are very pleased about how AlphaGo performed,” said Hassabis.
DeepMind’s team built “reinforcement learning” into AlphaGo, meaning the machine plays against itself and adjusts its own neural networks based on trial and error. AlphaGo can also narrow down the search space for the next best move from the near-infinite to something more manageable, and it anticipates the long-term consequences of each move, predicting the eventual winner.
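The self-play idea described above can be illustrated on a toy scale. The sketch below is not AlphaGo’s method (which combines deep neural networks with Monte Carlo tree search); it is a minimal tabular example of the same principle, learning a two-player game of Nim (take 1 or 2 stones; whoever takes the last stone wins) purely by playing against itself and adjusting its value estimates by trial and error. All names here (`selfplay_nim`, the parameters) are illustrative, not from the source.

```python
import random

def selfplay_nim(n_start=10, episodes=30000, alpha=0.2, eps=0.2):
    """Learn Nim by self-play: one agent plays both sides, updating a
    value table from the outcomes of its own games (trial and error)."""
    # Q[s][a]: estimated outcome (+1 win, -1 loss) for the player to move
    # in state s (stones remaining) if they take a stones (a in {1, 2}).
    Q = {s: {a: 0.0 for a in (1, 2) if a <= s} for s in range(1, n_start + 1)}
    for _ in range(episodes):
        s = n_start
        while s > 0:
            actions = list(Q[s])
            # Mostly play the current best move, sometimes explore.
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda x: Q[s][x])
            s2 = s - a
            if s2 == 0:
                target = 1.0  # took the last stone: the mover wins
            else:
                # The opponent moves next; their best outcome is our loss,
                # so we bootstrap with the negated value (negamax).
                target = -max(Q[s2].values())
            Q[s][a] += alpha * (target - Q[s][a])  # trial-and-error update
            s = s2
    return Q

if __name__ == "__main__":
    random.seed(0)
    Q = selfplay_nim()
    # With 4 stones left, taking 1 leaves the opponent a losing position.
    print(max(Q[4], key=Q[4].get))
```

After training, the table encodes the known optimal strategy (leave the opponent a multiple of 3) without ever being told the rules of good play, only the win/loss signal from its own games. AlphaGo applies the same principle at vastly greater scale, with neural networks standing in for the table.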
AlphaGo’s win over a human champion shows computers can mimic intuition and tackle more complex tasks, its creators say.