In a landmark battle between man and artificial intelligence (AI), the world champion of the game Go is facing off against a computer.
South Korea’s Lee Se-dol is playing Google’s AlphaGo programme in the first of a series of games in Seoul.
In October 2015, AlphaGo beat the European Go champion, an achievement that was not expected for years.
A computer has beaten the world chess champion, but the Chinese game Go is seen as significantly more complex.
The first game between Mr Lee and AlphaGo kicked off at 13:00 local time (04:00 GMT) and is expected to last for several hours. It is being streamed live on YouTube.
The BBC’s Stephen Evans in Seoul said Mr Lee appeared “nervous, sighing and shaking his head” at the outset of the match.
The two opponents will play a total of five games over the next five days for a prize of about $1m (£700,000).
Algorithm vs intuition
The five-day battle is being seen as a major test of what scientists and engineers have achieved in the sphere of artificial intelligence.
Go is a 3,000-year-old Chinese board game and is considered far more complex than chess, the game in which artificial intelligence scored its most famous victory to date when IBM’s Deep Blue beat grandmaster Garry Kasparov in 1997.
But experts say Go presents an entirely different challenge because of the game’s practically incalculable number of move options, which means the computer must be capable of human-like “intuition” to prevail.
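A rough back-of-the-envelope calculation shows why that number is beyond brute-force search. The figures below are commonly cited approximations (around 35 legal moves per chess position over a roughly 80-move game, versus around 250 moves per Go position over a roughly 150-move game), not values from the article:

```python
import math

def game_tree_exponent(branching_factor, typical_game_length):
    """Order-of-magnitude size of a game tree, as a power of ten:
    log10(branching_factor ** typical_game_length)."""
    return typical_game_length * math.log10(branching_factor)

# Commonly cited rough figures: ~35 moves per chess position over ~80 plies,
# ~250 moves per Go position over ~150 plies.
chess = game_tree_exponent(35, 80)    # on the order of 10^123 positions
go = game_tree_exponent(250, 150)     # on the order of 10^360 positions

print(f"chess: ~10^{chess:.0f}")
print(f"go:    ~10^{go:.0f}")
```

The gap of more than 200 orders of magnitude is why exhaustively searching Go, as Deep Blue could largely do for chess, is out of the question.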
“Playing against a machine is very different from an actual human opponent,” Mr Lee told the BBC ahead of the match.
“Normally, you can sense your opponent’s breathing, their energy. And lots of times you make decisions which are dependent on the physical reactions of the person you’re playing against.
“With a machine, you can’t do that.”
Learning from mistakes
Google’s AlphaGo was developed by British computer company DeepMind, which was bought by Google in 2014.
The computer programme first studied common patterns repeated in past games, Demis Hassabis, DeepMind’s chief executive, explained to the BBC.
“After it’s learned that, it’s got to reasonable standards by looking at professional games. It then played itself, different versions of itself, millions and millions of times, each time getting incrementally better – it learns from its mistakes.”
Learning and improving from its own match-play experience means the system is now even stronger than when it beat the European champion late last year.
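The self-play loop Mr Hassabis describes can be illustrated on a much smaller game. The sketch below is not DeepMind’s algorithm (AlphaGo combines deep neural networks with Monte Carlo tree search); it only shows the bare idea of a program playing against itself and crediting the winning side’s moves, here on the toy game of Nim (take 1–3 stones, the player taking the last stone wins). All names and parameters are illustrative:

```python
import random

def self_play_improve(heap_size=10, episodes=5000, epsilon=0.1, seed=0):
    """Learn a Nim policy purely from self-play: track how often each
    (stones_remaining, move) choice ended up on the winning side, and
    play greedily on that win rate, with a little random exploration."""
    rng = random.Random(seed)
    wins, plays = {}, {}  # running win counts / play counts per (stones, move)

    def win_rate(stones, move):
        key = (stones, move)
        return wins.get(key, 0) / plays.get(key, 1)

    for _ in range(episodes):
        stones = heap_size
        history = []  # moves chosen, alternating between the two "players"
        while stones > 0:
            legal = [m for m in (1, 2, 3) if m <= stones]
            if rng.random() < epsilon:
                move = rng.choice(legal)  # explore occasionally
            else:
                move = max(legal, key=lambda m: win_rate(stones, m))
            history.append((stones, move))
            stones -= move
        # The side that took the last stone won; walking the game backwards,
        # every other move belongs to the winner.
        for i, key in enumerate(reversed(history)):
            plays[key] = plays.get(key, 0) + 1
            if i % 2 == 0:
                wins[key] = wins.get(key, 0) + 1

    def best_move(stones):
        legal = [m for m in (1, 2, 3) if m <= stones]
        return max(legal, key=lambda m: win_rate(stones, m))
    return best_move
```

After training, the greedy policy recovers the known optimal play for small heaps (leave the opponent a multiple of four stones), with no human examples involved: the improvement comes entirely from the program’s own games, which is the “learns from its mistakes” idea in miniature.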