
Google programme beats European champion at complex board game

Go originated in ancient China and has more possible configurations on the board than atoms in the universe

In what they called a milestone achievement for artificial intelligence, scientists said they had created a computer programme that beat a professional human player at the complex board game Go.

The feat recalls IBM supercomputer Deep Blue's 1997 match victory over chess world champion Garry Kasparov. But Go, a strategy board game most popular in China, South Korea and Japan, is vastly more complicated than chess.

"Go is considered to be the pinnacle of game AI research," said artificial intelligence researcher Demis Hassabis of Google DeepMind, the British company that developed the AlphaGo programme. DeepMind was acquired by Google in 2014.

"It's been the grand challenge, or holy grail if you like, of AI since Deep Blue beat Kasparov at chess."          

AlphaGo swept a five-game match against three-time European Go champion and Chinese professional Fan Hui.

Until now, the best computer Go programmes had played only at the level of human amateurs.

In Go - also called Igo, Weiqi and Baduk - two players place black and white pieces on a square grid, aiming to take more territory than their adversary.            

"It's a very beautiful game with extremely simple rules that lead to profound complexity. In fact, Go is probably the most complex game ever devised by humans," said Mr Hassabis, a former child chess prodigy.            

Scientists have made strides in artificial intelligence in recent years, making computers think and learn more like people do.

Mr Hassabis acknowledged some people might worry about the increasing capabilities of artificial intelligence after the Go accomplishment, but added: "We're still talking about a game here."           

While AlphaGo learns in a more human-like way, it still needs far more practice games than a human expert does to become good at Go - millions rather than thousands, Mr Hassabis said.

Scientists foresee future applications for such AI programmes including improving smartphone assistants such as Apple's Siri, medical diagnostics and, eventually, collaboration with human scientists in research.

Mr Hassabis said South Korea's Lee Sedol, the world's top Go player, has agreed to play AlphaGo in a five-game match in Seoul in March.

Mr Lee said in a statement: "I heard Google DeepMind's AI is surprisingly strong and getting stronger, but I am confident that I can win, at least this time."

The findings were published in the journal Nature.