What would Lee Sedol say about the surprise human victory over an artificial intelligence (AI) at ‘Go’? The world’s second-best player of this challenging strategy game of Chinese origin retired after being defeated in multiple games against an AI in 2016. “It is an entity that cannot be defeated,” he reflected after making that drastic decision.
Now things seem to have changed. Kellin Pelrine, an experienced player but one far below Sedol’s level, claims to have found the Achilles’ heel of the seemingly invincible AI systems that play ‘Go’. The result? A series of crushing defeats that has not gone unnoticed and that teaches us more about these systems.
Artificial intelligence has fallen at ‘Go’
The recent triumph reveals much more than we might imagine about the nature of these AI-based systems. To understand it better, let’s look at the steps Pelrine followed. The player, who is a researcher at the FAR AI lab, started by looking for weaknesses in KataGo, a system whose operation is similar to that of the famous AlphaGo that beat Sedol.
To achieve this, he used an adversary model developed by his team. This model was responsible for playing more than a million games against KataGo until it found a “blind spot” that a human player could later learn to exploit and emerge victorious. The software did its job and delivered a series of moves capable of defeating the AI.
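To give a rough idea of how this kind of adversarial search works, here is a minimal, hypothetical sketch in Python. It is not FAR AI’s code: the real adversary is a neural policy trained with reinforcement learning against a frozen copy of KataGo, whereas the `simulate_game` function below is only a random stand-in so the example runs on its own. The sketch only illustrates the general idea of playing an enormous number of games against a fixed opponent and keeping the line of play that wins most often.

```python
import random

# Hypothetical sketch: search for a strategy that beats a fixed "victim" engine
# by brute volume of games. Everything here is a placeholder, not FAR AI's method.

def simulate_game(candidate_strategy: int) -> bool:
    """Stand-in for one game against the victim engine; True means the adversary won."""
    return random.random() < 0.02  # most candidate strategies lose most of the time

def find_blind_spot(num_candidates: int = 1_000, games_per_candidate: int = 1_000):
    """Evaluate many candidate strategies and return the one with the best win rate."""
    best_strategy, best_win_rate = None, 0.0
    for candidate in range(num_candidates):
        wins = sum(simulate_game(candidate) for _ in range(games_per_candidate))
        win_rate = wins / games_per_candidate
        if win_rate > best_win_rate:
            best_strategy, best_win_rate = candidate, win_rate
    return best_strategy, best_win_rate

if __name__ == "__main__":
    # 1,000 candidates x 1,000 games each = one million simulated games in total
    strategy, win_rate = find_blind_spot()
    print(f"Most promising candidate: {strategy} (win rate {win_rate:.1%})")
```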
Pelrine memorized the techniques and, without the help of the aforementioned computer system, put them to the test against KataGo. He went on to win 14 out of 15 games against the AI-powered program. “The strategy could be used by an intermediate-level player to beat the machines,” the researcher said in an interview with the Financial Times.
Broadly speaking, one of the strategies consists of distraction. The player builds a large loop of stones to surround one of his opponent’s groups while interspersing moves in other corners of the board. A human would be aware of the threatening perimeter, but the AI does not perceive the danger of such moves until its stones are captured.
The fact that the AI, apparently so advanced and invincible, was unable to respond to moves that an intermediate-level player could carry out has an explanation, at least according to FAR AI chief Adam Gleave. He points out that the skills of these computer systems come from their training on many different games.
So, since the tactic used by his research team is rarely used due to its low chance of success in a real scenario, KataGo has not been trained on enough related plays to see it as a threat. This blind spot, on paper, dramatically changes our perception of what it takes to win against an AI.
Until now, in most cases, opponents have tried to play against the AI as if they were playing against a human, trying to take advantage of human-style vulnerabilities. Executing certain moves discovered by the adversarial attack model, moves that would make no sense to human players, could mean victory against an AI.
Images: Elena Popova | DeepMind
In Xataka: This Meta AI has shredded human opponents in complex games. It’s just a sample of its potential.