AI based on Imperial research beats world champion bridge players | Imperial News
A ‘next generation’ artificial intelligence has beaten the records of eight world champion bridge players for the first time.
The machine learning system, called Nook, was developed by French start-up NukkAI. It is based on probabilistic inductive logic programming (PILP), an AI approach introduced and developed over twenty years by Professor Stephen Muggleton and his research group at Imperial College London.
PILP underpins ‘white box’ AI – a system that learns in a more human way and is more easily understood than the more common ‘black box’ type of AI, whose decision making can be enigmatic to humans.
The system beat the records of eight world champion bridge players at a two-day tournament in Paris by a substantial margin. The human champions played 800 consecutive deals divided into 80 sets of 10.
Each champion played their own and their ‘dummy’ partner’s cards against two robot opponents. The AI then played against the same robots, which had won many previous AI competitions but had paled in comparison to champion human players.
Professor Stephen Muggleton of Imperial’s Department of Computing said: “This is a fundamental breakthrough in the application of white box AI. The outcome of this competition represents a fundamentally important advance in state-of-the-art of AI systems and can be expected in time to have widespread applications in other contexts in science, technology, and industry.”
Unlike board games such as chess, bridge is based on incomplete information about the state of play. This event has highlighted some fundamental similarities in how humans and white box AI approach uncertainty and incomplete information in game play.
Humans learn to play a game by first learning the rules. We then hone our skills by practicing, reading books, sharing knowledge, and interacting with other players. We can also train other humans by describing how we make good choices.
Machine learning-based systems tend towards a different approach, honing their skills by playing billions of games until they learn to perform well. This most common form of machine learning, known as ‘black box’ AI, produces decisions that humans cannot readily explain.
This breakthrough is significant because it demonstrates the power of ‘explainable’, or ‘white box’ machine learning – an AI system that learns by understanding and practicing the game, in the same way that humans do.
This approach means its workings are inherently understandable to humans, which matters in fields such as medical diagnosis, where doctors and patients need to have confidence in the AI’s conclusions.
Professor Muggleton added: “Previous AI successes, like beating humans at chess, are based on black box systems where the human can’t understand how decisions are made. By contrast the white box uses logic and probabilities just like a human.
“White box machine learning is closely related to the way we humans learn incrementally as we carry out everyday tasks. White box systems can explain to humans how game playing decisions are made – something black box AI cannot do so readily.”
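To make the contrast concrete, here is a minimal, hypothetical sketch of the white box idea: decisions driven by explicit logical rules, each weighted with a probability, so the system can report which rule justified its choice. The rules and probabilities below are invented for illustration and are in no way NukkAI’s actual system.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str                           # human-readable description (the explanation)
    probability: float                  # estimated chance the rule leads to a good outcome
    applies: Callable[[dict], bool]     # logical condition on the game state
    action: Callable[[dict], int]       # which card to play if the rule fires

# Invented, simplified card-play heuristics; cards are plain integers.
RULES = [
    Rule("second hand plays low", 0.7,
         lambda s: s["position"] == 2,
         lambda s: min(s["hand"])),
    Rule("third hand plays high", 0.8,
         lambda s: s["position"] == 3,
         lambda s: max(s["hand"])),
    Rule("win the trick cheaply if possible", 0.9,
         lambda s: s["position"] == 4 and max(s["hand"]) > s["highest_so_far"],
         lambda s: min(c for c in s["hand"] if c > s["highest_so_far"])),
]

def choose_card(state: dict) -> tuple[int, str]:
    """Pick the applicable rule with the highest probability and return
    the chosen card together with the rule that justified it."""
    candidates = [r for r in RULES if r.applies(state)]
    if not candidates:
        return min(state["hand"]), "no rule applies: play lowest card"
    best = max(candidates, key=lambda r: r.probability)
    return best.action(state), f"{best.name} (p={best.probability})"

state = {"position": 4, "hand": [3, 9, 12], "highest_so_far": 8}
card, why = choose_card(state)
print(card, "because", why)  # → 9 because win the trick cheaply if possible (p=0.9)
```

A black box system would output only the card; here every choice carries its own justification, which is the property that makes white box AI auditable by humans.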
Bridge deals consist of two phases: the bidding phase, where players bid for the minimum number of tricks they think they can take to win the deal, and the card play phase. The tournament omitted this first part, focusing instead on card play.
Next, the researchers will apply white box methods to the bidding phase. This will allow them to test white box AI’s aptitude for explaining its decisions to other players and for building up a richer shared understanding of the game.