Verifying Anaconda's expert rating by competing against Chinook: experiments in co-evolving a neural checkers player

Citation
D.B. Fogel and K. Chellapilla, Verifying Anaconda's expert rating by competing against Chinook: experiments in co-evolving a neural checkers player, Neurocomputing, 42, 2002, pp. 69-86
Number of citations
11
Subject categories
AI Robotics and Automatic Control
Journal title
NEUROCOMPUTING
Journal ISSN
0925-2312
Volume
42
Year of publication
2002
Pages
69 - 86
Database
ISI
SICI code
0925-2312(200201)42:<69:VAERBC>2.0.ZU;2-9
Abstract
Since the early days of artificial intelligence, there has been interest in having a computer teach itself how to play a game of skill, such as checkers, at a level that is competitive with human experts. To be truly noteworthy, such efforts should minimize the amount of human intervention in the learning process. Recently, co-evolution has been used to evolve a neural network (called Anaconda) that, when coupled with a minimax search, can evaluate checkerboards and play at the level of a human expert, as indicated by its rating of 2045 on an international web site for playing checkers. The neural network uses only the location, type, and number of pieces on the board as input. No other features that would require human expertise are included. Experiments were conducted to verify the neural network's expert rating by playing it in a 10-game match against a "novice-level" version of Chinook, a world-champion checkers program. The neural network had 2 wins, 4 losses, and 4 draws in the 10-game match. Based on an estimated rating of Chinook at the novice level, the results corroborate Anaconda's expert rating. (C) 2002 Elsevier Science B.V. All rights reserved.
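The core mechanism the abstract describes, a minimax search whose horizon positions are scored by a learned evaluation function, can be sketched as follows. This is an illustrative toy, not the paper's implementation: the evolved neural network is replaced by a naive material count, and the game tree is hand-built rather than generated from checkers rules.

```python
def evaluate(board):
    # Stand-in for Anaconda's evolved neural network: a naive material
    # count (+1 per own piece, -1 per opponent piece). The real evaluator
    # is a neural network fed piece location, type, and count.
    return sum(board)

def minimax(node, maximizing):
    # node is either a list of child nodes (positions reachable in one
    # move) or a horizon board handed to the evaluation function.
    if not isinstance(node, list):
        return evaluate(node)
    vals = [minimax(child, not maximizing) for child in node]
    return max(vals) if maximizing else min(vals)

# Tiny hand-built tree: boards are tuples of signed piece values.
tree = [
    [(1, 1, -1), (1, 1, 1, -1, -1)],  # move A: opponent's best reply still leaves +1
    [(1, 1, 1), (-1, -1, -1)],        # move B: opponent can force -3
]
print(minimax(tree, True))  # → 1 (move A guarantees at least +1)
```

In the co-evolutionary setting, `evaluate` is the component that learns: populations of networks play each other, and selection replaces hand-crafted checkers features with evolved weights.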