DocumentCode
2500507
Title
Learning a Strategy with Neural Approximated Temporal-Difference Methods in English Draughts
Author
Fausser, Stefan ; Schwenker, Friedhelm
Author_Institution
Inst. of Neural Inf. Process., Univ. of Ulm, Ulm, Germany
fYear
2010
fDate
23-26 Aug. 2010
Firstpage
2925
Lastpage
2928
Abstract
English Draughts has a large game-tree complexity and is EXPTIME-complete; although it was recently weakly solved after almost two decades of computation, it remains hard for intelligent computer agents to learn. In this paper we present a Temporal-Difference method whose value function is nonlinearly approximated by a 4-layer multi-layer perceptron. We have built multiple English Draughts playing agents, each starting from a randomly initialized strategy, that use this method during self-play to improve their strategies. We show that the agents are learning by comparing their winning rates relative to their parameters. Our best agent wins against the computer draughts programs Neuro Draughts, KCheckers and CheckerBoard with the easych engine, and loses to Chinook, GuiCheckers and CheckerBoard with the strong cake engine. Overall, our best agent has reached an amateur-league level.
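The core idea in the abstract, a temporal-difference update whose value function is a small neural network trained during self-play, can be sketched in miniature. The code below is not the paper's 4-layer draughts network; it is a minimal TD(0) example on a toy 5-state random walk, with a one-hidden-layer perceptron standing in for the value function. All names, sizes, and hyperparameters are illustrative assumptions.

```python
import math
import random

random.seed(0)

N_STATES = 5          # non-terminal states 0..4 of a simple random walk
N_HIDDEN = 8
ALPHA = 0.1           # learning rate
GAMMA = 1.0

# one hidden layer: one-hot state input -> tanh -> sigmoid value in (0, 1)
W1 = [[random.uniform(-0.5, 0.5) for _ in range(N_STATES)] for _ in range(N_HIDDEN)]
b1 = [0.0] * N_HIDDEN
w2 = [random.uniform(-0.5, 0.5) for _ in range(N_HIDDEN)]
b2 = 0.0

def value(state):
    """Forward pass; returns (value, hidden activations) for backprop."""
    h = [math.tanh(W1[j][state] + b1[j]) for j in range(N_HIDDEN)]  # one-hot input
    z = sum(w2[j] * h[j] for j in range(N_HIDDEN)) + b2
    return 1.0 / (1.0 + math.exp(-z)), h

def td_update(state, target):
    """One gradient step reducing (target - V(state))^2."""
    global b2
    v, h = value(state)
    g = (target - v) * v * (1.0 - v)          # TD error * sigmoid derivative
    for j in range(N_HIDDEN):
        dh = g * w2[j] * (1.0 - h[j] * h[j])  # backprop through tanh
        w2[j] += ALPHA * g * h[j]
        b1[j] += ALPHA * dh
        W1[j][state] += ALPHA * dh            # one-hot input: only this column moves
    b2 += ALPHA * g

for episode in range(5000):
    s = 2                                     # every episode starts in the middle
    while True:
        s_next = s + random.choice((-1, 1))
        if s_next < 0:                        # left terminal, reward 0
            td_update(s, 0.0)
            break
        if s_next >= N_STATES:                # right terminal, reward 1
            td_update(s, 1.0)
            break
        v_next, _ = value(s_next)
        td_update(s, GAMMA * v_next)          # bootstrap on the network's own estimate
        s = s_next

estimates = [value(s)[0] for s in range(N_STATES)]
```

After training, the learned values should increase from the left end toward the rewarding right end, mirroring how the paper's agents refine a position-evaluation function from self-play outcomes alone.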
Keywords
computational complexity; computer games; game theory; multi-agent systems; multilayer perceptrons; trees (mathematics); CheckerBoard; Chinook; EXPTIME-complete; English Draughts; GuiCheckers; KCheckers; Neuro Draughts; cake engine; computer draughts programs; easych engine; game-tree complexity; intelligent computer agents; multilayer perceptron; neural approximated temporal-difference method; winning-quote; Computers; Estimation; Games; Intelligent agent; Materials; Neurons; Training; Board Games; Draughts; Neural Networks; Reinforcement Learning;
fLanguage
English
Publisher
ieee
Conference_Titel
2010 20th International Conference on Pattern Recognition (ICPR)
Conference_Location
Istanbul
ISSN
1051-4651
Print_ISBN
978-1-4244-7542-1
Type
conf
DOI
10.1109/ICPR.2010.717
Filename
5597057