DocumentCode :
3683520
Title :
An improved approach to reinforcement learning in Computer Go
Author :
Michael Dann;Fabio Zambetta;John Thangarajah
Author_Institution :
School of Computer Science and Information Technology, RMIT University, Melbourne, Victoria 3000
fYear :
2015
Firstpage :
169
Lastpage :
176
Abstract :
Monte-Carlo Tree Search (MCTS) has revolutionized Computer Go, with programs based on the algorithm achieving a level of play that previously seemed decades away. However, since the technique involves constructing a search tree, its performance tends to degrade in larger state spaces. Dyna-2 is a hybrid approach that attempts to overcome this shortcoming by combining Monte-Carlo methods with state abstraction. While not competitive with the strongest MCTS-based programs, the Dyna-2-based program RLGO achieved the highest ever rating by a traditional program on the 9×9 Computer Go Server. Plain Dyna-2 uses ε-greedy exploration and a flat learning rate, but we show that the performance of the algorithm can be significantly improved by making some relatively minor adjustments to this configuration. Our strongest modified program achieved an Elo rating 289 points higher than the original in head-to-head play, equivalent to an expected win rate of 84%.
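Note: the abstract names the two configuration elements the paper adjusts (ε-greedy exploration and a flat learning rate) but not the exact replacements. The sketch below is only an illustration of those two mechanisms in a generic tabular value-learning setting; the function names, the harmonic step-size schedule, and the TD(0)-style update are illustrative assumptions, not the authors' method.

    import random

    def epsilon_greedy(q_values, actions, epsilon):
        # With probability epsilon explore a random action, otherwise act greedily.
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: q_values.get(a, 0.0))

    def decayed_step_size(alpha0, visits):
        # Harmonic decay (assumed schedule), as opposed to a flat learning rate.
        return alpha0 / (1.0 + visits)

    def td_update(values, counts, state, next_state, reward, alpha0=0.5, gamma=1.0):
        # Tabular TD(0)-style update using a per-state decaying step size.
        counts[state] = counts.get(state, 0) + 1
        alpha = decayed_step_size(alpha0, counts[state])
        target = reward + gamma * values.get(next_state, 0.0)
        values[state] = values.get(state, 0.0) + alpha * (target - values.get(state, 0.0))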
Keywords :
"Games","Training","Computers","Monte Carlo methods","Shape","Computer science","Information technology"
Publisher :
ieee
Conference_Titel :
Computational Intelligence and Games (CIG), 2015 IEEE Conference on
ISSN :
2325-4270
Electronic_ISBN :
2325-4289
Type :
conf
DOI :
10.1109/CIG.2015.7317910
Filename :
7317910