• DocumentCode
    65507
  • Title
    Incentive Learning in Monte Carlo Tree Search
  • Author
    Kuo-Yuan Kao; I-Chen Wu; Shi-Jim Yen; Yi-Chang Shan
  • Author_Institution
    Dept. of Inf. Manage., Nat. Penghu Univ., Magong, Taiwan
  • Volume
    5
  • Issue
    4
  • fYear
    2013
  • fDate
    Dec. 2013
  • Firstpage
    346
  • Lastpage
    352
  • Abstract
    Monte Carlo tree search (MCTS) is a search paradigm that has been remarkably successful in computer games such as Go. It uses Monte Carlo simulation to evaluate the values of nodes in a search tree; these node values are then used to select actions during subsequent simulations. The performance of MCTS depends heavily on the quality of its default policy, which guides the simulations beyond the search tree. In this paper, we propose an MCTS improvement, called incentive learning, which learns the default policy online. This learning scheme is based on ideas from combinatorial game theory and is therefore particularly useful when the underlying game is a sum of games. To illustrate the effectiveness of incentive learning, we describe a game named Heap-Go and present experimental results on it.
  • Keywords
    Monte Carlo methods; learning (artificial intelligence); tree searching; MCTS; Monte Carlo simulation; Monte Carlo tree search; combinatorial game theory; computer games; incentive learning; node values; default policy learning; search tree; Game theory; Games; Artificial intelligence; combinatorial games; computational intelligence; reinforcement learning
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Computational Intelligence and AI in Games
  • Publisher
    IEEE
  • ISSN
    1943-068X
  • Type
    jour

  • DOI
    10.1109/TCIAIG.2013.2248086
  • Filename
    6468079