• DocumentCode
    3735633
  • Title
    Q-learning vs. FRIQ-learning in the Maze problem
  • Author
    Tamás Tompa;Szilveszter Kovács
  • Author_Institution
    Department of Information Technology, University of Miskolc, Miskolc, Hungary
  • fYear
    2015
  • Firstpage
    545
  • Lastpage
    550
  • Abstract
    The goal of this paper is to give a demonstrative example introducing the benefits of FRIQ-learning (Fuzzy Rule Interpolation-based Q-learning) over traditional discrete Q-learning. The chosen example is an easily scalable discrete-state, discrete-action-space task: the Maze problem. The main difference between the two studied reinforcement learning methods is that traditional Q-learning uses discrete state, action and Q-function representations, while FRIQ-learning uses a continuous state-action space and a Fuzzy Rule Interpolation-based Q-function representation. To compare the convergence speed of the two methods, both start from an empty knowledge base (a zero Q-table for Q-learning, an empty rule base for FRIQ-learning), follow the same policy and stop at the same performance condition. In the paper the Maze problem is studied with different obstacle configurations and at different scales. (An illustrative tabular Q-learning sketch for such a maze task follows this record.)
  • Keywords
    "Interpolation","Adaptation models","Visualization","Learning (artificial intelligence)","MATLAB","Knowledge based systems","Man machine systems"
  • Publisher
    ieee
  • Conference_Titel
    2015 6th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)
  • Type
    conf
  • DOI
    10.1109/CogInfoCom.2015.7390652
  • Filename
    7390652
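
As a rough illustration of the tabular side of the comparison described in the abstract, below is a minimal Q-learning sketch on a toy grid maze. It is not the paper's MATLAB implementation; the maze layout, rewards and hyperparameters are assumptions chosen for brevity. FRIQ-learning would replace the dense Q-table with an incrementally built fuzzy rule base whose Q-values are obtained by fuzzy rule interpolation over a continuous state-action space.

import random

# Assumed toy maze: 4x4 grid, start at (0, 0), goal at (3, 3), two wall cells.
ROWS, COLS = 4, 4
GOAL = (3, 3)
WALLS = {(1, 1), (2, 1)}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

# Q-table starts at all zeros, matching the paper's empty-knowledge-base start.
Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS) for a in range(4)}

def step(state, a):
    """Apply action a; hitting a wall or the border leaves the state unchanged."""
    r, c = state
    dr, dc = ACTIONS[a]
    nxt = (r + dr, c + dc)
    if not (0 <= nxt[0] < ROWS and 0 <= nxt[1] < COLS) or nxt in WALLS:
        nxt = state
    reward = 1.0 if nxt == GOAL else -0.01  # assumed reward scheme
    return nxt, reward, nxt == GOAL

alpha, gamma, eps = 0.1, 0.95, 0.1  # assumed learning rate, discount, exploration
for episode in range(2000):
    s, done = (0, 0), False
    while not done:
        # Epsilon-greedy action selection over the discrete action set.
        if random.random() < eps:
            a = random.randrange(4)
        else:
            a = max(range(4), key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        best_next = max(Q[(s2, x)] for x in range(4))
        # Standard tabular Q-learning update; FRIQ-learning instead inserts/updates
        # fuzzy rules and interpolates Q between them rather than indexing a table.
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2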