  • DocumentCode
    460747
  • Title
    Fuzzy Q-Map Algorithm for Reinforcement Learning

  • Author
    Lee, YoungAh ; Hong, SeokMi
  • Author_Institution
    Dept. of Comput. Eng., KyungHee Univ., Seocheon-Dong
  • Volume
    1
  • fYear
    2006
  • fDate
    Nov. 2006
  • Firstpage
    1
  • Lastpage
    6
  • Abstract
    In reinforcement learning, it is important to obtain nearly correct answers early, since good early predictions reduce subsequent prediction error and accelerate learning. We propose fuzzy Q-map, a function approximation algorithm based on online fuzzy clustering, to accelerate learning. Fuzzy Q-map can handle the uncertainty that arises from the absence of an environment model. Applying membership functions to reinforcement learning reduces the prediction error and the destructive interference caused by changes in the distribution of the training data. To evaluate fuzzy Q-map's performance, we experimented on the mountain car problem and compared it with CMAC: CMAC achieves a prediction rate of 80% from 250 training data, while fuzzy Q-map learns faster and maintains the 80% prediction rate from 250 training data. Fuzzy Q-map may be applied to simulation domains that involve uncertainty and complexity. (See the sketch after this record.)
  • Keywords
    function approximation; fuzzy set theory; learning (artificial intelligence); pattern clustering; function approximation; fuzzy Q-map algorithm; membership function; online fuzzy clustering; prediction error; reinforcement learning; Acceleration; Approximation algorithms; Clustering algorithms; Computer errors; Function approximation; Interference; State-space methods; Training data; Uncertainty; Unsupervised learning;
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Computational Intelligence and Security, 2006 International Conference on
  • Conference_Location
    Guangzhou
  • Print_ISBN
    1-4244-0605-6
  • Electronic_ISBN
    1-4244-0605-6
  • Type
    conf
  • DOI
    10.1109/ICCIAS.2006.294080
  • Filename
    4072033
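
The abstract above only outlines the approach. As a rough illustration (not the authors' exact formulation), the following Python sketch shows one plausible reading of a membership-weighted Q-value approximator: Gaussian memberships over fuzzy cluster centres, with the temporal-difference update distributed across centres in proportion to membership degree. The class name FuzzyQMap, the Gaussian membership function, and every hyperparameter are assumptions introduced here for illustration.

import numpy as np

class FuzzyQMap:
    """Hypothetical membership-weighted Q-value approximator (illustrative sketch only)."""

    def __init__(self, centers, n_actions, alpha=0.1, gamma=0.99, sigma=0.15):
        self.centers = np.asarray(centers, dtype=float)     # (K, state_dim) fuzzy cluster centres
        self.q = np.zeros((len(self.centers), n_actions))   # one row of Q-values per centre
        self.alpha, self.gamma, self.sigma = alpha, gamma, sigma

    def membership(self, state):
        # Gaussian membership of the state in each cluster, normalised to sum to 1.
        d2 = np.sum((self.centers - state) ** 2, axis=1)
        mu = np.exp(-d2 / (2.0 * self.sigma ** 2))
        return mu / (mu.sum() + 1e-12)

    def q_values(self, state):
        # Q(s, a) approximated as a membership-weighted average of the stored Q-values.
        return self.membership(state) @ self.q

    def update(self, state, action, reward, next_state, done):
        # Standard Q-learning target; the TD error is spread over clusters by membership.
        mu = self.membership(state)
        target = reward if done else reward + self.gamma * self.q_values(next_state).max()
        td_error = target - self.q_values(state)[action]
        self.q[:, action] += self.alpha * mu * td_error

if __name__ == "__main__":
    # Toy usage: an 8x8 grid of centres over a normalised 2-D state space,
    # e.g. mountain-car position/velocity rescaled to [0, 1].
    grid = np.linspace(0.0, 1.0, 8)
    centers = np.array([[p, v] for p in grid for v in grid])
    fq = FuzzyQMap(centers, n_actions=3)
    s = np.array([0.3, 0.7])
    a = int(np.argmax(fq.q_values(s)))
    fq.update(s, a, reward=-1.0, next_state=np.array([0.32, 0.68]), done=False)

Because the memberships are soft and normalised, an update at one state adjusts nearby centres proportionally, which is one way soft membership functions can dampen the destructive interference mentioned in the abstract.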