• DocumentCode
    3132774
  • Title
    Q-learning in multi-agent cooperation
  • Author
    Kao-Shing Hwang; Yu-Jen Chen; Tzung-Feng Lin
  • Author_Institution
    Dept. of Electr. Eng., Nat. Chung Cheng Univ., Chiayi
  • fYear
    2008
  • fDate
    23-25 Aug. 2008
  • Firstpage
    1
  • Lastpage
    6
  • Abstract
    Q-learning, one of the most widely used reinforcement learning methods, normally requires well-defined quantized state and action spaces to obtain an optimal policy for a given task. This makes it difficult to apply to real robot tasks, where quantizing continuous state and action spaces degrades the performance of the learned behavior. In this paper, we propose a fuzzy-based CMAC method that calculates contribution values to estimate a continuous action value, making motion smooth and effective. We implement the method in a multi-agent system for real robot applications.
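    The paper itself provides no code. As a rough illustration of the general idea described in the abstract (a CMAC/tile-coding Q-value approximator whose per-action values act as contribution weights that blend discrete candidate actions into one continuous action), here is a minimal Python sketch. The class name, tiling sizes, softmax blending rule, and all parameters are illustrative assumptions, not the authors' algorithm.

    ```python
    # Minimal sketch: Q-learning with a CMAC (tile-coding) approximator that
    # blends discrete candidate actions into one continuous action.  This
    # illustrates the general idea only; the softmax "contribution" weighting
    # and all names/sizes are assumptions, not the method from the paper.
    import numpy as np

    class CMACQ:
        def __init__(self, n_tilings=8, tiles_per_dim=10, n_actions=5,
                     state_low=0.0, state_high=1.0, alpha=0.1, gamma=0.95):
            self.n_tilings = n_tilings
            self.tiles = tiles_per_dim
            self.n_actions = n_actions
            self.low, self.high = state_low, state_high
            self.alpha = alpha / n_tilings   # learning rate shared across tilings
            self.gamma = gamma
            # one weight table per tiling: (tiles, n_actions)
            self.w = np.zeros((n_tilings, tiles_per_dim, n_actions))
            # fixed random offsets shift each tiling (the CMAC receptive fields)
            self.offsets = np.random.uniform(0, 1.0 / tiles_per_dim, n_tilings)

        def _active_tiles(self, s):
            """Index of the active tile in each tiling for a 1-D state s."""
            frac = (s - self.low) / (self.high - self.low)
            idx = ((frac + self.offsets) * self.tiles).astype(int)
            return np.clip(idx, 0, self.tiles - 1)

        def q_values(self, s):
            """Q(s, a) for every discrete candidate action: sum over tilings."""
            idx = self._active_tiles(s)
            return self.w[np.arange(self.n_tilings), idx].sum(axis=0)

        def continuous_action(self, candidate_actions, s, temperature=1.0):
            """Blend candidate actions with softmax contribution weights."""
            q = self.q_values(s)
            c = np.exp((q - q.max()) / temperature)
            c /= c.sum()
            return float(np.dot(c, candidate_actions))

        def update(self, s, a_idx, r, s_next):
            """One Q-learning TD step applied to the active tiles."""
            idx = self._active_tiles(s)
            target = r + self.gamma * self.q_values(s_next).max()
            td = target - self.q_values(s)[a_idx]
            self.w[np.arange(self.n_tilings), idx, a_idx] += self.alpha * td

    if __name__ == "__main__":
        agent = CMACQ()
        wheel_speeds = np.linspace(-1.0, 1.0, agent.n_actions)  # candidate actions
        a = agent.continuous_action(wheel_speeds, s=0.3)        # smooth continuous action
        print(a)
    ```

    Because every tiling contributes a partial estimate, the blended action changes gradually as the state varies, which is the smoothing effect the abstract attributes to the contribution-value scheme.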
  • Keywords
    cerebellar model arithmetic computers; learning (artificial intelligence); multi-agent systems; Q-learning; fuzzy-based CMAC method; multi-agent system; multiagent cooperation; reinforcement learning method; Dynamic programming; Learning systems; Motion estimation; Multiagent systems; Orbital robotics; Quantization; Robot kinematics; State-space methods; Stochastic processes; Uncertainty;
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Advanced Robotics and Its Social Impacts, 2008. ARSO 2008. IEEE Workshop on
  • Conference_Location
    Taipei
  • Print_ISBN
    978-1-4244-2674-4
  • Electronic_ISBN
    978-1-4244-2675-1
  • Type
    conf
  • DOI
    10.1109/ARSO.2008.4653621
  • Filename
    4653621