• DocumentCode
    3373759
  • Title
    Cooperative Q-learning based on maturity of the policy
  • Author
    Yang, Mao ; Tian, Yantao ; Liu, Xiaomei
  • Author_Institution
    Sch. of Commun. Eng., Jilin Univ., Changchun, China
  • fYear
    2009
  • fDate
    9-12 Aug. 2009
  • Firstpage
    1352
  • Lastpage
    1356
  • Abstract
    To improve the convergence speed of reinforcement learning and to avoid local optima in multi-robot systems, a new cooperative Q-learning method based on the maturity of the policy is presented. The learning process is executed on a blackboard architecture, making use of all the robots in the training scenario to explore the learning space and collect experiences. The reinforcement learning algorithm is divided into two types, constant credit-degree and variable credit-degree, and the particle swarm optimization (PSO) algorithm is adopted to find the optimal constant credit factor. The method is applied to a fire-disaster response task. Simulation experiments verify the effectiveness of the proposed algorithm.
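    The abstract describes robots pooling experience through a blackboard and weighting each contribution by a credit degree, but this record gives no formulas. The following Python toy is purely illustrative: the update rule (a credit-weighted tabular Q-update on a shared table), the names (`credit_weighted_update`, `ALPHA`, `GAMMA`), and all values are assumptions, not the authors' method.

    ```python
    # Illustrative sketch only, NOT the paper's algorithm: a shared Q-table
    # (the "blackboard") updated from each robot's experience, with the
    # learning rate scaled by that robot's assumed credit degree w in [0, 1].
    ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount factor (assumed)
    ACTIONS = [0, 1]         # toy action set

    def credit_weighted_update(q, experiences, credits):
        """Apply each robot's (s, a, r, s') experience to the shared Q-table,
        scaling the TD update by that robot's credit degree."""
        for robot_id, (s, a, r, s_next) in experiences.items():
            w = credits[robot_id]
            best_next = max(q.get((s_next, b), 0.0) for b in ACTIONS)
            td_error = r + GAMMA * best_next - q.get((s, a), 0.0)
            q[(s, a)] = q.get((s, a), 0.0) + w * ALPHA * td_error
        return q

    # Two robots report experience for the same state-action pair; the
    # second robot's update counts for half as much via its credit degree.
    q_table = {}
    batch = {"robot0": (0, 1, 1.0, 1), "robot1": (0, 1, 0.5, 1)}
    q_table = credit_weighted_update(q_table, batch,
                                     {"robot0": 1.0, "robot1": 0.5})
    ```

    In the paper's constant credit-degree variant, weights like these would be fixed and tuned by PSO; in the variable variant they would change during learning.
    
    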
  • Keywords
    blackboard architecture; learning (artificial intelligence); multi-robot systems; particle swarm optimisation; constant credit-degree learning; cooperative Q-learning; PSO; policy maturity; reinforcement learning; variable credit-degree learning
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2009 International Conference on Mechatronics and Automation (ICMA 2009)
  • Conference_Location
    Changchun
  • Print_ISBN
    978-1-4244-2692-8
  • Electronic_ISBN
    978-1-4244-2693-5
  • Type
    conf
  • DOI
    10.1109/ICMA.2009.5246732
  • Filename
    5246732