Title :
A Speedup Convergent Method for Multi-Agent Reinforcement Learning
Author_Institution :
Mech. & Electr. Eng. Sch., Beijing Inf. Sci. & Technol. Univ., Beijing, China
Abstract :
To achieve cooperation in a multi-agent system, an agent must consider the joint action, because the outcome of its own action often depends on the other agents' behaviors. However, joint-action reinforcement learning suffers from a slow convergence rate because of the enormous learning space produced by joint actions and information sharing. In this paper, a Speedup Convergent Reinforcement Learning Algorithm (SCRLA) is presented for multi-agent cooperation tasks; it requires every agent to learn and evaluate all the actions that the other agents may execute. A simulation of a multi-agent inverted pendulum, a multi-agent cooperation task, is carried out to test the efficiency of SCRLA, and the results show that SCRLA converges faster and reaches the cooperation strategy much sooner than the primitive multi-agent reinforcement learning algorithm.
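(Illustrative sketch, not from the paper: the abstract does not give SCRLA's update rule, so the Python fragment below only illustrates the joint-action learning idea it builds on, in which each agent's value estimate covers the actions the other agent may execute. All names, sizes, and parameters are hypothetical; an outer environment loop would supply reward and next_state.)

    import numpy as np

    # Hypothetical sizes for a small cooperative task (not taken from the paper).
    N_STATES, N_ACTIONS = 10, 3           # states and per-agent action count
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

    # Joint-action Q-table: Q[s, a1, a2] evaluates the pair of actions,
    # so each agent accounts for what the other agent may execute.
    Q = np.zeros((N_STATES, N_ACTIONS, N_ACTIONS))

    def select_joint_action(state, rng):
        """Epsilon-greedy selection over the joint action space."""
        if rng.random() < EPSILON:
            return rng.integers(N_ACTIONS), rng.integers(N_ACTIONS)
        flat = np.argmax(Q[state])                       # best joint action index
        return np.unravel_index(flat, (N_ACTIONS, N_ACTIONS))

    def update(state, a1, a2, reward, next_state):
        """Standard Q-learning backup applied to the joint action."""
        best_next = Q[next_state].max()
        Q[state, a1, a2] += ALPHA * (reward + GAMMA * best_next - Q[state, a1, a2])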
Keywords :
learning (artificial intelligence); multi-agent systems; SCRLA efficiency; information sharing ability; joint action reinforcement learning; multi-agent cooperation task; multi-agent inverted pendulum; multi-agent reinforcement learning; speedup convergent method; Dynamic programming; Information science; Learning; Multiagent systems; Nash equilibrium; Robustness; Space technology; Stochastic processes; Stochastic systems; Testing;
Conference_Title :
2009 International Conference on Information Engineering and Computer Science (ICIECS 2009)
Conference_Location :
Wuhan
Print_ISBN :
978-1-4244-4994-1
DOI :
10.1109/ICIECS.2009.5365958