DocumentCode :
550944
Title :
Multi-agent Q-learning with joint state value approximation
Author :
Chen Gang ; Cao Weihua ; Chen Xin ; Wu Min
Author_Institution :
Sch. of Inf. Sci. & Eng., Central South Univ., Changsha, China
fYear :
2011
fDate :
22-24 July 2011
Firstpage :
4878
Lastpage :
4882
Abstract :
This paper addresses the "curse of dimensionality" that arises when scaling reinforcement learning to multi-agent systems: memory requirements and learning time grow exponentially with the number of agents. For cooperative systems, which are widespread among multi-agent systems, the paper proposes a new multi-agent Q-learning algorithm that decomposes joint state and joint action learning into two processes: learning each agent's individual action values, and approximating the maximum value of the joint state. The latter process takes the other agents' actions into account to ensure that the joint action is optimal, and it supports the update of the former. Simulation results show that, compared with Friend-Q learning, the proposed algorithm learns the optimal joint behavior with less memory and at a faster learning speed.
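The decomposition described in the abstract can be pictured with a minimal tabular sketch. The following Python fragment is our illustrative reading of the general idea, not the paper's definitive algorithm: each agent keeps a Q-table over its own action only and bootstraps from a separately learned approximation of the maximum joint-state value, instead of maximizing over the exponential joint-action space as Friend-Q does. All names, sizes, and the exact condition for updating the joint-state value estimate are assumptions made for illustration.

```python
import numpy as np

# Hypothetical problem sizes and hyperparameters (illustrative only).
N_STATES, N_ACTIONS = 10, 4
ALPHA, GAMMA = 0.1, 0.95

# Agent i's Q-table over its OWN action: O(|S| * |A|) entries,
# instead of the O(|S| * |A|^n) joint table Friend-Q maintains.
q_i = np.zeros((N_STATES, N_ACTIONS))

# Learned approximation of the maximum joint-state value V*(s).
v = np.zeros(N_STATES)

def update(s, a_i, r, s_next, joint_action_was_greedy):
    """One Q-learning step for agent i under the decomposition:
    the individual Q-table bootstraps from the approximate joint
    state value v rather than a max over joint actions."""
    td_target = r + GAMMA * v[s_next]
    q_i[s, a_i] += ALPHA * (td_target - q_i[s, a_i])
    # Move the joint-state value toward the observed target only when
    # the executed joint action was greedy for every agent, so that v
    # tracks the value of the optimal joint action (our assumption of
    # how "considering others' actions" could be realized).
    if joint_action_was_greedy:
        v[s] += ALPHA * (td_target - v[s])
```

Under this reading, per-agent storage is O(|S|·|A|) plus O(|S|) for the value estimate, versus the O(|S|·|A|^n) joint table of Friend-Q, which is consistent with the abstract's claim of smaller memory and faster learning.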
Keywords :
approximation theory; learning (artificial intelligence); multi-agent systems; cooperative system; curse of dimensionality problem; joint action learning; joint state value approximation; multiagent Q-learning; reinforcement learning; Games; Joints; Learning; Learning systems; Markov processes; Memory management; Multiagent systems; Cooperative systems; Curse of dimensionality; Decomposition; Multi-agent system; Q-learning
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Control Conference (CCC), 2011 30th Chinese
Conference_Location :
Yantai
ISSN :
1934-1768
Print_ISBN :
978-1-4577-0677-6
Electronic_ISBN :
1934-1768
Type :
conf
Filename :
6001285