Title :
Sparse Cooperative Multi-agent Q-learning Based on Vector Potential Field
Author :
Liu, Liang; Li, Longshu
Author_Institution :
Key Laboratory of Intelligent Computing and Signal Processing, Anhui University, Hefei, China
Abstract :
Multi-agent reinforcement learning (RL) problems can in principle be solved by treating the joint actions of the agents as single actions and applying single-agent Q-learning. However, the number of joint actions grows exponentially with the number of agents, rendering this approach infeasible for most problems. In this paper we investigate a sparse cooperative representation of the Q-function based on a vector potential field, in which joint actions are considered only in those states where coordination is actually required; in all other states single-agent Q-learning is applied. This offers a compact state-action value representation without compromising much in terms of solution quality. The coordinated states are identified by means of the vector potential field. We have performed experiments in RoboCup simulation-2D and compared our algorithm to other multi-agent reinforcement learning algorithms, with promising results.
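As a rough illustration of the idea (a sketch, not the authors' implementation), the Python code below switches between per-agent Q-tables and a single joint-action Q-table depending on a coordination test. The coordination test here is a hypothetical stand-in: a state is treated as coordinated when the magnitude of a toy pairwise potential-field vector exceeds a threshold. All names and constants (is_coordinated, THRESHOLD, the potential function) are assumptions made for illustration.

    # Sparse cooperative Q-learning sketch: individual Q-tables everywhere,
    # a joint-action Q-table only in "coordinated" states.
    import itertools
    import random
    from collections import defaultdict

    N_AGENTS = 2
    ACTIONS = [0, 1, 2, 3]                                   # per-agent action set
    JOINT_ACTIONS = list(itertools.product(ACTIONS, repeat=N_AGENTS))
    ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
    THRESHOLD = 1.0                                          # assumed potential-field cutoff

    q_single = [defaultdict(float) for _ in range(N_AGENTS)] # Q_i(s, a_i)
    q_joint = defaultdict(float)                             # Q(s, (a_1, ..., a_n))

    def potential_vector(positions):
        # Toy repulsive potential between the two agents; the assumption is
        # that coordination is needed when agents are close (large potential).
        (x1, y1), (x2, y2) = positions
        d2 = (x1 - x2) ** 2 + (y1 - y2) ** 2 + 1e-6
        return ((x1 - x2) / d2, (y1 - y2) / d2)

    def is_coordinated(positions):
        vx, vy = potential_vector(positions)
        return (vx ** 2 + vy ** 2) ** 0.5 > THRESHOLD

    def select_actions(state, positions):
        # Epsilon-greedy over joint actions; greedy action comes from the
        # joint table in coordinated states, otherwise from each agent's
        # individual table.
        if random.random() < EPS:
            return random.choice(JOINT_ACTIONS)
        if is_coordinated(positions):
            return max(JOINT_ACTIONS, key=lambda ja: q_joint[(state, ja)])
        return tuple(max(ACTIONS, key=lambda a: q_single[i][(state, a)])
                     for i in range(N_AGENTS))

    def update(state, positions, joint_action, reward, next_state):
        # Simplification: the bootstrap target is drawn from the same kind of
        # table as the current state (joint or individual).
        if is_coordinated(positions):
            best_next = max(q_joint[(next_state, ja)] for ja in JOINT_ACTIONS)
            key = (state, joint_action)
            q_joint[key] += ALPHA * (reward + GAMMA * best_next - q_joint[key])
        else:
            for i, a in enumerate(joint_action):
                # Independent Q-learning updates on the shared reward.
                best_next = max(q_single[i][(next_state, b)] for b in ACTIONS)
                key = (state, a)
                q_single[i][key] += ALPHA * (reward + GAMMA * best_next
                                             - q_single[i][key])

The compactness claim in the abstract corresponds to the table sizes here: the joint table is only populated for the (presumably few) coordinated states, so storage stays close to N_AGENTS individual tables rather than one table over the exponential joint-action space.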
Keywords :
learning (artificial intelligence); multi-agent systems; RoboCup simulation-2D; compact state-action value representation; multiagent reinforcement learning; single-agent Q-learning; sparse cooperative multiagent Q-learning; vector potential field; Game theory; History; Intelligent agent; Intelligent systems; Learning; Multiagent systems; Signal processing; Signal processing algorithms; MAS; Q-learning; Sparse Cooperative; Vector Potential Field;
Conference_Titel :
2009 WRI Global Congress on Intelligent Systems (GCIS '09)
Conference_Location :
Xiamen, China
Print_ISBN :
978-0-7695-3571-5
DOI :
10.1109/GCIS.2009.44