Title :
Prediction learning in multi-agent systems
Author :
Guo, Rui ; Wu, Min ; Chen, Xin ; Cao, Weihua
Author_Institution :
Sch. of Inf. Sci. & Eng., Central South Univ., Changsha, China
Abstract :
In multi-agent systems (MAS), model-free, action-value-based reinforcement learning such as Q-learning suffers from the fact that both the state space and the action space scale exponentially with the number of agents, making the learning process slow and inefficient; moreover, convergence of multi-agent reinforcement learning is not guaranteed when its idealized assumptions do not hold. To address this problem, this paper proposes a two-level learning framework for MAS: the high level is a planner composed of abstract control policies based on prior knowledge; the low level is a prediction Q-learning module. During learning, predicting the next state greatly reduces the action search space, and because the actual state can be perceived after each prediction step, the predictor's performance can easily be improved with known methods. We demonstrate the application of the framework in RoboCup, showing its faster learning and better generalization ability.
Keywords :
learning (artificial intelligence); multi-agent systems; RoboCup; abstract control policies; action search space; model-free action-value based reinforcement learning efficiency; multiagent system; prediction Q-learning module; Convergence; Learning; Machine learning; Markov processes; Training; MAS; prediction learning
Conference_Titel :
Intelligent Control and Automation (WCICA), 2010 8th World Congress on
Conference_Location :
Jinan
Print_ISBN :
978-1-4244-6712-9
DOI :
10.1109/WCICA.2010.5554394