DocumentCode :
423957
Title :
Hybrid model for multiagent reinforcement learning
Author :
Könönen, Ville
Author_Institution :
Neural Networks Res. Center, Helsinki Univ. of Technol., Finland
Volume :
3
fYear :
2004
fDate :
25-29 July 2004
Firstpage :
1793
Abstract :
In this work we propose a new method for reducing the space and computational requirements of multiagent reinforcement learning based on Markov games. The proposed method estimates value functions by using two Q-value tables or function approximators. We formulate the method for both symmetric and asymmetric multiagent reinforcement learning and also discuss some numerical approximation techniques. Additionally, we present a brief literature survey of multiagent reinforcement learning and test the proposed method on a simple example application.
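For orientation only, the sketch below shows the general idea of maintaining a separate Q-value table per agent in a two-agent Markov game with epsilon-greedy action selection and one-step temporal-difference updates. It does not reproduce the paper's hybrid symmetric/asymmetric formulation; the action set, hyperparameters, and helper names are hypothetical illustrations.

```python
# Illustrative sketch only: two agents in a Markov game, each with its own
# Q-value table, updated by a standard one-step Q-learning backup.
# This is NOT the paper's hybrid method; all names and parameters are assumed.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
ACTIONS = [0, 1]  # hypothetical action set shared by both agents

# One Q-table per agent, keyed by (state, own_action), default value 0.0
Q = [defaultdict(float), defaultdict(float)]

def choose_action(agent, state):
    """Epsilon-greedy selection from the agent's own Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[agent][(state, a)])

def update(agent, state, action, reward, next_state):
    """One-step Q-learning backup on the agent's own table."""
    best_next = max(Q[agent][(next_state, a)] for a in ACTIONS)
    td_target = reward + GAMMA * best_next
    Q[agent][(state, action)] += ALPHA * (td_target - Q[agent][(state, action)])
```

In an actual Markov-game setting, the joint action of both agents would determine the state transition and rewards; the per-agent tables above only illustrate the space saving of storing Q-values per agent rather than over the full joint action space.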
Keywords :
Markov processes; function approximation; game theory; learning (artificial intelligence); multi-agent systems; Markov games; Q-value tables; asymmetric multiagent reinforcement learning; function approximators; hybrid model; numerical approximation techniques; symmetric multiagent reinforcement learning; value function estimation; Game theory; Learning systems; Neural networks; Space technology; State-space methods; Testing;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (IJCNN 2004)
ISSN :
1098-7576
Print_ISBN :
0-7803-8359-1
Type :
conf
DOI :
10.1109/IJCNN.2004.1380880
Filename :
1380880