DocumentCode :
3196035
Title :
Cooperative co-learning: a model-based approach for solving multi-agent reinforcement problems
Author :
Scherrer, Bruno ; Charpillet, François
Author_Institution :
LORIA-INRIA Lorraine, Vandoeuvre-les-Nancy, France
fYear :
2002
fDate :
2002
Firstpage :
463
Lastpage :
468
Abstract :
Solving multiagent reinforcement learning problems is a key issue. Indeed, the complexity of deriving multiagent plans, especially when one uses an explicit model of the problem, increases dramatically with the number of agents. This paper introduces a general iterative heuristic: at each step, one chooses a subgroup of agents and updates their policies to optimize the task, given that the remaining agents follow fixed plans. We analyse this process in a general setting and show how it can be applied to Markov decision processes, partially observable Markov decision processes, and decentralized partially observable Markov decision processes.
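The heuristic described in the abstract can be illustrated on a toy problem. The sketch below is not the authors' implementation; it is a minimal Python example, assuming the simplest possible setting (a two-agent cooperative one-shot game with a shared reward matrix, and singleton subgroups): at each step one agent's policy is re-optimized while the other agent's policy stays fixed. This alternating best-response loop monotonically improves the joint value but may stop at a local optimum, which is the characteristic trade-off of such coordinate-ascent schemes.

```python
import numpy as np

# Hypothetical toy problem: a 2-agent cooperative game in which both
# agents receive the same reward R[a1, a2] for joint action (a1, a2).
R = np.array([[4.0, 0.0, 1.0],
              [0.0, 3.0, 2.0],
              [1.0, 2.0, 5.0]])

def co_learn(R, iters=10):
    """Alternating optimization: at each step, fix one agent's
    (deterministic) policy and optimize the other's against it."""
    a1, a2 = 0, 0  # arbitrary initial deterministic policies
    for _ in range(iters):
        a1 = int(np.argmax(R[:, a2]))  # update agent 1, agent 2 fixed
        a2 = int(np.argmax(R[a1, :]))  # update agent 2, agent 1 fixed
    return a1, a2, R[a1, a2]

a1, a2, value = co_learn(R)
# Starting from (0, 0), the loop converges to the local optimum (0, 0)
# with value 4.0, even though the joint optimum is (2, 2) with value 5.0.
```

In the paper's full setting, the same idea is applied to sequential problems: the "policy" of a subgroup is a plan for an MDP, POMDP, or decentralized POMDP, and each update step solves that single-group planning problem with the other agents' plans folded into the environment dynamics.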
Keywords :
Markov processes; computational complexity; decision theory; heuristic programming; iterative methods; learning (artificial intelligence); multi-agent systems; agent subgroup; complexity; cooperative co-learning; decentralized partially observable Markov decision processes; iterative heuristic; model-based approach; multiagent plan derivation; multiagent reinforcement learning problems; multiagent reinforcement problem solving; task optimization; Artificial intelligence; Ecosystems; Iterative algorithms; Organisms;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Proceedings of the 14th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2002)
ISSN :
1082-3409
Print_ISBN :
0-7695-1849-4
Type :
conf
DOI :
10.1109/TAI.2002.1180839
Filename :
1180839