DocumentCode :
2211081
Title :
Communications for improving policy computation in distributed POMDPs
Author :
Nair, R.; Tambe, M.; Roth, M.; Yokoo, M.
Author_Institution :
Computer Science Dept., Univ. of Southern California
fYear :
2004
fDate :
23 July 2004
Firstpage :
1098
Lastpage :
1105
Abstract :
Distributed Partially Observable Markov Decision Problems (POMDPs) are emerging as a popular approach for modeling multiagent teamwork, where a group of agents works together to jointly maximize a reward function. Since finding the optimal joint policy for a distributed POMDP is NEXP-complete when no assumptions are made about the domain, several locally optimal approaches have emerged as viable alternatives. However, communicative actions have been largely ignored in these locally optimal algorithms, or applied only under restrictive assumptions about the domain. In this paper, we show how communicative acts can be explicitly introduced to find locally optimal joint policies in which agents coordinate better through the synchronization that communication achieves. Furthermore, introducing communication allows us to develop a novel compact policy representation that yields savings in both space and time, which we verify empirically. Finally, imposing constraints on communication, such as requiring that agents never go more than K steps without communicating, yields even greater space and time savings.
Keywords :
Distributed computing; Humans; Multiagent systems; Observability; Performance loss; Teamwork; Uncertainty;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2004)
Conference_Location :
New York, NY, USA
Print_ISBN :
1-58113-864-4
Type :
conf
Filename :
1373631