Title :
Approximation of Expected Reward Value in MMDP
Author :
Hanna, Hosam ; Yao, Jin ; Zreik, Khaldoun
Author_Institution :
Comput. Sci. Dept., GREYC - Caen Univ., Caen
Abstract :
Among researchers in multi-agent systems, there has been growing interest in the coordination problem, particularly when agents' behaviors are stochastic. A multiagent Markov decision process (MMDP) is an efficient way to obtain an optimal sequence of decisions for all agents to take, but solving it is computationally hard. Existing methods for solving an MMDP assume that each agent has precise knowledge of the others' behaviors. In this paper, we consider a fully cooperative multi-agent system in which agents must coordinate their uncertain behaviors and each agent can only partially observe the state of the others. We present a method that allows agents to construct and solve an MMDP by exchanging the expected reward values of some states. For large systems, we present a model that approximates the expected reward value using distributed MDPs.
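The "expected reward value" the abstract refers to is the quantity computed by standard MDP value iteration. As background, here is a minimal single-agent value-iteration sketch; the toy MDP (states, actions, transitions, rewards, discount factor) is an invented example for illustration, not the paper's model or its distributed approximation.

```python
# Illustrative sketch: plain value iteration, the building block behind
# expected-reward-value estimates in MDP-based coordination.
# The MDP below is a made-up two-state example (assumption, not from the paper).

GAMMA = 0.9  # discount factor (assumed)

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}

def value_iteration(transitions, gamma=GAMMA, tol=1e-6):
    """Iterate the Bellman optimality backup until values stop changing."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            # Best expected discounted return over the available actions.
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration(transitions)
print(V)  # expected reward value of each state under the optimal policy
```

In the paper's setting, agents would exchange such per-state values rather than full models; for large systems the abstract proposes approximating them via distributed MDPs instead of solving the joint MMDP directly.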
Keywords :
Markov processes; approximation theory; decision making; multi-agent systems; MMDP; coordination problem; expected reward value; multiagent Markov decision process; Centralized control; Computer science; Control systems; Laboratories; Multiagent systems; State-space methods; Stochastic systems; Uncertainty;
Conference_Title :
3rd International Conference on Information and Communication Technologies: From Theory to Applications (ICTTA 2008)
Conference_Location :
Damascus
Print_ISBN :
978-1-4244-1751-3
Electronic_ISBN :
978-1-4244-1752-0
DOI :
10.1109/ICTTA.2008.4530314