DocumentCode :
3179178
Title :
Localization for a class of two-team zero-sum Markov games
Author :
Chang, Hyeong Soo ; Fu, Michael C.
Author_Institution :
Dept. of Comput. Sci. & Eng., Sogang Univ., Seoul, South Korea
Volume :
5
fYear :
2004
fDate :
14-17 Dec. 2004
Firstpage :
4844
Abstract :
This paper presents a novel concept of "localization" for a class of infinite horizon two-team zero-sum Markov games (MGs) in which a minimizer team of multiple decision makers competes against nature (a maximizer team), which controls disturbances unknown to the minimizer team. The minimizer team is associated with a general joint cost structure but has a special decomposable state/action structure such that each pair of a minimizing agent's action and the random disturbance applied to that agent affects the system's state transitions independently of all other pairs. By localization, the original MG is decomposed into "local" MGs defined only on local state and action spaces. We discuss how to use localization to develop an efficient distributed heuristic scheme for finding an "autonomous" joint policy in which each agent's action is based only on its local state.
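Illustrative sketch (not taken from the paper): a minimal Python rendering of the decomposable structure the abstract describes, where each agent's (action, disturbance) pair drives only its own local state component, and an "autonomous" joint policy lets each agent act on its local state alone. All names (LocalMG, joint_step, autonomous_joint_policy) are hypothetical and stand in for whatever formulation the paper actually uses.

```python
from typing import Callable, Sequence

class LocalMG:
    """One agent's local game: local states, actions, disturbances,
    and a local transition function (s_i, a_i, w_i) -> s_i'."""
    def __init__(self, states, actions, disturbances,
                 transition: Callable[[object, object, object], object]):
        self.states = states
        self.actions = actions
        self.disturbances = disturbances
        self.transition = transition

def joint_step(locals_: Sequence[LocalMG], joint_state, joint_action, joint_disturbance):
    """Joint transition factorizes across agents: component i depends only on
    (s_i, a_i, w_i), matching the independence assumption in the abstract."""
    return tuple(m.transition(s, a, w)
                 for m, s, a, w in zip(locals_, joint_state, joint_action, joint_disturbance))

def autonomous_joint_policy(local_policies: Sequence[Callable], joint_state):
    """An 'autonomous' joint policy: agent i selects its action from its own
    local state only, without observing the other agents' states."""
    return tuple(pi(s) for pi, s in zip(local_policies, joint_state))
```

Under this factorization, each local MG can be treated separately, which is what makes a distributed heuristic over the local state/action spaces plausible; the joint cost, however, remains general and couples the agents.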
Keywords :
Markov processes; game theory; infinite horizon; autonomous joint policy; decomposable state/action structure; distributed heuristic scheme; general joint cost structure; infinite horizon two-team zero-sum Markov games; local action spaces; local state spaces; maximizer team; minimizer team; multi-agent Markov decision processes; multiple decision makers; two-team zero-sum Markov game localization; Computer science; Control systems; Cost function; Decision making; Educational institutions; Game theory; Infinite horizon; Protocols; Robust control; Uncertainty;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
43rd IEEE Conference on Decision and Control (CDC), 2004
ISSN :
0191-2216
Print_ISBN :
0-7803-8682-5
Type :
conf
DOI :
10.1109/CDC.2004.1429563
Filename :
1429563