DocumentCode :
1803468
Title :
Distributed learning in large-scale multi-agent games: A modified fictitious play approach
Author :
Swenson, Brian ; Kar, Soummya ; Xavier, Joao
Author_Institution :
Dept. of Electr. & Comput. Eng., Carnegie Mellon Univ., Pittsburgh, PA, USA
fYear :
2012
fDate :
4-7 Nov. 2012
Firstpage :
1490
Lastpage :
1495
Abstract :
The paper concerns the development of distributed equilibrium-learning strategies in large-scale multi-agent games with repeated play. With inter-agent information exchange restricted to a preassigned communication graph, the paper presents a modified version of the fictitious play algorithm that relies only on local neighborhood information exchange for agent policy updates. Under the assumption of identical agent utility functions that are permutation invariant, the proposed distributed algorithm leads to convergence of the network-averaged empirical play histories to a subset of the Nash equilibria, designated as the consensus equilibria. Applications of the proposed distributed framework to strategy design problems encountered in large-scale traffic networks are discussed.
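The abstract describes agents that update policies from local neighborhood information only, with the network-averaged empirical play histories converging to consensus equilibria. The sketch below is an illustrative reading of that scheme, not the paper's exact algorithm: the uniform neighborhood-averaging weights, the 1/t step size, the binary action set, and the function name `modified_fictitious_play` are all assumptions made for the example.

```python
import random

def modified_fictitious_play(adj, utility, steps=200, seed=0):
    """Distributed fictitious-play sketch with local consensus averaging.

    adj     : adjacency lists of the (connected) communication graph.
    utility : common permutation-invariant payoff u(action, freq_of_ones),
              identical across agents and depending only on the fraction
              of agents playing action 1.
    Each agent i keeps q[i], a local estimate of the network-averaged
    empirical frequency of action 1; it mixes q[i] with its neighbors'
    estimates, best-responds to the mixed value, and folds its own play
    back into the estimate with a 1/t step size.
    """
    random.seed(seed)
    n = len(adj)
    q = [random.random() for _ in range(n)]   # local frequency estimates
    actions = [0] * n
    for t in range(1, steps + 1):
        # Consensus step: average own estimate with the neighborhood's.
        mixed = [(q[i] + sum(q[j] for j in adj[i])) / (1 + len(adj[i]))
                 for i in range(n)]
        # Best-response step against the mixed empirical estimate.
        for i in range(n):
            actions[i] = max((0, 1), key=lambda a: utility(a, mixed[i]))
        # Fictitious-play update of the empirical frequency.
        for i in range(n):
            q[i] = mixed[i] + (actions[i] - mixed[i]) / t
    return q, actions
```

For instance, with the identical coordination payoff `u(a, f) = f if a == 1 else 1 - f` on a complete graph, all local estimates agree after one consensus round and the agents settle on a common action, matching the consensus-equilibrium behavior the abstract describes; on sparser graphs, convergence is slower and depends on the graph's connectivity.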
Keywords :
distributed algorithms; game theory; learning (artificial intelligence); multi-agent systems; MFP algorithm; Nash equilibria; agent policy update; consensus equilibria; distributed algorithm; distributed equilibria learning strategies; identical agent utility functions; inter-agent information exchange; large-scale multiagent games; large-scale traffic networks; local neighborhood information exchange; modified fictitious play algorithm; network-averaged empirical play histories; preassigned communication graph; strategy design problems;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Conference Record of the Forty-Sixth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), 2012
Conference_Location :
Pacific Grove, CA
ISSN :
1058-6393
Print_ISBN :
978-1-4673-5050-1
Type :
conf
DOI :
10.1109/ACSSC.2012.6489275
Filename :
6489275