Title :
Distributed learning in large-scale multi-agent games: A modified fictitious play approach
Author :
Swenson, Brian ; Kar, Soummya ; Xavier, João
Author_Institution :
Dept. of Electr. & Comput. Eng., Carnegie Mellon Univ., Pittsburgh, PA, USA
Abstract :
The paper concerns the development of distributed equilibrium-learning strategies in large-scale multi-agent games with repeated plays. With inter-agent information exchange restricted to a preassigned communication graph, the paper presents a modified version of the fictitious play algorithm that relies only on local neighborhood information exchange for agent policy updates. Under the assumption of identical agent utility functions that are permutation invariant, the proposed distributed algorithm leads to convergence of the network-averaged empirical play histories to a subset of the Nash equilibria, designated as the consensus equilibria. Applications of the proposed distributed framework to strategy design problems encountered in large-scale traffic networks are discussed.
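The abstract's scheme, in broad strokes, combines a consensus (neighborhood-averaging) step over the communication graph with a classical fictitious-play best-response step. The following minimal Python sketch illustrates that combination under illustrative assumptions not taken from the paper: a ring communication graph with uniform averaging weights, an identical permutation-invariant utility for all agents (here an anti-coordination payoff that prefers the less-played action), and a simple step-size rule; the function name `modified_fp` and all parameters are hypothetical, not the authors' notation.

```python
import numpy as np

def modified_fp(adj, n_agents, n_actions, utility, n_rounds=200):
    """Sketch of consensus-based fictitious play (illustrative, not the
    paper's exact algorithm).

    adj     : row-stochastic weight matrix of the communication graph.
    utility : utility(a, freq) -> payoff of action a given the local
              estimate `freq` of the network-averaged empirical play;
              identical for all agents, as the paper assumes.
    """
    # Each agent keeps a local estimate of the network-averaged
    # empirical distribution of play, initialized uniformly.
    est = np.full((n_agents, n_actions), 1.0 / n_actions)
    for t in range(1, n_rounds + 1):
        # Fictitious-play step: best-respond to the local estimate.
        actions = [max(range(n_actions), key=lambda a: utility(a, est[i]))
                   for i in range(n_agents)]
        played = np.eye(n_actions)[actions]        # one-hot played actions
        # Consensus step: average neighbors' estimates over the graph,
        # then fold in the newly observed play with a decaying step size.
        est = adj @ est
        est += (played - est) / (t + 1)
    return est

# Illustrative run: 6 agents on a ring, 2 actions, anti-coordination
# utility (prefer the less-crowded action).
n = 6
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i, i + 1):
        W[i, j % n] = 1.0 / 3.0                    # uniform closed-neighborhood weights
freqs = modified_fp(W, n, 2, lambda a, f: -f[a], n_rounds=500)
print(np.round(freqs, 2))
```

In this symmetric toy game the local estimates settle near the mixed distribution [0.5, 0.5], a stand-in for the "consensus equilibrium" behavior the abstract describes; the paper's actual convergence guarantee concerns the network-averaged empirical histories under its stated assumptions.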
Keywords :
distributed algorithms; game theory; learning (artificial intelligence); multi-agent systems; MFP algorithm; Nash equilibria; agent policy update; consensus equilibria; distributed algorithm; distributed equilibrium learning strategies; identical agent utility functions; inter-agent information exchange; large-scale multiagent games; large-scale traffic networks; local neighborhood information exchange; modified fictitious play algorithm; network-averaged empirical play histories; preassigned communication graph; strategy design problems;
Conference_Titel :
Conference Record of the Forty-Sixth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), 2012
Conference_Location :
Pacific Grove, CA
Print_ISBN :
978-1-4673-5050-1
DOI :
10.1109/ACSSC.2012.6489275