Title :
Multi-agent differential graphical games: Nash online adaptive learning solutions
Author :
Abouheaf, Mohammed I. ; Lewis, Frank L.
Author_Institution :
Arlington Res. Inst., Univ. of Texas, Arlington, TX, USA
Abstract :
This paper studies a class of multi-agent graphical games, termed differential graphical games, in which the interactions between agents are prescribed by a communication graph structure. Ideas from cooperative control are used to achieve synchronization of the agents to the dynamics of a leader node. New coupled Bellman and Hamilton-Jacobi-Bellman equations are developed for this class of games using integral reinforcement learning. Nash solutions are characterized in terms of the solutions to a set of coupled continuous-time Hamilton-Jacobi-Bellman equations. A multi-agent policy iteration algorithm is given to learn the Nash solution in real time without knowing the complete dynamic models of the agents, and a proof of convergence for this algorithm is provided. An online multi-agent method based on policy iteration is developed that uses critic network structures to solve all the Hamilton-Jacobi-Bellman equations of the graphical game simultaneously.
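For concreteness, the following is a minimal sketch of the local error dynamics, cost, integral reinforcement learning (IRL) Bellman equation, coupled Hamilton-Jacobi-Bellman (HJB) equation, and policy iteration steps typically used in differential graphical games; the notation (local neighborhood tracking error \(\delta_i\), in-degree \(d_i\), pinning gain \(g_i\), edge weights \(e_{ij}\), weighting matrices \(Q_{ii}, R_{ii}, R_{ij}\)) and sign conventions are assumptions of this sketch, not quoted from the paper.
\[
\dot{\delta}_i = A\,\delta_i + (d_i + g_i)\,B_i u_i - \sum_{j \in N_i} e_{ij}\, B_j u_j,
\qquad
J_i = \tfrac{1}{2}\int_0^{\infty}\!\Big(\delta_i^{\top} Q_{ii}\,\delta_i + u_i^{\top} R_{ii}\, u_i + \sum_{j \in N_i} u_j^{\top} R_{ij}\, u_j\Big)\,dt .
\]
The IRL Bellman equation over a reinforcement interval \(T\) avoids explicit knowledge of the drift dynamics:
\[
V_i\big(\delta_i(t)\big) = \tfrac{1}{2}\int_t^{t+T}\!\Big(\delta_i^{\top} Q_{ii}\,\delta_i + u_i^{\top} R_{ii}\, u_i + \sum_{j \in N_i} u_j^{\top} R_{ij}\, u_j\Big)\,d\tau + V_i\big(\delta_i(t+T)\big).
\]
The coupled HJB equation for agent \(i\) and the associated best-response policy take the form
\[
0 = \tfrac{1}{2}\Big(\delta_i^{\top} Q_{ii}\,\delta_i + u_i^{*\top} R_{ii}\, u_i^{*} + \sum_{j \in N_i} u_j^{*\top} R_{ij}\, u_j^{*}\Big)
+ \nabla V_i^{\top}\Big(A\,\delta_i + (d_i + g_i)\,B_i u_i^{*} - \sum_{j \in N_i} e_{ij}\, B_j u_j^{*}\Big),
\qquad
u_i^{*} = -(d_i + g_i)\, R_{ii}^{-1} B_i^{\top} \nabla V_i .
\]
Under these conventions, policy iteration alternates policy evaluation, solving the IRL Bellman equation for \(V_i^{k}\) given the current policies \(u^{k}\) (e.g., with a critic approximation \(V_i \approx W_i^{\top}\phi_i(\delta_i)\)), and policy improvement, \(u_i^{k+1} = -(d_i + g_i)\, R_{ii}^{-1} B_i^{\top} \nabla V_i^{k}\), carried out by all agents simultaneously.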
Keywords :
continuous time systems; differential games; iterative methods; learning (artificial intelligence); multi-agent systems; synchronisation; Nash online adaptive learning solutions; communication graph structure; cooperative control; coupled Bellman-Hamilton-Jacobi-Bellman equations; coupled continuous-time Hamilton-Jacobi-Bellman equations; integral reinforcement learning; leader dynamics; multiagent differential graphical games; multiagent policy iteration algorithm; online multiagent method; policy iterations; synchronization; Equations; Games; Heuristic algorithms; Jacobian matrices; Critic network structures; graphical games; integral reinforcement learning; optimal control
Conference_Title :
2013 IEEE 52nd Annual Conference on Decision and Control (CDC)
Conference_Location :
Firenze
Print_ISBN :
978-1-4673-5714-2
DOI :
10.1109/CDC.2013.6760804