DocumentCode :
2173207
Title :
Distributed dynamic reinforcement of efficient outcomes in multiagent coordination
Author :
Chasparis, Georgios C. ; Shamma, Jeff S.
Author_Institution :
Dept. of Mech. & Aerosp. Eng., Univ. of California Los Angeles, Los Angeles, CA, USA
fYear :
2007
fDate :
2-5 July 2007
Firstpage :
2505
Lastpage :
2512
Abstract :
We consider the problem of achieving distributed convergence to coordination in a multiagent environment. Each agent is modeled as a learning automaton that repeatedly interacts with an unknown environment, receives a reward, and updates the probability distribution over its next action based on its own previous actions and received rewards. In this class of problems, more than one stable equilibrium (i.e., coordination structure) exists. We analyze the dynamic behavior of the distributed system in terms of convergence to an efficient equilibrium, suitably defined. In particular, we analyze the effect of dynamic processing on convergence properties, where agents incorporate the derivative of their own reward into the decision process (i.e., derivative action). We show that derivative action can serve as an equilibrium selection scheme when the derivative feedback gains are appropriately adjusted.
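Illustrative Sketch :
The abstract describes agents that update action probabilities from their rewards and, under derivative action, from the change in those rewards. The Python sketch below is an illustration only, not the authors' algorithm: the update rule, the identical-interest payoff matrix, and the names PAYOFF, simulate, gamma, and eps are all assumptions. It augments a linear reward-inaction style step with a discrete-time derivative of the reward, weighted by a gain gamma, so that varying gamma can bias which coordination equilibrium the two agents settle on.

import numpy as np

# Hypothetical 2x2 identical-interest coordination game: equilibrium (0,0)
# is efficient (payoff 1.0), (1,1) is inefficient (payoff 0.6),
# and miscoordination pays 0.
PAYOFF = np.array([[1.0, 0.0],
                   [0.0, 0.6]])

def simulate(gamma, steps=20000, eps=0.01, seed=0):
    """Two learning automata with a derivative-action term of gain gamma.

    Illustrative update (not the paper's exact rule):
        p <- p + eps * (r + gamma * (r - r_prev)) * (e_a - p)
    i.e. a linear reward-inaction step driven by the reward plus a
    discrete-time derivative of the reward.
    """
    rng = np.random.default_rng(seed)
    p = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]   # action probabilities
    r_prev = [0.0, 0.0]                                # last-step rewards
    for _ in range(steps):
        a = [rng.choice(2, p=pi) for pi in p]          # sample joint action
        r = [PAYOFF[a[0], a[1]]] * 2                   # common reward to both agents
        for i in range(2):
            e = np.zeros(2)
            e[a[i]] = 1.0                              # unit vector of chosen action
            drive = r[i] + gamma * (r[i] - r_prev[i])  # reward plus derivative term
            p[i] = p[i] + eps * drive * (e - p[i])
            p[i] = np.clip(p[i], 0.0, 1.0)             # crude projection back
            p[i] /= p[i].sum()                         # onto the simplex
            r_prev[i] = r[i]
    return [pi[0] for pi in p]  # probability each agent plays action 0

if __name__ == "__main__":
    for gamma in (0.0, 2.0):
        print(gamma, simulate(gamma))

Comparing the printed probabilities for gamma = 0 and a positive gain gives a rough, qualitative feel for the abstract's claim that derivative feedback gains can steer equilibrium selection.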
Keywords :
convergence; distributed processing; learning automata; multi-agent systems; convergence properties; decision process; derivative action; derivative feedback gains; distributed convergence; distributed dynamic reinforcement; distributed system; efficient equilibrium; equilibrium selection scheme; learning automaton; multiagent coordination; Asymptotic stability; Convergence; Games; Heuristic algorithms; Learning (artificial intelligence); Learning automata; Vectors;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
2007 European Control Conference (ECC)
Conference_Location :
Kos, Greece
Print_ISBN :
978-3-9524173-8-6
Type :
conf
Filename :
7069003