Title :
Aspiration learning in coordination games
Author :
Chasparis, Georgios C. ; Shamma, Jeff S. ; Arapostathis, Ari
Author_Institution :
Sch. of Electr. & Comput. Eng., Georgia Inst. of Technol., Atlanta, GA, USA
Abstract :
We consider the problem of distributed convergence to efficient outcomes in coordination games through payoff-based learning dynamics, namely aspiration learning. The proposed learning scheme assumes that players reinforce actions that performed well by continuing to play them, and otherwise randomize among alternative actions. Our first contribution is a characterization of the asymptotic behavior of the Markov chain induced by the iterated process in terms of an equivalent finite-state Markov chain, which simplifies previously introduced analyses of aspiration learning. We then explicitly characterize the behavior of the proposed aspiration learning in a generalized version of so-called coordination games, an example of which is network formation games. In particular, we show that in coordination games the expected fraction of time that the efficient action profile is played can be made arbitrarily large.
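The abstract's description of the learning rule can be illustrated with a short sketch: each player keeps an aspiration level, repeats its action when the realized payoff meets that aspiration, and randomizes otherwise. The following Python snippet is a minimal illustration under assumed parameters (the payoff matrix, step size EPS, and tremble probability LAMBDA are illustrative choices, not the paper's specification).

```python
import random

# Illustrative 2x2 coordination game: (A,A) is the efficient profile,
# (B,B) is a second, less efficient equilibrium.
PAYOFF = [[(2, 2), (0, 0)],
          [(0, 0), (1, 1)]]
EPS, LAMBDA, STEPS = 0.05, 0.02, 20000   # assumed parameters

actions = [0, 1]           # current action of each player
aspirations = [1.0, 1.0]   # current aspiration level of each player
count_efficient = 0

for t in range(STEPS):
    u = PAYOFF[actions[0]][actions[1]]   # realized payoffs this round
    for i in (0, 1):
        if u[i] >= aspirations[i]:
            # satisfied: keep the action, except for a small random tremble
            if random.random() < LAMBDA:
                actions[i] = random.choice((0, 1))
        else:
            # dissatisfied: randomize among the available actions
            actions[i] = random.choice((0, 1))
        # aspiration tracks payoffs via an exponential moving average
        aspirations[i] += EPS * (u[i] - aspirations[i])
    if actions == [0, 0]:
        count_efficient += 1

print("fraction of time at the efficient profile:", count_efficient / STEPS)
```

With small tremble and step-size parameters, the simulated fraction of time at the efficient profile tends to be large, consistent with the qualitative claim in the abstract; the exact guarantees are established analytically in the paper.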
Keywords :
Markov processes; convergence; game theory; iterative methods; learning (artificial intelligence); aspiration learning; asymptotic behavior; coordination games; distributed convergence; equivalent finite-state Markov chain; induced Markov chain; iterated process; learning scheme; network formation games; payoff-based learning dynamics; Convergence; Games; Limiting; Markov processes; Nash equilibrium; Topology
Conference_Titel :
49th IEEE Conference on Decision and Control (CDC), 2010
Conference_Location :
Atlanta, GA, USA
Print_ISBN :
978-1-4244-7745-6
DOI :
10.1109/CDC.2010.5717289