DocumentCode :
3538768
Title :
Learning to coordinate in a beauty contest game
Author :
Molavi, Pooya ; Eksin, Ceyhun ; Ribeiro, Alejandro ; Jadbabaie, Ali
Author_Institution :
Dept. of Electr. & Syst. Eng., Univ. of Pennsylvania, Philadelphia, PA, USA
fYear :
2013
fDate :
10-13 Dec. 2013
Firstpage :
7358
Lastpage :
7363
Abstract :
We study a dynamic game in which a group of players attempt to coordinate on a desired, but only partially known, outcome. The desired outcome is represented by an unknown state of the world. Agents' stage payoffs are represented by a quadratic utility function that captures the kind of trade-off exemplified by the Keynesian beauty contest: each agent's stage payoff is decreasing in the distance between her action and the unknown state; it is also decreasing in the distance between her action and the average action taken by other agents. The agents thus have the incentive to correctly estimate the state while trying to coordinate with and learn from others. We show that myopic, but Bayesian, agents who repeatedly play this game and observe the actions of their neighbors in a connected network eventually succeed in coordinating on a single action. However, as we show through an example, the consensus action is not necessarily optimal given all the available information.
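A minimal sketch of the stage payoff described above, assuming a quadratic form with a coordination weight \lambda \in (0,1); here \theta denotes the unknown state, a_i agent i's action, and \bar{a}_{-i} the average action of the other agents (the exact weighting and normalization used in the paper may differ):

% Hypothetical quadratic stage utility capturing the beauty-contest trade-off:
% the first term penalizes distance from the unknown state, the second
% penalizes distance from the others' average action.
u_i(a_i, a_{-i}, \theta) = -(1-\lambda)\,(a_i - \theta)^2 - \lambda\,\bigl(a_i - \bar{a}_{-i}\bigr)^2,
\qquad \bar{a}_{-i} = \frac{1}{n-1} \sum_{j \neq i} a_j

Under this assumed form, the myopic best response is a convex combination of agent i's posterior mean of the state and her expectation of the others' average action, a_i = (1-\lambda)\,\mathbb{E}_i[\theta] + \lambda\,\mathbb{E}_i[\bar{a}_{-i}], which is the mechanism through which repeated play and observation of neighbors' actions can drive the network toward a consensus action.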
Keywords :
game theory; learning (artificial intelligence); multi-agent systems; Bayesian agents; Keynesian beauty contest; agent action; agent stage payoffs; beauty contest game; dynamic game; learning; quadratic utility function; state estimation; Bayes methods; Games; History; Probability distribution; Random variables; Robot kinematics;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2013 IEEE 52nd Annual Conference on Decision and Control (CDC)
Conference_Location :
Firenze
ISSN :
0743-1546
Print_ISBN :
978-1-4673-5714-2
Type :
conf
DOI :
10.1109/CDC.2013.6761057
Filename :
6761057