DocumentCode
3393286
Title
Learning desirable actions in two-player two-action games
Author
Moriyama, Koichi
Author_Institution
Dept. of Comput. Sci., Tokyo Inst. of Technol., Japan
fYear
2005
fDate
4-8 April 2005
Firstpage
495
Lastpage
500
Abstract
Reinforcement learning is widely used to let an autonomous agent learn actions in an environment, and recently it has also been applied in multi-agent contexts in which several agents share an environment. Most multi-agent reinforcement learning algorithms aim to converge to a Nash equilibrium of game theory, but reaching one does not necessarily mean a desirable result. On the other hand, there are several methods that aim to depart from unfavorable Nash equilibria, but they use other agents' information for learning, and the conditions under which they work have not yet been analyzed and discussed in detail. In this paper, we first identify sufficient conditions on symmetric two-player two-action games that show whether or not reinforcement learning agents learn to bring about the desirable result. After that, we construct a new method that does not need any other agents' information for learning.
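The following is a minimal sketch (not the paper's method) of the problem the abstract describes: two independent, stateless Q-learners repeatedly playing a symmetric two-player two-action game. The Prisoner's Dilemma payoffs, learning rate, and exploration rate below are illustrative assumptions; the point is that independent learners typically settle on the Nash equilibrium (Defect, Defect) even though (Cooperate, Cooperate) would be the more desirable outcome for both players.

```python
# Illustrative sketch only: independent stateless Q-learning in a symmetric
# 2x2 Prisoner's Dilemma. Payoffs and hyperparameters are assumptions, not
# values from the paper.
import random

C, D = 0, 1  # action indices: Cooperate, Defect
# PAYOFF[my_action][opponent_action] -> my reward (standard PD values)
PAYOFF = [[3, 0],   # I cooperate: opponent cooperates -> 3, defects -> 0
          [5, 1]]   # I defect:    opponent cooperates -> 5, defects -> 1

def run(episodes=20000, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]  # one 2-entry Q-table per agent (stateless)

    def choose(qi):
        if rng.random() < epsilon:
            return rng.randrange(2)          # explore
        return 0 if qi[0] >= qi[1] else 1    # exploit greedily

    for _ in range(episodes):
        a0, a1 = choose(q[0]), choose(q[1])
        r0, r1 = PAYOFF[a0][a1], PAYOFF[a1][a0]
        # stateless Q-learning update (no next-state bootstrap)
        q[0][a0] += alpha * (r0 - q[0][a0])
        q[1][a1] += alpha * (r1 - q[1][a1])
    return q

if __name__ == "__main__":
    q = run()
    for i, qi in enumerate(q):
        greedy = "Cooperate" if qi[0] >= qi[1] else "Defect"
        print(f"agent {i}: Q(C)={qi[0]:.2f}  Q(D)={qi[1]:.2f} -> {greedy}")
    # Both agents typically end up greedy on Defect: the Nash equilibrium,
    # not the mutually preferable (Cooperate, Cooperate) outcome.
```

Running the sketch shows both agents converging to Defect, which illustrates why converging to a Nash equilibrium does not by itself guarantee the desirable result that the paper's conditions and proposed method address.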
Keywords
game theory; learning (artificial intelligence); multi-agent systems; Nash equilibrium; autonomous agent; multiagent reinforcement learning algorithm; two-player two-action games; Autonomous agents; Computer science; Game theory; Information analysis; Learning systems; Machine learning; Multiagent systems; Nash equilibrium; Probability distribution
fLanguage
English
Publisher
ieee
Conference_Titel
Autonomous Decentralized Systems, 2005. ISADS 2005. Proceedings
Print_ISBN
0-7803-8963-8
Type
conf
DOI
10.1109/ISADS.2005.1452119
Filename
1452119