Title :
1-recall reinforcement learning leading to an optimal equilibrium in potential games with discrete and continuous actions
Author :
Tatiana Tatarenko
Author_Institution :
Control Methods and Robotics Lab, Technical University Darmstadt, 64289 Darmstadt, Germany
Abstract :
Game theory serves as a powerful tool for distributed optimization in multiagent systems across a range of applications. In this paper we consider multiagent systems that can be modeled as a potential game whose potential function coincides with a global objective function to be maximized. This approach turns the agents into strategic decision makers and recasts the optimization problem as the problem of learning an optimal equilibrium point in the designed game. In contrast to existing works on payoff-based learning, we deal here with systems in which agents have neither memory nor the ability to communicate, and base their decisions only on the currently played action and the experienced payoff. Because of these restrictions, we use methods of reinforcement learning, stochastic approximation, and learning automata, extensively reviewed and analyzed in [3], [9]. These methods allow us to set up agent dynamics that move the game out of inefficient Nash equilibria and lead it close to an optimal one in both cases of discrete and continuous action sets.
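Illustration (not from the paper): the following minimal Python sketch shows the general class of payoff-based, memoryless dynamics the abstract refers to, using a classical linear reward-inaction learning-automata update in a two-player identical-interest (hence potential) game. The game, the payoff values, and the step-size `step` are assumed for illustration; plain reward-inaction does not by itself guarantee escape from inefficient Nash equilibria, which is the refinement the paper addresses.

```python
# Hedged sketch of payoff-based learning with no memory and no communication:
# each agent updates its own mixed strategy using only the action it just
# played and the payoff it just received.
import numpy as np

rng = np.random.default_rng(0)

def payoff(a1, a2):
    # Identical-interest game: the common payoff is the global objective,
    # normalized to [0, 1].  Profile (1, 1) is the optimal equilibrium,
    # (0, 0) an inefficient one.  (Illustrative values.)
    if a1 == 1 and a2 == 1:
        return 1.0
    if a1 == a2:
        return 0.6
    return 0.1

n_agents, n_actions = 2, 2
step = 0.05                                    # learning rate (assumed value)
p = [np.full(n_actions, 1.0 / n_actions) for _ in range(n_agents)]

for _ in range(20000):
    # Each agent samples an action from its own mixed strategy only.
    a = [rng.choice(n_actions, p=p[i]) for i in range(n_agents)]
    r = payoff(a[0], a[1])                     # payoff experienced this round
    for i in range(n_agents):
        # Linear reward-inaction: shift probability mass toward the action
        # just played, in proportion to the received payoff.
        e = np.zeros(n_actions)
        e[a[i]] = 1.0
        p[i] += step * r * (e - p[i])

print("learned mixed strategies:", [np.round(q, 3) for q in p])
```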
Keywords :
"Games","Optimization","Convergence","Linear programming","Nash equilibrium","Markov processes","Learning (artificial intelligence)"
Conference_Title :
2015 54th IEEE Conference on Decision and Control (CDC)
DOI :
10.1109/CDC.2015.7403282