DocumentCode :
2423881
Title :
Online algorithms for the multi-armed bandit problem with Markovian rewards
Author :
Tekin, Cem ; Liu, Mingyan
Author_Institution :
Dept. of Electr. Eng. & Comput. Sci., Univ. of Michigan, Ann Arbor, MI, USA
fYear :
2010
fDate :
Sept. 29 2010-Oct. 1 2010
Firstpage :
1675
Lastpage :
1682
Abstract :
We consider the classical multi-armed bandit problem with Markovian rewards. When played, an arm changes its state in a Markovian fashion, while its state remains frozen when it is not played. The player receives a state-dependent reward each time it plays an arm. The number of states and the state transition probabilities of an arm are unknown to the player. The player's objective is to maximize its long-term total reward by learning the best arm over time. We show that, under certain conditions on the state transition probabilities of the arms, a sample-mean-based index policy achieves logarithmic regret uniformly over the total number of trials. This result shows that sample-mean-based index policies can be applied to learning problems under the rested Markovian bandit model without loss of optimality in the order. Moreover, a comparison between Anantharam's index policy and UCB shows that, by choosing a small exploration parameter, UCB can achieve a smaller regret than Anantharam's index policy.
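The sample-mean-based index policy discussed in the abstract belongs to the UCB family: each arm's index is its empirical mean reward plus an exploration bonus controlled by a parameter. A minimal illustrative sketch (for the simpler i.i.d.-reward setting, not the paper's Markovian model; the names `ucb_index`, `run_ucb`, and the parameter `L` are hypothetical, not the paper's notation):

```python
import math

def ucb_index(sample_mean, plays, total_plays, L=2.0):
    """UCB-style index: sample mean plus an exploration bonus.

    L is the exploration parameter; the abstract notes that a
    smaller exploration parameter can yield smaller regret.
    """
    return sample_mean + math.sqrt(L * math.log(total_plays) / plays)

def run_ucb(arms, horizon, L=2.0):
    """Play each arm once, then always play the arm with the
    highest index. `arms` is a list of callables returning a reward."""
    k = len(arms)
    plays = [0] * k
    totals = [0.0] * k
    for t in range(horizon):
        if t < k:
            i = t  # initialization: play each arm once
        else:
            i = max(range(k), key=lambda a: ucb_index(
                totals[a] / plays[a], plays[a], t, L))
        r = arms[i]()
        plays[i] += 1
        totals[i] += r
    return plays, totals
```

Under this policy the suboptimal arms are played only logarithmically often in the horizon, which is the sense in which the regret is logarithmic.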
Keywords :
Markov processes; game theory; learning (artificial intelligence); probability; Markovian rewards; index policy; learning; long-term total reward; multiarmed bandit problem; online algorithms; state transition probabilities; state-dependent reward; Context; Eigenvalues and eigenfunctions; Indexes; Markov processes; Numerical models; Silicon; Space stations;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on
Conference_Location :
Allerton, IL
Print_ISBN :
978-1-4244-8215-3
Type :
conf
DOI :
10.1109/ALLERTON.2010.5707118
Filename :
5707118