Title :
A structured multiarmed bandit problem and the greedy policy
Author :
Mersereau, Adam J. ; Rusmevichientong, Paat ; Tsitsiklis, John N.
Author_Institution :
Kenan-Flagler Bus. Sch., Univ. of North Carolina, Chapel Hill, NC, USA
Abstract :
We consider a multiarmed bandit problem where the expected reward of each arm is a linear function of an unknown scalar with a prior distribution. The objective is to choose a sequence of arms that maximizes the expected total (or discounted total) reward. We demonstrate the effectiveness of a greedy policy that takes advantage of the known statistical correlation structure among the arms. In the infinite horizon discounted reward setting, we show that both the greedy and optimal policies eventually coincide and settle on the best arm, in contrast with the Incomplete Learning Theorem for the case of independent arms. In the total reward setting, we show that the cumulative Bayes risk after T periods under the greedy policy is at most O(log T), which is smaller than the lower bound of Ω(log² T) established by [1] for a general, but different, class of bandit problems. We also establish the tightness of our bounds. Theoretical and numerical results show that the performance of our policy scales independently of the number of arms.
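The model and policy described in the abstract can be illustrated with a small simulation. The sketch below is an assumption-laden reading of the setup, not the paper's implementation: arm i yields reward u_i · z plus Gaussian noise, where the coefficients u_i are known, z is an unknown scalar with a Gaussian prior, and the greedy policy pulls the arm with the highest posterior-mean reward, then updates the posterior by a standard conjugate Gaussian step. The function name and all parameter choices are hypothetical.

```python
import random


def greedy_structured_bandit(u, z_true, T, mu=0.0, s2=1.0,
                             noise_sd=0.1, seed=0):
    """Greedy policy for a structured bandit with linear rewards u[i] * z.

    Illustrative sketch only: z has a N(mu, s2) prior, observations are
    u[i] * z_true + N(0, noise_sd^2), and every pull of any arm is
    informative about the single shared scalar z -- the correlation
    structure the greedy policy exploits.
    """
    rng = random.Random(seed)
    choices = []
    for _ in range(T):
        # Greedy step: pick the arm maximizing the posterior-mean reward.
        i = max(range(len(u)), key=lambda j: u[j] * mu)
        r = u[i] * z_true + rng.gauss(0.0, noise_sd)
        # Conjugate Gaussian update of the posterior over the scalar z.
        prec = 1.0 / s2 + u[i] ** 2 / noise_sd ** 2
        mu = (mu / s2 + u[i] * r / noise_sd ** 2) / prec
        s2 = 1.0 / prec
        choices.append(i)
    return choices, mu
```

Because a single pull shrinks the posterior over z for every arm at once, the greedy policy typically locks onto the best arm after a few pulls, consistent with the paper's result that greedy and optimal policies eventually coincide in this structured setting.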
Keywords :
Bayes methods; correlation methods; decision making; greedy algorithms; learning (artificial intelligence); cumulative Bayes risk; greedy policy; incomplete learning theorem; statistical correlation structure; structured multiarmed bandit problem; Arm; Convergence; Costs; Infinite horizon; Operations research; Prototypes; Random variables;
Conference_Titel :
47th IEEE Conference on Decision and Control (CDC 2008)
Conference_Location :
Cancun
Print_ISBN :
978-1-4244-3123-6
Electronic_ISBN :
0191-2216
DOI :
10.1109/CDC.2008.4738680