  • DocumentCode
    2147173
  • Title
    The non-Bayesian restless multi-armed bandit: A case of near-logarithmic regret
  • Author
    Dai, Wenhan; Gai, Yi; Krishnamachari, Bhaskar; Zhao, Qing
  • Author_Institution
    Sch. of Inf. Sci. & Technol., Tsinghua Univ., Beijing, China
  • fYear
    2011
  • fDate
    22-27 May 2011
  • Firstpage
    2940
  • Lastpage
    2943
  • Abstract
    In the classic Bayesian restless multi-armed bandit (RMAB) problem, there are N arms, with rewards on all arms evolving at each time as Markov chains with known parameters. A player seeks to activate K ≥ 1 arms at each time in order to maximize the expected total reward obtained over multiple plays. RMAB is a challenging problem that is known to be PSPACE-hard in general. We consider in this work the even harder non-Bayesian RMAB, in which the parameters of the Markov chain are assumed to be unknown a priori. We develop an original approach to this problem that is applicable when the corresponding Bayesian problem has the structure that, depending on the known parameter values, the optimal solution is one of a prescribed finite set of policies. In such settings, we propose to learn the optimal policy for the non-Bayesian RMAB by employing a suitable meta-policy which treats each policy from this finite set as an arm in a different non-Bayesian multi-armed bandit problem for which a single-arm selection policy is optimal. We demonstrate this approach by developing a novel sensing policy for opportunistic spectrum access over unknown dynamic channels. We prove that our policy achieves near-logarithmic regret (the difference in expected reward compared to a model-aware genie), which leads to the same average reward that can be achieved by the optimal policy under a known model. This is the first such result in the literature for a non-Bayesian RMAB.
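    The meta-policy idea in the abstract can be illustrated with a minimal sketch: each candidate policy from the finite set is treated as an arm of a higher-level bandit, and a single-arm selection rule (here UCB1, as a stand-in; the paper's actual selection rule, policy set, and RMAB reward model are not specified in this record) chooses which policy to run each round. The policy functions and reward model below are hypothetical placeholders.

    ```python
    import math
    import random

    def ucb1_meta_policy(policies, rounds, seed=0):
        """Select among a finite set of candidate policies with UCB1,
        treating each policy as an arm of a higher-level bandit.
        `policies` is a list of callables mapping an RNG to a reward
        in [0, 1] (a stand-in for running the policy on the RMAB)."""
        rng = random.Random(seed)
        n = len(policies)
        counts = [0] * n       # times each policy has been run
        means = [0.0] * n      # empirical mean reward of each policy
        total = 0.0
        for t in range(1, rounds + 1):
            if t <= n:
                i = t - 1      # play each policy once to initialize
            else:
                # UCB1 index: empirical mean plus exploration bonus
                i = max(range(n), key=lambda j: means[j]
                        + math.sqrt(2.0 * math.log(t) / counts[j]))
            r = policies[i](rng)
            counts[i] += 1
            means[i] += (r - means[i]) / counts[i]   # running average
            total += r
        return total / rounds, counts

    # Toy example: two stand-in "policies" with Bernoulli rewards 0.6 and 0.4;
    # the meta-policy concentrates its plays on the better one.
    avg, counts = ucb1_meta_policy(
        [lambda rng: float(rng.random() < 0.6),
         lambda rng: float(rng.random() < 0.4)],
        rounds=5000)
    ```

    In the paper's spectrum-access application, each "arm" of the meta-bandit would instead be a full sensing policy run against the unknown dynamic channels, with the observed reward fed back to the selection rule.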
  • Keywords
    Markov processes; set theory; telecommunication channels; Bayesian problem; Markov chain; PSPACE; classic Bayesian restless multi-armed bandit; dynamic channel; finite set; near-logarithmic regret; non-Bayesian RMAB; non-Bayesian restless multi-armed bandit; opportunistic spectrum access; optimal policy; optimal solution; sensing policy; single-arm selection policy; Algorithm design and analysis; Bayesian methods; Heuristic algorithms; Indexes; Markov processes; Sensors; Switches; learning; non-Bayesian; opportunistic spectrum access; regret; restless bandit
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • Conference_Location
    Prague
  • ISSN
    1520-6149
  • Print_ISBN
    978-1-4577-0538-0
  • Electronic_ISBN
    1520-6149
  • Type
    conf
  • DOI
    10.1109/ICASSP.2011.5946273
  • Filename
    5946273