DocumentCode :
2020599
Title :
Efficient online learning for opportunistic spectrum access
Author :
Dai, Wenhan ; Gai, Yi ; Krishnamachari, Bhaskar
Author_Institution :
Massachusetts Inst. of Technol., Cambridge, MA, USA
fYear :
2012
fDate :
25-30 March 2012
Firstpage :
3086
Lastpage :
3090
Abstract :
The problem of opportunistic spectrum access in cognitive radio networks has been recently formulated as a non-Bayesian restless multi-armed bandit problem. In this problem, there are N arms (corresponding to channels) and one player (corresponding to a secondary user). The state of each arm evolves as a finite-state Markov chain with unknown parameters. At each time slot, the player can select K < N arms to play and receives state-dependent rewards (corresponding to the throughput obtained given the activity of primary users). The objective is to maximize the expected total rewards (i.e., total throughput) obtained over multiple plays. The performance of an algorithm for such a multi-armed bandit problem is measured in terms of regret, defined as the difference in expected reward compared to a model-aware genie who always plays the best K arms. In this paper, we propose a new continuous exploration and exploitation (CEE) algorithm for this problem. When no information is available about the dynamics of the arms, CEE is the first algorithm to guarantee near-logarithmic regret uniformly over time. When some bounds corresponding to the stationary state distributions and the state-dependent rewards are known, we show that CEE can be easily modified to achieve logarithmic regret over time. In contrast, prior algorithms require additional information concerning bounds on the second eigenvalues of the transition matrices in order to guarantee logarithmic regret. Finally, we show through numerical simulations that CEE is more efficient than prior algorithms.
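The restless-bandit setup and the regret notion described in the abstract can be sketched in a few lines of Python. This is an illustration only: the channels are modeled as hypothetical two-state (busy/free) Markov chains with made-up transition probabilities, and the player policy is a simple epsilon-greedy stand-in, not the paper's CEE algorithm. The genie baseline earns the expected stationary reward of the best K arms.

```python
import random

def simulate(N=5, K=2, T=2000, eps=0.1, seed=0):
    """Empirical regret of an epsilon-greedy player vs. a model-aware genie.

    Each of the N channels is a restless two-state Markov chain
    (state 1 = free, reward 1; state 0 = busy, reward 0) whose
    parameters are unknown to the player. All numbers are illustrative.
    """
    rng = random.Random(seed)
    p01 = [0.2 + 0.1 * i for i in range(N)]  # P(busy -> free), per channel
    p10 = [0.3] * N                          # P(free -> busy), per channel
    states = [rng.randrange(2) for _ in range(N)]
    # Stationary probability of being free = expected per-slot reward.
    means = [p / (p + q) for p, q in zip(p01, p10)]
    best = sorted(range(N), key=lambda i: -means[i])[:K]  # genie's K arms
    est = [0.0] * N   # player's empirical mean reward per arm
    cnt = [0] * N
    player_reward = genie_reward = 0.0
    for _ in range(T):
        # Restless: every arm evolves each slot, played or not.
        for i in range(N):
            s = states[i]
            flip = rng.random() < (p01[i] if s == 0 else p10[i])
            states[i] = 1 - s if flip else s
        # Epsilon-greedy selection of K arms (stand-in policy, not CEE).
        if rng.random() < eps:
            chosen = rng.sample(range(N), K)
        else:
            chosen = sorted(range(N), key=lambda i: -est[i])[:K]
        for i in chosen:
            r = states[i]  # reward 1 iff the channel is free
            cnt[i] += 1
            est[i] += (r - est[i]) / cnt[i]
            player_reward += r
        # Genie benchmark: expected stationary reward of the best K arms.
        genie_reward += sum(means[i] for i in best)
    return genie_reward - player_reward  # empirical regret over T slots
```

The paper's contribution is a policy whose regret, so defined, grows near-logarithmically in T without prior knowledge of the chains; the naive policy above carries no such guarantee.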
Keywords :
Markov processes; cognitive radio; CEE algorithm; cognitive radio networks; continuous exploration-exploitation algorithm; efficient online learning; finite-state Markov chain; model-aware genie; near-logarithmic regret; non-Bayesian restless multi-armed bandit problem; numerical simulations; opportunistic spectrum access; primary users; state-dependent rewards; stationary state distributions; transition matrix eigenvalues; Algorithm design and analysis; Bayesian methods; Bismuth; Heuristic algorithms; Indexes; Markov processes; Numerical stability;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
INFOCOM, 2012 Proceedings IEEE
Conference_Location :
Orlando, FL
ISSN :
0743-166X
Print_ISBN :
978-1-4673-0773-4
Type :
conf
DOI :
10.1109/INFCOM.2012.6195765
Filename :
6195765