• DocumentCode
    28281
  • Title
    Decentralized Learning for Multiplayer Multiarmed Bandits
  • Author
    Kalathil, Dileep; Nayyar, Naumaan; Jain, R.
  • Author_Institution
    Dept. of Electr. Eng., Univ. of Southern California, Los Angeles, CA, USA
  • Volume
    60
  • Issue
    4
  • fYear
    2014
  • fDate
    April 2014
  • Firstpage
    2331
  • Lastpage
    2345
  • Abstract
    We consider the problem of distributed online learning with multiple players in multiarmed bandit (MAB) models. Each player can pick among multiple arms and receives a reward when it picks an arm. We consider both the independent and identically distributed (i.i.d.) reward model and the Markovian reward model. In the i.i.d. model, each arm is modeled as an i.i.d. process with an unknown distribution and an unknown mean. In the Markovian model, each arm is modeled as a finite, irreducible, aperiodic, and reversible Markov chain with an unknown transition probability matrix and stationary distribution. The arms give different rewards to different players. If two players pick the same arm, there is a collision and neither of them gets any reward. There is no dedicated control channel for coordination or communication among the players; any other communication between the players is costly and adds to the regret. We propose an online index-based distributed learning policy, the dUCB4 algorithm, that trades off exploration and exploitation appropriately and achieves an expected regret that grows at most as near-O(log² T). The motivation comes from opportunistic spectrum access by multiple secondary users in cognitive radio networks, wherein they must pick among various wireless channels that look different to different users. To the best of our knowledge, this is the first distributed learning algorithm for multiplayer MABs with heterogeneous players (i.e., with player-dependent rewards).
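    The abstract does not spell out the dUCB4 index, so as a rough illustration of the index-based exploration–exploitation tradeoff it builds on, here is a minimal single-player UCB1 sketch. All names here are hypothetical illustration; dUCB4 itself uses a different confidence term and additionally handles multiple players, collisions, and distributed coordination, none of which appear in this sketch.

    ```python
    import math
    import random

    def ucb1(pull, n_arms, horizon):
        """Single-player UCB1 index policy (illustrative only; NOT the
        paper's dUCB4, which is a multiplayer, distributed algorithm)."""
        counts = [0] * n_arms       # times each arm was pulled
        means = [0.0] * n_arms      # empirical mean reward per arm
        total_reward = 0.0
        for t in range(1, horizon + 1):
            if t <= n_arms:
                arm = t - 1         # initialize by pulling each arm once
            else:
                # index = empirical mean + confidence bonus; pick the max
                arm = max(range(n_arms),
                          key=lambda a: means[a]
                          + math.sqrt(2 * math.log(t) / counts[a]))
            r = pull(arm)
            counts[arm] += 1
            means[arm] += (r - means[arm]) / counts[arm]  # running mean
            total_reward += r
        return total_reward, counts

    # usage: two Bernoulli arms with (hypothetical) means 0.3 and 0.7
    random.seed(0)
    reward, counts = ucb1(
        lambda a: 1.0 if random.random() < (0.3, 0.7)[a] else 0.0,
        n_arms=2, horizon=2000)
    ```

    Over 2000 rounds the index concentrates plays on the better arm while still occasionally sampling the worse one, which is the logarithmic-regret behavior the paper's distributed policy extends to the multiplayer setting.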
  • Keywords
    Markov processes; computational complexity; distributed algorithms; learning (artificial intelligence); matrix algebra; MAB model; Markovian reward model; cognitive radio network; dUCB4 algorithm; decentralized learning; distributed learning algorithm; distributed online learning; heterogeneous player; i.i.d. reward model; independent identically distributed reward model; multiarmed bandit model; multiplayer multiarmed bandit; multiple secondary user; online index-based distributed learning policy; opportunistic spectrum access; player-dependent reward; probability transition matrix; reversible Markov chain; stationary distribution; wireless channel; Algorithm design and analysis; Cognitive radio; Equations; Indexes; Markov processes; Resource management; Distributed adaptive control; multi-agent systems; multi-armed bandit; online learning;
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Information Theory
  • Publisher
    IEEE
  • ISSN
    0018-9448
  • Type
    jour
  • DOI
    10.1109/TIT.2014.2302471
  • Filename
    6763073