Title of article :
Linear Bayes policy for learning in contextual-bandits
Author/Authors :
Martín H., José Antonio; Vargas, Ana M.
Issue Information :
Journal issue, serial year 2013
Pages :
7
From page :
7400
To page :
7406
Abstract :
Machine and statistical learning techniques are used in almost all online advertisement systems. The problem of discovering which content is in higher demand (e.g., receives more clicks) can be modeled as a multi-armed bandit problem. Contextual bandits (i.e., bandits with covariates, side information, or associative reinforcement learning) associate with each specific content several features that define the “context” in which it appears (e.g., user, web page, time, region). This problem can be studied in the stochastic/statistical setting by means of the conditional-probability paradigm using Bayes’ theorem. However, for very large contextual information and/or under real-time constraints, the exact calculation of Bayes’ rule is computationally infeasible. In this article, we present a method that is able to handle large contextual information for learning in contextual-bandit problems. This method was tested in the challenge on the Yahoo! dataset at the ICML 2012 workshop “New Challenges for Exploration & Exploitation 3”, obtaining second place. Its basic exploration policy is deterministic in the sense that the same input data (as a time series) yield the same results. We address the deterministic exploration-vs.-exploitation issue, explaining how the proposed method deterministically finds an effective dynamic trade-off based solely on the input data, in contrast to other methods that rely on a random number generator.
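Illustrative sketch (not the authors’ published algorithm): the Python code below only shows the general “linear Bayes” idea the abstract alludes to, i.e., keeping a closed-form Gaussian posterior over each arm’s linear reward weights so that exact Bayes’-rule updates stay cheap even for large context vectors. The class names, dimensions, greedy selection rule, and simulated click model are all assumptions made for this sketch.

    # Hypothetical sketch of a per-arm Bayesian linear-regression bandit policy.
    # NOT the paper's exact method; it illustrates the class of "linear Bayes"
    # approaches: a Gaussian posterior per arm with closed-form updates.
    import numpy as np

    class LinearBayesArm:
        """Gaussian posterior over one arm's reward weights (ridge prior)."""
        def __init__(self, dim, prior_precision=1.0):
            self.A = prior_precision * np.eye(dim)  # posterior precision matrix
            self.b = np.zeros(dim)                  # precision-weighted mean term

        def mean_reward(self, x):
            # Posterior mean prediction: x^T A^{-1} b
            return x @ np.linalg.solve(self.A, self.b)

        def update(self, x, reward):
            # Rank-one Bayesian update after observing (context, reward)
            self.A += np.outer(x, x)
            self.b += reward * x

    def choose_arm(arms, x):
        # Greedy choice on posterior means; the paper's policy instead derives
        # a deterministic, data-driven exploration trade-off, not randomness.
        return max(range(len(arms)), key=lambda k: arms[k].mean_reward(x))

    # Toy usage: 3 ads, 5-dimensional contexts, simulated click feedback.
    rng = np.random.default_rng(0)
    true_w = rng.normal(size=(3, 5))
    arms = [LinearBayesArm(dim=5) for _ in range(3)]
    for t in range(500):
        x = rng.normal(size=5)
        k = choose_arm(arms, x)
        click = float(rng.random() < 1 / (1 + np.exp(-true_w[k] @ x)))
        arms[k].update(x, click)

The per-arm posterior update costs O(d^2) per observation, which is why such linear-Gaussian models remain tractable where a full Bayes’-rule computation over the raw context space would not.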
Keywords :
Contextual bandits, Online advertising, Recommender systems, One-to-one marketing, Empirical Bayes
Journal title :
Expert Systems with Applications
Serial Year :
2013
Record number :
2354120