• DocumentCode
    574761
  • Title
    The importance of variance reduction in policy gradient method

  • Author
    Tak Kit Lau; Yun-Hui Liu

  • Author_Institution
    Dept. of Mech. & Autom. Eng., Chinese Univ. of Hong Kong, Hong Kong, China
  • fYear
    2012
  • fDate
    27-29 June 2012
  • Firstpage
    1376
  • Lastpage
    1381
  • Abstract
    Reinforcement learning (RL) has been applied to a wide range of motion control problems in robotics. In particular, the policy gradient method (PGM) emerges as a powerful subset of RL that can learn effectively from one's experience. However, when the dynamics are stochastic and samples for learning are scarce, the performance of PGM becomes inconsistent and relies heavily on tuning of the learning rate. In this work, we argue that this degeneration is mainly due to the high variance in the gradient. Through theoretical justifications, simulations and experiments, we verify that by applying a variance suppression, called the local baseline, to the gradient, PGM can be applied to some previously untouchable problems.
  • Keywords
    learning (artificial intelligence); mobile robots; motion control; PGM; RL; learning rate; local baseline; motion control problems; policy gradient method; reinforcement learning; robotics; stochastic dynamics; variance reduction; variance suppression; Algorithm design and analysis; Cost function; Gradient methods; Heuristic algorithms; Robots; Trajectory; Uncertainty
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    American Control Conference (ACC), 2012
  • Conference_Location
    Montreal, QC
  • ISSN
    0743-1619
  • Print_ISBN
    978-1-4577-1095-7
  • Electronic_ISBN
    0743-1619
  • Type
    conf
  • DOI
    10.1109/ACC.2012.6315368
  • Filename
    6315368
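
The abstract's central claim is that subtracting a baseline from the return lowers the variance of the policy-gradient estimate without biasing it. The sketch below illustrates that generic baseline-subtraction effect on a toy one-step Gaussian-policy problem; the problem, names, and constant baseline are assumptions for illustration and are not the paper's "local baseline" construction.

```python
# Minimal sketch (assumed toy problem, not the paper's "local baseline"):
# compares the variance of the plain REINFORCE gradient estimate with a
# baseline-subtracted one on a one-step Gaussian-policy task.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 1.0, 0.5            # mean (learned) and fixed std of the policy
batch, num_estimates = 16, 2000    # small batches, repeated to measure variance

def reward(a):
    return -(a - 3.0) ** 2         # toy reward, maximised at a = 3

# Constant baseline estimated from an independent sample, so the
# baseline-subtracted estimator stays unbiased.
b = reward(rng.normal(theta, sigma, size=10_000)).mean()

grads_plain, grads_base = [], []
for _ in range(num_estimates):
    a = rng.normal(theta, sigma, size=batch)       # actions drawn from pi_theta
    r = reward(a)
    score = (a - theta) / sigma**2                 # d/dtheta log pi_theta(a)
    grads_plain.append(np.mean(score * r))         # plain REINFORCE estimate
    grads_base.append(np.mean(score * (r - b)))    # baseline-subtracted estimate

# Both estimators target the same true gradient (4.0 here); only their spread differs.
print("mean / var without baseline:", np.mean(grads_plain), np.var(grads_plain))
print("mean / var with baseline:   ", np.mean(grads_base), np.var(grads_base))
```

Because the baseline does not depend on the sampled actions, it shifts the returns without changing the expected gradient, so only the estimator's variance drops; the paper's "local baseline" is a refinement of this general idea.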