Title :
The importance of variance reduction in policy gradient method
Author :
Tak Kit Lau; Yun-Hui Liu
Author_Institution :
Dept. of Mech. & Autom. Eng., Chinese Univ. of Hong Kong, Hong Kong, China
Abstract :
Reinforcement learning (RL) has been applied to a wide range of motion control problems in robotics. In particular, the policy gradient method (PGM) has emerged as a powerful subset of RL that can learn effectively from its own experience. However, when the dynamics are stochastic and learning samples are scarce, the performance of PGM becomes inconsistent and depends heavily on tuning of the learning rate. In this work, we argue that this degradation is mainly due to the high variance of the gradient estimate. Through theoretical justification, simulations and experiments, we verify that by applying a variance-suppression term, called the local baseline, to the gradient, PGM can be applied to problems that were previously intractable.
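The mechanism the abstract refers to, subtracting a baseline from the sampled return before forming the gradient estimate, can be illustrated with a minimal sketch. The snippet below is not the paper's local-baseline algorithm; it is a generic REINFORCE-style update on a hypothetical toy bandit, using a running-average reward as the baseline, and is included only to show where the variance-suppression term enters the update.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 3-armed bandit (illustrative assumption, not from the paper):
    # pulling arm k yields a noisy reward centred on true_means[k].
    true_means = np.array([1.0, 1.5, 2.0])
    n_arms = len(true_means)

    def softmax(theta):
        z = np.exp(theta - theta.max())
        return z / z.sum()

    def sample_grad(theta, baseline):
        """One-sample REINFORCE estimate: (r - b) * grad log pi(a | theta)."""
        probs = softmax(theta)
        a = rng.choice(n_arms, p=probs)
        r = true_means[a] + rng.normal(scale=1.0)   # stochastic reward
        grad_logp = -probs
        grad_logp[a] += 1.0                         # gradient of log-softmax at the sampled arm
        return (r - baseline) * grad_logp, r

    theta = np.zeros(n_arms)
    baseline = 0.0                  # running average reward used as the baseline
    alpha, beta = 0.05, 0.1         # policy and baseline step sizes

    for t in range(2000):
        g, r = sample_grad(theta, baseline)
        theta += alpha * g                          # baseline-corrected policy update
        baseline += beta * (r - baseline)           # track the average reward

    print("learned policy:", softmax(theta))        # probability mass shifts to the best arm

Because the baseline does not depend on the sampled action, subtracting it leaves the gradient estimate unbiased while shrinking its variance, which is why the update above remains stable at larger learning rates than the unbaselined version.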
Keywords :
learning (artificial intelligence); mobile robots; motion control; PGM; RL; learning rate; local baseline; motion control problems; policy gradient method; reinforcement learning; robotics; stochastic dynamics; variance reduction; variance suppression; Algorithm design and analysis; Cost function; Gradient methods; Heuristic algorithms; Robots; Trajectory; Uncertainty;
Conference_Titel :
American Control Conference (ACC), 2012
Conference_Location :
Montreal, QC
Print_ISBN :
978-1-4577-1095-7
ISSN :
0743-1619
DOI :
10.1109/ACC.2012.6315368