DocumentCode
3849972
Title
Experience Replay for Real-Time Reinforcement Learning Control
Author
Sander Adam;Lucian Busoniu;Robert Babuska
Author_Institution
Large Corporates and Merchant Banking Division, ABN AMRO Bank, The Netherlands
Volume
42
Issue
2
fYear
2012
Firstpage
201
Lastpage
212
Abstract
Reinforcement learning (RL) algorithms can automatically learn optimal control strategies for nonlinear, possibly stochastic systems. A promising approach for RL control is experience replay (ER), which learns quickly from a limited amount of data by repeatedly presenting these data to an underlying RL algorithm. Despite its benefits, ER RL has been studied only sporadically in the literature, and its applications have largely been confined to simulated systems. Therefore, in this paper, we evaluate ER RL on real-time control experiments that involve a pendulum swing-up problem and the vision-based control of a goalkeeper robot. These real-time experiments are complemented by simulation studies and comparisons with traditional RL. As a preliminary, we develop a general ER framework that can be combined with essentially any incremental RL technique, and we instantiate this framework for the approximate Q-learning and SARSA algorithms. The successful real-time learning results presented here are highly encouraging for the applicability of ER RL in practice.
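The core ER idea stated in the abstract — store observed transitions and repeatedly re-present them to an incremental RL algorithm — can be sketched as follows. This is a minimal illustration, not the paper's method: it uses tabular Q-learning on a hypothetical 5-state chain MDP (the paper instantiates ER for *approximate* Q-learning and SARSA on real control tasks), and the environment, constants, and function names here are invented for the example.

```python
import random

N_STATES = 5          # toy chain: states 0..4, state 4 is the goal (assumption)
ACTIONS = (0, 1)      # 0 = step left, 1 = step right
GAMMA = 0.9           # discount factor
ALPHA = 0.5           # learning rate

def step(s, a):
    """Hypothetical deterministic chain environment used only for this sketch."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    done = s2 == N_STATES - 1
    return s2, reward, done

def q_update(Q, s, a, r, s2, done):
    """One standard Q-learning update on a single transition."""
    target = r if done else r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

def train_with_er(n_episodes=10, replays_per_step=10, seed=0):
    """ER loop: learn from each new sample, then replay stored samples."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    memory = []  # stored transitions (s, a, r, s2, done)
    for _ in range(n_episodes):
        s, done = 0, False
        while not done:
            a = rng.choice(ACTIONS)            # pure exploration, for simplicity
            s2, r, done = step(s, a)
            memory.append((s, a, r, s2, done))
            q_update(Q, s, a, r, s2, done)     # learn from the new transition...
            for _ in range(replays_per_step):  # ...then replay stored ones
                q_update(Q, *rng.choice(memory))
            s = s2
    return Q

Q = train_with_er()
```

Replaying each transition many times lets the value of the goal propagate back through the chain with few environment interactions, which is the sample-efficiency benefit the abstract attributes to ER; swapping `q_update` for a SARSA or approximate update yields the other instantiations mentioned there.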
Keywords
"Experience replay","Trajectory","Approximation algorithms","Approximation methods","Real time systems","Learning","Complexity theory"
Journal_Title
IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)
Publisher
IEEE
ISSN
1094-6977
Type
jour
DOI
10.1109/TSMCC.2011.2106494
Filename
5719642
Link To Document