Title :
Efficient reinforcement learning for robots using informative simulated priors
Author :
Cutler, Mark ; How, Jonathan P.
Author_Institution :
Lab. of Inf. & Decision Syst., Massachusetts Inst. of Technol., Cambridge, MA, USA
Abstract :
Autonomous learning through interaction with the physical world is a promising approach to designing controllers and decision-making policies for robots. Unfortunately, learning on robots is often difficult due to the large number of samples needed by many learning algorithms. Simulators are one way to decrease the number of samples required from the robot by incorporating prior knowledge of the dynamics into the learning algorithm. In this paper we present a novel method for transferring data from a simulator to a robot, using simulated data as a prior for real-world learning. A Bayesian nonparametric model is learned from a potentially black-box simulator, and the mean of this learned model is used as a prior for the Probabilistic Inference for Learning Control (PILCO) algorithm. The simulated prior improves the convergence rate and performance of PILCO by directing the policy search toward areas of the state space that have not yet been observed by the robot. Simulated and hardware results show the benefits of using the prior knowledge in the learning framework.
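Illustrative sketch (not the authors' implementation): the abstract's idea of learning a nonparametric model from simulator data and using its mean as an informative prior for real-world dynamics learning can be approximated by fitting a Gaussian process to simulator rollouts and then training a second GP on the residual between real-robot data and the simulated prior mean, which is one standard way to impose a nonzero prior mean. The toy dynamics functions, data sizes, and scikit-learn kernels below are assumptions for illustration only; PILCO's policy search is not reproduced here.

```python
# Minimal sketch, assuming scikit-learn and toy dynamics; not the authors' code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def simulate_step(x):
    """Hypothetical black-box simulator: predicted state change for (state, action)."""
    state, action = x[..., 0], x[..., 1]
    return np.sin(state) + 0.5 * action

def real_step(x):
    """Hypothetical real robot: simulator dynamics plus an unmodeled effect."""
    state, action = x[..., 0], x[..., 1]
    return np.sin(state) + 0.5 * action + 0.3 * np.cos(2 * state)

# 1) Cheap, plentiful simulator data -> GP whose predictive mean becomes the prior.
X_sim = rng.uniform(-3, 3, size=(200, 2))
y_sim = simulate_step(X_sim) + 0.01 * rng.standard_normal(200)
gp_sim = GaussianProcessRegressor(RBF() + WhiteKernel(), normalize_y=True)
gp_sim.fit(X_sim, y_sim)
prior_mean = lambda X: gp_sim.predict(X)  # informative simulated prior mean

# 2) Scarce real-robot data -> GP on the residual w.r.t. the simulated prior.
X_real = rng.uniform(-3, 3, size=(15, 2))
y_real = real_step(X_real) + 0.01 * rng.standard_normal(15)
gp_res = GaussianProcessRegressor(RBF() + WhiteKernel(), normalize_y=True)
gp_res.fit(X_real, y_real - prior_mean(X_real))

def predict_dynamics(X):
    """Posterior mean of real dynamics = simulated prior mean + learned residual."""
    return prior_mean(X) + gp_res.predict(X)

# Compare prediction error with and without the real-data correction.
X_test = rng.uniform(-3, 3, size=(500, 2))
err_prior_only = np.abs(prior_mean(X_test) - real_step(X_test)).mean()
err_combined = np.abs(predict_dynamics(X_test) - real_step(X_test)).mean()
print(f"simulator prior alone: MAE {err_prior_only:.3f}")
print(f"prior + real-data GP:  MAE {err_combined:.3f}")
```

In the full method, this corrected dynamics model would replace the zero-mean GP inside PILCO's policy-improvement loop, so that regions unvisited by the robot fall back on the simulator's predictions rather than on an uninformative prior.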
Keywords :
Bayes methods; learning (artificial intelligence); learning systems; nonparametric statistics; robots; Bayesian nonparametric prior; PILCO algorithm; autonomous learning; black-box simulator; controller design; convergence rate; decision-making policy; informative simulated priors; probabilistic inference for learning control algorithm; reinforcement learning algorithm; Data models; Gaussian processes; Hardware; Heuristic algorithms; Mathematical model; Prediction algorithms; Robots
Conference_Title :
2015 IEEE International Conference on Robotics and Automation (ICRA)
Conference_Location :
Seattle, WA, USA
DOI :
10.1109/ICRA.2015.7139550