Title :
Bayesian reinforcement learning in continuous POMDPs with Gaussian processes
Author :
Dallaire, Patrick ; Besse, Camille ; Ross, Stephane ; Chaib-Draa, Brahim
Author_Institution :
Dept. of Comput. Sci., Laval Univ., Quebec City, QC, Canada
Abstract :
Partially Observable Markov Decision Processes (POMDPs) provide a rich mathematical framework for real-world sequential decision problems, but most solution approaches require the model to be known. Moreover, mainstream POMDP research focuses on the discrete case, which complicates application to realistic problems that are naturally modeled with continuous state spaces. In this paper, we consider the problem of optimal control in continuous, partially observable environments when the parameters of the model are unknown. We advocate the use of Gaussian Process Dynamical Models (GPDMs), which allow the model to be learned through experience with the environment. Our results on the blimp problem show that the approach can learn good models of the sensors and actuators in order to maximize long-term rewards.
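To make the model-learning idea concrete: a GPDM places Gaussian-process priors over the (unknown) transition and observation functions, so each can be learned by GP regression on experienced transitions. The following is a minimal, self-contained sketch of that core ingredient only, not the authors' implementation: a NumPy GP regression that predicts the next state from a hypothetical 1-D (state, action) pair, with a toy dynamics function standing in for the environment.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance between the row vectors of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    # Standard GP regression posterior: mean and pointwise variance.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(rbf_kernel(X_test, X_test)) - np.sum(v**2, axis=0)
    return mean, var

# Toy (assumed) dynamics: next_state = sin(state) + 0.5 * action.
rng = np.random.default_rng(0)
sa = rng.uniform(-2.0, 2.0, size=(30, 2))   # experienced (state, action) pairs
ns = np.sin(sa[:, 0]) + 0.5 * sa[:, 1]      # observed next states

# Predict the next state for a query (state=0.5, action=0.0),
# with a variance that quantifies model uncertainty at that input.
mean, var = gp_predict(sa, ns, np.array([[0.5, 0.0]]))
```

The posterior variance is what makes the approach Bayesian: where the agent has little experience, the variance is large, and that uncertainty can be propagated into belief tracking and action selection.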
Keywords :
Gaussian processes; Markov processes; belief networks; learning (artificial intelligence); Bayesian reinforcement learning; continuous POMDP; partially observable Markov decision process; Bayesian methods; Intelligent robots; Learning; Mathematical model; Orbital robotics; Robot sensing systems; State-space methods; Uncertainty;
Conference_Titel :
Intelligent Robots and Systems, 2009. IROS 2009. IEEE/RSJ International Conference on
Conference_Location :
St. Louis, MO
Print_ISBN :
978-1-4244-3803-7
Electronic_ISBN :
978-1-4244-3804-4
DOI :
10.1109/IROS.2009.5354013