DocumentCode :
2849374
Title :
Inverse reinforcement learning with Gaussian process
Author :
Qifeng Qiao ; Beling, P.A.
Author_Institution :
Dept. of Syst. & Inf. Eng., Univ. of Virginia, Charlottesville, VA, USA
fYear :
2011
fDate :
June 29 - July 1, 2011
Firstpage :
113
Lastpage :
118
Abstract :
We present new algorithms for inverse reinforcement learning (IRL, or inverse optimal control) in convex optimization settings. We argue that finite-space IRL can be posed as a convex quadratic program under a Bayesian inference framework with the objective of maximum a posteriori estimation. To handle problems with large or even infinite state spaces, we propose a Gaussian process model and use preference graphs to represent observations of decision trajectories. Our method is distinguished from other approaches to IRL in that it makes no assumptions about the form of the reward function, yet it retains the promise of computationally manageable implementations for potential real-world applications. In comparison with an established algorithm on small-scale numerical problems, our method demonstrated better accuracy in apprenticeship learning and a more robust dependence on the number of observations.
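The sketch below is not the authors' algorithm; it only illustrates the abstract's first claim, that finite-space IRL can be written as a convex quadratic program when MAP estimation is performed with a Gaussian prior on the reward vector. The MDP data, the choice of the cvxpy solver library, and the margin-style objective are all assumptions made for illustration.

```python
# Hypothetical sketch (not the paper's method): finite-space IRL posed as a convex QP.
# The Bellman-optimality constraints for the observed expert policy are linear in the
# reward vector r, and a Gaussian prior on r contributes a quadratic penalty, so MAP
# estimation reduces to a quadratic program.
import numpy as np
import cvxpy as cp  # assumed solver library, not referenced by the paper

n_states, n_actions, gamma, sigma = 4, 2, 0.9, 1.0
rng = np.random.default_rng(0)

# Random transition matrices P[a][s, s'] and an assumed expert policy (action 0 everywhere).
P = [rng.dirichlet(np.ones(n_states), size=n_states) for _ in range(n_actions)]
expert_action = 0

r = cp.Variable(n_states)  # state-dependent reward to be inferred
inv_term = np.linalg.inv(np.eye(n_states) - gamma * P[expert_action])

constraints = []
margins = []
for a in range(n_actions):
    if a == expert_action:
        continue
    # The expert's action must be (weakly) better than every alternative action a.
    diff = (P[expert_action] - P[a]) @ inv_term
    constraints.append(diff @ r >= 0)
    margins.append(cp.sum(diff @ r))  # total reward margin favoring the expert

# MAP objective: maximize the margins minus a Gaussian (L2) prior term on r -> convex QP.
objective = cp.Maximize(sum(margins) - cp.sum_squares(r) / (2 * sigma ** 2))
problem = cp.Problem(objective, constraints + [cp.abs(r) <= 1])
problem.solve()
print("Recovered reward:", np.round(r.value, 3))
```

This finite-state formulation does not scale to large or continuous state spaces, which is the gap the paper's Gaussian process model and preference-graph representation are intended to address.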
Keywords :
belief networks; convex programming; inference mechanisms; learning (artificial intelligence); maximum likelihood estimation; quadratic programming; Bayesian inference framework; Gaussian process model; IRL; apprenticeship learning; convex optimization settings; inverse reinforcement learning; maximum a posteriori estimation; small-scale numerical problems; Accuracy; Approximation methods; Bayesian methods; Gaussian processes; Machine learning; Markov processes; Trajectory;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
American Control Conference (ACC), 2011
Conference_Location :
San Francisco, CA
ISSN :
0743-1619
Print_ISBN :
978-1-4577-0080-4
Type :
conf
DOI :
10.1109/ACC.2011.5990948
Filename :
5990948