Title :
Maximum entropy inverse reinforcement learning in continuous state spaces with path integrals
Author :
Aghasadeghi, Navid ; Bretl, Timothy
Author_Institution :
Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
Abstract :
In this paper, we consider the problem of inverse reinforcement learning for a particular class of continuous-time stochastic systems with continuous state and action spaces, under the assumption that both the cost function and the optimal control policy are parametric with known basis functions. Our goal is to produce a cost function for which a given policy, observed in experiment, is optimal. We proceed by enforcing a constraint on the relationship between input noise and input cost that produces a maximum entropy distribution over the space of all sample paths. We apply maximum likelihood estimation to approximate the parameters of this distribution (hence, of the cost function) given a finite set of sample paths. We iteratively improve our approximation by adding to this set the sample path that would be optimal given our current estimate of the cost function. Preliminary results in simulation provide empirical evidence that our algorithm converges.
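For illustration only, the following is a minimal numpy sketch of the maximum-likelihood step the abstract describes: the cost is linear in known basis functions, the maximum entropy model assigns each path a probability proportional to exp(-w·phi(tau)), and the partition function is approximated with a finite set of sample paths. The function name estimate_cost_weights, the averaged demonstration features, the plain gradient-descent loop, and the random data in the usage lines are assumptions made for this sketch, not the authors' implementation; the paper's additional step of augmenting the sample set with the path that is optimal under the current cost estimate is omitted here.

```python
import numpy as np

def estimate_cost_weights(phi_demo, phi_samples, lr=0.05, iters=2000):
    """Maximum-likelihood estimate of weights w for a linear cost w . phi(tau).

    The maximum entropy model assigns p(tau) proportional to exp(-w . phi(tau));
    the partition function is approximated with the finite set of sample paths.

    phi_demo    : (d,) average feature vector of the demonstrated path(s)
    phi_samples : (m, d) feature vectors of the m sample paths
    """
    w = np.zeros(phi_demo.shape[0])
    for _ in range(iters):
        # Distribution over sample paths induced by the current cost estimate.
        logits = -phi_samples @ w
        logits -= logits.max()                      # numerical stability
        p = np.exp(logits)
        p /= p.sum()
        # Gradient of the negative log-likelihood: demonstrated features
        # minus expected features under the current model.
        grad = phi_demo - p @ phi_samples
        w -= lr * grad
    return w

# Hypothetical usage: 3 basis functions, 50 random sample paths, and a
# "demonstration" taken to be the sample path of lowest cost under some
# invented ground-truth weights.
rng = np.random.default_rng(0)
phi_samples = rng.random((50, 3))
w_true = np.array([1.0, 2.0, 0.5])
phi_demo = phi_samples[np.argmin(phi_samples @ w_true)]
w_hat = estimate_cost_weights(phi_demo, phi_samples)
```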
Keywords :
Aerospace electronics; Cost function; Equations; Learning; Mathematical model; Optimal control; Trajectory
Conference_Titel :
2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Conference_Location :
San Francisco, CA
Print_ISBN :
978-1-61284-454-1
DOI :
10.1109/IROS.2011.6094679