Title :
A convergent recursive least squares approximate policy iteration algorithm for multi-dimensional Markov decision process with continuous state and action spaces
Author :
Ma, Jun ; Powell, Warren B.
Author_Institution :
Dept. of Oper. Res. & Financial Eng., Princeton Univ., Princeton, NJ
fDate :
March 30 - April 2, 2009
Abstract :
In this paper, we present a recursive least squares approximate policy iteration (RLSAPI) algorithm for infinite-horizon multi-dimensional Markov decision processes with continuous state and action spaces. Under certain structural assumptions on the value functions and policy spaces, the approximate policy iteration algorithm is provably convergent in the mean: the mean absolute deviation of the approximate policy value function from the optimal value function goes to zero as the successive approximations improve.
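Illustration (not from the paper): the sketch below shows a generic approximate policy iteration loop in which policy evaluation uses a recursive least squares (RLS) update for a linear value-function approximation V(s) ~ theta^T phi(s). The basis functions, the toy linear-Gaussian dynamics, the quadratic reward, and the grid-based policy improvement are all placeholder assumptions for illustration and are not the problem structure or convergence conditions assumed by the authors.

```python
import numpy as np

def rls_update(theta, P, phi, target):
    """One recursive least squares step fitting theta^T phi ~ target."""
    P_phi = P @ phi
    gain = P_phi / (1.0 + phi @ P_phi)           # Kalman-style gain vector
    theta = theta + gain * (target - theta @ phi)
    P = P - np.outer(gain, P_phi)                # rank-one covariance downdate
    return theta, P

def basis(s):
    """Toy polynomial basis for a scalar state (placeholder)."""
    return np.array([1.0, s, s**2])

def evaluate_policy(policy, gamma=0.95, n_samples=2000, seed=0):
    """Approximate policy evaluation via RLS on sampled bootstrapped targets."""
    rng = np.random.default_rng(seed)
    d = basis(0.0).size
    theta, P = np.zeros(d), np.eye(d) * 100.0
    s = rng.uniform(-1.0, 1.0)
    for _ in range(n_samples):
        a = policy(s)
        # Placeholder linear-Gaussian dynamics and quadratic reward.
        s_next = 0.9 * s + a + 0.1 * rng.standard_normal()
        r = -(s**2 + 0.1 * a**2)
        target = r + gamma * (theta @ basis(s_next))   # bootstrapped Bellman target
        theta, P = rls_update(theta, P, basis(s), target)
        s = s_next
    return theta

def improve_policy(theta, gamma=0.95):
    """Greedy one-step improvement over a small grid of candidate actions."""
    actions = np.linspace(-1.0, 1.0, 21)
    def policy(s):
        def q(a):
            s_next = 0.9 * s + a             # deterministic model for lookahead
            return -(s**2 + 0.1 * a**2) + gamma * (theta @ basis(s_next))
        return actions[np.argmax([q(a) for a in actions])]
    return policy

if __name__ == "__main__":
    policy = lambda s: 0.0                   # start from a trivial policy
    for k in range(5):                       # a few policy-iteration sweeps
        theta = evaluate_policy(policy)
        policy = improve_policy(theta)
        print(f"iteration {k}: theta = {np.round(theta, 3)}")
```

In this sketch the convergence notion studied in the paper would correspond to the expected absolute gap between the learned policy's value function and the optimal value function shrinking across sweeps; here the toy problem only demonstrates the mechanics of the RLS evaluation/improvement loop.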
Keywords :
Markov processes; approximation theory; convergence of numerical methods; decision theory; iterative methods; least squares approximations; action space; approximate policy value function; continuous state space; convergent recursive least square approximate policy iteration algorithm; mean absolute deviation; multidimensional Markov decision process; optimal value function; Acoustic noise; Approximation algorithms; Convergence; Dynamic programming; Function approximation; Infinite horizon; Least squares approximation; Least squares methods; State-space methods;
Conference_Titel :
2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL '09)
Conference_Location :
Nashville, TN
Print_ISBN :
978-1-4244-2761-1
DOI :
10.1109/ADPRL.2009.4927527