Title :
Error bounds for rolling horizon policies in discrete-time Markov control processes
Author :
Hernández-Lerma, O.; Lasserre, J.B.
Author_Institution :
Depto. de Matemáticas, CINVESTAV-IPN, Mexico City, Mexico
Date :
10/1/1990
Abstract :
Error bounds are presented for rolling horizon (RH) policies in general (Borel) Markov control problems, both stationary and nonstationary, with discounted and average reward criteria. In each case, conditions are given under which the reward of the rolling horizon policy converges geometrically to the optimal reward function, uniformly in the initial state, as the length of the rolling horizon increases. A description of the control model and the general assumptions are given. The approach extends the results of J.M. Alden and A.R.L. Smith (1988) on nonstationary processes with finite state and action spaces. However, the proofs presented here are simpler: when stationary models are analyzed first, the error bounds follow more or less directly from well-known value iteration results. The corresponding error bounds for nonstationary models are then obtained by reducing these models to stationary ones.
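The convergence described in the abstract can be illustrated numerically: a rolling horizon policy applies, in each state, the first action of an optimal N-stage policy computed by value iteration, and its discounted reward approaches the optimal reward as N grows. Below is a minimal sketch on a hypothetical 2-state, 2-action discounted MDP (all transition probabilities, rewards, and the discount factor are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Hypothetical 2-state, 2-action discounted MDP (illustrative numbers only).
# P[a][s, s'] : transition probability under action a; r[s, a] : one-stage reward.
P = np.array([[[1.0, 0.0],    # action 0: stay put
               [0.0, 1.0]],
              [[0.0, 1.0],    # action 1: move toward state 1
               [0.0, 1.0]]])
r = np.array([[1.0, 0.0],     # state 0: small reward now vs. none
              [2.0, 2.0]])    # state 1: large reward under either action
beta = 0.9                    # discount factor
nS, nA = 2, 2

def q_values(v):
    """One-stage lookahead Q-values given terminal values v."""
    return r + beta * np.stack([P[a] @ v for a in range(nA)], axis=1)

def rolling_horizon_policy(N):
    """First decision rule of an optimal N-stage policy (value iteration)."""
    assert N >= 1
    v = np.zeros(nS)
    for _ in range(N):
        q = q_values(v)
        v = q.max(axis=1)
    return q.argmax(axis=1)  # the rolling horizon decision rule

def policy_value(pi):
    """Exact discounted reward of the stationary policy pi."""
    P_pi = np.array([P[pi[s]][s] for s in range(nS)])
    r_pi = r[np.arange(nS), pi]
    return np.linalg.solve(np.eye(nS) - beta * P_pi, r_pi)

# Optimal reward function via value iteration, run effectively to convergence.
v_opt = np.zeros(nS)
for _ in range(2000):
    v_opt = q_values(v_opt).max(axis=1)

for N in (1, 2, 3, 6):
    err = np.max(v_opt - policy_value(rolling_horizon_policy(N)))
    print(f"N={N}: sup-norm error {err:.4f}")
```

In this toy model the myopic (N=1) rule is suboptimal in state 0, so the sup-norm error is positive for short horizons and drops to zero once the horizon is long enough to see the larger future reward, mirroring the geometric convergence result stated above.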
Keywords :
Markov processes; discrete time systems; average reward criteria; discrete-time Markov control processes; error bounds; nonstationary processes; rolling horizon policies; control systems; optimal control; stochastic processes;
Journal_Title :
IEEE Transactions on Automatic Control