DocumentCode :
1547651
Title :
Error bounds for rolling horizon policies in discrete-time Markov control processes
Author :
Hernandez-Lerma, O. ; Lasserre, J.B.
Author_Institution :
Depto. de Matemáticas, CINVESTAV-IPN, Mexico City, Mexico
Volume :
35
Issue :
10
fYear :
1990
fDate :
10/1/1990
Firstpage :
1118
Lastpage :
1124
Abstract :
Error bounds are presented for rolling horizon (RH) policies in general (Borel) Markov control problems, both stationary and nonstationary, under discounted and average reward criteria. In each case, conditions are given under which the reward of the rolling horizon policy converges geometrically to the optimal reward function, uniformly in the initial state, as the length of the rolling horizon increases. A description of the control model and the general assumptions is given. The approach extends the results of J.M. Alden and A.R.L. Smith (1988) on nonstationary processes with finite state and action spaces, but the proofs presented are simpler: when stationary models are analyzed first, the error bounds follow more or less directly from well-known value iteration results, and the corresponding bounds for nonstationary models are obtained by reducing these models to stationary ones.
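To illustrate the mechanism the abstract describes, here is a minimal sketch on a hypothetical two-state discounted MDP (the numbers are invented for illustration and are not from the paper): the rolling horizon policy takes, in each state, the first action of an N-stage dynamic program, and its reward approaches the optimal reward as the horizon N grows.

```python
import numpy as np

# Hypothetical 2-state, 2-action discounted MDP (illustration only).
# r[s, a] = one-step reward; P[a][s, s'] = transition probability.
# Action 1 in state 0 is a myopic trap: higher immediate reward, bad state after.
r = np.array([[1.0, 1.5],
              [0.0, 0.0]])
P = np.array([[[1.0, 0.0],   # action 0: state 0 stays good, state 1 stays bad
               [0.0, 1.0]],
              [[0.0, 1.0],   # action 1: state 0 falls into the bad state
               [0.0, 1.0]]])
beta = 0.9  # discount factor

def n_step_values(N):
    """N-stage dynamic programming; returns (V_N, greedy first-stage actions)."""
    V = np.zeros(2)
    a0 = np.zeros(2, dtype=int)
    for _ in range(N):
        Q = r + beta * np.array([P[a] @ V for a in range(2)]).T  # Q[s, a]
        a0 = Q.argmax(axis=1)  # actions the rolling horizon policy would take
        V = Q.max(axis=1)
    return V, a0

def policy_value(pi):
    """Exact discounted reward of the stationary policy pi (one action per state)."""
    Ppi = np.array([P[pi[s]][s] for s in range(2)])
    rpi = np.array([r[s, pi[s]] for s in range(2)])
    return np.linalg.solve(np.eye(2) - beta * Ppi, rpi)

# Near-optimal reference policy via long-horizon dynamic programming.
_, pi_star = n_step_values(500)
V_opt = policy_value(pi_star)

# The gap between the RH policy's reward and the optimum shrinks as N grows.
for N in (1, 2, 3, 5):
    _, pi_N = n_step_values(N)
    print(N, np.max(V_opt - policy_value(pi_N)))
```

With horizon N = 1 the policy is myopic and suboptimal; already at N = 2 the lookahead sees past the trap, consistent with the geometric convergence the paper establishes in far greater generality.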
Keywords :
Markov processes; discrete time systems; average reward criteria; discrete-time Markov control processes; error bounds; nonstationary processes; rolling horizon policies; Control systems; Optimal control; Process control; Stochastic processes; Topology;
fLanguage :
English
Journal_Title :
Automatic Control, IEEE Transactions on
Publisher :
ieee
ISSN :
0018-9286
Type :
jour
DOI :
10.1109/9.58554
Filename :
58554