DocumentCode :
3080012
Title :
A new approach to stochastic adaptive control
Author :
Meyn, S.P. ; Caines, P.E.
Author_Institution :
McGill University, Montréal, Canada
fYear :
1986
fDate :
10-12 Dec. 1986
Firstpage :
1893
Lastpage :
1897
Abstract :
The principal techniques used up to now for the analysis of stochastic adaptive control systems have been (i) super-martingale (often called stochastic Lyapunov) methods and (ii) methods relying upon the strong consistency of some parameter estimation scheme. Optimal stochastic control and filtering methods have also been employed. Although there have been some successes, the extension of these techniques to a broad class of adaptive control problems, including the case of time varying parameters, has been difficult. In this paper a new approach is adopted: If an underlying Markovian state space system for the controlled process is available, and if this process possesses stationary transition probabilities, then the powerful ergodic theory of Markov processes may be applied. Subject to technical conditions one may deduce (amongst other facts) (i) the existence of an invariant measure π for the process and (ii) the convergence almost surely of the sample averages of a function of the state process (and of its expectation) to its conditional expectation with respect to a sub-σ-field of invariant sets Σ_I. The technique is illustrated by an application to a previously unsolved problem involving a linear system with unbounded random time varying parameters. Work supported by Canada NSERC Grant No.: 1329 and a UK SERC Visiting Research Fellowship.
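The ergodic-theoretic convergence described in the abstract can be sketched numerically. The following is a minimal, hypothetical illustration (not from the paper): a stable scalar linear Markov chain x_{t+1} = a·x_t + w_t with |a| < 1 has a Gaussian invariant measure of variance σ²/(1 − a²), and by the ergodic theorem for Markov processes the sample average of f(x) = x² converges almost surely to its expectation under that invariant measure.

```python
import random

def sample_average_of_square(a=0.5, noise_std=1.0, n=200_000, seed=0):
    """Time average of x_t^2 along one path of x_{t+1} = a*x_t + w_t.

    For |a| < 1 the chain has stationary transition probabilities and a
    Gaussian invariant measure with variance noise_std**2 / (1 - a**2).
    The ergodic theorem says this time average converges a.s. to that
    stationary variance (the chain is ergodic, so the sub-sigma-field of
    invariant sets is trivial and the limit is a constant).
    """
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(n):
        x = a * x + rng.gauss(0.0, noise_std)  # one Markov transition
        total += x * x                         # accumulate f(x) = x^2
    return total / n

avg = sample_average_of_square()
stationary_variance = 1.0 / (1.0 - 0.5 ** 2)  # = 4/3, the ergodic limit
```

With 200,000 steps the time average lands close to the stationary variance 4/3; the parameter values here are chosen only for illustration.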
Keywords :
Adaptive control; Control systems; Filtering; Markov processes; Optimal control; Parameter estimation; Process control; State-space methods; Stochastic processes; Stochastic systems;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Decision and Control, 1986 25th IEEE Conference on
Conference_Location :
Athens, Greece
Type :
conf
DOI :
10.1109/CDC.1986.267319
Filename :
4049125