Title :
New designs for universal stability in classical adaptive control and reinforcement learning
Author_Institution :
Nat. Sci. Found., Arlington, VA, USA
Date :
1999
Abstract :
Many researchers think that neurocontrollers should never be used in real-world applications until firm, unconditional stability theorems for them have been established. This paper explains key ideas from the author's previous paper (1998), which discusses the problem of “universal stability” (in the linear case) and proposes a new solution. New forms of real-time “reinforcement learning” or “approximate dynamic programming”, developed for the nonlinear stochastic case, appear to permit this kind of universal stability. They also offer hope of easier and more reliable convergence in off-line learning applications, such as those discussed in this paper or those required for nonlinear robust control. Challenges for future research are also discussed.
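Illustrative note: the sketch below is a minimal, generic example of approximate dynamic programming in the linear-quadratic setting the abstract alludes to, where a quadratic critic is iterated toward the solution of the discrete-time Riccati equation and yields a stabilizing feedback gain. It is not the specific designs proposed in the paper; the plant matrices A, B and cost weights Q, R are arbitrary assumptions chosen only for illustration.

# Illustrative sketch only: value-iteration ("critic") form of approximate
# dynamic programming for a linear-quadratic problem. Not the paper's designs;
# A, B, Q, R are assumed values for a simple controllable plant.
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # assumed plant dynamics: x_{k+1} = A x_k + B u_k
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)                # assumed state cost weight
R = np.array([[1.0]])        # assumed control cost weight

P = np.zeros((2, 2))         # critic: quadratic value function x' P x
for _ in range(500):
    # Greedy feedback gain for the current critic: K = (R + B'PB)^{-1} B'PA
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    # Bellman/Riccati backup: P <- Q + A'P(A - BK)
    P = Q + A.T @ P @ (A - B @ K)

print("feedback gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))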
Keywords :
adaptive control; control system synthesis; dynamic programming; learning (artificial intelligence); neurocontrollers; stability; approximate dynamic programming; nonlinear control systems; reinforcement learning; robust control; universal stability; biological neural networks; costs; learning; programmable control; Riccati equations; stochastic processes
Conference_Title :
1999 International Joint Conference on Neural Networks (IJCNN '99)
Print_ISBN :
0-7803-5529-6
DOI :
10.1109/IJCNN.1999.833420