Title :
Online policy iteration based algorithms to solve the continuous-time infinite horizon optimal control problem
Author :
Vamvoudakis, Kyriakos ; Vrabie, Draguna ; Lewis, Frank
Author_Institution :
Autom. & Robot. Res. Inst., Univ. of Texas at Arlington, Fort Worth, TX
Date :
March 30 - April 2, 2009
Abstract :
In this paper we discuss two online algorithms based on policy iteration for learning the continuous-time (CT) optimal control solution for nonlinear systems with infinite-horizon quadratic cost. For the first time we present an online adaptive algorithm, implemented on an actor/critic structure, which involves synchronous continuous-time adaptation of both actor and critic neural networks. This is a version of generalized policy iteration for CT systems. Convergence of the new algorithm to the optimal controller is proven, and stability of the system is guaranteed. The characteristics and requirements of the new online learning algorithm are discussed in relation to the regular online policy iteration algorithm for CT systems which we have previously developed. The latter solves the optimal control problem by performing sequential updates on the actor and critic networks, i.e., while one is being tuned the other is held constant. In contrast, the new algorithm relies on simultaneous adaptation of both actor and critic networks. A simulation example is provided to support the new theoretical results.
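Below is a minimal, illustrative Python/NumPy sketch of the synchronous actor/critic tuning idea described in the abstract. The scalar dynamics, the polynomial value-function basis, the tuning gains, and the specific update laws are assumptions chosen for this example; they are not the update laws derived in the paper.

# Sketch: simultaneous (synchronous) actor/critic adaptation for a CT system
# with infinite-horizon quadratic cost. All quantities below are illustrative.
import numpy as np

# Assumed scalar nonlinear plant  x_dot = f(x) + g(x) u,  cost  int (Q x^2 + R u^2) dt
f = lambda x: -x + 0.5 * x**3
g = lambda x: 1.0
Q, R = 1.0, 1.0

# Critic approximates V(x) ~ Wc . phi(x); actor uses the same basis gradient (assumption).
phi     = lambda x: np.array([x**2, x**4])      # value-function basis
dphi_dx = lambda x: np.array([2*x, 4*x**3])     # its gradient

Wc = np.array([1.0, 1.0])   # critic weights (initial guess)
Wa = np.array([1.0, 1.0])   # actor weights (initial guess)

dt, a_c, a_a = 1e-3, 5.0, 1.0   # Euler step and tuning gains (assumptions)
x = 1.0
for k in range(int(20.0 / dt)):
    # Actor: u = -(1/2R) g(x) (dphi/dx)^T Wa, i.e. policy from approximate value gradient
    u = -0.5 / R * g(x) * dphi_dx(x) @ Wa

    # Continuous-time Bellman/Hamiltonian residual for the current weights
    sigma = dphi_dx(x) * (f(x) + g(x) * u)          # regressor
    e = Wc @ sigma + Q * x**2 + R * u**2            # HJB residual

    # Synchronous updates: both networks adapt at every time step
    Wc = Wc - dt * a_c * e * sigma / (1.0 + sigma @ sigma)**2   # normalized gradient step
    Wa = Wa - dt * a_a * (Wa - Wc)                              # actor tracks the critic

    x = x + dt * (f(x) + g(x) * u)                  # integrate the plant

print("critic weights:", Wc, " actor weights:", Wa)
print("V(1) estimate:", Wc @ phi(1.0))

Note that this sketch omits the persistence-of-excitation probing and the stability terms a full implementation would need; it only illustrates how the critic and actor weights can be adapted simultaneously rather than sequentially.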
Keywords :
continuous time systems; infinite horizon; neurocontrollers; optimal control; stability; actor-critic structure; continuous-time infinite horizon optimal control problem; critic neural networks; infinite horizon quadratic cost; nonlinear systems; online learning algorithm; online policy iteration based algorithms; stability; Approximation algorithms; Convergence; Costs; Infinite horizon; Iterative algorithms; Neural networks; Nonlinear equations; Nonlinear systems; Optimal control; Riccati equations;
Conference_Title :
2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL '09)
Conference_Location :
Nashville, TN
Print_ISBN :
978-1-4244-2761-1
DOI :
10.1109/ADPRL.2009.4927523