DocumentCode :
3653559
Title :
Adaptive dynamic programming for terminally constrained finite-horizon optimal control problems
Author :
L. Andrews;J. R. Klotz;R. Kamalapurkar;W. E. Dixon
Author_Institution :
Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, USA
fYear :
2014
Firstpage :
5095
Lastpage :
5100
Abstract :
Adaptive dynamic programming is applied to control-affine nonlinear systems with uncertain drift dynamics to obtain a near-optimal solution to a finite-horizon optimal control problem with hard terminal constraints. A reinforcement learning-based actor-critic framework is used to approximately solve the Hamilton-Jacobi-Bellman equation, wherein critic and actor neural networks (NNs) approximately learn the optimal value function and control policy, respectively, while enforcing the optimality condition resulting from the hard terminal constraint. Concurrent learning-based update laws relax the restrictive persistence of excitation requirement. A Lyapunov-based stability analysis guarantees uniformly ultimately bounded convergence of the enacted control policy to the optimal control policy.
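The actor-critic idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's method: it uses a scalar linear system with known dynamics, an infinite-horizon quadratic cost, a single-feature critic, and a simplified gradient update in place of the paper's finite-horizon, terminally constrained update laws. The concurrent-learning flavor is mimicked by iterating the Bellman-error update over a small stack of recorded states rather than requiring persistently exciting trajectories; all names and constants are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: critic-weight learning for x_dot = a*x + b*u,
# running cost q*x^2 + r_u*u^2, value approximation V(x) ~ W^T sigma(x).
a, b = -1.0, 1.0
q, r_u = 1.0, 1.0

def sigma(x):            # critic basis (single quadratic feature)
    return np.array([x**2])

def dsigma(x):           # gradient of the basis w.r.t. x
    return np.array([2 * x])

def policy(W, x):        # actor induced by the critic: u = -(1/2) r^{-1} b (dsigma)^T W
    return -0.5 / r_u * b * float(dsigma(x) @ W)

def bellman_error(W, x):
    # delta = (dV/dx) * x_dot + running cost; zero along the optimal value function
    u = policy(W, x)
    xdot = a * x + b * u
    return float(dsigma(x) @ W) * xdot + q * x**2 + r_u * u**2

W = np.array([0.0])
stack = [0.5, 1.0, -1.5, 2.0]   # recorded states (assumed sufficiently rich)
lr = 0.05
for _ in range(2000):
    for x in [1.0] + stack:      # current point plus the concurrent-learning stack
        delta = bellman_error(W, x)
        grad = dsigma(x) * (a * x + b * policy(W, x))  # d(delta)/dW with the policy held fixed
        W -= lr * delta * grad / (1.0 + grad @ grad)   # normalized gradient step
```

For this scalar example the learned weight should approach the algebraic Riccati solution p = sqrt(2) - 1, so the Bellman error is driven near zero; the paper's actual laws additionally handle the unknown drift, the time-varying finite-horizon value function, and the hard terminal constraint.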
Keywords :
"Optimal control","Approximation methods","Stability analysis","Convergence","Vectors","Artificial neural networks","Eigenvalues and eigenfunctions"
Publisher :
ieee
Conference_Titel :
53rd IEEE Conference on Decision and Control (CDC), 2014
ISSN :
0191-2216
Print_ISBN :
978-1-4799-7746-8
Type :
conf
DOI :
10.1109/CDC.2014.7040185
Filename :
7040185