Title :
Approximate finite-horizon optimal control without PDE's
Author :
Sassano, M.; Astolfi, A.
Author_Institution :
Dept. of Electr. & Electron. Eng., Imperial Coll. London, London, UK
Abstract :
The problem of controlling the state of a system from a given initial condition over a fixed time interval, while minimizing a criterion of optimality, is commonly referred to as the finite-horizon optimal control problem. It is well known that one of the standard solutions to the finite-horizon optimal control problem relies upon the solution of the Hamilton-Jacobi-Bellman (HJB) partial differential equation, which may be difficult or impossible to obtain in closed form. Herein we propose a methodology to avoid the explicit solution of the HJB PDE for input-affine nonlinear systems by means of a dynamic extension. This results in a dynamic, time-varying state feedback yielding an approximate solution to the finite-horizon optimal control problem.
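Note :
For context, and in a common but illustrative notation (the symbols q, R, and m below are not taken from the paper), the finite-horizon HJB PDE referred to in the abstract reads, for an input-affine system \(\dot{x} = f(x) + g(x)u\) with cost \(\int_t^T \bigl(q(x(s)) + u(s)^\top R\, u(s)\bigr)\,ds + m(x(T))\),
\[
-\frac{\partial V}{\partial t} = q(x) + \frac{\partial V}{\partial x} f(x) - \frac{1}{4}\,\frac{\partial V}{\partial x}\, g(x)\, R^{-1} g(x)^\top \frac{\partial V}{\partial x}^{\!\top}, \qquad V(x,T) = m(x),
\]
with the associated optimal feedback \(u^*(x,t) = -\tfrac{1}{2} R^{-1} g(x)^\top \bigl(\partial V / \partial x\bigr)^{\!\top}\). The paper's contribution is a dynamic extension that avoids solving this PDE explicitly.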
Keywords :
nonlinear control systems; optimal control; state feedback; time-varying systems; approximate finite-horizon optimal control; dynamic extension; dynamic time-varying state feedback; fixed time interval; input-affine nonlinear systems; Boundary conditions; Equations; History; Nonlinear dynamical systems; Optimal control; Partial differential equations;
Conference_Title :
2011 50th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC)
Conference_Location :
Orlando, FL
Print_ISBN :
978-1-61284-800-6
ISSN :
0743-1546
DOI :
10.1109/CDC.2011.6161137