Author_Institution :
Dept. of Aeronaut. & Astronaut., Stanford Univ., CA, USA
Abstract :
Optimal control had its origins in the calculus of variations in the 17th century. The calculus of variations was developed further in the 18th century by Euler and Lagrange and in the 19th century by Legendre, Jacobi, Hamilton, and Weierstrass. In the early 20th century, Bolza and Bliss put the final touches of rigor on the subject. In 1957, Bellman gave a new view of Hamilton-Jacobi theory that he called dynamic programming, essentially a nonlinear feedback control scheme. McShane (1939) and Pontryagin (1962) extended the calculus of variations to handle control variable inequality constraints, the latter enunciating his elegant maximum principle. The truly enabling element for the use of optimal control theory was the digital computer, which became commercially available in the 1950s. Research that began in the 1980s, and continues today, aims to make optimal feedback logic more robust to variations in the plant and disturbance models; one element of this research is worst-case and H-infinity control, which developed out of differential game theory.
Keywords :
control engineering computing; dynamic programming; feedback; game theory; history; nonlinear control systems; optimal control; H-infinity control; Hamilton-Jacobi theory; calculus; control variable inequality constraints; differential game theory; maximum principle; nonlinear feedback control; worst-case control; feedback control; Jacobian matrices; Lagrangian functions; logic; robust control