DocumentCode :
1055862
Title :
Optimal control-1950 to 1985
Author :
Bryson, A.E., Jr.
Author_Institution :
Dept. of Aeronaut. & Astronaut., Stanford Univ., CA, USA
Volume :
16
Issue :
3
fYear :
1996
fDate :
6/1/1996
Firstpage :
26
Lastpage :
33
Abstract :
Optimal control had its origins in the calculus of variations in the 17th century. The calculus of variations was developed further in the 18th century by Euler and Lagrange and in the 19th century by Legendre, Jacobi, Hamilton, and Weierstrass. In the early 20th century, Bolza and Bliss put the final touches of rigor on the subject. In 1957, Bellman gave a new view of Hamilton-Jacobi theory, which he called dynamic programming, essentially a nonlinear feedback control scheme. McShane (1939) and Pontryagin (1962) extended the calculus of variations to handle control-variable inequality constraints, the latter enunciating his elegant maximum principle. The truly enabling element for the use of optimal control theory was the digital computer, which became commercially available in the 1950s. In the 1980s, research began, and continues today, on making optimal feedback logic more robust to variations in the plant and disturbance models; one element of this research is worst-case and H-infinity control, which developed out of differential game theory.
Keywords :
control engineering computing; dynamic programming; feedback; game theory; history; nonlinear control systems; optimal control; H-infinity control; Hamilton-Jacobi theory; calculus; control variable inequality constraints; differential game theory; maximum principle; nonlinear feedback control; worst-case control; feedback control; Jacobian matrices; Lagrangian functions; logic; robust control
fLanguage :
English
Journal_Title :
Control Systems, IEEE
Publisher :
ieee
ISSN :
1066-033X
Type :
jour
DOI :
10.1109/37.506395
Filename :
506395