DocumentCode :
3194126
Title :
Distributed control by Lagrangian steepest descent
Author :
Wolpert, David H. ; Bieniawski, Stefan
Author_Institution :
NASA Ames Res. Center, Moffett Field, CA, USA
Volume :
2
fYear :
2004
fDate :
14-17 Dec. 2004
Firstpage :
1562
Abstract :
Often adaptive, distributed control can be viewed as an iterated game between independent players. The coupling between the players' mixed strategies, arising as the system evolves, is determined by the system designer. Information theory tells us that the most likely joint strategy of the players, given a value of the expectation of the overall control objective function, is the minimizer of a Lagrangian function of the joint strategy. So the goal of the system designer is to speed evolution of the joint strategy to that Lagrangian minimizing point, lower the expected value of the control objective function, and repeat. Here, we discuss how to do this using local descent procedures, and thereby achieve efficient, adaptive, distributed control.
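Illustrative sketch (not the authors' code): a minimal example of descending a maximum-entropy Lagrangian L(q) = E_q[G] - T*S(q) over a product of per-player mixed strategies, as described in the abstract. The objective matrix G, the temperature T, the step size lr, and the softmax parametrization of each player's strategy are all assumptions introduced here for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint objective G[x1, x2] to be minimized (3 moves per player).
G = rng.normal(size=(3, 3))
T = 0.5   # temperature weighting the entropy term (assumed value)
lr = 0.5  # descent step size (assumed value)

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

# Unconstrained logits for each player's mixed strategy q_i = softmax(theta_i).
theta1 = np.zeros(3)
theta2 = np.zeros(3)

for _ in range(500):
    q1, q2 = softmax(theta1), softmax(theta2)
    # dL/dq_i: expected objective given own move, plus the entropy-term gradient.
    g1 = G @ q2 + T * (np.log(q1) + 1.0)
    g2 = G.T @ q1 + T * (np.log(q2) + 1.0)
    # Chain rule through the softmax keeps each q_i on the probability simplex.
    theta1 -= lr * q1 * (g1 - q1 @ g1)
    theta2 -= lr * q2 * (g2 - q2 @ g2)

q1, q2 = softmax(theta1), softmax(theta2)
print("q1:", q1.round(3), "q2:", q2.round(3))
print("E[G]:", q1 @ G @ q2)

Each player updates only its own distribution using the expected objective conditioned on its own move, which is the local, distributed character of the descent the abstract refers to.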
Keywords :
adaptive control; distributed control; information theory; minimisation; Lagrangian function; Lagrangian minimizing point; Lagrangian steepest descent; control objective function; iterated game; joint strategy; local descent procedures; system design; Adaptive control; Control systems; Distributed control; Game theory; Information theory; Lagrangian functions; Mathematics; Programmable control; Sampling methods; Stochastic systems;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
43rd IEEE Conference on Decision and Control (CDC), 2004
ISSN :
0191-2216
Print_ISBN :
0-7803-8682-5
Type :
conf
DOI :
10.1109/CDC.2004.1430266
Filename :
1430266