DocumentCode :
2191649
Title :
Bounds of the incremental gain for discrete-time recurrent neural networks
Author :
Chu, Yun-Chung
Author_Institution :
Sch. of Electr. & Electron. Eng., Nanyang Technol. Univ., Singapore
Volume :
3
fYear :
2001
fDate :
2001
Firstpage :
2732
Abstract :
As a nonlinear system, a recurrent neural network generally has an incremental gain different from its induced norm. While most previous research efforts focused on the latter, this paper presents a method to compute an effective upper bound on the former for a class of discrete-time recurrent neural networks; the method applies not only to systems with arbitrary inputs but also extends to systems with small-norm inputs. The upper bound is computed by simple optimizations subject to linear matrix inequalities.
Keywords :
Lyapunov methods; matrix algebra; optimisation; recurrent neural nets; Lyapunov functions; diagonally dominant matrices; incremental gain; linear matrix inequality; optimizations; recurrent neural network; upper bound; Computer networks; Control system synthesis; Ear; Linear matrix inequalities; Lyapunov method; Neurons; Nonlinear systems; Recurrent neural networks; Reduced order systems; Upper bound;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Proceedings of the 40th IEEE Conference on Decision and Control, 2001
Conference_Location :
Orlando, FL
Print_ISBN :
0-7803-7061-9
Type :
conf
DOI :
10.1109/.2001.980685
Filename :
980685