DocumentCode :
2902132
Title :
Using a nonlinear mechanism to speed up the neural optimization processes
Author :
Hou, Zeng-Guang ; Jing, Feng-Shui ; Tan, Min
Author_Institution :
Inst. of Autom., Acad. Sinica, Beijing, China
fYear :
2002
fDate :
2002
Firstpage :
868
Lastpage :
873
Abstract :
Recurrent neural networks based on gradient descent algorithms have been widely used in the computation of various optimization and control problems. With the aid of a nonlinear mechanism, an improved method for accelerating the neural computation of optimization processes is proposed. We analyze its convergence property and compare it with other methods. Finally, simulation results are given to show its effectiveness for high-speed computation.
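The abstract refers to gradient-descent neural dynamics of the form dx/dt = -mu * grad E(x), accelerated by applying a nonlinear mechanism to the gradient. The sketch below is only an illustration of that general idea on a toy quadratic objective; the specific element-wise signed-power nonlinearity is an assumption for demonstration and is not taken from the paper.

```python
import numpy as np

# Illustrative quadratic objective E(x) = 0.5 * x^T Q x - b^T x (hypothetical example).
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

def grad_E(x):
    return Q @ x - b

def run(update, x0, mu=0.05, steps=200):
    # Euler discretization of the gradient-flow dynamics dx/dt = -mu * update(grad E(x)).
    x = x0.copy()
    for _ in range(steps):
        x = x - mu * update(grad_E(x))
    return x

# Plain gradient-descent dynamics.
plain = run(lambda g: g, np.zeros(2))

# Nonlinearly scaled dynamics: the gradient is passed through an element-wise
# signed power phi(g) = sign(g) * |g|^0.5 (an assumed nonlinearity for illustration).
phi = lambda g: np.sign(g) * np.abs(g) ** 0.5
scaled = run(phi, np.zeros(2))

x_star = np.linalg.solve(Q, b)  # exact minimizer, for comparison
print("plain :", plain, "error:", np.linalg.norm(plain - x_star))
print("scaled:", scaled, "error:", np.linalg.norm(scaled - x_star))
```

Running the script compares the residual error of the plain and nonlinearly scaled updates after the same number of iterations, which is the kind of comparison the abstract describes, under the stated assumptions.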
Keywords :
convergence of numerical methods; gradient methods; neurocontrollers; nonlinear control systems; nonlinear differential equations; optimal control; optimisation; recurrent neural nets; control problems; convergence property; gradient descent algorithms; high-speed computation; minimization problem; neural optimization processes; nonlinear mechanism; nonlinear optimal control problem; recurrent neural network computation acceleration; Acceleration; Automation; Computer networks; Control systems; Convergence; Design optimization; Equations; Neural networks; Optimization methods; Recurrent neural networks;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Proceedings of the 2002 IEEE International Symposium on Intelligent Control
ISSN :
2158-9860
Print_ISBN :
0-7803-7620-X
Type :
conf
DOI :
10.1109/ISIC.2002.1157876
Filename :
1157876