DocumentCode :
1686697
Title :
Minimization through convexitization in training neural networks
Author :
Lo, James T.
Author_Institution :
Dept. of Math. & Stat., Maryland Univ., Baltimore, MD, USA
Volume :
2
fYear :
2002
fDate :
2002
Firstpage :
1889
Lastpage :
1894
Abstract :
The paper provides a mathematical explanation of the ability of the adaptive risk-averting training method to avoid poor local minima. The method transforms the standard least-squares error criterion into a "quasi-convex" criterion, making it unnecessary to search the entire weight space to avoid poor local minima. Two theorems are proven: one characterizes the convexity region of the risk-averting error criterion into which the standard criterion is transformed, and the other gives a minimax interpretation of the risk-averting error criterion.
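The transformation described in the abstract can be illustrated with a small sketch. This is an assumption-laden illustration, not code from the paper: it takes the risk-averting criterion to be a sum of exponentials of squared errors with a risk-sensitivity parameter `lam` (a common form in the risk-averting training literature), and shows numerically how, as `lam` grows, the criterion is dominated by the largest error, which is the minimax interpretation the abstract mentions.

```python
import math

def least_squares(errors):
    """Standard least-squares criterion: sum of squared errors."""
    return sum(e * e for e in errors)

def risk_averting(errors, lam):
    """Assumed form of the risk-averting criterion: sum of exp(lam * e^2)."""
    return sum(math.exp(lam * e * e) for e in errors)

errors = [0.1, 0.2, 1.0]

# (1/lam) * log(criterion) tends toward the maximum squared error as
# lam grows, so minimizing the criterion at large lam approximates
# minimizing the worst-case (minimax) error.
for lam in (1.0, 10.0, 100.0):
    print(lam, math.log(risk_averting(errors, lam)) / lam)
```

For `lam = 100` the scaled log-criterion is already close to the maximum squared error (1.0 here), while the least-squares criterion weights all errors equally.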
Keywords :
Hessian matrices; learning (artificial intelligence); least squares approximations; minimisation; neural nets; adaptive risk-averting training method; convexification; convexity region; minimax interpretation; minimization; neural networks; quasi-convex criterion; standard least-squares error criterion; Contracts; Design methodology; Electronic mail; Government; Intelligent networks; Mathematics; Minimax techniques; Neural networks; Optimization methods; Statistics;
fLanguage :
English
Publisher :
ieee
Conference_Title :
Proceedings of the 2002 International Joint Conference on Neural Networks (IJCNN '02)
Conference_Location :
Honolulu, HI
ISSN :
1098-7576
Print_ISBN :
0-7803-7278-6
Type :
conf
DOI :
10.1109/IJCNN.2002.1007807
Filename :
1007807