DocumentCode :
2657940
Title :
Neural network training using homotopy continuation methods
Author :
Chow, J. ; Udpa, L. ; Udpa, S.
Author_Institution :
Dept. of Electr. & Comput. Eng., Iowa State Univ., Ames, IA, USA
fYear :
1991
fDate :
18-21 Nov 1991
Firstpage :
2528
Abstract :
Neural networks are widely used for classification tasks. The networks are traditionally trained with gradient methods that minimize the training error; these techniques, however, are highly susceptible to becoming trapped in local minima. The authors propose an approach for obtaining the global minimum of the training error by employing the homotopy continuation method to minimize the classification error during training. Two approaches are considered: the first uses a polynomial model of the nodal activation function, and the second uses the traditional sigmoid function. Results illustrating the superiority of the homotopy method over gradient descent are presented.
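Homotopy continuation, as summarized in the abstract, embeds the hard training problem in a family of objectives H(w, t) = (1 - t)·G(w) + t·E(w) that deforms an easy problem G (with a known minimum) into the actual training error E as t moves from 0 to 1, tracking the minimizer along the way. The sketch below is a minimal illustration of that idea only, assuming a small sigmoid network, a toy two-class data set, a quadratic "easy" objective, and plain gradient steps for each continuation subproblem; it is not the authors' implementation, and all names, data, and parameters are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Toy two-class problem (illustrative data, not from the paper)
X = rng.normal(size=(100, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)  # XOR-like labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1, W2, X):
    h = sigmoid(X @ W1)          # hidden-layer activations
    return sigmoid(h @ W2), h    # network output, hidden activations

def training_error(W1, W2):
    out, _ = forward(W1, W2, X)
    return np.mean((out.ravel() - y) ** 2)

def easy_error(W1, W2):
    # Convex surrogate whose global minimum (the origin) is known.
    return 0.5 * (np.sum(W1 ** 2) + np.sum(W2 ** 2))

def homotopy_error(W1, W2, t):
    # H(w, t) = (1 - t) * easy(w) + t * hard(w), swept from t = 0 to t = 1.
    return (1.0 - t) * easy_error(W1, W2) + t * training_error(W1, W2)

def numerical_grad(f, W, eps=1e-5):
    # Central-difference gradient of f with respect to the entries of W.
    g = np.zeros_like(W)
    it = np.nditer(W, flags=['multi_index'])
    for _ in it:
        idx = it.multi_index
        old = W[idx]
        W[idx] = old + eps; fp = f()
        W[idx] = old - eps; fm = f()
        W[idx] = old
        g[idx] = (fp - fm) / (2.0 * eps)
    return g

# Continuation loop: warm-start each subproblem from the previous solution
# while t is stepped from the easy problem (t = 0) to the real one (t = 1).
W1 = rng.normal(scale=0.1, size=(2, 4))
W2 = rng.normal(scale=0.1, size=(4, 1))
for t in np.linspace(0.0, 1.0, 11):
    for _ in range(200):  # a few gradient steps per continuation step
        g1 = numerical_grad(lambda: homotopy_error(W1, W2, t), W1)
        g2 = numerical_grad(lambda: homotopy_error(W1, W2, t), W2)
        W1 -= 0.5 * g1
        W2 -= 0.5 * g2
print("final training error:", training_error(W1, W2))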
Keywords :
errors; learning systems; neural nets; pattern recognition; classification; global minimum; homotopy continuation methods; neural network training error minimization; nodal activation function; polynomial modeling; sigmoid function; Artificial neural networks; Biological neural networks; Computer errors; Gradient methods; Humans; Nervous system; Neural networks; Neurons; Pattern classification; Polynomials;
fLanguage :
English
Publisher :
ieee
Conference_Title :
1991 IEEE International Joint Conference on Neural Networks (IJCNN 1991)
Print_ISBN :
0-7803-0227-3
Type :
conf
DOI :
10.1109/IJCNN.1991.170769
Filename :
170769