DocumentCode :
3312906
Title :
Convergence of a gradient algorithm with penalty for training two-layer neural networks
Author :
Shao, Hongmei ; Liu, Lijun ; Zheng, Gaofeng
Author_Institution :
Coll. of Math. & Comput. Sci., China Univ. of Pet., Dongying, China
fYear :
2009
fDate :
8-11 Aug. 2009
Firstpage :
76
Lastpage :
79
Abstract :
In this paper, a squared penalty term is added to the conventional error function to improve the generalization of neural networks. A weight boundedness theorem and two convergence theorems are proved for the gradient learning algorithm with penalty when it is used to train a two-layer feedforward neural network. To illustrate the above theoretical findings, numerical experiments are conducted on a linearly separable problem and simulation results are presented.
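As a concrete illustration of the training scheme the abstract describes (a squared weight penalty added to the usual sum-of-squares error, minimized by gradient descent on a two-layer network), the following NumPy sketch is offered. It is not the authors' code: the network sizes, learning rate `eta`, penalty coefficient `lam`, and the synthetic linearly separable data are all assumed for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linearly separable problem: label is 1 when x1 + x2 > 0.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
N = X.shape[0]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two-layer feedforward network: 2 inputs -> 4 sigmoid hidden units -> 1 sigmoid output.
# Sizes are illustrative choices, not taken from the paper.
W1 = rng.normal(scale=0.5, size=(2, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))

eta, lam = 0.5, 1e-3  # learning rate and penalty coefficient (assumed values)

for epoch in range(5000):
    # Forward pass.
    H = sigmoid(X @ W1)            # (N, 4) hidden activations
    out = sigmoid(H @ W2)[:, 0]    # (N,) network outputs

    # Penalized error: E(W) = (1/2N) * sum (out - y)^2 + lam * (||W1||^2 + ||W2||^2)
    err = out - y

    # Backpropagation through both sigmoid layers; the penalty contributes 2*lam*W.
    d_out = (err * out * (1.0 - out)) / N             # (N,)
    gW2 = H.T @ d_out[:, None] + 2.0 * lam * W2       # (4, 1)
    d_hid = (d_out[:, None] @ W2.T) * H * (1.0 - H)   # (N, 4)
    gW1 = X.T @ d_hid + 2.0 * lam * W1                # (2, 4)

    # Gradient step; the paper's theorems concern the boundedness of the
    # weights and the convergence of this iteration.
    W1 -= eta * gW1
    W2 -= eta * gW2

pred = sigmoid(sigmoid(X @ W1) @ W2)[:, 0] > 0.5
print(f"training accuracy: {(pred == (y > 0.5)).mean():.3f}")
```

The squared penalty keeps the weight vectors bounded during training, which is the property the paper's boundedness theorem formalizes and on which its convergence results rest.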
Keywords :
convergence of numerical methods; feedforward neural nets; gradient methods; learning (artificial intelligence); conventional error function; convergence theorem; feedforward neural network; gradient algorithm; gradient learning algorithm; squared penalty term; two-layer neural net training; Computer networks; Convergence; Cost function; Educational institutions; Feedforward neural networks; Gradient methods; Mathematics; Neural networks; Petroleum; Training data
fLanguage :
English
Publisher :
IEEE
Conference_Title :
2009 2nd IEEE International Conference on Computer Science and Information Technology (ICCSIT 2009)
Conference_Location :
Beijing
Print_ISBN :
978-1-4244-4519-6
Electronic_ISBN :
978-1-4244-4520-2
Type :
conf
DOI :
10.1109/ICCSIT.2009.5234616
Filename :
5234616