DocumentCode :
2657299
Title :
Maximization of the gradient function for efficient neural network training
Author :
Ahmed, Sultan Uddin ; Shahjahan, Md ; Murase, Kazuyuki
Author_Institution :
Dept. of Electron. & Commun. Eng., Khulna Univ. of Eng. & Technol. (KUET), Khulna, Bangladesh
fYear :
2010
fDate :
23-25 Dec. 2010
Firstpage :
424
Lastpage :
429
Abstract :
In this paper, a faster supervised training algorithm (BPfast) for neural networks is proposed that maximizes the derivative of the sigmoid activation function during back-propagation (BP) training. BP adjusts the weights of a neural network by minimizing an error function. Because the weight update rule contains the derivative of the activation function, which tends to zero in the saturation region, BP can fall into "premature saturation", which slows down training convergence. To overcome this problem, BPfast maximizes the derivative of the activation function while minimizing the error function. BPfast is tested on five real-world benchmark problems: breast cancer, diabetes, heart disease, Australian credit card, and horse. BPfast exhibits faster convergence and better generalization ability than the standard BP algorithm.
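The abstract describes the method only at a high level; the sketch below is a hypothetical single-unit illustration of the idea, not the authors' actual BPfast update rule. It assumes a combined objective J = E - lam * f'(net), so that gradient descent on J simultaneously reduces the error E and increases the activation derivative f'(net); the weighting factor lam and the helper bpfast_step are illustrative assumptions not given in the abstract.

import numpy as np

# Sigmoid activation and its derivative.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(x):
    s = sigmoid(x)
    return s * (1.0 - s)

# One gradient step for a single sigmoid unit, sketching the BPfast idea:
# minimize the squared error E while also maximizing the activation
# derivative f'(net), i.e. descend on J = E - lam * f'(net).
# 'lam' is a hypothetical weighting factor not specified in the abstract.
def bpfast_step(w, b, x, t, lr=0.1, lam=0.01):
    net = np.dot(w, x) + b
    y = sigmoid(net)
    fp = sigmoid_deriv(net)          # f'(net); tends to zero in saturation

    # Gradient of E = 0.5 * (y - t)^2 with respect to net.
    dE_dnet = (y - t) * fp

    # Gradient of f'(net) with respect to net:
    # f''(net) = f'(net) * (1 - 2 * f(net)).
    dfp_dnet = fp * (1.0 - 2.0 * y)

    # Combined gradient: the -lam term pushes net away from saturation.
    dJ_dnet = dE_dnet - lam * dfp_dnet
    w -= lr * dJ_dnet * x
    b -= lr * dJ_dnet
    return w, b

# Example: a few steps on a single training pattern.
w, b = np.zeros(2), 0.0
for _ in range(100):
    w, b = bpfast_step(w, b, np.array([1.0, -1.0]), t=1.0)

The design point is that when the unit saturates, dE_dnet vanishes and standard BP stalls, while the extra derivative-maximizing term still produces a nonzero update.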
Keywords :
backpropagation; gradient methods; learning (artificial intelligence); neural nets; optimisation; Australian credit card; backpropagation training; breast cancer; diabetes; faster supervised algorithm; gradient function maximization; heart disease; horse; neural network training; sigmoid activation function; Artificial neural networks; Convergence; Diabetes; Heart; Horses; Testing; Training; Convergence; Generalization ability; Gradient information; Maximization; Neural network;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2010 13th International Conference on Computer and Information Technology (ICCIT)
Conference_Location :
Dhaka
Print_ISBN :
978-1-4244-8496-6
Type :
conf
DOI :
10.1109/ICCITECHN.2010.5723895
Filename :
5723895