DocumentCode :
3523493
Title :
New gradient learning rules for artificial neural nets based on moderatism and feedback model
Author :
Islam, M. Tanvir ; Okabe, Yoichi
Author_Institution :
Dept. of Electron. Eng., Tokyo Univ., Japan
fYear :
2003
fDate :
14-17 Dec. 2003
Firstpage :
601
Lastpage :
604
Abstract :
Moderatism (Y. Okabe, 1998) is a learning model for ANNs based on the principle that individual neurons, and the neural net as a whole, try to sustain a "moderate" level in their input and output signals, thereby maintaining a close mutual relationship with the outside environment. In this paper, two moderatism-based local gradient learning rules are proposed. A pattern-learning experiment then compares the learning performance of these two rules with the error-based weight update (EBWU) rule (M. Tanvir Islam et al., 2001) and error backpropagation (C. M. Bishop, 1995).
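The abstract only names the rules, not their form. As a loose illustration of the "moderate level" principle (not the paper's actual update rules), one can sketch a local gradient step on the squared deviation of a sigmoid neuron's output from a target moderate level; the target `y_mod`, learning rate, and all other names here are assumptions for the sketch:

```python
import numpy as np

def moderatism_update(w, x, y_mod=0.5, eta=0.1):
    """One hypothetical local gradient step pushing the neuron's
    output toward a 'moderate' activity level y_mod.

    Local cost: E = 0.5 * (y - y_mod)**2 with y = sigmoid(w @ x),
    so dE/dw = (y - y_mod) * y * (1 - y) * x.
    """
    y = 1.0 / (1.0 + np.exp(-w @ x))
    grad = (y - y_mod) * y * (1.0 - y) * x
    return w - eta * grad

rng = np.random.default_rng(0)
w = rng.normal(size=3)                 # random initial weights
x = np.array([1.0, -0.5, 0.25])        # fixed input pattern
for _ in range(1000):
    w = moderatism_update(w, x)

y = 1.0 / (1.0 + np.exp(-w @ x))
# After repeated local updates the output settles near y_mod,
# i.e. the neuron sustains a "moderate" activity level.
```

The update is local in the sense the abstract suggests: each neuron needs only its own input, output, and target level, not a globally backpropagated error signal.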
Keywords :
backpropagation; feedback; gradient methods; learning (artificial intelligence); neural nets; artificial neural nets; error backpropagation; error based weight update; feedback model; gradient learning rules; pattern learning; Artificial neural networks; Backpropagation; Biological system modeling; Costs; Electronic mail; Error correction; Multi-layer neural network; Neural networks; Neurofeedback; Neurons
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Proceedings of the 3rd IEEE International Symposium on Signal Processing and Information Technology (ISSPIT 2003)
Print_ISBN :
0-7803-8292-7
Type :
conf
DOI :
10.1109/ISSPIT.2003.1341192
Filename :
1341192