DocumentCode :
348806
Title :
An improved method for reducing the forgetfulness in incremental learning
Author :
Shinozawa, Kazuhiko ; Shimohara, Katsunori
Author_Institution :
NTT Commun. Sci. Labs., Kyoto, Japan
Volume :
4
fYear :
1999
fDate :
1999
Firstpage :
1068
Abstract :
We propose an improved method for reducing forgetfulness in incremental learning with a feedforward multilayer perceptron. In incremental learning, the network learns from new data items one by one, so it is affected mainly by the most recent data and past data is forgotten. The usual remedy is to store all the data and use it for repeated learning, but this is an inefficient use of computing time and memory. We regard forgetfulness as an increase in the error function's value on past training data, so it can be suppressed by minimizing the change in that value. The change in the error function's value is approximated using the eigenvalues and eigenvectors of the coefficient matrix of the second-order term of the error function's Taylor expansion. By introducing a constraint, based on this approximation, that minimizes the change in the value when the weight parameters are updated, forgetfulness can be suppressed. Based on this idea, we previously proposed a method that assigns an eigenvector with a small eigenvalue as the initial value of the momentum term. However, that method imposes only a weak constraint on minimizing the change in the value: while it is effective for learning a sine function, it is not effective for learning a chaotic sequence generated by a logistic map. In this paper, we modify the way in which the initial value of the momentum term is estimated and propose a method that imposes a stronger constraint on the weight-parameter updates so as to minimize the change in the error function's value, which effectively suppresses forgetfulness. The method was tested on a chaotic sequence generated by a logistic map, and the results show greater suppression of forgetfulness than with the original method.
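To make the constraint concrete, here is a minimal sketch (not the authors' code) of the idea described above: if the error on past data is expanded as E(w + dw) ~ E(w) + g.dw + (1/2) dw^T H dw, then confining the update direction to eigenvectors of H with small eigenvalues keeps the change in E small. All names below (constrained_update, H_past, eta, mu, k) are illustrative assumptions, not the paper's notation.

import numpy as np

def constrained_update(w, grad_new, H_past, eta=0.1, mu=0.9, k=1):
    # H_past: coefficient matrix of the second-order Taylor term (the
    # Hessian) of the error on past training data; for a symmetric
    # matrix, eigh returns eigenvalues in ascending order.
    _, eigvecs = np.linalg.eigh(H_past)
    # Moving along eigenvectors with small eigenvalues changes the past
    # error the least, so the momentum-like term is confined to the span
    # of the k smallest-eigenvalue eigenvectors.
    V = eigvecs[:, :k]
    momentum = V @ (V.T @ (-grad_new))
    # Gradient step on the new data item plus the constrained momentum.
    return w - eta * grad_new + mu * momentum

# Toy usage with a synthetic Hessian and gradient.
w = np.zeros(3)
H = np.diag([5.0, 1.0, 0.01])    # stand-in for the past-data Hessian
g = np.array([0.3, -0.2, 0.4])   # stand-in for the new-data gradient
w = constrained_update(w, g, H, k=1)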
Keywords :
eigenvalues and eigenfunctions; feedforward neural nets; learning (artificial intelligence); multilayer perceptrons; Taylor expansion; chaotic sequence; eigenvector; feedforward multilayer perceptron; forgetfulness; incremental learning; logistic map; past training data; repeated learning; sine function; weak constraint; Costs; Eigenvalues and eigenfunctions; Feedforward systems; Function approximation; Intelligent networks; Laboratories; Logistics; Multilayer perceptrons; Taylor series; Training data;
fLanguage :
English
Publisher :
ieee
Conference_Title :
1999 IEEE International Conference on Systems, Man, and Cybernetics (IEEE SMC '99) Conference Proceedings
Conference_Location :
Tokyo
ISSN :
1062-922X
Print_ISBN :
0-7803-5731-0
Type :
conf
DOI :
10.1109/ICSMC.1999.812558
Filename :
812558