Title :
A multi-sieving neural network architecture that decomposes learning tasks automatically
Author :
Lu, Bao-Liang ; Kita, Hajime ; Nishikawa, Yoshikazu
Author_Institution :
Dept. of Electr. Eng., Kyoto Univ., Japan
Date :
27 Jun-2 Jul 1994
Abstract :
This paper presents a multi-sieving network (MSN) architecture and its multi-sieving learning (MSL) algorithm. The basic idea behind the MSN architecture is that patterns are first classified by a rough sieve and then, gradually, by finer ones. The MSN is constructed by adding sieving modules (SMs) adaptively as training progresses. An SM consists of two different neural networks and a simple logic circuit. The MSL algorithm starts with a single SM and then repeats the following three phases until all training samples are successfully learned: 1) the learning phase, in which the training samples are learned by the current SM; 2) the sieving phase, in which the training samples that have been successfully learned are sifted out of the training set; and 3) the growing phase, in which the current SM is frozen and a new SM is added to learn the remaining training samples. The performance of the MSN architecture is illustrated on two benchmark problems.
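The learning / sieving / growing loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: each sieving module here is a toy nearest-centroid classifier with a limited acceptance radius (the paper's SMs are pairs of neural networks with a logic circuit), and the class names, radius parameter, and 1-D toy data are all assumptions for illustration.

```python
class SievingModule:
    """Toy SM (illustrative stand-in for the paper's two-network module):
    memorizes one centroid per class from the samples it is trained on."""

    def __init__(self, samples, radius):
        self.radius = radius
        groups = {}
        for x, y in samples:
            groups.setdefault(y, []).append(x)
        self.centroids = {y: sum(xs) / len(xs) for y, xs in groups.items()}

    def predict(self, x):
        # Return the class of the nearest centroid, or None when the pattern
        # falls outside this module's "sieve" (too far from every centroid).
        label, centroid = min(self.centroids.items(), key=lambda c: abs(x - c[1]))
        return label if abs(x - centroid) <= self.radius else None


def multi_sieving_learn(training_set, radius=1.0):
    """Repeat the three MSL phases until every sample is learned."""
    modules, remaining = [], list(training_set)
    while remaining:
        sm = SievingModule(remaining, radius)                 # learning phase
        learned = [(x, y) for x, y in remaining if sm.predict(x) == y]
        if not learned:           # safeguard for this toy SM: widen the sieve
            radius *= 2
            continue
        remaining = [s for s in remaining if s not in learned]  # sieving phase
        modules.append(sm)                  # growing phase: freeze current SM
    return modules


def classify(modules, x):
    # Cascade from rough to fine: the first sieve that accepts the pattern
    # decides its class.
    for sm in modules:
        y = sm.predict(x)
        if y is not None:
            return y
    return None
```

On a toy set such as `[(0.0, 'a'), (0.2, 'a'), (5.0, 'b'), (5.1, 'b'), (2.5, 'a')]`, the first module sieves out the easy clusters and a second, finer module is grown for the outlier at `2.5`, mirroring the rough-to-fine decomposition the abstract describes.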
Keywords :
learning (artificial intelligence); logic circuits; neural net architecture; neural nets; parallel architectures; architecture; growing phase; learning phase; learning task decomposition; logical circuit; multi-sieving neural network; sieving module; Circuits; Convergence; Feedforward neural networks; Pattern classification; Pattern recognition; Problem-solving; Switches;
Conference_Titel :
Proceedings of the 1994 IEEE International Conference on Neural Networks (ICNN'94), IEEE World Congress on Computational Intelligence
Conference_Location :
Orlando, FL
Print_ISBN :
0-7803-1901-X
DOI :
10.1109/ICNN.1994.374475