DocumentCode :
285224
Title :
n-h-1 networks store no less n×h+1 examples, but sometimes no more
Author :
Sakurai, Akito
Author_Institution :
Hitachi Ltd., Saitama, Japan
Volume :
3
fYear :
1992
fDate :
7-11 Jun 1992
Firstpage :
936
Abstract :
The author shows that an n-h-1 artificial neural network with n real inputs, a single layer of h hidden units, and one binary output unit can correctly store at least n×h+1 examples in general position. The proof is constructive, so the weights are obtained deterministically from the examples. The result can be viewed as a generalization of the fact that a single threshold gate can remember any n+1 examples in general position. The number obtained is a good lower bound on the network capacity and a substantial improvement over the previous best bound by S. Akaho and S. Amari (1990). It is also shown that the figure n×h+1 is tight in a certain sense.
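The following is a minimal sketch of the n-h-1 architecture the abstract refers to, assuming Heaviside (threshold) activations for both the hidden and output units. It only illustrates the forward pass and the n×h+1 bound count; the weights below are arbitrary random values for demonstration, not the deterministic construction from examples given in the paper.

import numpy as np

def nh1_forward(x, W, b, v, c):
    # n-h-1 threshold network: h hidden threshold units, one binary output unit.
    hidden = (W @ x + b > 0).astype(float)   # hidden-layer threshold activations
    return int(v @ hidden + c > 0)           # single binary output

# Lower bound on the number of storable examples in general position: n*h + 1.
n, h = 5, 3
print("lower bound on storable examples:", n * h + 1)

# Illustrative forward pass with hypothetical random weights.
rng = np.random.default_rng(0)
W = rng.standard_normal((h, n))   # hidden-layer weights
b = rng.standard_normal(h)        # hidden-layer thresholds
v = rng.standard_normal(h)        # output weights
c = rng.standard_normal()         # output threshold
x = rng.standard_normal(n)        # one real-valued input example
print("network output:", nh1_forward(x, W, b, v, c))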
Keywords :
artificial intelligence; neural nets; binary output unit; h hidden units; lower bound; n-h-1 artificial neural network; network capacity; threshold gate; Artificial neural networks; Capacity planning; Circuits; Laboratories; Logic; Probability; Reservoirs; Upper bound; Vectors;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
1992 International Joint Conference on Neural Networks (IJCNN)
Conference_Location :
Baltimore, MD
Print_ISBN :
0-7803-0559-0
Type :
conf
DOI :
10.1109/IJCNN.1992.227079
Filename :
227079