DocumentCode :
1327615
Title :
Capabilities of a four-layered feedforward neural network: four layers versus three
Author :
Tamura, Shin'ichi; Tateishi, Masahiko
Author_Institution :
Res. Labs., Nippondenso Co. Ltd., Aichi, Japan
Volume :
8
Issue :
2
fYear :
1997
fDate :
3/1/1997
Firstpage :
251
Lastpage :
255
Abstract :
Neural-network theorems state that only when there are infinitely many hidden units is a four-layered feedforward neural network equivalent to a three-layered feedforward neural network. In actual applications, however, the use of infinitely many hidden units is impractical. Therefore, studies should focus on the capabilities of a neural network with a finite number of hidden units. In this paper, a proof is given showing that a three-layered feedforward network with N-1 hidden units can realize any N input-target relations exactly. Based on the results of the proof, a four-layered network is constructed and is found to realize any N input-target relations with a negligibly small error using only (N/2)+3 hidden units. This shows that a four-layered feedforward network is superior to a three-layered feedforward network in terms of the number of parameters needed for the training data.
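The following is a minimal numerical sketch of the three-layer capacity claim quoted above, not the paper's own construction or proof: assuming scalar inputs and steep sigmoid hidden units, N-1 hidden units placed at the midpoints between consecutive sorted inputs suffice to reproduce N arbitrary input-target pairs up to a negligibly small error. All function names and the steepness parameter are illustrative choices, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    # Numerically stable logistic function.
    ez = np.exp(-np.abs(z))
    return np.where(z >= 0, 1.0 / (1.0 + ez), ez / (1.0 + ez))

def build_three_layer_interpolator(x, t, steepness=1e3):
    """Return f(.) realized by N-1 steep-sigmoid hidden units fitting (x, t).

    Sketch of the capacity claim for distinct scalar inputs:
        f(q) = t_1 + sum_i (t_{i+1} - t_i) * sigmoid(k * (q - m_i)),
    where m_i is the midpoint between the i-th and (i+1)-th sorted inputs.
    For large k each sigmoid acts as a step, so f(x_j) ~ t_j for all j.
    """
    order = np.argsort(x)
    xs, ts = x[order], t[order]
    midpoints = (xs[:-1] + xs[1:]) / 2.0   # N - 1 hidden-unit thresholds
    jumps = np.diff(ts)                    # hidden-to-output weights

    def f(query):
        q = np.atleast_1d(query).astype(float)
        h = sigmoid(steepness * (q[:, None] - midpoints[None, :]))
        return ts[0] + h @ jumps           # linear output unit

    return f

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N = 8
    x = rng.uniform(-1.0, 1.0, size=N)
    t = rng.normal(size=N)
    f = build_three_layer_interpolator(x, t)
    print(np.max(np.abs(f(x) - t)))        # near zero: all N relations matched
```

The four-layer result in the abstract goes further, reducing the hidden-unit count to roughly (N/2)+3 at the cost of a negligibly small error; that construction is specific to the paper and is not sketched here.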
Keywords :
feedforward neural nets; multilayer perceptrons; four-layered feedforward neural network; input-target relations; multilayer feedforward neural network; training data parameters; Feedforward neural networks; Multi-layer neural network; Multilayer perceptrons; Neural networks; Polynomials; Training data;
fLanguage :
English
Journal_Title :
Neural Networks, IEEE Transactions on
Publisher :
IEEE
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/72.557662
Filename :
557662