DocumentCode :
1160044
Title :
Back propagation fails to separate where perceptrons succeed
Author :
Brady, Martin L. ; Raghavan, Raghu ; Slawny, Joseph
Author_Institution :
Lockheed Res. & Dev. Div., Palo Alto, CA, USA
Volume :
36
Issue :
5
fYear :
1989
fDate :
5/1/1989
Firstpage :
665
Lastpage :
674
Abstract :
It is widely believed that the back-propagation algorithm in neural networks, for tasks such as pattern classification, overcomes the limitations of the perceptron. The authors construct several counterexamples to this belief. They construct linearly separable examples for which the error function has a unique minimum that fails to separate the two families of vectors, and a simple example with four two-dimensional vectors in a single-layer network that exhibits local minima with a large basin of attraction. Thus, back-propagation is guaranteed to fail in the first example, and likely to fail in the second. It is shown that multilayered (hidden-layer) networks can also fail in this way to classify linearly separable problems. Since the authors' examples are all linearly separable, the perceptron would correctly classify them. The results disprove the presumption, made in recent years, that, barring local minima, back-propagation will find the best set of weights for a given problem.
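As a rough illustration of the two training procedures compared in the abstract, the Python sketch below runs the perceptron learning rule and plain gradient descent on a squared-error criterion (a single sigmoid unit, i.e. a single-layer network) over a hypothetical linearly separable set of four two-dimensional vectors. The data, initialization, and learning rates are placeholders for illustration only, not the authors' counterexample construction.

    # Minimal sketch contrasting the two training rules discussed in the abstract.
    # The four 2-D vectors below are a hypothetical, linearly separable toy set,
    # NOT the authors' counterexample; they only illustrate the two procedures.
    import numpy as np

    X = np.array([[0.0, 1.0], [1.0, 2.0], [2.0, -1.0], [3.0, 0.0]])  # inputs
    t = np.array([1, 1, -1, -1])                                     # targets (+1 / -1)

    # Perceptron learning rule: guaranteed to converge on linearly separable data.
    w, b = np.zeros(2), 0.0
    for _ in range(100):
        errors = 0
        for x, y in zip(X, t):
            if y * (w @ x + b) <= 0:          # misclassified -> update
                w, b = w + y * x, b + y
                errors += 1
        if errors == 0:
            break
    print("perceptron weights:", w, b)

    # Single-layer "back-propagation": gradient descent on squared error with a
    # sigmoid output unit (targets rescaled to 0/1). Whether the final weights
    # separate the data can depend on the starting point and on the shape of the
    # error surface, which is the issue the paper studies.
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    t01 = (t + 1) / 2
    v, c = np.random.randn(2) * 0.1, 0.0
    for _ in range(5000):
        z = X @ v + c
        y = sigmoid(z)
        grad = (y - t01) * y * (1 - y)        # dE/dz for E = 0.5 * sum (y - t)^2
        v -= 0.1 * X.T @ grad
        c -= 0.1 * grad.sum()
    print("gradient-descent weights:", v, c)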
Keywords :
computerised pattern recognition; neural nets; back-propagation algorithm; basin of attraction; counterexamples; hidden layer networks; linearly separable examples; local minima; multilayered networks; neural networks; pattern classification; perceptrons succeed; single-layer network; two-dimensional vectors; Circuits and systems; Helium; Logistics; Multi-layer neural network; Neural networks; Pattern classification; Physics
fLanguage :
English
Journal_Title :
Circuits and Systems, IEEE Transactions on
Publisher :
IEEE
ISSN :
0098-4094
Type :
jour
DOI :
10.1109/31.31314
Filename :
31314