DocumentCode :
2771963
Title :
Deep, super-narrow neural network is a universal classifier
Author :
Szymanski, Lech ; McCane, Brendan
Author_Institution :
Dept. of Comput. Sci., Univ. of Otago, Dunedin, New Zealand
fYear :
2012
fDate :
10-15 June 2012
Firstpage :
1
Lastpage :
8
Abstract :
Deep architecture models are known to be conducive to good generalisation for certain types of classification tasks. Existing unsupervised and semi-supervised training methods do not explain why or when deep internal representations will be effective. We investigate the fundamental principles of representation in deep architectures by devising a method for binary classification in multi-layer feedforward networks of limited breadth. We show that, given enough layers, a super-narrow neural network with two neurons per layer is capable of shattering any separable binary dataset. We also show that datasets exhibiting certain types of symmetry are better suited to deep representation and may require only a few hidden layers to produce the desired classification.
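The architecture discussed in the abstract (a deep feedforward binary classifier whose hidden layers each contain only two neurons) can be sketched as follows. This is an illustrative forward-pass sketch only, with assumed sigmoid activations and randomly initialised weights; the helper names `make_narrow_net` and `forward` are hypothetical and do not come from the paper, whose actual weight construction differs.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_narrow_net(input_dim, depth, width=2):
    """Random parameters for a deep feedforward net with `width` units
    per hidden layer and a single sigmoid output unit (hypothetical helper)."""
    dims = [input_dim] + [width] * depth + [1]
    return [(rng.standard_normal((m, n)), np.zeros(n))
            for m, n in zip(dims[:-1], dims[1:])]

def forward(params, X):
    """Forward pass with sigmoid activations; returns one probability per sample."""
    a = X
    for W, b in params:
        a = sigmoid(a @ W + b)
    return a.ravel()

# Toy usage: 5 samples of 3 features through a 10-layer, 2-neuron-wide net.
X = rng.standard_normal((5, 3))
net = make_narrow_net(input_dim=3, depth=10)
p = forward(net, X)
print(p.shape)  # one output per sample
```

Each hidden layer maps the previous layer's two activations to two new activations, so depth, not width, supplies the representational capacity; the paper's claim is that sufficient depth in this width-2 regime suffices to shatter any separable binary dataset.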
Keywords :
feedforward neural nets; pattern classification; unsupervised learning; binary classification; classification task; deep architecture model; deep architectures representation; deep belief nets; multilayer feedforward neural network; semisupervised training method; separable binary dataset; super narrow neural network; unsupervised training method; Biological neural networks; Computational modeling; Computer architecture; Neurons; Spirals; Training; Vectors;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
The 2012 International Joint Conference on Neural Networks (IJCNN)
Conference_Location :
Brisbane, QLD
ISSN :
2161-4393
Print_ISBN :
978-1-4673-1488-6
Electronic_ISBN :
2161-4393
Type :
conf
DOI :
10.1109/IJCNN.2012.6252513
Filename :
6252513