DocumentCode :
3661175
Title :
Deep learning using partitioned data vectors
Author :
Ben Mitchell;Hasari Tosun;John Sheppard
Author_Institution :
Department of Computer Science, Johns Hopkins University, USA
fYear :
2015
fDate :
7/1/2015
Firstpage :
1
Lastpage :
8
Abstract :
Deep learning is a popular field that encompasses a range of multi-layer connectionist techniques. While these techniques have achieved great success on a number of difficult computer vision problems, the representation biases that enable this success have not been thoroughly explored. In this paper, we examine the hypothesis that one strength of many deep learning algorithms is their ability to exploit spatially local statistical information. We present a formal description of how data vectors can be partitioned into sub-vectors that preserve spatially local information. As a test case, we then use statistical models to examine how much such structure exists in the MNIST dataset. Finally, we present experimental results from training restricted Boltzmann machines (RBMs) on partitioned data and demonstrate the advantages they have over non-partitioned RBMs. We show that this performance advantage depends on spatially local structure by measuring the performance impact of randomly permuting the input data to destroy local structure. Overall, our results support the hypothesis that a representation bias based on spatially local statistical information can improve performance, so long as the bias is a good match for the data. We also suggest statistical tools for determining a priori whether a dataset is a good match for this bias.
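Illustrative sketch (not from the paper): the abstract describes partitioning data vectors into spatially local sub-vectors and using a random pixel permutation as a control that destroys local structure. The paper's exact partitioning scheme is not given here, so the snippet below simply assumes a 28x28 MNIST image split into non-overlapping square patches; the patch size, function names, and the permutation control are assumptions for illustration only.

import numpy as np

def partition_image(image, patch_size=7):
    # Split a (28, 28) image into non-overlapping patches, each flattened
    # into a spatially local sub-vector of length patch_size**2.
    h, w = image.shape
    patches = []
    for r in range(0, h, patch_size):
        for c in range(0, w, patch_size):
            patches.append(image[r:r + patch_size, c:c + patch_size].ravel())
    return patches

def permute_pixels(image, rng):
    # Randomly permute pixel positions; the marginal pixel statistics are
    # preserved but spatially local structure is destroyed (control condition).
    flat = image.ravel()
    return rng.permutation(flat).reshape(image.shape)

# Usage with a dummy array standing in for an MNIST digit.
rng = np.random.default_rng(0)
img = rng.random((28, 28))
sub_vectors = partition_image(img)    # 16 sub-vectors of length 49
shuffled = permute_pixels(img, rng)   # same pixel values, no local structure

Under this sketch, each sub-vector would feed a separate (partitioned) RBM, while the permuted images serve as the baseline used to test whether the advantage really comes from spatially local structure.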
Publisher :
ieee
Conference_Titel :
2015 International Joint Conference on Neural Networks (IJCNN)
Electronic_ISSN :
2161-4407
Type :
conf
DOI :
10.1109/IJCNN.2015.7280484
Filename :
7280484