DocumentCode :
3661057
Title :
Comparison of auto-encoders with different sparsity regularizers
Author :
Li Zhang; Yaping Lu
Author_Institution :
School of Computer Science and Technology, Soochow University, Suzhou 215006, Jiangsu, China
fYear :
2015
fDate :
7/1/2015 12:00:00 AM
Firstpage :
1
Lastpage :
5
Abstract :
Generally, in order to learn sparse representations of raw inputs with an auto-encoder, the Kullback-Leibler (KL) divergence is introduced into the loss function as a sparsity regularizer that penalizes overly active code units. In fact, other sparsity regularizers exist besides the KL divergence. This paper introduces several classical sparsity regularizers into auto-encoders and empirically surveys auto-encoders trained with different sparsity regularizers. Specifically, we analyze two other sparsity regularizers that are commonly used in sparse coding. In addition, we consider the effect of different activation functions and different sparsity regularizers on the learning performance of auto-encoders. Our experiments are conducted on the MNIST and COIL datasets.
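The abstract describes adding a KL-divergence sparsity penalty to the auto-encoder loss. The following is only a minimal NumPy sketch of that general setup, not the paper's implementation; the sparsity target rho, penalty weight beta, weight-decay coefficient lam, and the sigmoid activation are illustrative assumptions rather than values taken from the paper.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def kl_sparsity_penalty(rho, rho_hat, eps=1e-8):
    # KL divergence between the target activation rho and the
    # mean activation rho_hat of each hidden (code) unit.
    rho_hat = np.clip(rho_hat, eps, 1.0 - eps)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1.0 - rho) * np.log((1.0 - rho) / (1.0 - rho_hat)))

def sparse_autoencoder_loss(X, W1, b1, W2, b2, rho=0.05, beta=3.0, lam=1e-4):
    # Reconstruction error + weight decay + KL sparsity penalty
    # on the average activation of the hidden code units.
    H = sigmoid(X @ W1 + b1)          # hidden codes, one row per example
    X_hat = sigmoid(H @ W2 + b2)      # reconstruction of the input
    recon = 0.5 * np.mean(np.sum((X_hat - X) ** 2, axis=1))
    decay = 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
    rho_hat = H.mean(axis=0)          # average activation per hidden unit
    return recon + decay + beta * kl_sparsity_penalty(rho, rho_hat)

# Toy usage on random data (illustrative only).
rng = np.random.default_rng(0)
X = rng.random((32, 64))
W1 = rng.normal(scale=0.1, size=(64, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 64)); b2 = np.zeros(64)
print(sparse_autoencoder_loss(X, W1, b1, W2, b2))

Other sparsity regularizers discussed in the paper (e.g., those used in sparse coding) would replace kl_sparsity_penalty with a different penalty on the hidden codes.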
Keywords :
"Visualization","Encoding"
Publisher :
ieee
Conference_Titel :
Neural Networks (IJCNN), 2015 International Joint Conference on
Electronic_ISBN :
2161-4407
Type :
conf
DOI :
10.1109/IJCNN.2015.7280364
Filename :
7280364