Title :
Sparse maximum entropy deep belief nets
Author :
How Jing ; Yu Tsao
Author_Institution :
Res. Center for Inf. Technol. Innovation, Acad. Sinica, Taipei, Taiwan
Abstract :
In this paper, we present a sparse maximum entropy (SME) learning algorithm for the deep belief net (DBN). The SME algorithm maximizes the entropy of the model while encouraging sparsity. Compared with conventional maximum likelihood (ML) learning, the proposed SME algorithm makes the DBN less biased with respect to the data distribution and more robust to overfitting, and accordingly provides better generalization capability. The MNIST and NORB data sets were used to evaluate the proposed SME algorithm. Experimental results show that the SME-trained DBN outperforms the ML-trained DBN on both data sets.
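The paper's exact SME objective is not reproduced in this record. As a rough sketch under stated assumptions, the snippet below shows one contrastive-divergence (CD-1) update for a single binary RBM layer, the building block stacked to form a DBN, with a sparsity-target penalty on the hidden activations in the spirit of the "encourage sparsity" part of the abstract. The function name cd1_sparse_step, the target rho, and the penalty weight lam are illustrative choices, not the paper's notation, and the penalty uses the common simplified bias-only push rather than the authors' SME gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_sparse_step(v0, W, b, c, lr=0.01, rho=0.05, lam=0.1):
    """One CD-1 update for a binary RBM with a sparsity-target penalty.

    v0:  (batch, n_vis) binary visible data
    W:   (n_vis, n_hid) weights; b: visible biases; c: hidden biases
    rho: target mean activation of hidden units (sparsity level, assumed)
    lam: weight of the sparsity penalty (assumed)
    """
    # Positive phase: hidden probabilities given the data
    h0 = sigmoid(v0 @ W + c)
    # One Gibbs step: sample hiddens, reconstruct visibles, re-infer hiddens
    h0_s = (rng.random(h0.shape) < h0).astype(v0.dtype)
    v1 = sigmoid(h0_s @ W.T + b)
    h1 = sigmoid(v1 @ W + c)
    # Contrastive-divergence gradient (approximates the ML gradient)
    dW = (v0.T @ h0 - v1.T @ h1) / v0.shape[0]
    db = (v0 - v1).mean(axis=0)
    dc = (h0 - h1).mean(axis=0)
    # Sparsity penalty: nudge mean hidden activation toward rho
    # (simplified bias-only update, a standard sparse-RBM heuristic)
    dc -= lam * (h0.mean(axis=0) - rho)
    W += lr * dW
    b += lr * db
    c += lr * dc
    return W, b, c

# Toy usage with random binary data standing in for an MNIST batch
n_vis, n_hid = 784, 256
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b = np.zeros(n_vis)
c = np.zeros(n_hid)
v = (rng.random((64, n_vis)) < 0.1).astype(float)
W, b, c = cd1_sparse_step(v, W, b, c)
```

Greedily stacking several such layers, as in standard DBN pre-training, yields the architecture the abstract refers to; substituting the paper's SME objective for the ML-based CD gradient is where the proposed method would differ.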
Keywords :
belief networks; data analysis; learning (artificial intelligence); maximum entropy methods; maximum likelihood estimation; DBN; ML learning; MNIST data sets; NORB data sets; SME algorithm; data distributions; deep belief net; generalization capability; maximum likelihood learning; sparse maximum entropy learning algorithm; Computational modeling; Computer architecture; Data models; Entropy; Monte Carlo methods; Stacking; Training;
Conference_Title :
The 2013 International Joint Conference on Neural Networks (IJCNN)
Conference_Location :
Dallas, TX
Print_ISBN :
978-1-4673-6128-6
DOI :
10.1109/IJCNN.2013.6706749