DocumentCode
730773
Title
Neuron sparseness versus connection sparseness in deep neural network for large vocabulary speech recognition
Author
Jian Kang; Cheng Lu; Meng Cai; Wei-Qiang Zhang; Jia Liu
Author_Institution
Department of Electronic Engineering, Tsinghua University, Beijing, China
fYear
2015
fDate
19-24 April 2015
Firstpage
4954
Lastpage
4958
Abstract
Exploiting sparseness in deep neural networks is an important way to reduce computational cost. In this paper, we study neuron sparseness in deep neural networks for acoustic modeling. During the feed-forward stage, we activate only those neurons whose input values exceed a given threshold and set the outputs of inactive nodes to zero, so that only a few nonzero outputs are fed to the next layer. With this method, the output vector of each hidden layer becomes very sparse, and the computational cost of the feed-forward pass can be reduced by adopting sparse matrix operations. The proposed method is evaluated on both small and large vocabulary speech recognition tasks, and the results show that the nonzero outputs can be reduced to fewer than 20% of the total number of hidden nodes without sacrificing speech recognition performance.
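As a rough illustration of the thresholded feed-forward idea described in the abstract, the following Python sketch (using NumPy/SciPy; not code from the paper) zeroes hidden neurons whose pre-activations do not exceed a threshold and carries the resulting sparse hidden vector forward with sparse matrix operations. The function name, the ReLU-like pass-through of the thresholded pre-activations, the softmax output layer, and the default threshold value are illustrative assumptions rather than details taken from the paper.

    import numpy as np
    from scipy import sparse

    def sparse_forward(x, weights, biases, threshold=0.0):
        # Hypothetical sketch: the hidden vector is kept as a 1 x d sparse row,
        # so each layer's product h.dot(W) only touches the rows of W selected
        # by the few active neurons.
        h = sparse.csr_matrix(x.reshape(1, -1))
        for W, b in zip(weights[:-1], biases[:-1]):
            z = np.asarray(h.dot(W)).ravel() + b   # pre-activations of this layer
            z[z <= threshold] = 0.0                # deactivate nodes at or below the threshold
            h = sparse.csr_matrix(z)               # keep only the nonzero outputs
        # Output layer: dense softmax over the class logits (assumed here).
        logits = np.asarray(h.dot(weights[-1])).ravel() + biases[-1]
        e = np.exp(logits - logits.max())
        return e / e.sum()

With a threshold of 0 this pass-through behaves like a ReLU; raising the threshold makes each hidden output vector sparser, which is what allows sparse matrix operations to pay off in the feed-forward pass.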
Keywords
feedforward; neural nets; sparse matrices; speech recognition; vocabulary; acoustic modeling; connection sparseness; deep neural network; feed-forward algorithm; large vocabulary speech recognition; neuron sparseness; sparse matrix operations; Artificial neural networks; Complexity theory; Context; Hidden Markov models; Neurons; Vocabulary; acoustic modeling; deep neural network; sparseness; speech recognition
fLanguage
English
Publisher
IEEE
Conference_Title
2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Conference_Location
South Brisbane, QLD, Australia
Type
conf
DOI
10.1109/ICASSP.2015.7178913
Filename
7178913