Title :
Reshaping deep neural network for fast decoding by node-pruning
Author :
Tianxing He ; Yuchen Fan ; Yanmin Qian ; Tian Tan ; Kai Yu
Author_Institution :
Dept. of Comput. Sci. & Eng., Shanghai Jiao Tong Univ., Shanghai, China
Abstract :
Although deep neural networks (DNNs) have achieved significant accuracy improvements in speech recognition, deploying a large-scale DNN in decoding is computationally expensive due to the huge number of parameters. Weight truncation and decomposition methods have been proposed to speed up decoding by exploiting the sparseness of DNNs. This paper summarizes different approaches to restructuring DNNs and proposes a new node-pruning approach to reshape a DNN for fast decoding. In this approach, hidden nodes of a fully trained DNN are pruned according to an importance function, and the reshaped DNN is retuned using back-propagation. The approach requires no code modification and directly saves computational cost during decoding. Furthermore, it is complementary to weight decomposition methods. Experiments on a Switchboard task show that, using the proposed node-pruning approach, DNN complexity can be reduced to 37.9% of the original. The complexity can be further reduced to 12.3% without accuracy loss when node-pruning is combined with weight decomposition.
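As a rough illustration of the reshaping pipeline the abstract describes, the NumPy sketch below prunes the least important hidden nodes of one fully connected layer and then factorizes a weight matrix with truncated SVD. The abstract does not name a specific importance function, so the L2 norm of each node's outgoing weights is used here as an assumed, plausible stand-in; the function names and the keep_ratio parameter are illustrative, not from the paper.

import numpy as np

# Prune the least important hidden nodes of a single fully connected layer.
# W_in:  (n_hidden, n_input)  weights into the hidden layer
# b:     (n_hidden,)          hidden-layer biases
# W_out: (n_output, n_hidden) weights out of the hidden layer
def prune_hidden_nodes(W_in, b, W_out, keep_ratio=0.5):
    # Assumed importance function: L2 norm of each node's outgoing weights
    # (one plausible choice; the paper's importance function may differ).
    importance = np.linalg.norm(W_out, axis=0)          # (n_hidden,)
    n_keep = max(1, int(round(keep_ratio * W_in.shape[0])))
    keep = np.sort(np.argsort(importance)[-n_keep:])    # indices of kept nodes
    # Dropping a node removes its row in W_in/b and its column in W_out,
    # so the reshaped DNN runs in a standard decoder with no code changes.
    return W_in[keep], b[keep], W_out[:, keep]

# Complementary weight decomposition: approximate W (m x n) by two
# low-rank factors A (m x r) and B (r x n) via truncated SVD.
def low_rank_factorize(W, rank):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # fold singular values into the left factor
    B = Vt[:rank]
    return A, B                  # W is approximated by A @ B

After pruning (and optionally factorizing), the smaller network would be fine-tuned with back-propagation to recover accuracy, which is the retuning step the abstract describes.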
Keywords :
backpropagation; computational complexity; decoding; neural nets; speech recognition; DNN complexity reduction; backpropagation; computational cost reduction; decoding task; deep neural network reshaping; fully-trained DNN; hidden node pruning; importance function; large-scale DNN restructuring; Switchboard English task; weight decomposition; Complexity theory; Decoding; Matrix decomposition; Neural networks; Speech recognition; Switches; Training; Deep Neural Networks; Node Pruning; Singular Value Decomposition; Speech Recognition;
Conference_Titel :
2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Conference_Location :
Florence, Italy
DOI :
10.1109/ICASSP.2014.6853595