Title :
Spoken language understanding using long short-term memory neural networks
Author :
Kaisheng Yao ; Baolin Peng ; Yu Zhang ; Dong Yu ; Geoffrey Zweig ; Yangyang Shi
Abstract :
Neural network based approaches have recently produced record-setting performance in natural language understanding tasks such as word labeling. In the word labeling task, a tagger assigns a label to each word in an input sequence. Specifically, simple recurrent neural networks (RNNs) and convolutional neural networks (CNNs) have been shown to significantly outperform the previous state of the art, conditional random fields (CRFs). This paper investigates long short-term memory (LSTM) neural networks, which contain input, output, and forget gates and are more advanced than simple RNNs, for the word labeling task. To explicitly model output-label dependence, we propose a regression model on top of the LSTM's un-normalized scores. We also propose applying deep LSTMs to the task. We investigate the relative importance of each gate in the LSTM by fixing the other gates to a constant and learning only particular gates. Experiments on the ATIS dataset validate the effectiveness of the proposed models.
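The gating the abstract refers to (input, output, and forget gates modulating a memory cell) follows the standard LSTM recurrence. Below is a minimal one-step sketch in numpy; the parameter names and shapes are illustrative assumptions, not taken from the paper, and the paper's regression layer over un-normalized scores is not shown.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, params):
    """One step of a standard LSTM cell.

    x:      input vector at the current time step
    h_prev: hidden state from the previous step
    c_prev: memory-cell state from the previous step
    params: dict mapping gate name -> (W, U, b) weight matrices and bias
    """
    Wi, Ui, bi = params["i"]  # input gate parameters
    Wf, Uf, bf = params["f"]  # forget gate parameters
    Wo, Uo, bo = params["o"]  # output gate parameters
    Wc, Uc, bc = params["c"]  # cell-candidate parameters

    i = sigmoid(Wi @ x + Ui @ h_prev + bi)      # input gate
    f = sigmoid(Wf @ x + Uf @ h_prev + bf)      # forget gate
    o = sigmoid(Wo @ x + Uo @ h_prev + bo)      # output gate
    c_tilde = np.tanh(Wc @ x + Uc @ h_prev + bc)  # candidate cell update

    c = f * c_prev + i * c_tilde  # gated memory-cell update
    h = o * np.tanh(c)            # gated hidden-state output
    return h, c
```

The gate-ablation experiments the abstract mentions correspond to replacing one of `i`, `f`, or `o` above with a constant and training only the remaining gates.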
Keywords :
convolution; natural language processing; recurrent neural nets; speech processing; word processing; ATIS dataset; CNN; CRF; LSTM neural networks; LSTM un-normalized scores; RNN; conditional random fields; convolutional neural networks; long short-term memory neural networks; natural language understanding tasks; neural network based approach; output-label dependence; recurrent neural networks; regression model; spoken language understanding; word labeling task; logic gates; semantics; speech; training; vectors; language understanding
Conference_Title :
Spoken Language Technology Workshop (SLT), 2014 IEEE
DOI :
10.1109/SLT.2014.7078572