DocumentCode :
2721716
Title :
From lattices of phonemes to sentences: a recurrent neural network approach
Author :
Iooss, Christine
Author_Institution :
CEA-CENS, Gif-sur-Yvette, France
fYear :
1991
fDate :
8-14 Jul 1991
Firstpage :
833
Abstract :
The author presents preliminary investigations into the use of a sequential neural network for lexical decoding in continuous speech recognition. The architecture introduced by J.L. Elman (1988) for predicting successive elements of a sequence is explored. This recurrent network accepts sequential inputs; it is based on a multilayer architecture and contains special units, called context units, that are sensitive to the recent activation history of the network. It is suggested that this model be used for lexical decoding in continuous speech recognition. For that purpose, an extension of Elman's model is presented to handle erroneous sequential inputs and to label patterns. It is suggested that the context units be updated from their previous values and not only from the values of the hidden units. Moreover, the output units represent words instead of a prediction of the next phoneme. Preliminary experimental results are given.
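The abstract describes an Elman-style recurrent network whose context units are updated from both their previous values and the hidden-unit activations, with output units scoring words rather than predicting the next phoneme. The following is a minimal sketch of that idea, not the paper's actual model: the layer sizes, the mixing factor alpha, the weight initialisation, and all function and variable names are assumptions introduced for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_phonemes, n_hidden, n_words = 40, 30, 100
alpha = 0.5  # assumed weighting of the previous context values

# Randomly initialised weights (illustrative only)
W_in  = rng.normal(scale=0.1, size=(n_hidden, n_phonemes))  # phoneme input -> hidden
W_ctx = rng.normal(scale=0.1, size=(n_hidden, n_hidden))    # context -> hidden
W_out = rng.normal(scale=0.1, size=(n_words, n_hidden))     # hidden -> word scores

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run_sequence(phoneme_vectors):
    """Feed a sequence of (possibly noisy) phoneme activation vectors
    and return word scores at each time step."""
    context = np.zeros(n_hidden)
    hidden = np.zeros(n_hidden)
    outputs = []
    for x in phoneme_vectors:
        # Context units keep a trace of their own past as well as the hidden units,
        # as suggested in the abstract's extension of Elman's model.
        context = alpha * context + (1.0 - alpha) * hidden
        hidden = sigmoid(W_in @ x + W_ctx @ context)
        outputs.append(sigmoid(W_out @ hidden))  # word labels, not next-phoneme prediction
    return np.array(outputs)

# Example: a 5-step sequence of phoneme activation vectors
seq = rng.random((5, n_phonemes))
word_scores = run_sequence(seq)
print(word_scores.shape)  # (5, n_words)
```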
Keywords :
decoding; neural nets; speech recognition; continuous speech recognition; erroneous sequential inputs; lexical decoding; multilayer architecture; recurrent neural network approach; Context modeling; Decoding; Dynamic programming; History; Lattices; Neural networks; Recurrent neural networks; Speech analysis; Speech recognition; Vocabulary;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
IJCNN-91-Seattle: International Joint Conference on Neural Networks, 1991
Conference_Location :
Seattle, WA
Print_ISBN :
0-7803-0164-1
Type :
conf
DOI :
10.1109/IJCNN.1991.155442
Filename :
155442