DocumentCode
3246293
Title
Forward-backward modeling in statistical natural concept generation for interlingua-based speech-to-speech translation
Author
Gu, Liang ; Gao, Yuqing ; Picheny, Michael
Author_Institution
IBM T. J. Watson Res. Center, Yorktown Heights, NY, USA
fYear
2003
fDate
30 Nov.-3 Dec. 2003
Firstpage
646
Lastpage
651
Abstract
Natural concept generation is critical to the performance of statistical interlingua-based speech-to-speech translation. To improve maximum-entropy-based concept generation, a forward-backward modeling approach is proposed that generates concept sequences in the target language by selecting the hypothesis with the highest combined conditional probability under both the forward and backward generation models. Statistical language models are further applied to exploit word-level context information. The concept generation error rate is reduced by over 20% on our limited-domain speech translation corpus, and improvements are also achieved in our speech translation experiments.
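The core idea in the abstract, combining forward and backward generation model scores and picking the hypothesis with the highest combined conditional probability, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the log-linear interpolation weight `alpha`, the helper names, and the toy concept labels and probabilities are all assumptions.

```python
import math

def combined_score(forward_logp, backward_logp, alpha=0.5):
    """Log-linear combination of forward and backward model log-probabilities.

    `alpha` is a hypothetical interpolation weight; the paper's actual
    combination scheme may differ.
    """
    return alpha * forward_logp + (1 - alpha) * backward_logp

def select_hypothesis(hypotheses, alpha=0.5):
    """Return the candidate concept sequence with the highest combined score.

    `hypotheses` maps a candidate concept sequence (a tuple of concept
    labels) to a (forward_logp, backward_logp) pair.
    """
    return max(
        hypotheses,
        key=lambda h: combined_score(*hypotheses[h], alpha=alpha),
    )

# Toy example with made-up concept labels and log-probabilities.
candidates = {
    ("GREETING", "QUERY-LOCATION"): (math.log(0.6), math.log(0.2)),
    ("GREETING", "STATEMENT"):      (math.log(0.3), math.log(0.7)),
}
best = select_hypothesis(candidates)
```

Here the second candidate wins even though the forward model alone prefers the first, which is the point of rescoring with the backward model: hypotheses that look good only in one generation direction are demoted.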
Keywords
language translation; linguistics; maximum entropy methods; speech recognition; speech synthesis; statistical analysis; automatic speech recognition; concept generation error rate; forward-backward modeling; interlingua-based speech-to-speech translation; maximum-entropy-based concept generation; statistical interlingua-based translation; statistical language models; statistical natural concept generation; target language concept sequence generation; text-to-speech synthesis; word-level context information; Context modeling; Employment; Natural languages; Probability; Process control; Robustness; Scalability; Speech; Tree data structures; Weapons;
fLanguage
English
Publisher
ieee
Conference_Title
2003 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU '03)
Print_ISBN
0-7803-7980-2
Type
conf
DOI
10.1109/ASRU.2003.1318516
Filename
1318516