Title :
Statistical Machine Translation for Speech: A Perspective on Structures, Learning, and Decoding
Author_Institution :
IBM T. J. Watson Res. Center, New York, NY, USA
Date :
1 May 2013
Abstract :
In this paper, we survey and analyze state-of-the-art statistical machine translation (SMT) techniques for speech translation (ST). We review key learning problems and investigate essential model structures in SMT, taking a unified perspective to reveal both connections and contrasts between automatic speech recognition (ASR) and SMT. We show that phrase-based SMT can be viewed as a sequence of finite-state transducer (FST) operations, similar in spirit to ASR. We further inspect the synchronous context-free grammar (SCFG)-based formalism, which includes hierarchical phrase-based and many linguistically syntax-based models. Decoding for ASR, FST-based, and SCFG-based translation is also presented from a unified perspective, as different realizations of the generic Viterbi algorithm on graphs or hypergraphs. These consolidated perspectives help catalyze tighter integration for improved ST, and we discuss joint decoding and modeling toward coupling ASR and SMT.
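The abstract's view of decoding as "the generic Viterbi algorithm on graphs" can be sketched concretely. The following minimal example (an illustration, not code from the paper) runs Viterbi as single-source shortest-path over a weighted acyclic lattice in the tropical (min, +) semiring, with weights standing in for negative log-probabilities; ASR and FST-based translation decoding instantiate this pattern with different graphs and edge labels. The graph and function names here are hypothetical.

```python
from collections import defaultdict

def viterbi_shortest_path(num_nodes, edges, source, target):
    """Generic Viterbi over an acyclic lattice.

    edges: list of (u, v, weight, label), with nodes 0..num_nodes-1
    assumed to be numbered in topological order. Weights play the role
    of negative log-probabilities, so the best path has minimum cost.
    Returns (best_cost, best_label_sequence).
    """
    best = [float("inf")] * num_nodes   # lowest cost found to reach each node
    back = [None] * num_nodes           # back-pointer: (previous node, edge label)
    best[source] = 0.0

    adj = defaultdict(list)
    for u, v, w, lab in edges:
        adj[u].append((v, w, lab))

    # Relax outgoing edges in topological order (dynamic programming).
    for u in range(num_nodes):
        if best[u] == float("inf"):
            continue
        for v, w, lab in adj[u]:
            if best[u] + w < best[v]:
                best[v] = best[u] + w
                back[v] = (u, lab)

    # Follow back-pointers from the target to recover the best path's labels.
    labels, node = [], target
    while back[node] is not None:
        prev, lab = back[node]
        labels.append(lab)
        node = prev
    return best[target], labels[::-1]

# Toy lattice: two competing paths from node 0 to node 3.
edges = [(0, 1, 1.0, "a"), (0, 2, 0.5, "b"),
         (1, 3, 0.5, "c"), (2, 3, 2.0, "d")]
cost, labels = viterbi_shortest_path(4, edges, 0, 3)
# Path a->c costs 1.5, path b->d costs 2.5, so a->c wins.
```

For SCFG-based translation the same dynamic program generalizes from graphs to hypergraphs, where an edge may have multiple tail nodes and the back-pointer stores a derivation rather than a single predecessor.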
Keywords :
Viterbi decoding; graph theory; language translation; learning (artificial intelligence); speech coding; speech recognition; statistical analysis; transducers; ASR; FST; SMT techniques; Viterbi algorithm; automatic speech recognition; decoding; finite state transducer; hypergraphs; learning; speech translation; statistical machine translation; structures; Automata; Context awareness; Information processing; Speech processing; Statistical learning; Training; Transducers; Discriminative training; Viterbi search; finite-state transducer (FST); graph; hypergraph; speech translation (ST); statistical machine translation (SMT); synchronous context-free grammar (SCFG)
Journal_Title :
Proceedings of the IEEE
DOI :
10.1109/JPROC.2013.2249491