Title :
Continuous Stochastic Feature Mapping Based on Trajectory HMMs
Author :
Zen, Heiga ; Nankaku, Yoshihiko ; Tokuda, Keiichi
Author_Institution :
Dept. of Comput. Sci. & Eng., Nagoya Inst. of Technol., Nagoya, Japan
Abstract :
This paper proposes a continuous stochastic feature-mapping technique based on trajectory hidden Markov models (HMMs), which are derived from HMMs by imposing explicit relationships between static and dynamic features. Although Gaussian mixture model (GMM)- and HMM-based feature-mapping techniques work effectively, their accuracy occasionally degrades due to inappropriate dynamic characteristics caused by frame-by-frame mapping. While the use of dynamic-feature constraints at the mapping stage can alleviate this problem, it also introduces inconsistencies between training and mapping. The proposed technique eliminates these inconsistencies while retaining the benefits of dynamic-feature constraints, and it transforms entire sequences rather than mapping frame by frame. Results from speaker-conversion, acoustic-to-articulatory inversion-mapping, and noise-compensation experiments demonstrate that the proposed approach outperforms the conventional one.
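The "explicit relationships between static and dynamic features" that define a trajectory HMM can be written as a linear map o = Wc, where c is the static-feature trajectory and W stacks an identity block with a delta-window matrix. The sketch below builds such a W for a one-dimensional trajectory; the first-order delta window (-0.5, 0, 0.5) is a common choice but is an assumption here, not taken from the paper.

```python
import numpy as np

def delta_window_matrix(T, win=(-0.5, 0.0, 0.5)):
    """Build W mapping a static trajectory c (length T) to the stacked
    observation o = [c; delta(c)] (length 2T) -- the explicit
    static/dynamic relationship imposed by trajectory HMMs."""
    I = np.eye(T)                      # static features pass through
    D = np.zeros((T, T))               # delta (dynamic) features
    for t in range(T):
        # center the window on frame t; clip at sequence boundaries
        for k, w in enumerate(win, start=-(len(win) // 2)):
            if 0 <= t + k < T and w != 0.0:
                D[t, t + k] += w
    return np.vstack([I, D])

c = np.array([0.0, 1.0, 2.0, 3.0])     # toy static trajectory
W = delta_window_matrix(len(c))
o = W @ c                              # first T entries: c; next T: deltas
```

Because o is a deterministic linear function of c, a Gaussian over o induces a Gaussian over c with a non-diagonal temporal covariance, which is what lets the model score (and map) whole trajectories rather than individual frames.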
Keywords :
Gaussian processes; feature extraction; hidden Markov models; Gaussian mixture model (GMM)-based mapping; continuous stochastic feature mapping; dynamic-feature constraints; frame-by-frame mapping; sequence-level transformation; trajectory hidden Markov model (HMM); speech enhancement; speech recognition; stochastic processes; voice conversion;
Journal_Title :
IEEE Transactions on Audio, Speech, and Language Processing
DOI :
10.1109/TASL.2010.2049685