DocumentCode :
2829559
Title :
Normalized training for HMM-based visual speech recognition
Author :
Nankaku, Yoshihiko ; Tokuda, Keiichi ; Kitamura, Tadashi ; Kobayashi, Takao
Author_Institution :
Dept. of Comput. Sci., Nagoya Inst. of Technol., Japan
Volume :
3
fYear :
2000
fDate :
2000
Firstpage :
234
Abstract :
This paper presents an approach to estimating the parameters of continuous density HMMs for visual speech recognition. One of the key issues in image-based visual speech recognition is the normalization of lip location and lighting conditions prior to estimating the parameters of the HMMs. In previous work, we presented a normalized training method in which the normalization process is integrated into model training. This paper extends that method to contrast normalization in addition to average-intensity and location normalization. The proposed method provides a theoretically well-defined algorithm based on a maximum likelihood formulation; hence the likelihood of the training data is guaranteed to increase at each iteration of normalized training. Experiments on the M2VTS database show that recognition performance is significantly improved by normalized training.
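The monotone-likelihood property described in the abstract can be illustrated with a heavily simplified sketch: a single Gaussian stands in for the HMM, and only a per-sequence brightness bias is normalized (the paper additionally handles location and contrast). The alternation below, re-estimate each sequence's normalization parameter given the current model, then re-estimate the model from the normalized data, is the generic coordinate-ascent pattern the abstract refers to, not the paper's actual algorithm; all names and data are illustrative.

```python
import math
import random

random.seed(0)

# Synthetic data: the same underlying "lip feature" signal observed under
# three different brightness offsets (bias-only lighting variation).
base = [0.2, 0.5, 0.8, 0.5, 0.2]
sequences = []
for bias in (0.0, 0.3, -0.1):
    sequences.append([v + bias + random.gauss(0, 0.01) for v in base])

def loglik(mu, var, seqs, biases):
    """Gaussian log-likelihood of the bias-normalized observations."""
    ll = 0.0
    for s, b in zip(seqs, biases):
        for x in s:
            ll += (-0.5 * math.log(2 * math.pi * var)
                   - (x - b - mu) ** 2 / (2 * var))
    return ll

mu, var = 0.0, 1.0
biases = [0.0] * len(sequences)
prev = -float("inf")
for _ in range(20):
    # Step 1: best bias for each sequence under the current model.
    biases = [sum(s) / len(s) - mu for s in sequences]
    # Step 2: re-estimate the model from the normalized data.
    norm = [x - b for s, b in zip(sequences, biases) for x in s]
    mu = sum(norm) / len(norm)
    var = sum((y - mu) ** 2 for y in norm) / len(norm)
    # Each step exactly maximizes one coordinate, so the likelihood
    # never decreases -- the guarantee the abstract states.
    cur = loglik(mu, var, sequences, biases)
    assert cur >= prev - 1e-9
    prev = cur
```

After convergence, the recovered bias differences between sequences match the simulated lighting offsets, and the monotonicity assertion inside the loop never fires.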
Keywords :
feature extraction; gesture recognition; hidden Markov models; image sequences; iterative methods; maximum likelihood estimation; speech recognition; HMM-based visual speech recognition; M2VTS database; continuous density HMMs; contrast normalization; iteration; lighting conditions; lip location; maximum likelihood formulation; model training; normalization process; normalized training; Data mining; Databases; Hidden Markov models; Lips; Maximum likelihood estimation; Mouth; Pixel; Speech recognition; Training data; Vectors;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Proceedings of the 2000 International Conference on Image Processing (ICIP 2000)
Conference_Location :
Vancouver, BC
ISSN :
1522-4880
Print_ISBN :
0-7803-6297-7
Type :
conf
DOI :
10.1109/ICIP.2000.899338
Filename :
899338