Title :
Applying discretized articulatory knowledge to dysarthric speech
Author_Institution :
Dept. of Comput. Sci., Univ. of Toronto, Toronto, ON
Abstract :
This paper applies two dynamic Bayes networks that include theoretical and measured kinematic features of the vocal tract, respectively, to the task of labeling phoneme sequences in unsegmented dysarthric speech. Speaker-dependent and adaptive versions of these models are compared against two acoustic-only baselines, namely a hidden Markov model and a latent dynamic conditional random field. Both theoretical and kinematic models of the vocal tract perform admirably on speaker-dependent speech, and we show that the statistics of the latter are not necessarily transferable between speakers during adaptation.
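To illustrate the kind of acoustic-only baseline the abstract describes, the following is a minimal sketch of Viterbi decoding with a toy hidden Markov model for frame-level phoneme labeling. The phoneme set, transition and emission probabilities, and discretized observation symbols are all hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical three-phoneme inventory and HMM parameters (log domain).
phonemes = ["sil", "aa", "t"]
log_trans = np.log(np.array([
    [0.6, 0.3, 0.1],   # from sil
    [0.2, 0.6, 0.2],   # from aa
    [0.3, 0.2, 0.5],   # from t
]))
log_emit = np.log(np.array([
    [0.7, 0.2, 0.1],   # sil mostly emits observation symbol 0
    [0.1, 0.8, 0.1],   # aa  mostly emits symbol 1
    [0.2, 0.1, 0.7],   # t   mostly emits symbol 2
]))
log_init = np.log(np.array([0.8, 0.1, 0.1]))

def viterbi(obs):
    """Return the most likely phoneme label for each frame of `obs`."""
    T, N = len(obs), len(phonemes)
    delta = np.full((T, N), -np.inf)   # best log-prob ending in each state
    back = np.zeros((T, N), dtype=int) # backpointers for path recovery
    delta[0] = log_init + log_emit[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans  # (from, to)
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [phonemes[i] for i in reversed(path)]

# Label an unsegmented sequence of discretized acoustic observations.
print(viterbi([0, 1, 1, 2, 0]))  # → ['sil', 'aa', 'aa', 't', 'sil']
```

The paper's dynamic Bayes networks extend this idea by conditioning on discretized articulatory (theoretical or measured kinematic) variables in addition to the acoustics.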
Keywords :
Bayes methods; speaker recognition; discretized articulatory knowledge; dynamic Bayes networks; dysarthric speech; hidden Markov model; latent dynamic conditional random field; phoneme sequences; speaker-dependent speech; vocal tract; Acoustic measurements; Electromagnetic measurements; Hidden Markov models; Kinematics; Labeling; Lips; Loudspeakers; Speech analysis; Speech enhancement; Tongue; Accessibility; articulatory information; conditional random fields; dynamic Bayes nets;
Conference_Titel :
2009 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2009)
Conference_Location :
Taipei
Print_ISBN :
978-1-4244-2353-8
ISSN :
1520-6149
DOI :
10.1109/ICASSP.2009.4960630