DocumentCode :
179876
Title :
Using contextual information in joint factor eigenspace MLLR for speech recognition in diverse scenarios
Author :
Saz, Oscar ; Hain, Thomas
Author_Institution :
Speech & Hearing Res. Group, Univ. of Sheffield, Sheffield, UK
fYear :
2014
fDate :
4-9 May 2014
Firstpage :
6314
Lastpage :
6318
Abstract :
This paper presents a new approach for rapid adaptation in the presence of highly diverse scenarios that takes advantage of information describing the input signals. We introduce a new method for joint factorisation of the background and the speaker in an eigenspace MLLR framework: Joint Factor Eigenspace MLLR (JFEMLLR). We further propose to use contextual information describing the speaker and background, such as tags or more complex metadata, to provide an immediate estimation of the best MLLR transformation for the utterance. This provides instant adaptation, since it does not require any transcription from a previous decoding stage. Evaluation in a highly diverse Automatic Speech Recognition (ASR) task, a modified version of WSJCAM0, yields an improvement of 26.9% over the baseline, which is an extra 1.2% reduction over two-pass MLLR adaptation.
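The abstract describes an eigenspace-style scheme: the utterance transform is a linear combination of basis ("eigen") transforms, with combination weights chosen directly from contextual tags rather than from a first decoding pass. A minimal illustrative sketch of that combination step follows; all matrices, tags, and weights here are hypothetical placeholders, not values from the paper (real MLLR transforms are d x (d+1) affine matrices over acoustic feature dimensions):

```python
# Hypothetical sketch of eigenspace-MLLR-style adaptation: the utterance
# transform is a weighted sum of basis transforms around a mean transform,
# and contextual metadata (tags) selects precomputed weights, so no
# transcription from a previous decoding stage is required.

# Tiny 2x2 stand-ins for real MLLR transform matrices (illustrative only).
MEAN_TRANSFORM = [[1.0, 0.0], [0.0, 1.0]]
EIGEN_TRANSFORMS = [
    [[0.1, 0.0], [0.0, -0.1]],  # e.g. a speaker-dominant direction
    [[0.0, 0.2], [0.2, 0.0]],   # e.g. a background-dominant direction
]

# Weights precomputed per (speaker tag, background tag) pair; in the paper
# these would come from training-time estimation against the metadata.
TAG_WEIGHTS = {
    ("female", "street"): [0.8, -0.3],
    ("male", "studio"): [-0.5, 0.1],
}

def combine(tags):
    """Return the utterance transform implied by the contextual tags."""
    weights = TAG_WEIGHTS[tags]
    transform = [row[:] for row in MEAN_TRANSFORM]  # copy the mean
    for w, basis in zip(weights, EIGEN_TRANSFORMS):
        for i in range(len(transform)):
            for j in range(len(transform[i])):
                transform[i][j] += w * basis[i][j]
    return transform

W = combine(("female", "street"))
```

Because the weights are looked up from metadata, adaptation is instant: the transform is available before any recognition of the utterance has taken place.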
Keywords :
eigenvalues and eigenfunctions; speech recognition; automatic speech recognition; contextual information; diverse scenarios; joint factor eigenspace MLLR; Acoustics; Adaptation models; Hidden Markov models; Joints; Speech; Training; Training data; adaptation; eigenspace MLLR; joint factorisation; metadata;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Conference_Location :
Florence, Italy
Type :
conf
DOI :
10.1109/ICASSP.2014.6854819
Filename :
6854819