Title :
Simplified domain transfer multiple kernel learning for language recognition
Author :
Jiaming Xu ; Jia Liu ; Shanhong Xia
Author_Institution :
State Key Lab. on Transducing Technol., Inst. of Electron., Beijing, China
Abstract :
Distribution mismatch between training and test data can greatly deteriorate the performance of language recognition. Some effective compensation methods have been proposed, such as nuisance attribute projection (NAP). In real-world applications, however, abundant training samples are often available only from a different (source) domain, while just a limited number of labeled training samples come from the target domain; in this setting, system performance degrades and needs further improvement. In this paper, we introduce transfer learning to address this problem. We propose a novel transfer learning algorithm referred to as simplified domain transfer multiple kernel learning (SDTMKL). Our aim is to discover a feature-space representation that minimizes the distribution mismatch between samples from the source and target domains; robust models can then be learned in this suitable feature space. Results on a NIST language recognition task show that the SDTMKL method is quite effective and can further improve system performance when combined with NAP.
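Domain transfer MKL methods in this family typically quantify the source/target distribution mismatch with the Maximum Mean Discrepancy (MMD) and choose kernel weights that trade mismatch off against classifier risk. The sketch below is not the paper's code: the RBF bandwidths and toy data are illustrative assumptions. It shows the mismatch term only, i.e. the empirical MMD² between source and target samples for several candidate base kernels.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Pairwise squared Euclidean distances, then the Gaussian (RBF) kernel.
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(Xs, Xt, gamma):
    # Biased empirical estimate of squared Maximum Mean Discrepancy:
    # the RKHS distance between source and target mean embeddings.
    return (rbf_kernel(Xs, Xs, gamma).mean()
            + rbf_kernel(Xt, Xt, gamma).mean()
            - 2.0 * rbf_kernel(Xs, Xt, gamma).mean())

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(50, 4))   # toy source-domain features
Xt = rng.normal(0.5, 1.0, size=(60, 4))   # mean-shifted target-domain features

# MMD^2 under a convex combination of base kernels equals the same convex
# combination of the per-kernel MMD^2 values, so a DTMKL-style objective
# balances sum_k w_k * mmd2_k against the classifier's training risk.
gammas = [0.1, 1.0, 10.0]  # hypothetical base-kernel bandwidths
for g in gammas:
    print(f"gamma={g}: MMD^2 = {mmd2(Xs, Xt, g):.4f}")
```

Because MMD² is linear in the kernel, minimizing it alone would put all weight on one base kernel; the classification-risk term in the full objective is what keeps the learned kernel combination useful for recognition.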
Keywords :
learning (artificial intelligence); natural language processing; speech recognition; NAP; NIST language recognition task; SDTMKL method; distribution mismatch minimization; feature space representation; nuisance attribute projection; simplified domain transfer multiple-kernel learning; source domain; target domain; test data; training data; transfer learning algorithm; Equations; Feature extraction; Kernel; Mathematical model; Robustness; Support vector machines; Training; Language Recognition; Multiple Kernel Learning; Support Vector Machine; Transfer Learning;
Conference_Titel :
2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Conference_Location :
Vancouver, BC
DOI :
10.1109/ICASSP.2013.6638992