DocumentCode :
2712049
Title :
Using weak supervision in learning Gaussian mixture models
Author :
Ghosh, Soumya ; Srinivasan, Soundararajan ; Andrews, Burton
Author_Institution :
Dept. of Comput. Sci., Univ. of Colorado, Boulder, CO, USA
fYear :
2009
fDate :
14-19 June 2009
Firstpage :
973
Lastpage :
979
Abstract :
The expectation maximization (EM) algorithm is a popular approach to learning Gaussian mixture models from unlabeled data. In many applications, additional sources of information, such as a priori knowledge of the mixing proportions, are available alongside the unlabeled data. We present a weakly supervised approach, in the form of a penalized EM algorithm, that uses this a priori knowledge to guide model training. The algorithm penalizes models whose predicted mixing proportions diverge strongly from the a priori mixing proportions. We also present an extension that incorporates both labeled and unlabeled data in a semi-supervised setting. Systematic evaluations on several publicly available datasets show that the proposed algorithms outperform the standard EM algorithm. The performance gains are particularly significant when the amount of unlabeled data is limited and in the presence of noise.
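The idea in the abstract can be sketched as a standard EM loop for a GMM whose M-step pulls the estimated mixing proportions toward a priori values. This is a minimal illustrative sketch only: the isotropic covariances, the penalty weight `lam`, and the convex-blend update for the mixing proportions are simplifying assumptions, not the paper's exact divergence penalty.

```python
import numpy as np

def penalized_em_gmm(X, prior_pi, lam=1.0, n_iter=50, seed=0):
    """EM for an isotropic GMM whose M-step shrinks the estimated mixing
    proportions toward the a priori proportions `prior_pi`.
    (Illustrative sketch; the paper's exact penalty may differ.)"""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = len(prior_pi)
    prior_pi = np.asarray(prior_pi, dtype=float)
    mu = X[rng.choice(n, size=k, replace=False)]  # init means from data points
    var = np.full(k, X.var())                     # shared spherical variance
    pi = prior_pi.copy()
    for _ in range(n_iter):
        # E-step: responsibilities under isotropic Gaussians
        sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)          # (n, k)
        logp = np.log(pi) - 0.5 * sq / var - 0.5 * d * np.log(2 * np.pi * var)
        logp -= logp.max(axis=1, keepdims=True)                       # stabilize
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: soft counts, means, spherical variances
        nk = r.sum(axis=0)
        mu = (r.T @ X) / nk[:, None]
        sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        var = (r * sq).sum(axis=0) / (d * nk) + 1e-6
        # Penalized mixing-proportion update: convex blend of the data
        # estimate nk/n with the a priori proportions, weighted by lam.
        # lam = 0 recovers plain EM; large lam trusts the prior.
        pi = (nk + lam * n * prior_pi) / (n + lam * n)
    return pi, mu, var
```

With `lam = 0` the update reduces to the usual maximum-likelihood estimate `nk / n`, so the penalty's strength is controlled by a single knob, in the spirit of the weak supervision the abstract describes.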
Keywords :
data handling; expectation-maximisation algorithm; learning (artificial intelligence); Gaussian mixture models; datasets; expectation maximization algorithm; learning; mixing proportions; model training process; unlabeled data; weak supervision; Acoustic noise; Biological system modeling; Calibration; Clustering algorithms; Data models; Information resources; Neural networks; Noise robustness; Performance gain; Predictive models;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Neural Networks, 2009. IJCNN 2009. International Joint Conference on
Conference_Location :
Atlanta, GA
ISSN :
1098-7576
Print_ISBN :
978-1-4244-3548-7
Electronic_ISBN :
1098-7576
Type :
conf
DOI :
10.1109/IJCNN.2009.5178922
Filename :
5178922