DocumentCode :
3724065
Title :
Leveraging Implicit Relative Labeling-Importance Information for Effective Multi-label Learning
Author :
Yu-Kun Li;Min-Ling Zhang;Xin Geng
Author_Institution :
Sch. of Comput. Sci. &
fYear :
2015
Firstpage :
251
Lastpage :
260
Abstract :
In multi-label learning, each training example is represented by a single instance while associated with multiple labels, and the task is to predict a set of relevant labels for an unseen instance. Existing approaches learn from multi-label data by assuming equal labeling-importance, i.e. all the associated labels are regarded as relevant while their relative importance for the training example is not differentiated. Nonetheless, this assumption fails to reflect the fact that the importance degree of each associated label generally differs, though the importance information is not explicitly accessible from the training examples. In this paper, we show that effective multi-label learning can be achieved by leveraging the implicit relative labeling-importance (RLI) information. Specifically, RLI degrees are formalized as a multinomial distribution over the label space, which is estimated by adapting an iterative label propagation procedure. After that, the multi-label prediction model is learned by fitting the estimated multinomial distribution, regularized with a popular multi-label empirical loss. Comprehensive experiments clearly validate the usefulness of leveraging implicit RLI information to learn from multi-label data.
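The RLI estimation step described in the abstract can be sketched as an iterative label propagation over an instance-similarity graph: labels are diffused along a row-stochastic affinity matrix, clamped to the observed labels, and the resulting scores are normalized into a multinomial distribution per instance. The sketch below is illustrative only; the Gaussian affinity, the propagation weight `alpha`, and the fixed iteration count are assumptions, not the authors' exact procedure.

```python
import numpy as np

def estimate_rli(X, Y, alpha=0.5, sigma=1.0, n_iter=100):
    """Sketch of RLI estimation via iterative label propagation.

    X: (n, d) instance features; Y: (n, q) binary label matrix.
    Kernel choice, alpha, and n_iter are illustrative assumptions.
    Returns an (n, q) matrix whose rows are multinomial RLI distributions.
    """
    # Pairwise squared Euclidean distances between instances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))     # Gaussian affinity (assumed)
    np.fill_diagonal(W, 0.0)               # no self-similarity
    P = W / W.sum(axis=1, keepdims=True)   # row-stochastic propagation matrix
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        # Propagate scores from neighbors, clamping toward the observed labels
        F = alpha * (P @ F) + (1 - alpha) * Y
    # Normalize each row into a multinomial distribution over the label space
    return F / F.sum(axis=1, keepdims=True)
```

A downstream prediction model would then be trained to fit these distributions, with a multi-label empirical loss acting as the regularizer, as the abstract describes.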
Keywords :
"Yttrium","Training","Reliability","Predictive models","Symmetric matrices","Semantics","Estimation"
Publisher :
ieee
Conference_Titel :
Data Mining (ICDM), 2015 IEEE International Conference on
ISSN :
1550-4786
Type :
conf
DOI :
10.1109/ICDM.2015.41
Filename :
7373329