Author :
Wang, Li-Wei ; Feng, Ju-fu
Author_Institution :
Sch. of Electron. Eng. & Comput. Sci., Peking Univ., Beijing, China
Abstract :
Transduction takes a set of training samples and aims at estimating the class labels of given examples in one step, as opposed to traditional induction, which involves an intermediate learning step. The background philosophy of transduction is that one should not reduce an easier task (estimating labels of given examples) to a substantially more complex problem (learning a model). This paper proposes a new scheme for transductive inference, which we call MDL transduction. It labels the given examples so that the stochastic complexity of the whole data is minimized. In the sense of minimum description length, MDL transduction outperforms induction in both generative and discriminative methods. A key property of MDL transduction is that it learns nothing about the model, which agrees well with the aforementioned philosophy. The relation to the transductive SVM (TSVM) is also discussed: we show that TSVM is an approximation of MDL transduction with discriminant models.
Keywords :
generalisation (artificial intelligence); inference mechanisms; learning by example; MDL transduction; discriminant models; minimum description length; model learning; semisupervised learning; training samples; transductive SVM; transductive inference; Computer science; Error analysis; Induction generators; Labeling; Parameter estimation; Stochastic processes; Support vector machine classification; Support vector machines; Text categorization; Minimum Description Length; Semisupervised learning; Transduction; Transductive SVM
Conference_Titel :
Proceedings of the 2005 International Conference on Machine Learning and Cybernetics
Conference_Location :
Guangzhou, China
Print_ISBN :
0-7803-9091-1
DOI :
10.1109/ICMLC.2005.1527470
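The abstract's core idea, choosing labels for the given examples so that the stochastic complexity (description length) of the whole data set is minimized, can be illustrated with a minimal sketch. The model class (one maximum-likelihood Gaussian per class with an MDL-style parameter cost) and the exhaustive search over label assignments are illustrative assumptions, not the paper's actual algorithm:

```python
import itertools
import math

def gaussian_nll(xs):
    # Code length (in nats) of xs under a maximum-likelihood 1-D Gaussian.
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    var = max(var, 1e-6)  # guard against zero variance
    return 0.5 * n * (math.log(2 * math.pi * var) + 1.0)

def description_length(points_by_class):
    # Two-part code: parameter cost ((k/2) * log n with k = 2 parameters
    # per class, an MDL convention) plus the data code length under each
    # class's fitted Gaussian.
    total = 0.0
    for xs in points_by_class.values():
        if xs:
            total += gaussian_nll(xs) + 0.5 * 2 * math.log(len(xs))
    return total

def mdl_transduce(labeled, unlabeled, classes=(0, 1)):
    # Label the unlabeled points so that the total description length of
    # labeled + unlabeled data is minimized (brute force over assignments,
    # so only feasible for a handful of unlabeled points).
    best, best_len = None, float("inf")
    for assign in itertools.product(classes, repeat=len(unlabeled)):
        groups = {c: [x for x, y in labeled if y == c] for c in classes}
        for x, y in zip(unlabeled, assign):
            groups[y].append(x)
        dl = description_length(groups)
        if dl < best_len:
            best, best_len = assign, dl
    return list(best)

# Hypothetical toy data: class 0 clusters near 0, class 1 near 5.
labeled = [(0.0, 0), (0.2, 0), (5.0, 1), (5.3, 1)]
print(mdl_transduce(labeled, [0.1, 5.1]))  # → [0, 1]
```

Note that no model is returned: only the labels of the given examples are produced, matching the stated philosophy that transduction should learn nothing about the model itself.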