• DocumentCode
    231904
  • Title
    Laplacian regularized co-training
  • Author
    Yang Li ; Weifeng Liu ; Yanjiang Wang
  • Author_Institution
    Coll. of Inf. & Control Eng., China Univ. of Pet. (East China), Qingdao, China
  • fYear
    2014
  • fDate
    19-23 Oct. 2014
  • Firstpage
    1408
  • Lastpage
    1412
  • Abstract
    Co-training is a promising paradigm of semi-supervised learning and has drawn considerable attention and interest in recent years. It usually works iteratively on two disjoint feature views: two classifiers are trained on the different views and teach each other by adding their predictions on unlabeled data to the training set of the other view. However, the classifiers perform poorly when only a small number of labeled examples is available, especially in the first rounds of iteration. In this paper, we present Laplacian regularized co-training (LapCo) to address this problem in standard co-training. During the training process, LapCo incorporates Laplacian regularization into the classifier to significantly boost classification performance. Experiments on three popular UCI repository datasets show that the proposed LapCo outperforms the traditional co-training method.
    (A minimal illustrative code sketch of the training loop appears in the Code_Sketch entry at the end of this record.)
  • Keywords
    learning (artificial intelligence); pattern classification; LapCo; Laplacian regularized co-training; UCI repository datasets; classification performance; classifier; disjoint view features; semisupervised learning; unlabeled data prediction; Classification algorithms; Diabetes; Laplace equations; Partitioning algorithms; Standards; Support vector machines; Training; Laplacian regularization; Semi-supervised learning; co-training;
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Signal Processing (ICSP), 2014 12th International Conference on
  • Conference_Location
    Hangzhou
  • ISSN
    2164-5221
  • Print_ISBN
    978-1-4799-2188-1
  • Type
    conf
  • DOI
    10.1109/ICOSP.2014.7015231
  • Filename
    7015231
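  • Code_Sketch
    The abstract describes a standard co-training loop whose base learner is strengthened with Laplacian (manifold) regularization. The Python sketch below is not the authors' implementation; it assumes a linear Laplacian-regularized least-squares classifier, a k-nearest-neighbour graph Laplacian, and hypothetical helper names (knn_laplacian, LapRLS, co_train), with each view pseudo-labelling its most confident unlabeled points in every round.

    import numpy as np

    def knn_laplacian(X, k=5):
        """Unnormalized graph Laplacian L = D - W from a symmetric k-NN graph."""
        n = X.shape[0]
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
        W = np.zeros((n, n))
        for i in range(n):
            W[i, np.argsort(d2[i])[1:k + 1]] = 1.0            # nearest neighbours, excluding self
        W = np.maximum(W, W.T)                                # symmetrize
        return np.diag(W.sum(1)) - W

    class LapRLS:
        """Linear Laplacian-regularized least squares for labels in {-1, +1}."""
        def __init__(self, gamma_a=1e-2, gamma_i=1e-1, k=5):
            self.gamma_a, self.gamma_i, self.k = gamma_a, gamma_i, k

        def fit(self, X_lab, y_lab, X_unlab):
            X_all = np.vstack([X_lab, X_unlab])               # labeled + unlabeled build the graph
            L = knn_laplacian(X_all, self.k)
            d = X_all.shape[1]
            A = (X_lab.T @ X_lab + self.gamma_a * np.eye(d)
                 + self.gamma_i * X_all.T @ L @ X_all)        # ridge term + manifold term
            self.w = np.linalg.solve(A, X_lab.T @ y_lab)
            return self

        def decision(self, X):
            return X @ self.w

    def co_train(X1_lab, X2_lab, y_lab, X1_unlab, X2_unlab, rounds=10, per_round=2):
        """Co-training on two disjoint feature views with a LapRLS base learner."""
        L1, L2 = X1_lab.copy(), X2_lab.copy()
        y = np.asarray(y_lab, dtype=float).copy()
        U1, U2 = X1_unlab.copy(), X2_unlab.copy()
        for _ in range(rounds):
            if len(U1) == 0:
                break
            clf1 = LapRLS().fit(L1, y, U1)
            clf2 = LapRLS().fit(L2, y, U2)
            picked = {}                                       # unlabeled index -> pseudo-label
            for clf, U in ((clf1, U1), (clf2, U2)):
                scores = clf.decision(U)
                for i in np.argsort(-np.abs(scores))[:per_round]:
                    picked[int(i)] = 1.0 if scores[i] >= 0 else -1.0
            idx = list(picked)
            L1, L2 = np.vstack([L1, U1[idx]]), np.vstack([L2, U2[idx]])
            y = np.concatenate([y, [picked[i] for i in idx]])
            keep = np.setdiff1d(np.arange(len(U1)), idx)      # shrink the unlabeled pool
            U1, U2 = U1[keep], U2[keep]
        return LapRLS().fit(L1, y, U1), LapRLS().fit(L2, y, U2)

    if __name__ == "__main__":
        # Toy example: 4-dimensional Gaussian blobs split into two 2-dimensional views.
        rng = np.random.default_rng(0)
        n = 100
        X = np.vstack([rng.normal(-1.0, 1.0, (n, 4)), rng.normal(1.0, 1.0, (n, 4))])
        y = np.concatenate([-np.ones(n), np.ones(n)])
        perm = rng.permutation(2 * n)
        X, y = X[perm], y[perm]
        X1, X2 = X[:, :2], X[:, 2:]                           # two disjoint feature views
        lab, unl = np.arange(10), np.arange(10, 2 * n)        # only 10 labeled examples
        c1, c2 = co_train(X1[lab], X2[lab], y[lab], X1[unl], X2[unl])
        pred = np.sign(c1.decision(X1[unl]) + c2.decision(X2[unl]))
        print("accuracy on the unlabeled pool:", (pred == y[unl]).mean())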