• DocumentCode
    57920
  • Title
    Towards Making Unlabeled Data Never Hurt
  • Author
    Yu-Feng Li ; Zhi-Hua Zhou
  • Author_Institution
    Nat. Key Lab. for Novel Software Technol., Nanjing Univ., Nanjing, China
  • Volume
    37
  • Issue
    1
  • fYear
    2015
  • fDate
    Jan. 2015
  • Firstpage
    175
  • Lastpage
    188
  • Abstract
    It is usually expected that learning performance can be improved by exploiting unlabeled data, particularly when labeled data are scarce. However, it has been reported that, in some cases, existing semi-supervised learning approaches perform even worse than supervised ones that use only labeled data. It is therefore desirable to develop safe semi-supervised learning approaches that do not significantly reduce learning performance when unlabeled data are used. This paper focuses on improving the safeness of semi-supervised support vector machines (S3VMs). First, the S3VM-us approach is proposed. It employs a conservative strategy, using only the unlabeled instances that are very likely to be helpful while avoiding highly risky ones. This approach improves safeness, but its performance improvement from unlabeled data is often much smaller than that of S3VMs. To develop an approach that is both safe and well performing, we examine the fundamental assumption of S3VMs, i.e., low-density separation. Based on the observation that multiple good candidate low-density separators may be identified from training data, safe semi-supervised support vector machines (S4VMs) are proposed. This approach uses multiple low-density separators to approximate the ground-truth decision boundary and maximizes the worst-case improvement over inductive SVMs across all candidate separators. Under the assumption employed by S3VMs, S4VMs are shown to be provably safe, and the performance improvement from unlabeled data is maximized. An out-of-sample extension of S4VMs is also presented, which allows S4VMs to make predictions on unseen instances. Our empirical study on a broad range of data shows that the overall performance of S4VMs is highly competitive with that of S3VMs; in contrast to S3VMs, which hurt performance significantly in many cases, S4VMs rarely perform worse than inductive SVMs.
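    The worst-case formulation described above can be illustrated with a toy sketch (all names are hypothetical, and brute-force enumeration stands in for the paper's actual optimization): given several candidate labelings of the unlabeled data produced by different low-density separators, choose the labeling that maximizes the minimum gain over the inductive SVM labels, and fall back to the SVM labels whenever no labeling achieves a positive worst-case gain — which is what makes the scheme safe.

    ```python
    # Toy illustration of a worst-case "safe" label assignment.
    # Brute-force search replaces the paper's optimization; viable for tiny n only.
    import itertools
    import numpy as np

    def worst_case_gain(y, candidates, y_svm):
        # Gain against one candidate separator = (# labels y shares with it)
        # minus (# labels the inductive SVM shares with it); take the minimum
        # over all candidates, i.e., assume the worst candidate is the truth.
        return min(int(np.sum(y == c)) - int(np.sum(y_svm == c))
                   for c in candidates)

    def safe_labels(candidates, y_svm):
        # Enumerate all +/-1 labelings of the unlabeled points and keep the
        # one with the largest worst-case gain. Starting from (y_svm, 0)
        # means we never accept a labeling with non-positive worst-case gain,
        # so the result is never worse than the inductive SVM under the
        # low-density-separation assumption.
        best, best_gain = y_svm, 0
        for bits in itertools.product([-1, 1], repeat=len(y_svm)):
            y = np.array(bits)
            g = worst_case_gain(y, candidates, y_svm)
            if g > best_gain:
                best, best_gain = y, g
        return best
    ```

    For example, with two candidate separators that agree on the first two points, the chosen labeling follows that consensus; if the only candidate coincides with the SVM labeling itself, no labeling yields positive worst-case gain and the SVM labels are returned unchanged.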
  • Keywords
    data handling; learning (artificial intelligence); support vector machines; S3VM-us approach; candidate separator; conservative strategy; ground-truth decision boundary; learning performance; low-density separation; out-of-sample extension; performance improvement; semisupervised learning approach; semisupervised support vector machines; training data; unlabeled data; unlabeled instances; Data models; Optimization; Particle separators; Prediction algorithms; Reliability; Semisupervised learning; Support vector machines; S3VMs; S4VMs; Unlabeled data; safe; semi-supervised learning
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Publisher
    IEEE
  • ISSN
    0162-8828
  • Type
    jour
  • DOI
    10.1109/TPAMI.2014.2299812
  • Filename
    6710159