  • DocumentCode
    2959862
  • Title
    Sparse support vector machines trained in the reduced empirical feature space
  • Author
    Iwamura, Kazuki ; Abe, Shigeo
  • Author_Institution
    Electr. Eng., Kobe Univ., Kobe
  • fYear
    2008
  • fDate
    1-8 June 2008
  • Firstpage
    2398
  • Lastpage
    2404
  • Abstract
    We discuss sparse support vector machines (sparse SVMs) trained in the reduced empirical feature space. Namely, we select the linearly independent training data by Cholesky factorization of the kernel matrix and train the SVM in its dual form in the reduced empirical feature space. Since the mapped linearly independent training data span the empirical feature space, they become the support vectors. Thus, if the number of linearly independent data is smaller than the number of support vectors obtained by training in the feature space, sparsity is increased. Through computer experiments we show that in most cases the number of support vectors can be reduced without deteriorating the generalization ability.
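    The selection step described in the abstract — picking linearly independent training data via Cholesky factorization of the kernel matrix — can be sketched as a greedy pivoted Cholesky decomposition. This is a minimal illustration, not the paper's exact procedure; the function names (`rbf_kernel`, `select_independent`) and the tolerance `tol` are assumptions introduced here for the example.

    ```python
    import numpy as np

    def rbf_kernel(X, Y, gamma=1.0):
        """Gaussian (RBF) kernel matrix between row-vector sets X and Y."""
        sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * sq)

    def select_independent(K, tol=1e-6):
        """Greedy pivoted Cholesky factorization of the kernel matrix K.

        Returns indices of training samples whose images in the feature
        space are numerically linearly independent; these samples span
        the empirical feature space and become candidate support vectors.
        """
        n = K.shape[0]
        diag = K.diagonal().astype(float).copy()  # residual diagonal of K - L L^T
        L = np.zeros((n, 0))                      # accumulated Cholesky columns
        idx = []
        for _ in range(n):
            j = int(np.argmax(diag))
            if diag[j] <= tol:                    # remaining data are (nearly) dependent
                break
            col = (K[:, j] - L @ L[j]) / np.sqrt(diag[j])
            L = np.hstack([L, col[:, None]])
            diag -= col ** 2                      # update residual diagonal
            idx.append(j)
        return idx
    ```

    The SVM dual would then be solved over the reduced set `idx` only; with a duplicated training point, for example, the duplicate's residual diagonal drops to zero and it is never selected, which is how the number of support vectors can shrink.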
  • Keywords
    matrix decomposition; support vector machines; Cholesky factorization; independent training data; reduced empirical feature space; sparse support vector machines; Constraint optimization; Eigenvalues and eigenfunctions; Kernel; Least squares methods; Newton method; Sparse matrices; Support vector machine classification; Support vector machines; Symmetric matrices; Training data;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2008 IEEE International Joint Conference on Neural Networks (IJCNN 2008, IEEE World Congress on Computational Intelligence)
  • Conference_Location
    Hong Kong
  • ISSN
    1098-7576
  • Print_ISBN
    978-1-4244-1820-6
  • Electronic_ISBN
    1098-7576
  • Type
    conf
  • DOI
    10.1109/IJCNN.2008.4634131
  • Filename
    4634131