Title :
Extending kernel principal component analysis to general underlying loss functions
Author :
Alzate, Carlos ; Suykens, Johan A K
Author_Institution :
Katholieke Univ. Leuven, Belgium
Date :
31 July-4 Aug. 2005
Abstract :
Kernel principal component analysis (kernel PCA) can be considered a natural nonlinear generalization of PCA because it performs linear PCA in a kernel-induced feature space, allowing nonlinear structure in the input data to be extracted. The classical kernel PCA formulation leads to an eigendecomposition of the kernel matrix: eigenvectors with large eigenvalues correspond to the principal components in the feature space. Starting from the least squares support vector machine (LS-SVM) formulation of kernel PCA, we extend it to general underlying loss functions. For classical kernel PCA, the underlying loss function is the L2 loss. In this approach, one can easily plug in other loss functions and solve a nonlinear optimization problem to achieve desirable properties. Simulations with Huber's loss function for robustness and the quadratic epsilon-insensitive loss function for sparseness demonstrate the flexibility of our approach.
Keywords :
eigenvalues and eigenfunctions; feature extraction; optimisation; principal component analysis; support vector machines; Huber loss function; eigenvalue; eigenvectors; general underlying loss functions; kernel induced feature space; kernel matrix eigendecomposition; kernel principal component analysis; least squares support vector machine formulation; linear PCA; natural nonlinear generalization; nonlinear optimization; quadratic epsilon-insensitive loss function; Constraint optimization; Covariance matrix; Data mining; Eigenvalues and eigenfunctions; Feature extraction; Kernel; Lagrangian functions; Noise reduction; Principal component analysis; Robustness;
Conference_Title :
Proceedings of the 2005 IEEE International Joint Conference on Neural Networks (IJCNN '05)
Print_ISBN :
0-7803-9048-2
DOI :
10.1109/IJCNN.2005.1555832