Title :
Parallel Approximate Matrix Factorization for Kernel Methods
Author :
Zhu, Kaihua ; Cui, Hang ; Bai, Hongjie ; Li, Jian ; Qiu, Zhihuan ; Wang, Hao ; Xu, Hui ; Chang, Edward Y.
Abstract :
Kernel methods play a pivotal role in machine learning algorithms. Unfortunately, algorithms built on kernel methods must deal with an n x n kernel matrix, which is memory-intensive. In this paper, we present a parallel, approximate matrix factorization algorithm that loads only essential data onto individual processors to enable parallel processing. Our method reduces the space requirement for the kernel matrix from O(n^2) to O(np/m), where n is the amount of data, p is the reduced matrix dimension (p << n), and m is the number of processors.
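The abstract's O(np/m) storage claim can be illustrated with a low-rank kernel factorization distributed over m workers. The sketch below uses the Nystrom method as a stand-in approximation (the paper's specific factorization algorithm is not detailed in this record): it computes G of shape n x p with K approximately equal to G G^T, then partitions G's rows round-robin across m simulated processors, so each holds about np/m entries instead of the full n x n matrix. All function names and parameter values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.1):
    # Gaussian (RBF) kernel between the rows of X and the rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_factor(X, p, gamma=0.1, seed=0):
    """Return G (n x p) such that K ~= G @ G.T (Nystrom approximation)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=p, replace=False)  # landmark points
    C = rbf_kernel(X, X[idx], gamma)                 # n x p block of K
    W = C[idx]                                       # p x p landmark block
    # Symmetric inverse square root of W (eigenvalues clipped for stability).
    vals, vecs = np.linalg.eigh(W)
    vals = np.clip(vals, 1e-12, None)
    W_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return C @ W_inv_sqrt                            # n x p factor G

n, p, m = 400, 20, 4
X = np.random.default_rng(1).standard_normal((n, 5))
G = nystrom_factor(X, p)

# Round-robin partition of G's rows across m "processors": each shard
# stores roughly n*p/m entries rather than the n*n kernel matrix.
shards = [G[i::m] for i in range(m)]
```

Each worker can then form its local block of the approximate kernel on demand as `shards[i] @ G.T`, which is the kind of memory saving the abstract quantifies.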
Keywords :
learning (artificial intelligence); matrix decomposition; parallel processing; kernel methods; machine learning algorithms; parallel approximate matrix factorization; computational efficiency; large-scale systems; quadratic programming; round robin; support vector machine classification; support vector machines
Conference_Titel :
2007 IEEE International Conference on Multimedia and Expo
Conference_Location :
Beijing
Print_ISBN :
1-4244-1016-9
Electronic_ISBN :
1-4244-1017-7
DOI :
10.1109/ICME.2007.4284890