DocumentCode :
3200308
Title :
Parallel Approximate Matrix Factorization for Kernel Methods
Author :
Zhu, Kaihua ; Cui, Hang ; Bai, Hongjie ; Li, Jian ; Qiu, Zhihuan ; Wang, Hao ; Xu, Hui ; Chang, Edward Y.
fYear :
2007
fDate :
2-5 July 2007
Firstpage :
1275
Lastpage :
1278
Abstract :
Kernel methods play a pivotal role in machine learning algorithms. Unfortunately, they require working with an n × n kernel matrix, which is memory intensive. In this paper, we present a parallel, approximate matrix factorization algorithm that loads only the essential data onto individual processors to enable parallel processing. Our method reduces the space requirement for the kernel matrix from O(n²) to O(np/m), where n is the number of data instances, p the reduced matrix dimension (p << n), and m the number of processors.
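To make the abstract concrete, below is a minimal sequential sketch of one common approximate kernel-matrix factorization, pivoted incomplete Cholesky, which computes an n × p factor H with H Hᵀ ≈ K while materializing only one kernel column at a time. The RBF kernel, the helper rbf_kernel_column, and all parameter names are illustrative assumptions, not the paper's exact implementation.

import numpy as np

def rbf_kernel_column(X, j, gamma):
    # One column of the RBF kernel matrix K, computed on demand:
    # col[i] = exp(-gamma * ||x_i - x_j||^2). The kernel choice is an
    # assumption for illustration; the abstract does not fix one.
    diff = X - X[j]
    return np.exp(-gamma * np.sum(diff * diff, axis=1))

def incomplete_cholesky(X, p, gamma=1.0):
    # Rank-p pivoted incomplete Cholesky factorization of the kernel matrix.
    # Returns H of shape (n, p) with H @ H.T ~= K, touching only one
    # n-vector kernel column per iteration instead of the full n x n matrix.
    # Assumes p << n so the residual diagonal stays well conditioned.
    n = X.shape[0]
    H = np.zeros((n, p))
    d = np.ones(n)                 # residual diagonal; diag(K) = 1 for RBF
    for k in range(p):
        j = int(np.argmax(d))      # pivot: largest remaining residual
        col = rbf_kernel_column(X, j, gamma)
        H[:, k] = (col - H[:, :k] @ H[j, :k]) / np.sqrt(d[j])
        d -= H[:, k] ** 2          # update residual diagonal
        d[j] = 0.0                 # pivot row is now fully resolved
    return H

if __name__ == "__main__":
    X = np.random.rand(400, 8)
    H = incomplete_cholesky(X, p=40)
    print(H.shape)                 # (400, 40): O(np) storage, not O(n^2)

In the parallel setting the abstract describes, the rows of X and H would be partitioned across the m processors, so each holds roughly n/m rows of the n × p factor, matching the stated O(np/m) per-processor memory bound; each iteration then only needs to agree on the global pivot and share the pivot row.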
Keywords :
learning (artificial intelligence); matrix decomposition; parallel processing; kernel methods; machine learning algorithms; parallel approximate matrix factorization; Computational efficiency; Kernel; Large-scale systems; Machine learning; Quadratic programming; Round robin; Support vector machine classification; Support vector machines
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Multimedia and Expo, 2007 IEEE International Conference on
Conference_Location :
Beijing
Print_ISBN :
1-4244-1016-9
Electronic_ISBN :
1-4244-1017-7
Type :
conf
DOI :
10.1109/ICME.2007.4284890
Filename :
4284890