DocumentCode :
1013130
Title :
Parallel sequential minimal optimization for the training of support vector machines
Author :
Cao, L.J. ; Keerthi, S.S. ; Chong-Jin Ong ; Zhang, Jian Qiu ; Lee, Henry P.
Author_Institution :
Dept. of Financial Studies, Fudan Univ., Shanghai, China
Volume :
17
Issue :
4
fYear :
2006
fDate :
7/1/2006 12:00:00 AM
Firstpage :
1039
Lastpage :
1049
Abstract :
Sequential minimal optimization (SMO) is a popular algorithm for training support vector machines (SVMs), but it still requires a large amount of computation time to solve large-scale problems. This paper proposes a parallel implementation of SMO for training SVMs, developed using the message passing interface (MPI). Specifically, the parallel SMO first partitions the entire training data set into smaller subsets and then runs multiple CPU processors simultaneously, each handling one of the partitioned subsets. Experiments show substantial speedup on the adult data set and the Modified National Institute of Standards and Technology (MNIST) data set when many processors are used, with satisfactory results on the Web data set as well.
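The partitioning step described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: it only simulates the scatter of training data into near-equal subsets, one per processor, which a real implementation would perform with MPI collectives (e.g., scatter); the function name and toy data are illustrative.

```python
# Sketch of the data-partitioning idea from the abstract: split the
# training set into roughly equal contiguous subsets, one per processor.
# In the actual parallel SMO, each subset would be handed to a separate
# MPI process; here we just compute the subsets serially.

def partition(data, n_procs):
    """Split `data` into n_procs contiguous, near-equal subsets."""
    base, extra = divmod(len(data), n_procs)
    subsets, start = [], 0
    for rank in range(n_procs):
        # The first `extra` processors each take one additional example.
        size = base + (1 if rank < extra else 0)
        subsets.append(data[start:start + size])
        start += size
    return subsets

training_set = list(range(10))        # 10 toy training examples
chunks = partition(training_set, 4)
print([len(c) for c in chunks])       # → [3, 3, 2, 2]
```

Each processor then runs SMO on its own subset, so the per-processor workload shrinks roughly in proportion to the number of processors.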
Keywords :
learning (artificial intelligence); message passing; multiprocessing systems; optimisation; support vector machines; Modified National Institute of Standards and Technology data set; Web data set; message passing interface; multiple CPU processors; parallel sequential minimal optimization; partitioned data sets; support vector machine training; Kernel; Machine learning; Message passing; NIST; Parallel algorithms; Partitioning algorithms; Quadratic programming; Support vector machine classification; Support vector machines; Training data; Message passing interface (MPI); parallel algorithm; sequential minimal optimization (SMO); support vector machine (SVM);
fLanguage :
English
Journal_Title :
Neural Networks, IEEE Transactions on
Publisher :
ieee
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/TNN.2006.875989
Filename :
1650257