DocumentCode :
3051209
Title :
Sample Selection Based on K-L Divergence for Effectively Training SVM
Author :
Junhai Zhai ; Chang Li ; Ta Li
Author_Institution :
Key Lab. of Machine Learning & Comput. Intell. of Hebei Province, Hebei Univ., Baoding, China
fYear :
2013
fDate :
13-16 Oct. 2013
Firstpage :
4837
Lastpage :
4842
Abstract :
The time and space complexity of training a support vector machine (SVM) are O(n³) and O(n²) respectively, where n is the number of training samples, which makes training an SVM on relatively large datasets inefficient or impracticable. However, removing training samples that are not support vectors (SVs) has no effect on the resulting optimal hyperplane. Based on this observation, this paper proposes a sample selection method that efficiently chooses candidate SVs from the original dataset; only the selected samples are then used to train the SVM. Experimental results show that the proposed method is effective and efficient: it substantially reduces both the time and space cost of training, especially on relatively large datasets.
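The abstract does not spell out the selection procedure, so the following is a minimal Python sketch of one plausible reading, not the paper's actual algorithm: estimate the two class-conditional densities, score each training sample by the pointwise log-density ratio that appears inside the K-L divergence, and keep the most ambiguous samples (those most likely to be SVs) for SVM training. The function name kl_scores and the 30% quantile threshold are illustrative assumptions.

# Hypothetical sketch of K-L-divergence-based sample selection before
# SVM training. This illustrates the general idea described in the
# abstract; the paper's exact method is not reproduced here.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.datasets import make_classification
from sklearn.svm import SVC

def kl_scores(X, y, eps=1e-12):
    # Estimate class-conditional densities p(x|y=1) and p(x|y=0).
    kde_pos = gaussian_kde(X[y == 1].T)
    kde_neg = gaussian_kde(X[y == 0].T)
    p = kde_pos(X.T) + eps
    q = kde_neg(X.T) + eps
    # |log p/q| is the magnitude of each sample's pointwise contribution
    # to the K-L divergence between the class densities; a small value
    # means the densities overlap there, i.e. the sample lies near the
    # decision boundary and is a candidate support vector.
    return np.abs(np.log(p / q))

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
scores = kl_scores(X, y)
keep = scores <= np.quantile(scores, 0.3)  # keep the most ambiguous 30%

svm_full = SVC(kernel="rbf").fit(X, y)
svm_sel = SVC(kernel="rbf").fit(X[keep], y[keep])
print(f"kept {keep.sum()}/{len(y)} samples; "
      f"full-SVM acc {svm_full.score(X, y):.3f} vs "
      f"selected-SVM acc {svm_sel.score(X, y):.3f}")

The rationale for keeping low-score samples is that soft-margin SVs concentrate where the class densities overlap; discarding samples deep inside one class's region leaves the optimal hyperplane unchanged while shrinking the O(n³)/O(n²) training cost.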
Keywords :
computational complexity; data mining; learning (artificial intelligence); support vector machines; K-L divergence; SVM training; O(n²) space complexity; O(n³) time complexity; optimal hyperplane; sample selection method; support vector machine; training samples; Accuracy; Classification algorithms; Educational institutions; Neural networks; Probabilistic logic; Support vector machines; Training; K-L divergence; PNN; SVM; sample selection;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Systems, Man, and Cybernetics (SMC), 2013 IEEE International Conference on
Conference_Location :
Manchester
Type :
conf
DOI :
10.1109/SMC.2013.823
Filename :
6722578