DocumentCode :
2490899
Title :
Projection Vector Machine: One-stage learning algorithm from high-dimension small-sample data
Author :
Deng, Wanyu ; Zheng, Qinghua ; Lian, Shiguo ; Chen, Lin ; Wang, Xin
Author_Institution :
Dept. of Comput. Sci. & Technol., Xi'an Jiaotong Univ., Xi'an, China
fYear :
2010
fDate :
18-23 July 2010
Firstpage :
1
Lastpage :
8
Abstract :
With few samples and a large number of input features, classifier complexity grows and stability degrades. Dimension reduction is therefore usually carried out before supervised learning algorithms such as neural networks are applied. This two-stage framework is somewhat redundant across dimension reduction and network training. This paper proposes a novel one-stage learning algorithm for high-dimension small-sample data, called the Projection Vector Machine (PVM), which combines dimension reduction with network training and removes the redundancy. Through a dimension reduction operation such as singular value decomposition (SVD), we not only reduce the dimension but also simultaneously obtain the size of the single-hidden-layer feedforward neural network (SLFN) and its input weight values. This size-fixed network becomes a linear system, so the output weights can be determined by the simple least-squares method. Unlike a traditional backpropagation feedforward neural network (BP), the parameters of PVM need no iterative tuning, so its training speed is much faster than BP's. Unlike the extreme learning machine (ELM) proposed by Huang [G.-B. Huang, Q.-Y. Zhu, C.-K. Siew, Extreme learning machine: theory and applications, Neurocomputing 70 (2006) 489-501], which assigns input weights randomly, PVM's input weights are ranked by singular value, and the optimal set of weights is selected according to the singular values. We prove that PVM is a universal approximator for high-dimension small-sample data. Experimental results show that the proposed one-stage algorithm PVM is faster than two-stage approaches such as SVD+BP and SVD+ELM.
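The one-stage scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the activation function (`tanh`), the naming, and passing the hidden-layer size `k` as an explicit parameter are assumptions (the paper derives the network size from the singular-value spectrum itself).

```python
import numpy as np

def pvm_train(X, Y, k, activation=np.tanh):
    """Sketch of Projection Vector Machine training (assumed form).

    One stage: the SVD of the data matrix supplies both the input
    weights (top-k right singular vectors, ranked by singular value)
    and, in the paper, the hidden-layer size; the output weights then
    come from a single least-squares solve, with no iterative tuning.
    """
    # SVD of the (small-sample, high-dimension) data matrix X (n x d).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Input weights: top-k right singular vectors, shape (d, k).
    W = Vt[:k].T
    # Hidden-layer output of the size-fixed SLFN, shape (n, k).
    H = activation(X @ W)
    # Output weights by least squares on the resulting linear system.
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W, beta

def pvm_predict(X, W, beta, activation=np.tanh):
    return activation(X @ W) @ beta
```

Because `H` is fixed once the SVD is computed, solving for `beta` is a single linear least-squares problem, which is what makes the approach fast compared with iteratively tuned BP networks.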
Keywords :
feedforward neural nets; learning (artificial intelligence); least squares approximations; linear programming; singular value decomposition; support vector machines; dimension reduction operation; extreme learning machine; high-dimension small-sample data; least square method; linear programming system; one-stage learning algorithm; projection vector machine; single-hidden layer feedforward neural network; supervised learning algorithms; Accuracy; Algorithm design and analysis; Artificial neural networks; Classification algorithms; Machine learning; Neurons; Training; Extreme Learning Machine; Neural network; Projection Vector Machine; Singular vector decomposition;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
The 2010 International Joint Conference on Neural Networks (IJCNN)
Conference_Location :
Barcelona
ISSN :
1098-7576
Print_ISBN :
978-1-4244-6916-1
Type :
conf
DOI :
10.1109/IJCNN.2010.5596571
Filename :
5596571