DocumentCode :
3118943
Title :
L1-regularized least squares sparse extreme learning machine for classification
Author :
Fakhr, Mohamed Waleed ; Youssef, El-Nasser S. ; El-Mahallawy, Mohamed S.
Author_Institution :
Coll. of Comput. & Inf. Technol., Cairo, Egypt
fYear :
2015
fDate :
17-19 May 2015
Firstpage :
222
Lastpage :
225
Abstract :
Extreme Learning Machines (ELM) are a class of supervised learning models built in three basic steps: a random projection of the input space, followed by a nonlinear operation, and finally a linear output layer of weights. The basic ELM estimates the output layer weights with a matrix pseudoinverse, which usually leads to overfitting. Recent research suggested the use of L2-norm regularization to enhance the sparsity of the output layer. This paper proposes the L1-norm LASSO formulation instead, since the L1-norm promotes sparsity in the solution of the output layer weights and has been shown to produce the sparsest solutions in many applications. An extensive comparison between the basic ELM, the L1-norm, and the L2-norm is conducted over a number of classification tasks; the proposed approach yields a significant improvement in sparseness and better performance than that reported in the literature.
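The three-step ELM pipeline and the pseudoinverse-versus-LASSO contrast described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation; the toy data, sigmoid activation, hidden-layer size, and regularization strength `alpha` are all assumptions for demonstration.

```python
import numpy as np
from sklearn.linear_model import Lasso  # coordinate-descent L1 solver

rng = np.random.default_rng(0)

# Toy binary classification data (illustrative, not from the paper)
X = rng.normal(size=(100, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Step 1: random projection of the input space (weights are never trained)
n_hidden = 50
W = rng.normal(size=(5, n_hidden))
b = rng.normal(size=n_hidden)

# Step 2: nonlinear operation (sigmoid activation here)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Step 3a: basic ELM -- pseudoinverse solution for the output weights.
# Typically dense, which is what the paper links to overfitting.
beta_pinv = np.linalg.pinv(H) @ y

# Step 3b: proposed L1-regularized least squares (LASSO) output weights.
lasso = Lasso(alpha=0.01, max_iter=10_000)
lasso.fit(H, y)
beta_l1 = lasso.coef_  # many entries driven exactly to zero

sparsity = float(np.mean(beta_l1 == 0.0))  # fraction of pruned hidden neurons
```

Only the output layer is solved for in either variant; the hidden layer stays random, which is what makes the final fit a convex least-squares (or LASSO) problem.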
Keywords :
learning (artificial intelligence); least squares approximations; matrix algebra; pattern classification; ELM; L1-norm LASSO formulation; L1-regularized least squares sparse extreme learning machine; L2-norm regularization; classification tasks; nonlinear operation; pseudo matrix inverse; random projection; supervised learning models; Accuracy; Diabetes; Fasteners; Neurons; Support vector machines; Tuning; L1-norm; LASSO; extreme learning machine; sparse ELM;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Information and Communication Technology Research (ICTRC), 2015 International Conference on
Conference_Location :
Abu Dhabi
Type :
conf
DOI :
10.1109/ICTRC.2015.7156462
Filename :
7156462