DocumentCode
20132
Title
Local Receptive Fields Based Extreme Learning Machine
Author
Guang-Bin Huang ; Zuo Bai ; Kasun, Liyanaarachchi Lekamalage Chamara ; Chi Man Vong
Author_Institution
Sch. of Electr. & Electron. Eng., Nanyang Technol. Univ., Singapore, Singapore
Volume
10
Issue
2
fYear
2015
fDate
May 2015
Firstpage
18
Lastpage
29
Abstract
Extreme learning machine (ELM), which was originally proposed for "generalized" single-hidden-layer feedforward neural networks (SLFNs), provides efficient unified learning solutions for feature learning, clustering, regression, and classification. In contrast to the common understanding and tenet that the hidden neurons of neural networks must be iteratively adjusted during the training stage, ELM theories show that hidden neurons are important but need not be iteratively tuned. In fact, all the parameters of the hidden nodes can be independent of the training samples and randomly generated according to any continuous probability distribution, and the resulting ELM networks still possess universal approximation and classification capabilities. The fully connected ELM architecture has been studied extensively, but ELM with local connections has attracted little research attention so far. This paper studies the general architecture of locally connected ELM, showing that: 1) ELM theories are naturally valid for local connections, thus introducing local receptive fields to the input layer; and 2) each hidden node in ELM can be a combination of several hidden nodes (a subnetwork), which is also consistent with ELM theories. ELM theories may shed light on the study of different local receptive fields, including true biological receptive fields whose exact shapes and formulas may be unknown. As a specific example of such general architectures, random convolutional nodes and a pooling structure are implemented in this paper. Experimental results on the NORB dataset, a benchmark for object recognition, show that compared with conventional deep learning solutions, the proposed local-receptive-fields-based ELM (ELM-LRF) reduces the error rate from 6.5% to 2.7% and increases the learning speed by up to 200 times.
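The core ELM principle described in the abstract (hidden-node parameters drawn at random and never tuned, with only the output weights solved analytically) can be sketched as follows. This is a minimal illustrative example of a basic fully connected ELM, not the paper's ELM-LRF implementation; the tanh activation, the ridge regularization term, and all variable names are assumptions of this sketch.

```python
import numpy as np

def elm_train(X, T, n_hidden=64, ridge=1e-3, seed=0):
    """Train a basic ELM: random untuned hidden layer, analytic output weights.

    X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
    """
    rng = np.random.default_rng(seed)
    # Hidden-node parameters are independent of the training data and
    # randomly generated (here from a standard normal distribution).
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer output matrix
    # Output weights in closed form via regularized least squares,
    # instead of iterative gradient-based tuning.
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass: random hidden features followed by the learned readout."""
    return np.tanh(X @ W + b) @ beta
```

For example, such a network can fit a smooth 1-D regression target with a single linear solve, which is where the reported speedup over iteratively trained deep networks comes from.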
Keywords
approximation theory; feedforward neural nets; learning (artificial intelligence); object recognition; ELM networks; ELM-LRF; NORB dataset; SLFN; continuous probability distribution; clustering; feature learning; generalized single-hidden layer feedforward neural networks; local receptive field based extreme learning machine; unified learning solutions; universal approximation; universal classification capability; Approximation methods; Feature extraction; Feedforward neural networks; Iterative methods; Learning systems; Network architecture; Neural networks; Probability distribution; Regression analysis; Training
fLanguage
English
Journal_Title
Computational Intelligence Magazine, IEEE
Publisher
IEEE
ISSN
1556-603X
Type
jour
DOI
10.1109/MCI.2015.2405316
Filename
7083684
Link To Document