DocumentCode :
1902788
Title :
A dynamic neural network architecture by sequential partitioning of the input space
Author :
Shadafan, R.S. ; Niranjan, M.
Author_Institution :
Dept. of Eng., Cambridge Univ., UK
fYear :
1993
fDate :
1993
Firstpage :
226
Abstract :
A sequential approach to neural network training is presented. The network is presented with each item of data only once, and its architecture is adjusted dynamically during training. On the arrival of each example, a decision on whether to increase the complexity of the network is made on the basis of three heuristic criteria, which measure the position of the new item of data with respect to the information currently stored in the network. Each hidden unit in the network is trained in closed form by a recursive least squares algorithm: a local covariance matrix of the data is maintained at each node, and the closed-form solution is updated recursively. The performance of the algorithm is illustrated on a small-scale problem using two-dimensional speech data. The sequential nature of the algorithm lends itself to an efficient hardware implementation in the form of systolic arrays, and the incremental training scheme is more biologically plausible.
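The abstract describes each hidden unit being trained in closed form by recursive least squares (RLS), with a local covariance matrix maintained at each node and the solution updated recursively as each example arrives. The following is a minimal sketch of that style of per-unit update, assuming a linear unit and the standard RLS recursion; the class name, interface, and the initialisation constant delta are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class RLSUnit:
    # One linear unit trained in closed form by recursive least squares.
    # Sketch only: the names and the init constant `delta` are assumptions.

    def __init__(self, dim, delta=1e3):
        self.w = np.zeros(dim)        # running least-squares weight vector
        self.P = delta * np.eye(dim)  # inverse of the local data covariance

    def update(self, x, y):
        # Fold a single example (x, y) into the closed-form solution.
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)         # RLS gain vector
        self.w += k * (y - self.w @ x)  # correct the prediction error
        self.P -= np.outer(k, Px)       # rank-one update of the inverse covariance

# Single-pass training: each example is presented exactly once.
unit = RLSUnit(dim=2)
for x, y in [(np.array([0.3, 1.2]), 1.0),
             (np.array([0.7, 0.1]), 0.0)]:
    unit.update(x, y)
```

Because each call folds one example into the running solution, no example needs to be revisited, matching the single-pass, sequential regime the abstract describes.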
Keywords :
learning (artificial intelligence); least squares approximations; neural nets; pattern recognition; dynamic neural network architecture; hidden unit; incremental training; local covariance matrix; neural network training; recursive least squares algorithm; sequential partitioning; systolic arrays; Closed-form solution; Covariance matrix; Current measurement; Interpolation; Least squares approximation; Least squares methods; Multilayer perceptrons; Neural networks; Position measurement; Resonance light scattering;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
IEEE International Conference on Neural Networks, 1993
Conference_Location :
San Francisco, CA
Print_ISBN :
0-7803-0999-5
Type :
conf
DOI :
10.1109/ICNN.1993.298561
Filename :
298561