Title :
Instant learning for supervised learning neural networks: a rank-expansion algorithm
Author :
Chen, C. L. Philip ; Luo, Jiyang
Author_Institution :
Dept. of Comput. Sci. & Eng., Wright State Univ., Dayton, OH, USA
Date :
27 Jun-2 Jul 1994
Abstract :
A one-hidden-layer neural network architecture is presented, together with an instant learning algorithm that determines the weights of a supervised learning neural network in one shot. For an n-dimensional, N-pattern training set, at most N-r hidden nodes are required to learn all the patterns within a given precision (where r is the rank, usually the dimension, of the input patterns). Using the inverse of the activation function, the algorithm transfers the output targets back to the hidden layer, adds bias nodes to the input, and expands the rank of the input dimension. The proposed architecture and algorithm obtain either an exact solution or the minimum least-squares error of the inverse activation of the output. Learning error arises only when applying the inverse of the activation function, and it can usually be controlled by the given precision. Several examples show very promising results.
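The abstract only sketches the method, but its core idea (map the targets through the inverse activation, then solve a linear least-squares problem for the output weights, so no iterative training is needed) can be illustrated. The following is a minimal Python/NumPy sketch under stated assumptions: the logistic activation, the names instant_learn and predict, and the use of random hidden weights to obtain a full-rank hidden response are all placeholders, since the paper's exact deterministic rank-expansion construction is not given in this record.

    import numpy as np

    def logit(y, eps=1e-6):
        # Inverse of the logistic sigmoid; clipping keeps targets in (0, 1)
        # and is the only source of learning error in this scheme.
        y = np.clip(y, eps, 1.0 - eps)
        return np.log(y / (1.0 - y))

    def instant_learn(X, T, n_hidden, rng=None):
        # One-shot weight computation for a one-hidden-layer network.
        # Random hidden weights stand in for the paper's rank-expansion
        # step: they make the hidden response matrix generically full rank,
        # so the output weights follow from a single least-squares solve
        # against the inverse-activated targets.
        rng = np.random.default_rng(0) if rng is None else rng
        N, n = X.shape
        Xb = np.hstack([X, np.ones((N, 1))])           # append bias column
        W_h = rng.standard_normal((n + 1, n_hidden))   # assumed hidden weights
        H = 1.0 / (1.0 + np.exp(-Xb @ W_h))            # hidden activations
        Hb = np.hstack([H, np.ones((N, 1))])           # bias for output layer
        # Solve Hb @ W_o ~= logit(T) in the least-squares sense; the solve
        # is exact when Hb has full row rank (enough hidden nodes).
        W_o, *_ = np.linalg.lstsq(Hb, logit(T), rcond=None)
        return W_h, W_o

    def predict(X, W_h, W_o):
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])
        H = 1.0 / (1.0 + np.exp(-Xb @ W_h))
        Hb = np.hstack([H, np.ones((H.shape[0], 1))])
        return 1.0 / (1.0 + np.exp(-(Hb @ W_o)))

    # Example: XOR, a 2-dimensional, 4-pattern training set.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    W_h, W_o = instant_learn(X, T, n_hidden=3)
    print(np.round(predict(X, W_h, W_o), 3))

With three hidden nodes plus a bias, the 4x4 hidden response matrix is generically nonsingular, so the least-squares solve is exact and the only residual error comes from the clipping inside logit, consistent with the abstract's claim that learning error occurs only in the inverse activation.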
Keywords :
learning (artificial intelligence); least squares approximations; neural nets; activation function; exact solution; instant learning algorithm; minimum least-square error; one-hidden layer neural network architecture; rank-expansion algorithm; supervised learning neural networks; Algorithm design and analysis; Computer architecture; Computer networks; Computer science; Least squares methods; Multi-layer neural network; Neural networks; Supervised learning;
Conference_Titel :
1994 IEEE International Conference on Neural Networks (IEEE World Congress on Computational Intelligence)
Conference_Location :
Orlando, FL
Print_ISBN :
0-7803-1901-X
DOI :
10.1109/ICNN.1994.374277