Title :
Linear feature extraction in networks with lateral connections
Author :
Obradovic, D. ; Deco, G.
Author_Institution :
Corp. Res. & Dev., Siemens AG, Munich, Germany
Date :
27 Jun-2 Jul 1994
Abstract :
Presents a novel unsupervised learning paradigm for feature extraction in linear networks with lateral connections, under the constraint that no information distortion occurs in the input-output map. The latter is guaranteed by restricting the Jacobian determinant of the linear input-output transformation to remain equal to one during the learning process. Under the assumption that the input signals are Gaussian, the presented learning rule progressively minimizes the redundancy at the output layer until a factorial output representation is obtained. The redundancy is characterized by a suitably chosen entropy function whose minimum corresponds to decorrelation of the network outputs. The learning paradigm is based on Lyapunov arguments and is derived for networks with symmetric and anti-symmetric lateral connections. Examples that validate the introduced learning paradigm are presented.
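As an illustration of the decorrelation objective described above (a sketch only, not the paper's Lyapunov-derived rule, and without its explicit unit-Jacobian constraint), the snippet below applies a classic anti-Hebbian update to symmetric lateral connections on Gaussian inputs and tracks the Gaussian redundancy measure (sum of marginal entropies minus joint entropy), which is zero exactly when the output covariance is diagonal. The mixing matrix, learning rate, and the gaussian_redundancy helper are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def gaussian_redundancy(C):
    """Sum of marginal entropies minus joint entropy of a zero-mean Gaussian
    with covariance C: 0.5 * (log prod(diag(C)) - log det(C)).
    Non-negative, and zero exactly when C is diagonal (factorial code)."""
    return 0.5 * (np.sum(np.log(np.diag(C))) - np.linalg.slogdet(C)[1])

# Correlated zero-mean Gaussian inputs built from a fixed mixing matrix
# (illustrative choice, kept well conditioned).
n = 4
mix = np.array([[1.0, 0.5, 0.2, 0.0],
                [0.0, 1.0, 0.5, 0.2],
                [0.0, 0.0, 1.0, 0.5],
                [0.0, 0.0, 0.0, 1.0]])
X = rng.normal(size=(20000, n)) @ mix.T

# Linear layer with symmetric lateral connections L (zero diagonal).
# The steady state of y = x + L y is y = (I - L)^(-1) x.
L = np.zeros((n, n))
eta = 0.1

for epoch in range(300):
    Y = np.linalg.solve(np.eye(n) - L, X.T).T      # network outputs
    C = np.cov(Y, rowvar=False)                    # output covariance
    L -= eta * (C - np.diag(np.diag(C)))           # anti-Hebbian update on off-diagonal terms

C_in = np.cov(X, rowvar=False)
C_out = np.cov(np.linalg.solve(np.eye(n) - L, X.T).T, rowvar=False)
print(f"redundancy before learning: {gaussian_redundancy(C_in):.4f}")
print(f"redundancy after learning:  {gaussian_redundancy(C_out):.4f}")  # should approach 0

The anti-Hebbian step drives positively correlated outputs to inhibit each other, so the off-diagonal covariances shrink toward zero; for Gaussian signals, a decorrelated output is also a factorial (minimum-redundancy) representation, which is the sense in which the paper's entropy criterion is minimized.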
Keywords :
Lyapunov methods; entropy; feature extraction; minimisation; neural net architecture; neural nets; redundancy; unsupervised learning; Gaussian input signals; Jacobian; Lyapunov arguments; antisymmetric lateral connections; entropy function; factorial output representation; information distortion; input-output map; linear feature extraction; linear input-output transformation; linear neural networks; network output decorrelation; output layer redundancy minimization; symmetric lateral connections; unsupervised learning paradigm; Covariance matrix; Decorrelation; Entropy; Feature extraction; Intelligent networks; Jacobian matrices; Mutual information; Random variables; Unsupervised learning; Vectors;
Conference_Title :
1994 IEEE International Conference on Neural Networks (IEEE World Congress on Computational Intelligence)
Conference_Location :
Orlando, FL
Print_ISBN :
0-7803-1901-X
DOI :
10.1109/ICNN.1994.374259