Title :
Comments on "Principal component extraction using recursive least squares learning"
Author_Institution :
Dept. of Electr. & Electron. Eng., Melbourne Univ., Parkville, Vic., Australia
Date :
7/1/1996 12:00:00 AM
Abstract :
In the above paper (Bannour and Azimi-Sadjadi, 1995), we point out and correct flaws in the proofs of the orthonormality property of the optimal weight vectors of a two-layer linear auto-associative network used for sequentially extracting the principal components of a stationary vector stochastic process.
Keywords :
covariance matrices; learning (artificial intelligence); multilayer perceptrons; optimal weight vectors; orthonormal property; principal component extraction; recursive least squares learning; stationary vector stochastic process; two-layer linear auto-associative network; Covariance matrix; Eigenvalues and eigenfunctions; Least squares methods; Multilayer perceptrons; Neural networks; Neurons; Resonance light scattering; Stochastic processes; Vectors;
Journal_Title :
Neural Networks, IEEE Transactions on