DocumentCode :
294934
Title :
A convergence analysis for neural networks with constant learning rates and non-stationary inputs
Author :
Liu, R. ; Dong, G. ; Ling, X.
Author_Institution :
Dept. of Electr. Eng., Notre Dame Univ., IN, USA
Volume :
2
fYear :
1995
fDate :
13-15 Dec 1995
Firstpage :
1278
Abstract :
A novel deterministic approach to the convergence analysis of (stochastic) temporal neural networks is presented. The link between the deterministic analysis and the stochastic setting is a new concept of time-average invariance (TAI), a property of deterministic signals that also applies to stochastic signals. With this concept, the conventional ODE method can be extended to the case of a constant learning rate. Under weaker conditions that do not require mutual independence, it is shown that a temporal neural network is ε-convergent to x0 if its associated (autonomous) equations are asymptotically stable at x0. This result is then extended to the case of perturbed TAI signals. A temporal neural network for blind signal separation is used as an example.
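The setting analyzed in the abstract can be illustrated with a minimal sketch (an assumption for illustration, not the paper's construction): a stochastic update with a constant learning rate μ whose associated autonomous ODE is asymptotically stable at x0, so the iterates settle into a small neighborhood of x0 rather than converging exactly. The update rule, signal model, and all names below are illustrative choices.

```python
import numpy as np

# Illustrative sketch (assumed, not from the paper): constant-learning-rate
# stochastic approximation w_{k+1} = w_k + mu * (x_k - w_k), where x_k is a
# correlated, non-stationary input whose long-run time average is x0.
# The associated autonomous ODE is dw/dt = x0 - w, asymptotically stable at x0,
# so the iterates are expected to stay within an O(mu)-neighborhood of x0
# (epsilon-convergence in spirit).

rng = np.random.default_rng(0)

mu = 0.05        # constant learning rate (not decreasing over time)
x0 = 1.5         # time average of the input signal
n_steps = 2000

w = 0.0
history = []
for k in range(n_steps):
    # Correlated, non-i.i.d. input whose time average is still x0
    # (a crude stand-in for a time-average invariant signal).
    x_k = x0 + 0.5 * np.sin(0.5 * k) + 0.3 * rng.standard_normal()
    w = w + mu * (x_k - w)   # constant-step stochastic update
    history.append(w)

# With a constant step size the iterates do not converge exactly; they
# fluctuate in a small neighborhood of x0.
tail = np.array(history[-500:])
print(f"mean of last 500 iterates: {tail.mean():.3f} (target x0 = {x0})")
print(f"max deviation from x0 over last 500 iterates: {np.abs(tail - x0).max():.3f}")
```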
Keywords :
convergence; neural nets; signal processing; unsupervised learning; ε-convergence; blind signal separation; constant learning rates; convergence analysis; deterministic approach; deterministic signals; nonstationary inputs; stochastic signals; stochastic temporal neural networks; time-average invariance; Blind source separation; Convergence; Counting circuits; Differential equations; Helium; Intelligent networks; Neural networks; Random variables; Signal processing; Stochastic processes;
fLanguage :
English
Publisher :
ieee
Conference_Title :
Proceedings of the 34th IEEE Conference on Decision and Control, 1995
Conference_Location :
New Orleans, LA
ISSN :
0191-2216
Print_ISBN :
0-7803-2685-7
Type :
conf
DOI :
10.1109/CDC.1995.480273
Filename :
480273