DocumentCode :
352975
Title :
Estimation of the training efficiency of recurrent neural networks
Author :
Sun, Pu ; Arko, Kenneth M.
Author_Institution :
Res. Lab., Ford Motor Co., Dearborn, MI, USA
Volume :
4
fYear :
2000
fDate :
2000
Firstpage :
583
Abstract :
In our studies of the capabilities of neural networks, we have relied on time-lagged recurrent neural networks (TLRNN) to learn to emulate the behavior of complex dynamic systems. In this study, we take data from the physical system and train a suitable TLRNN to convergence. We then use that trained neural network to generate a set of noise-free data over the same input manifold. A variety of disturbances are introduced into the generated data so that, for a given disturbance, a minimum RMS error may be computed from the deviations of the perturbed outputs from the true outputs. These perturbed outputs are then used as targets to train an identically structured TLRNN to determine how close to the global minimum the training proceeds. The results indicate that, depending on the type of noise introduced, the global Kalman filter method, as well as a properly formulated gradient descent method, produces TLRNNs whose RMS errors deviate from the global minimum by less than 3% to about 10%.
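A minimal sketch of the error-floor computation the abstract describes, assuming additive Gaussian noise as the disturbance; the signal shape, noise scale, and variable names here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the trained TLRNN's noise-free outputs
# over the input manifold.
true_outputs = np.sin(np.linspace(0.0, 10.0, 1000))

# Introduce a disturbance (here, additive Gaussian noise) to form
# the perturbed training targets.
perturbed_targets = true_outputs + rng.normal(scale=0.05, size=true_outputs.shape)

# The minimum RMS error achievable when training against the perturbed
# targets is the RMS deviation of those targets from the true outputs.
min_rms_error = np.sqrt(np.mean((perturbed_targets - true_outputs) ** 2))
print(f"Minimum achievable RMS error: {min_rms_error:.4f}")
```

A retrained network's RMS error is then compared against this floor; the 3% to 10% figures reported above describe how far the trained networks' errors deviated from that global minimum.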
Keywords :
Gaussian distribution; Kalman filters; convergence; delays; gradient methods; learning (artificial intelligence); recurrent neural nets; Kalman filter; complex dynamic systems; convergence; gradient descent method; learning algorithm; time-lagged recurrent neural networks; Acceleration; Emulation; Engine cylinders; Feedforward systems; Laboratories; Neural networks; Noise generators; Recurrent neural networks; Sun; Vehicles;
fLanguage :
English
Publisher :
ieee
Conference_Title :
Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000)
Conference_Location :
Como, Italy
ISSN :
1098-7576
Print_ISBN :
0-7695-0619-4
Type :
conf
DOI :
10.1109/IJCNN.2000.860834
Filename :
860834