DocumentCode :
1064633
Title :
Recurrent neural network training with feedforward complexity
Author :
Olurotimi, Oluseyi
Author_Institution :
Dept. of Electr. & Comput. Eng., George Mason Univ., Fairfax, VA, USA
Volume :
5
Issue :
2
fYear :
1994
fDate :
3/1/1994 12:00:00 AM
Firstpage :
185
Lastpage :
197
Abstract :
This paper presents a training method that is of no more than feedforward complexity for fully recurrent networks. The method is not approximate, but rather depends on an exact transformation that reveals an embedded feedforward structure in every recurrent network. It turns out that given any unambiguous training data set, such as samples of the state variables and their derivatives, we need only train this embedded feedforward structure. The necessary recurrent network parameters are then obtained by an inverse transformation that consists only of linear operators. As an example of modeling a representative nonlinear dynamical system, the method is applied to learn Bessel's differential equation, thereby generating Bessel functions within, as well as outside, the training set.
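The abstract's claim that the recurrent parameters follow from a feedforward fit plus linear operators can be illustrated with a toy sketch. This is a hypothetical construction, not the paper's actual formulation: it assumes dynamics of the form dx/dt = -x + W·tanh(x), which is one common fully recurrent model, and uses an ordinary least-squares solve as the linear recovery step.

```python
import numpy as np

# Hypothetical illustration (not the paper's exact method): a fully
# recurrent network dx/dt = -x + W @ tanh(x) embeds a feedforward map
# from state x to derivative dx/dt. Given unambiguous samples of the
# states and their derivatives, recovering W reduces to linear algebra.

rng = np.random.default_rng(0)
n = 4                                  # number of state variables
W_true = rng.normal(size=(n, n))       # weights we pretend not to know

# "Training data": sampled states and their exact derivatives.
X = rng.normal(size=(200, n))          # one state vector per row
Xdot = -X + np.tanh(X) @ W_true.T      # derivatives under the dynamics

# Feedforward view: Xdot + X = tanh(X) @ W.T, so the recurrent weights
# are obtained by a purely linear least-squares solve -- the "inverse
# transformation that consists only of linear operators".
W_fit, *_ = np.linalg.lstsq(np.tanh(X), Xdot + X, rcond=None)
W_fit = W_fit.T

print(np.allclose(W_fit, W_true, atol=1e-8))
```

With exact, well-conditioned samples the recovered weights match the generating ones; in the paper this role is played by training the embedded feedforward structure on the state/derivative data.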
Keywords :
Bessel functions; differential equations; learning (artificial intelligence); recurrent neural nets; Bessel differential equation; Bessel functions; derivatives; embedded feedforward structure; feedforward complexity; inverse transformation; linear operators; modeling; nonlinear dynamical system; recurrent neural networks; state variables; Calculus; Differential equations; Feedforward neural networks; Neural networks; Neurofeedback; Nonlinear dynamical systems; Recurrent neural networks; Robot control; Steady-state; Training data;
fLanguage :
English
Journal_Title :
IEEE Transactions on Neural Networks
Publisher :
ieee
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/72.279184
Filename :
279184