Title :
Recurrent neural network training with feedforward complexity
Author :
Olurotimi, Oluseyi
Author_Institution :
Dept. of Electr. & Comput. Eng., George Mason Univ., Fairfax, VA, USA
Date :
3/1/1994
Abstract :
This paper presents a training method for fully recurrent networks whose computational complexity is no greater than that of feedforward training. The method is not approximate; rather, it depends on an exact transformation that reveals an embedded feedforward structure in every recurrent network. Given any unambiguous training data set, such as samples of the state variables and their derivatives, only this embedded feedforward structure needs to be trained. The necessary recurrent network parameters are then obtained by an inverse transformation that consists only of linear operators. As an example of modeling a representative nonlinear dynamical system, the method is applied to learn Bessel's differential equation, thereby generating Bessel functions both within and outside the training set.
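To illustrate the abstract's central idea, the sketch below assumes an additive continuous-time recurrent model dx/dt = -x + W*tanh(x) + b; this model form, the tanh nonlinearity, and the function name fit_recurrent_weights are assumptions for illustration, not the paper's exact transformation. Under that assumption, samples of states and their derivatives make the relation xdot + x = W*tanh(x) + b linear in the parameters, so the recurrent weights fall out of a single feedforward-style least-squares fit, with no backpropagation through time.

```python
# Minimal sketch (not the paper's exact algorithm): assumes the additive
# continuous-time recurrent model  dx/dt = -x + W*tanh(x) + b.
# Given sampled states x(t_k) and derivatives xdot(t_k), the relation
#   xdot + x = W*tanh(x) + b
# is linear in (W, b), so training reduces to a linear least-squares fit.
import numpy as np

def fit_recurrent_weights(X, Xdot):
    """X, Xdot: (T, n) arrays of sampled states and their derivatives."""
    T, n = X.shape
    # Feedforward features: hidden activations plus a bias column.
    Phi = np.hstack([np.tanh(X), np.ones((T, 1))])   # (T, n+1)
    # Regression target implied by the assumed model.
    Y = Xdot + X                                     # (T, n)
    # A linear solve recovers the recurrent parameters directly.
    coef, *_ = np.linalg.lstsq(Phi, Y, rcond=None)   # (n+1, n)
    W, b = coef[:n].T, coef[n]
    return W, b

# Toy usage: generate noise-free samples from a known system,
# then recover its weights exactly.
rng = np.random.default_rng(0)
n = 3
W_true = rng.standard_normal((n, n)) * 0.5
b_true = rng.standard_normal(n) * 0.1
X = rng.standard_normal((200, n))
Xdot = -X + np.tanh(X) @ W_true.T + b_true
W_hat, b_hat = fit_recurrent_weights(X, Xdot)
print(np.allclose(W_hat, W_true, atol=1e-6))  # True on noise-free data
```

With noise-free derivative samples the recovery is exact, which mirrors the abstract's claim that the method is not approximate; with noisy samples the same linear solve gives the least-squares estimate.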
Keywords :
Bessel functions; differential equations; learning (artificial intelligence); recurrent neural nets; Bessel differential equation; Bessel functions; derivatives; embedded feedforward structure; feedforward complexity; inverse transformation; linear operators; modeling; nonlinear dynamical system; recurrent neural networks; state variables; Calculus; Differential equations; Feedforward neural networks; Neural networks; Neurofeedback; Nonlinear dynamical systems; Recurrent neural networks; Robot control; Steady-state; Training data;
Journal_Title :
IEEE Transactions on Neural Networks