Title :
Convergence of neural network weights for stochastic systems
Author_Institution :
Dept. of Mathematics, Univ. of Kansas, Lawrence, KS, USA
Abstract :
This paper considers the convergence of neural network weights in the identification of a deterministic system dx = φ(x, u) dt with stochastic observation y = x + ξ. The backpropagation learning algorithm is based on the error between the output of the neural network and the desired output (i.e., the observation in our system); since the observation is stochastic, the steepest descent method cannot be used to estimate the weights directly. We therefore apply a stochastic approximation method to estimate the weights and prove its convergence in the presence of noise.
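A minimal sketch of the idea in Python/NumPy: the weights are updated against the noisy observations with decreasing Robbins-Monro gains rather than by fixed-step steepest descent, so the noise is averaged out over the iterations. The network architecture, the drift function used to generate data, the gain schedule a_k = a0 / k^0.7, and all parameter values below are illustrative assumptions, not the paper's construction.

    # Hedged sketch: stochastic-approximation training of a small network on
    # noisy targets. The gains a_k satisfy the Robbins-Monro conditions
    # (sum a_k = infinity, sum a_k^2 < infinity), which is the standard
    # condition under which such weight iterates converge despite the noise.
    import numpy as np

    rng = np.random.default_rng(0)

    # One-hidden-layer network (illustrative architecture, input = (x, u)).
    W1 = rng.normal(scale=0.1, size=(8, 2))   # hidden-layer weights
    W2 = rng.normal(scale=0.1, size=(1, 8))   # output-layer weights

    def net(z, W1, W2):
        h = np.tanh(W1 @ z)                   # hidden activations
        return (W2 @ h).item(), h

    def true_phi(x, u):
        # Assumed unknown dynamics phi(x, u) that the network should identify.
        return -x + np.sin(u)

    a0, sigma = 0.5, 0.1
    for k in range(1, 20001):
        x = rng.uniform(-1.0, 1.0)
        u = rng.uniform(-1.0, 1.0)
        y = true_phi(x, u) + sigma * rng.normal()   # noisy observation

        z = np.array([x, u])
        out, h = net(z, W1, W2)
        err = out - y                               # error w.r.t. noisy target

        # Stochastic-approximation update with decreasing gain a_k.
        a_k = a0 / k ** 0.7
        grad_W2 = err * h[None, :]
        grad_W1 = err * (W2.T * (1.0 - h[:, None] ** 2)) @ z[None, :]
        W2 -= a_k * grad_W2
        W1 -= a_k * grad_W1

With a fixed step size the noisy gradient would keep the weights fluctuating around the minimum; the decreasing gains are what allow convergence, which is the property the paper establishes for its setting.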
Keywords :
approximation theory; backpropagation; identification; neural nets; stochastic systems; backpropagation learning; convergence; gradient method; neural network weights; stochastic approximation; stochastic observation; Approximation methods; Convergence; Gradient methods; Markov processes; Mathematics; Multi-layer neural network; Neural networks; Noise cancellation; Stochastic resonance; Stochastic systems
Conference_Titel :
Proceedings of the 33rd IEEE Conference on Decision and Control, 1994
Conference_Location :
Lake Buena Vista, FL
Print_ISBN :
0-7803-1968-0
DOI :
10.1109/CDC.1994.411512