DocumentCode :
1088797
Title :
Learning algorithms for feedforward networks based on finite samples
Author :
Rao, Nageswara S.V. ; Protopopescu, Vladimir ; Mann, Reinhold C. ; Oblow, E.M. ; Iyengar, S. Sitharama
Author_Institution :
Center for Eng. Syst. Adv. Res., Oak Ridge Nat. Lab., TN, USA
Volume :
7
Issue :
4
fYear :
1996
fDate :
7/1/1996
Firstpage :
926
Lastpage :
940
Abstract :
We present two classes of convergent algorithms for learning continuous functions and regressions that are approximated by feedforward networks. The first class of algorithms, applicable to networks with unknown weights located only in the output layer, is obtained by utilizing the potential function methods of Aizerman et al. (1970). The second class, applicable to general feedforward networks, is obtained by utilizing the classical Robbins-Monro-style stochastic approximation methods (1951). Conditions relating the sample sizes to the error bounds are derived for both classes of algorithms using martingale-type inequalities. For concreteness, the discussion is presented in terms of neural networks, but the results are applicable to general feedforward networks, in particular to wavelet networks. The algorithms can be directly adapted to concept learning problems.
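The second class of algorithms described in the abstract trains all weights of a feedforward network with Robbins-Monro-style stochastic approximation. The sketch below is not the authors' implementation; it is a minimal illustration, under assumed settings (a one-hidden-layer sigmoid network, an invented noisy target function, and step sizes a_n = a0/n chosen to satisfy the standard Robbins-Monro conditions sum a_n = inf, sum a_n^2 < inf).

```python
# Minimal sketch of a Robbins-Monro-style stochastic-approximation update
# for a one-hidden-layer feedforward network (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def sample():
    # Illustrative continuous target observed with additive noise.
    x = rng.uniform(-1.0, 1.0, size=2)
    y = np.sin(np.pi * x[0]) + 0.5 * x[1] + 0.1 * rng.standard_normal()
    return x, y

# All weights are trainable (hidden layer and output layer).
W = 0.1 * rng.standard_normal((5, 2))   # hidden-layer weights
b = np.zeros(5)                         # hidden-layer biases
v = 0.1 * rng.standard_normal(5)        # output-layer weights

def forward(x):
    h = 1.0 / (1.0 + np.exp(-(W @ x + b)))   # sigmoid hidden units
    return h, v @ h

a0 = 0.5
for n in range(1, 10001):
    x, y = sample()
    h, y_hat = forward(x)
    err = y_hat - y
    a_n = a0 / n                        # decreasing Robbins-Monro step size
    # Single-sample stochastic gradient of the squared error.
    grad_v = err * h
    grad_pre = err * v * h * (1.0 - h)  # backpropagate through the sigmoid
    v -= a_n * grad_v
    W -= a_n * np.outer(grad_pre, x)
    b -= a_n * grad_pre
```

The first class of algorithms in the abstract corresponds to the special case where only the output-layer weights v are unknown; in the sketch that would amount to freezing W and b and updating v alone.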
Keywords :
approximation theory; feedforward neural nets; learning (artificial intelligence); continuous functions; convergent algorithms; error bounds; feedforward networks; learning algorithms; martingale-type inequalities; neural networks; potential function methods; regressions; stochastic approximation methods; wavelet networks; Approximation methods; Backpropagation algorithms; Computer networks; Convergence; Feedforward neural networks; Machine learning; Machine learning algorithms; Neural networks; Stochastic processes;
fLanguage :
English
Journal_Title :
Neural Networks, IEEE Transactions on
Publisher :
IEEE
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/72.508936
Filename :
508936