DocumentCode :
767984
Title :
Similarities of error regularization, sigmoid gain scaling, target smoothing, and training with jitter
Author :
Reed, Russell ; Marks, Robert J. ; Oh, Seho
Author_Institution :
Dept. of Electr. Eng., Univ. of Washington, Seattle, WA, USA
Volume :
6
Issue :
3
fYear :
1995
fDate :
5/1/1995
Firstpage :
529
Lastpage :
538
Abstract :
The generalization performance of feedforward layered perceptrons can, in many cases, be improved by smoothing the target via convolution, by regularizing the training error with a smoothing constraint, by decreasing the gain (i.e., slope) of the sigmoid nonlinearities, or by adding noise (i.e., jitter) to the input training data. In certain important cases, these procedures yield highly similar results, although at different computational costs. Training with jitter, for example, requires significantly more computation than sigmoid gain scaling.
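To make the comparison in the abstract concrete, the following is a minimal NumPy sketch, not taken from the paper, that contrasts two of the four procedures on a toy 1-D regression problem: training with input jitter versus training with a reduced sigmoid gain. All names and hyperparameters here (train, forward, jitter_std, gain, the network size) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z, gain=1.0):
    # Sigmoid with adjustable gain (slope); gain < 1 flattens the
    # nonlinearity, one of the smoothing procedures the paper relates.
    return 1.0 / (1.0 + np.exp(-gain * z))

def forward(X, W1, b1, W2, b2, gain=1.0):
    # One-hidden-layer perceptron with a linear output unit.
    h = sigmoid(X @ W1 + b1, gain)
    return h @ W2 + b2, h

def train(X, y, hidden=8, epochs=2000, lr=0.1, jitter_std=0.0, gain=1.0):
    # Plain gradient descent on squared error. Setting jitter_std > 0
    # implements training with jitter: fresh input noise every epoch.
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        Xn = X + rng.normal(0, jitter_std, X.shape) if jitter_std > 0 else X
        out, h = forward(Xn, W1, b1, W2, b2, gain)
        err = out - y                            # (n, 1) residuals
        dW2 = h.T @ err / n; db2 = err.mean(0)
        dh = (err @ W2.T) * gain * h * (1 - h)   # sigmoid' = gain*s*(1-s)
        dW1 = Xn.T @ dh / n; db1 = dh.mean(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2

# Toy regression: noisy samples of a steep, step-like target.
X = np.linspace(-1, 1, 40).reshape(-1, 1)
y = np.tanh(6 * X) + rng.normal(0, 0.1, X.shape)

# Two of the paper's procedures, applied separately for comparison:
params_jitter = train(X, y, jitter_std=0.2)   # train with input jitter
params_gain   = train(X, y, gain=0.5)         # train with reduced gain

Xt = np.linspace(-1, 1, 200).reshape(-1, 1)
yj, _ = forward(Xt, *params_jitter)
yg, _ = forward(Xt, *params_gain, gain=0.5)
print("mean |difference| between the two fits:", np.abs(yj - yg).mean())

This script only illustrates the experimental setup; the paper's contribution is the analysis showing why, for small noise variance, such procedures produce similar smoothing effects, with jitter paying a much higher computational price per epoch than a simple gain change.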
Keywords :
convolution; feedforward neural nets; generalisation (artificial intelligence); jitter; learning (artificial intelligence); multilayer perceptrons; smoothing methods; error regularization; feedforward layered perceptrons; generalization performance; sigmoid gain scaling; sigmoid nonlinearities; smoothing constraint; target smoothing; training; training error regularization; Convolution; Costs; Jitter; Lagrangian functions; Noise cancellation; Performance gain; Probability density function; Sampling methods; Smoothing methods; Training data;
fLanguage :
English
Journal_Title :
IEEE Transactions on Neural Networks
Publisher :
IEEE
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/72.377960
Filename :
377960