Title :
Similarities of error regularization, sigmoid gain scaling, target smoothing, and training with jitter
Author :
Reed, Russell ; Marks, Robert J. ; Oh, Seho
Author_Institution :
Dept. of Electr. Eng., Univ. of Washington, Seattle, WA, USA
Date :
May 1995
Abstract :
The generalization performance of feedforward layered perceptrons can, in many cases, be improved by smoothing the target via convolution, by regularizing the training error with a smoothing constraint, by decreasing the gain (i.e., slope) of the sigmoid nonlinearities, or by adding noise (i.e., jitter) to the input training data. In certain important cases, these procedures yield highly similar results, although at different computational costs; training with jitter, for example, requires significantly more computation than sigmoid scaling.
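To illustrate the similarity claimed in the abstract, the following minimal numpy sketch (not from the paper) compares the response of a sigmoid unit averaged over Gaussian input jitter with a single reduced-gain sigmoid. The gain factor 1/sqrt(1 + pi*sigma^2/8) used below is a common probit-based approximation for Gaussian smoothing of a logistic function; it is an illustrative assumption here, not necessarily the scaling derived in the paper.

import numpy as np

def sigmoid(x, gain=1.0):
    # Logistic nonlinearity with adjustable gain (slope).
    return 1.0 / (1.0 + np.exp(-gain * x))

rng = np.random.default_rng(0)
x = np.linspace(-6.0, 6.0, 13)
sigma = 2.0  # jitter standard deviation (illustrative choice)

# Training with jitter: Monte Carlo estimate of E[sigmoid(x + n)],
# where n ~ N(0, sigma^2) is noise added to the input.
noise = rng.normal(0.0, sigma, size=(100_000, 1))
jittered = sigmoid(x + noise).mean(axis=0)

# Sigmoid gain scaling: reduce the slope by an approximate smoothing
# factor (assumed here; see the lead-in above).
gain = 1.0 / np.sqrt(1.0 + np.pi * sigma**2 / 8.0)
scaled = sigmoid(x, gain=gain)

for xi, j, s in zip(x, jittered, scaled):
    print(f"x = {xi:+5.1f}   jittered = {j:.3f}   low-gain = {s:.3f}")

The two columns agree closely across the input range, which is the point of the cost comparison in the abstract: the jittered column requires averaging over many noisy samples, while the gain-scaled column needs only a single forward pass.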
Keywords :
convolution; feedforward neural nets; generalisation (artificial intelligence); jitter; learning (artificial intelligence); multilayer perceptrons; smoothing methods; error regularization; feedforward layered perceptrons; generalization performance; sigmoid gain scaling; sigmoid nonlinearities; smoothing constraint; target smoothing; training; training error regularization; Costs; Lagrangian functions; Noise cancellation; Performance gain; Probability density function; Sampling methods; Training data
Journal_Title :
IEEE Transactions on Neural Networks