DocumentCode :
1277916
Title :
Training neural networks with additive noise in the desired signal
Author :
Wang, Chuan ; Principe, Jose C.
Author_Institution :
AT&T Bell Labs., Murray Hill, NJ, USA
Volume :
10
Issue :
6
fYear :
1999
fDate :
11/1/1999
Firstpage :
1511
Lastpage :
1517
Abstract :
A global optimization strategy is proposed for training adaptive systems such as neural networks and adaptive filters (finite or infinite impulse response). Instead of adding random noise to the weights, as proposed in the past, additive random noise is injected directly into the desired signal. Experimental results show that this procedure also greatly speeds up the backpropagation algorithm. The method is very easy to implement in practice, preserving the backpropagation algorithm and requiring only a single random number generator with a monotonically decreasing step size per output channel. Hence, it is an ideal strategy to speed up supervised learning and to avoid entrapment in local minima when the noise variance is appropriately scheduled.
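The following is a minimal sketch of the idea summarized in the abstract: standard backpropagation on a squared-error cost, but with Gaussian noise of annealed variance added to the desired signal at each epoch. The network shape, learning rate, and exponential decay schedule (sigma0, decay) are illustrative assumptions, not the paper's specific choices.

import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn d = sin(pi * x) with a 1-hidden-layer MLP.
X = rng.uniform(-1.0, 1.0, size=(200, 1))
d = np.sin(np.pi * X)                      # clean desired signal

W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.05
sigma0, decay = 0.5, 0.995                 # assumed annealing schedule

for epoch in range(500):
    # Monotonically decreasing noise level, injected into the desired
    # signal rather than into the weights.
    sigma = sigma0 * decay ** epoch
    d_noisy = d + rng.normal(0.0, sigma, d.shape)

    # Forward pass (tanh hidden layer, linear output).
    h = np.tanh(X @ W1 + b1)
    y = h @ W2 + b2

    # Backpropagation of the squared error against the *noisy* target.
    e = y - d_noisy
    gW2 = h.T @ e / len(X); gb2 = e.mean(0)
    dh = (e @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)

    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - d) ** 2))
print(f"final MSE against the clean target: {mse:.4f}")

Note that backpropagation itself is untouched; only the target fed to the error computation changes, which is what makes the scheme easy to retrofit onto an existing supervised training loop.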
Keywords :
FIR filters; IIR filters; adaptive filters; backpropagation; multilayer perceptrons; random noise; adaptive systems; additive random noise; global optimization strategy; noise variance; random generator; supervised learning; Adaptive filters; Adaptive systems; Additive noise; Backpropagation algorithms; Convergence; IIR filters; Intelligent networks; Neural networks; Simulated annealing; Supervised learning;
fLanguage :
English
Journal_Title :
IEEE Transactions on Neural Networks
Publisher :
ieee
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/72.809097
Filename :
809097