Title :
Training neural networks with additive noise in the desired signal
Author :
Wang, Chuan ; Principe, Jose C.
Author_Institution :
Lucent Technol., AT&T Bell Labs., Middletown, NJ, USA
Abstract :
A new global optimization strategy for training adaptive systems such as neural networks and adaptive filters (finite or infinite impulse response, FIR or IIR) is proposed. Instead of adding random noise to the weights, as proposed in the past, additive random noise is injected directly into the desired signal. Experimental results show that this procedure also greatly speeds up the backpropagation algorithm. The method is very easy to implement in practice: it preserves the backpropagation algorithm and requires only a single random generator with a monotonically decreasing step size per output channel. Hence, it is an ideal strategy for speeding up supervised learning and avoiding entrapment in local minima when the noise variance is appropriately scheduled.
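A minimal sketch of the idea described in the abstract, not the authors' code: additive Gaussian noise with a monotonically decreasing variance is injected into the desired signal (the training targets), while standard backpropagation is left unchanged. The network size, the toy regression task, and the annealing schedule sigma0 / (1 + epoch) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (assumed for illustration): learn y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
Y = np.sin(X)

# One-hidden-layer MLP with tanh units, trained by plain backprop on an MSE cost.
W1 = rng.normal(0.0, 0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.05          # learning rate (assumed)
sigma0 = 0.5       # initial noise standard deviation (assumed)

for epoch in range(200):
    # Monotonically decreasing noise level for the single output channel.
    sigma = sigma0 / (1.0 + epoch)

    # Inject additive white noise into the desired signal, not the weights.
    Y_noisy = Y + rng.normal(0.0, sigma, size=Y.shape)

    # Forward pass.
    H = np.tanh(X @ W1 + b1)
    Y_hat = H @ W2 + b2

    # Backward pass of the MSE cost against the *noisy* targets.
    err = Y_hat - Y_noisy
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = err @ W2.T * (1.0 - H**2)   # tanh derivative
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)

    # Gradient descent update; backpropagation itself is unmodified.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    if epoch % 50 == 0:
        clean_mse = float(np.mean((Y_hat - Y) ** 2))
        print(f"epoch {epoch:3d}  sigma {sigma:.3f}  clean MSE {clean_mse:.4f}")
```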
Keywords :
adaptive systems; backpropagation; filtering theory; neural nets; optimisation; random noise; FIR filters; IIR filters; adaptive filters; additive noise; additive random noise; backpropagation algorithm; global optimization strategy; local minima entrapment; monotonically decreasing step size; random generator; training neural networks; Additive noise; Character generation; Cost function; Intelligent networks; Least squares approximation; Minimization methods; Neural networks; Neurons; Noise figure; White noise;
Conference_Titel :
1998 IEEE International Joint Conference on Neural Networks Proceedings, IEEE World Congress on Computational Intelligence
Conference_Location :
Anchorage, AK, USA
Print_ISBN :
0-7803-4859-1
DOI :
10.1109/IJCNN.1998.685923