Title :
Comments on "Noise injection into inputs in back propagation learning"
Author :
Grandvalet, Yves ; Canu, Stéphane
Author_Institution :
Centre de Recherches de Royallieu, Univ. de Technol. de Compiegne, France
Date :
4/1/1995 12:00:00 AM
Abstract :
The generalization capacity of neural networks learning from examples is important. Several authors have shown experimentally that training a neural network with noise-injected inputs can improve its generalization abilities. In the original paper (ibid., vol. 22, no. 3, p. 436-40, 1992), Matsuoka explained this fact formally, claiming that using noise-injected inputs is equivalent to reducing the sensitivity of the network. However, the authors state that an error in Matsuoka's calculations led him to inadequate conclusions. This paper corrects these calculations and conclusions.
Keywords :
backpropagation; generalisation (artificial intelligence); learning by example; neural nets; backpropagation learning; example-based learning; generalization capacity; neural networks; noise injection; Bayesian methods; Computational efficiency; Computer networks; Intelligent networks; Jacobian matrices; Neural networks; Noise reduction; Risk management; Surface reconstruction; Transfer functions;
Journal_Title :
Systems, Man and Cybernetics, IEEE Transactions on