  • DocumentCode
    303201
  • Title
    Modelling weight- and input-noise in MLP learning
  • Author
    Edwards, Peter J.; Murray, Alan F.
  • Author_Institution
    Dept. of Electr. Eng., Edinburgh Univ., UK
  • Volume
    1
  • fYear
    1996
  • fDate
    3-6 Jun 1996
  • Firstpage
    78
  • Abstract
    This paper presents a study of weight- and input-noise in feedforward network training algorithms. In theory, for the optimal least-squares case, noise can be modelled by a single cost-function term. However, we believe that such ideal conditions are uncommon in practice. Both first- and second-derivative terms are shown to have the potential to de-sensitize the trained network's outputs to weight or input corruption. Simulation experiments illustrate these points by comparing the ideal case with a more realistic real-world example. The results show that although the second-derivative term can influence the network solution in the practical case, the first-derivative term is dominant.
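    Two illustrations of the abstract's argument follow. First, the standard Taylor-expansion sketch behind the "single cost-function term" claim (the notation here is assumed, not quoted from the paper): for network output y(w), target t, and i.i.d. zero-mean weight noise of variance \sigma^2, the expected squared error expands as

        \langle E \rangle \approx \tfrac{1}{2}(t-y)^2
            + \tfrac{\sigma^2}{2}\sum_i \left(\frac{\partial y}{\partial w_i}\right)^2
            - \tfrac{\sigma^2}{2}\,(t-y)\sum_i \frac{\partial^2 y}{\partial w_i^2}

    The first-derivative term is non-negative and always penalises output sensitivity to weight perturbations; the second-derivative term is weighted by the residual (t - y), which averages to zero at the optimal least-squares solution. That is why a single Tikhonov-style regularising term suffices in the ideal case, and why the second-derivative term re-enters in practice, where the residual does not vanish.

    Second, a minimal runnable sketch of training with injected weight noise, in Python/NumPy. The architecture, data, noise level, and learning rate are illustrative assumptions, not the paper's experimental setup:

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy regression data (illustrative assumption, not from the paper).
        X = rng.uniform(-1.0, 1.0, size=(200, 2))
        t = np.sin(np.pi * X[:, :1]) * X[:, 1:]          # target, shape (200, 1)

        # One-hidden-layer MLP: 2 -> 8 -> 1 with tanh hidden units.
        W1 = rng.normal(0.0, 0.5, size=(2, 8))
        W2 = rng.normal(0.0, 0.5, size=(8, 1))

        sigma = 0.05   # weight-noise standard deviation (free parameter)
        lr = 0.05      # learning rate (free parameter)

        for epoch in range(2000):
            # Inject fresh zero-mean Gaussian noise into the weights on
            # every forward pass ("training with weight noise").
            N1 = rng.normal(0.0, sigma, size=W1.shape)
            N2 = rng.normal(0.0, sigma, size=W2.shape)

            h = np.tanh(X @ (W1 + N1))      # hidden activations
            y = h @ (W2 + N2)               # network output
            err = y - t

            # Backpropagate through the *noisy* weights; update the clean ones.
            gW2 = h.T @ err / len(X)
            gh = err @ (W2 + N2).T * (1.0 - h ** 2)
            gW1 = X.T @ gh / len(X)

            W2 -= lr * gW2
            W1 -= lr * gW1

        # Robustness check: clean output vs. output under weight corruption.
        y_clean = np.tanh(X @ W1) @ W2
        y_noisy = (np.tanh(X @ (W1 + rng.normal(0.0, sigma, W1.shape)))
                   @ (W2 + rng.normal(0.0, sigma, W2.shape)))
        print("clean MSE:", float(np.mean((y_clean - t) ** 2)))
        print("noisy MSE:", float(np.mean((y_noisy - t) ** 2)))

    Training with input noise is the analogous loop with perturbed inputs, e.g. X + rng.normal(0.0, sigma, X.shape), in place of the noisy weights.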
  • Keywords
    feedforward neural nets; learning (artificial intelligence); least squares approximations; modelling; multilayer perceptrons; noise; Tikhonov regularisation; derivatives; feedforward neural network; input-noise; learning; modelling; multilayer perceptron; weight noise; Additive noise; Cost function; Fault tolerance; Feedforward systems; Hardware; Intelligent networks; Least squares approximation; Noise robustness; Phase noise; Taylor series
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    IEEE International Conference on Neural Networks, 1996
  • Conference_Location
    Washington, DC
  • Print_ISBN
    0-7803-3210-5
  • Type
    conf
  • DOI
    10.1109/ICNN.1996.548870
  • Filename
    548870