• DocumentCode
    2767751
  • Title
    A Note on Conjugate Natural Gradient Training of Multilayer Perceptrons
  • Author
    González, Ana; Dorronsoro, José R.

  • Author_Institution
    Univ. Autónoma de Madrid, Madrid
  • fYear
    2006
  • fDate
    0-0 0
  • Firstpage
    887
  • Lastpage
    891
  • Abstract
    Natural gradient has been shown to greatly accelerate on-line multilayer perceptron (MLP) training. It also improves standard batch gradient descent, since it provides a Gauss-Newton approximation to quasi-Newton mean square error minimization; in the batch setting, however, it should be slower than other superlinear minimization methods, such as the full quasi-Newton method or the less complex but equally effective conjugate gradient descent method. In this work we investigate how to use natural gradients in a conjugate gradient setting, showing numerically that, when applied to batch MLP learning, they can lead to faster convergence to better minima than standard Euclidean conjugate gradient descent achieves.
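    To make the idea concrete, here is a minimal numerical sketch (not the authors' code) of one plausible conjugate natural gradient scheme: the Euclidean gradient in a Polak-Ribière conjugate gradient loop is replaced by a damped natural gradient, with the Fisher matrix approximated by the empirical outer-product (Gauss-Newton) form built from per-sample gradients. The tiny MLP, the synthetic task, the damping constant lam, and all function names are illustrative assumptions.

```python
# Hypothetical sketch of conjugate natural gradient MLP training.
# Assumption: Fisher matrix approximated as F = (1/N) sum_i g_i g_i^T + lam*I,
# and a Polak-Ribiere update computed from natural gradients.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: N samples, D inputs, 1 output.
N, D, H = 200, 3, 5
X = rng.normal(size=(N, D))
y = np.sin(X @ rng.normal(size=D))[:, None]

# One-hidden-layer tanh MLP; parameters packed into a single vector.
sizes = [(H, D), (H, 1), (1, H), (1, 1)]  # W1, b1, W2, b2
n_par = sum(r * c for r, c in sizes)

def unpack(theta):
    parts, k = [], 0
    for r, c in sizes:
        parts.append(theta[k:k + r * c].reshape(r, c))
        k += r * c
    return parts

def per_sample_grads(theta):
    """Residuals e_i and per-sample gradients g_i of 0.5 * e_i^2."""
    W1, b1, W2, b2 = unpack(theta)
    A = np.tanh(X @ W1.T + b1.T)          # (N, H) hidden activations
    e = (A @ W2.T + b2.T) - y             # (N, 1) residuals
    dW2 = e * A                           # gradient w.r.t. W2 rows
    db2 = e
    dZ = (e @ W2) * (1.0 - A ** 2)        # backprop through tanh
    dW1 = dZ[:, :, None] * X[:, None, :]  # (N, H, D) outer products
    G = np.concatenate([dW1.reshape(N, -1), dZ, dW2, db2], axis=1)
    return e, G

def mse(theta):
    e, _ = per_sample_grads(theta)
    return 0.5 * np.mean(e ** 2)

theta = 0.1 * rng.normal(size=n_par)
d_prev, ng_prev = None, None
lam = 1e-3  # damping so F + lam*I is invertible

for it in range(100):
    e, G = per_sample_grads(theta)
    g = G.mean(axis=0)                       # Euclidean batch gradient
    F = (G.T @ G) / N + lam * np.eye(n_par)  # damped empirical Fisher
    ng = np.linalg.solve(F, g)               # natural gradient direction
    if d_prev is None:
        d = -ng
    else:
        # Polak-Ribiere coefficient, computed from natural gradients.
        beta = max(0.0, ng @ (ng - ng_prev) / (ng_prev @ ng_prev))
        d = -ng + beta * d_prev
    # Crude backtracking line search along the conjugate direction.
    step, f0 = 1.0, mse(theta)
    while mse(theta + step * d) > f0 and step > 1e-8:
        step *= 0.5
    theta = theta + step * d
    d_prev, ng_prev = d, ng

print("final MSE:", mse(theta))
```

    Replacing np.linalg.solve(F, g) with plain g recovers standard Euclidean conjugate gradient descent, which makes this sketch a convenient side-by-side comparison of the two direction choices.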
  • Keywords
    Newton method; conjugate gradient methods; learning (artificial intelligence); mean square error methods; minimisation; multilayer perceptrons; Gauss-Newton approximation; batch MLP learning; batch gradient descent; conjugate natural gradient training; function minimization; online MLP training; quasi-Newton mean square error minimization; Acceleration; Computer architecture; Convergence of numerical methods; Extraterrestrial measurements; Gaussian approximation; Mean square error methods; Minimization methods; Multilayer perceptrons; Newton method; Optimization methods
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2006 International Joint Conference on Neural Networks (IJCNN '06)
  • Conference_Location
    Vancouver, BC
  • Print_ISBN
    0-7803-9490-9
  • Type
    conf
  • DOI
    10.1109/IJCNN.2006.246779
  • Filename
    1716190