• DocumentCode
    423663
  • Title
    Parallel tangent methods with variable stepsize
  • Author
    Petalas, Y.G. ; Vrahatis, M.N.
  • Author_Institution
    Dept. of Math., Patras Univ., Greece
  • Volume
    2
  • fYear
    2004
  • fDate
    25-29 July 2004
  • Firstpage
    1063
  • Abstract
    The most widely used algorithm for training multilayer feedforward neural networks is backpropagation (BP), an iterative gradient descent algorithm. Since its appearance, various methods that modify the conventional BP have been proposed to improve its efficiency. One such algorithm, backpropagation with variable stepsize (BPVS), uses an adaptive learning rate. Parallel tangent methods are used in global optimization to modify and improve simple gradient descent by using, from time to time, the difference between the current point and the point two steps earlier as the search direction instead of the gradient. In this study, we investigate the combination of the BPVS method with the parallel tangent approach for neural network training. We present experimental results on well-known test problems to evaluate the efficiency of the method.
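    For illustration only, a minimal sketch of the parallel tangent (PARTAN) direction described in the abstract is given below. It is not the authors' BPVS algorithm: the toy quadratic objective, the step sizes, and the "every third step" schedule are assumptions made for the example.

    # Minimal sketch (illustrative assumptions, not the paper's BPVS method):
    # plain gradient descent, except that every few iterations the search
    # direction is the difference between the current point and the point
    # from two steps earlier, rather than the negative gradient.
    import numpy as np

    A = np.diag([1.0, 50.0])           # ill-conditioned quadratic f(x) = 0.5 x^T A x

    def grad(x):
        return A @ x                   # gradient of the quadratic

    def partan_descent(x0, lr=0.01, accel=0.5, partan_every=3, iters=1000):
        history = [np.asarray(x0, dtype=float)]
        x = history[0].copy()
        for t in range(1, iters + 1):
            if t % partan_every == 0 and len(history) >= 3:
                # Parallel tangent step: move along x_t - x_{t-2}.
                x = x + accel * (history[-1] - history[-3])
            else:
                # Ordinary gradient descent step.
                x = x - lr * grad(x)
            history.append(x.copy())
        return x

    print(partan_descent([5.0, 5.0]))  # moves toward the minimizer at the origin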
  • Keywords
    backpropagation; feedforward neural nets; gradient methods; optimisation; search problems; BP variable stepsize; adaptive learning; backpropagation; global optimization; iterative gradient descent algorithm; multilayer feedforward neural networks; neural network training; parallel tangent methods; search problems; Artificial intelligence; Artificial neural networks; Backpropagation algorithms; Electronic mail; Equations; Error correction; Feedforward neural networks; Iterative algorithms; Mathematics; Neural networks;
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (IJCNN 2004)
  • ISSN
    1098-7576
  • Print_ISBN
    0-7803-8359-1
  • Type
    conf
  • DOI
    10.1109/IJCNN.2004.1380081
  • Filename
    1380081