• DocumentCode
    423754
  • Title
    Training multilayer perceptrons parameter by parameter
  • Author
    Li, Yan-Lai; Wang, Kuan-Quan
  • Author_Institution
    Dept. of Comput. Sci. & Technol., Harbin Inst. of Technol., China
  • Volume
    6
  • fYear
    2004
  • fDate
    26-29 Aug. 2004
  • Firstpage
    3397
  • Abstract
    A new fast training algorithm for multilayer perceptrons (MLP) is presented. The algorithm, named the parameter by parameter optimization algorithm (PBPOA), is based on the idea of the layer by layer (LBL) algorithm. The input errors of the output layer and the hidden layer are taken into consideration, and four classes of solution equations for the network parameters are derived. The presented algorithm does not require computing the gradient of the error function at all: in each iteration step, each weight or threshold is optimized directly, one by one, with all other variables fixed (an illustrative sketch follows this record). The effectiveness of the algorithm is demonstrated on two benchmarks, where a faster training convergence rate is obtained than with the back-propagation (BP) algorithm with momentum (BPM) and the conventional LBL algorithm.
  • Keywords
    convergence; gradient methods; learning (artificial intelligence); multilayer perceptrons; optimisation; error function gradient; iteration step; multilayer perceptrons training; parameter by parameter optimization algorithm; training convergence rate; Computational complexity; Computer science; Convergence; Cost function; Equations; Large-scale systems; Least squares methods; Multilayer perceptrons; Nonhomogeneous media; Optimization methods
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    Proceedings of the 2004 International Conference on Machine Learning and Cybernetics
  • Print_ISBN
    0-7803-8403-2
  • Type
    conf
  • DOI
    10.1109/ICMLC.2004.1380373
  • Filename
    1380373
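  • Illustrative sketch
    The abstract names four classes of closed-form solution equations but does not reproduce them, so the Python sketch below only illustrates the general parameter-by-parameter idea: every weight and threshold of a small MLP is optimized in turn, with all other variables held fixed, using a derivative-free one-dimensional search (scipy.optimize.minimize_scalar) as a stand-in for the paper's analytic updates. The network shape, sigmoid activations, hyper-parameters, and the XOR toy problem are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of the parameter-by-parameter idea: optimize one weight or
# threshold at a time with every other variable fixed, using a derivative-free
# bounded 1-D search instead of the paper's (unreproduced) solution equations.
import numpy as np
from scipy.optimize import minimize_scalar

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, b1, W2, b2):
    # Two-layer MLP: sigmoid hidden layer followed by a sigmoid output layer.
    H = sigmoid(X @ W1 + b1)
    return sigmoid(H @ W2 + b2)

def sse(X, Y, params):
    # Sum-of-squared-errors cost over the whole training set.
    return float(np.sum((forward(X, *params) - Y) ** 2))

def train_pbp(X, Y, n_hidden=4, n_sweeps=30, seed=0):
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], Y.shape[1]
    params = [rng.standard_normal((n_in, n_hidden)),   # W1
              rng.standard_normal(n_hidden),           # b1 (thresholds)
              rng.standard_normal((n_hidden, n_out)),  # W2
              rng.standard_normal(n_out)]              # b2 (thresholds)
    for _ in range(n_sweeps):
        for P in params:
            flat = P.reshape(-1)  # view: writes update the network in place
            for i in range(flat.size):
                def err_of(v, i=i, flat=flat):
                    # Cost as a function of the single free parameter.
                    old, flat[i] = flat[i], v
                    e = sse(X, Y, params)
                    flat[i] = old
                    return e
                # Derivative-free 1-D minimization with all others fixed;
                # no gradient of the error function is ever computed.
                res = minimize_scalar(err_of,
                                      bounds=(flat[i] - 5.0, flat[i] + 5.0),
                                      method="bounded")
                flat[i] = res.x
    return params

if __name__ == "__main__":
    # XOR toy problem (an illustrative benchmark; the paper's two
    # benchmarks are not specified in the abstract).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    Y = np.array([[0], [1], [1], [0]], dtype=float)
    params = train_pbp(X, Y)
    print(np.round(forward(X, *params), 3))
```

    In the method described by the abstract, the numeric 1-D search above would be unnecessary: each of the four classes of parameters has its own closed-form solution equation, which is what the abstract credits for the fast convergence relative to BPM and the conventional LBL algorithm.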