• DocumentCode
    288333
  • Title
    An incremental learning algorithm that optimizes network size and sample size in one trial
  • Author
    Zhang, Byoung-Tak
  • Author_Institution
    German Nat. Res. Center for Comput. Sci., St. Augustin, Germany
  • Volume
    1
  • fYear
    1994
  • fDate
    27 Jun-2 Jul 1994
  • Firstpage
    215
  • Abstract
    A constructive learning algorithm is described that builds a feedforward neural network with an optimal number of hidden units to balance convergence and generalization. The method starts with a small training set and a small network, and expands the training set incrementally after training. If training does not converge, the network grows incrementally to increase its learning capacity. This process, called selective learning with flexible neural architectures (SELF), results in the construction of an optimal-size network that learns all the given data using only a minimal subset of them. The author shows that network size optimization combined with active example selection generalizes significantly better and converges faster than conventional methods.
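    The abstract describes SELF's outer loop: train on a small subset, grow the hidden layer when training fails to converge, and otherwise extend the training set with actively selected examples. Below is a minimal sketch of that loop, not the paper's exact procedure: the convergence test (every output within tol of its target), the growth step (one random hidden unit), the selection rule (pick the worst-predicted remaining example), and the XOR data are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(net, X):
    """One-hidden-layer sigmoid net; a ones column supplies the biases."""
    W1, W2 = net
    H = sigmoid(np.hstack([X, np.ones((len(X), 1))]) @ W1)
    return sigmoid(np.hstack([H, np.ones((len(H), 1))]) @ W2)

def train_until(net, X, y, epochs=5000, lr=1.0, tol=0.1):
    """Batch backprop; 'converged' = every output within tol of its target
    (an assumed stand-in for the paper's convergence criterion)."""
    W1, W2 = net
    Xb = np.hstack([X, np.ones((len(X), 1))])
    for _ in range(epochs):
        H = sigmoid(Xb @ W1)
        Hb = np.hstack([H, np.ones((len(H), 1))])
        out = sigmoid(Hb @ W2)
        if np.max(np.abs(out - y)) < tol:
            return True
        d_out = (out - y) * out * (1 - out)           # output-layer delta
        d_hid = (d_out @ W2.T)[:, :-1] * H * (1 - H)  # drop the bias column
        W2 -= lr * (Hb.T @ d_out)
        W1 -= lr * (Xb.T @ d_hid)
    return False

def grow(net, by=1):
    """Append hidden units with small random weights (one possible growth step)."""
    W1, W2 = net
    net[0] = np.hstack([W1, rng.normal(0, 0.5, (W1.shape[0], by))])
    net[1] = np.vstack([W2[:-1], rng.normal(0, 0.5, (by, W2.shape[1])), W2[-1:]])

# XOR stands in for "all the given data"
X_pool = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_pool = np.array([[0], [1], [1], [0]], dtype=float)

net = [rng.normal(0, 0.5, (3, 1)), rng.normal(0, 0.5, (2, 1))]  # 1 hidden unit
chosen = [0]                                     # start with one example
while net[0].shape[1] <= 8:                      # safety cap (assumption)
    if not train_until(net, X_pool[chosen], y_pool[chosen]):
        grow(net)                                # no convergence: add capacity
        continue
    left = [i for i in range(len(X_pool)) if i not in chosen]
    if not left:
        break                                    # every example is learned
    worst = left[int(np.argmax(np.abs(forward(net, X_pool[left]) - y_pool[left])))]
    chosen.append(worst)                         # add the worst-predicted example
print("hidden units:", net[0].shape[1], "examples used:", sorted(chosen))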
  • Keywords
    convergence; feedforward neural nets; generalisation (artificial intelligence); learning (artificial intelligence); active example selection; feedforward neural network; generalization; incremental learning algorithm; learning capacity; network size optimization; Buildings; Computer science; Convergence; Electronic mail; Feedforward neural networks; Intelligent networks; Multilayer perceptrons; Neural networks; Optimization methods; Radial basis function networks
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    1994 IEEE International Conference on Neural Networks (IEEE World Congress on Computational Intelligence)
  • Conference_Location
    Orlando, FL
  • Print_ISBN
    0-7803-1901-X
  • Type
    conf
  • DOI
    10.1109/ICNN.1994.374165
  • Filename
    374165