  • DocumentCode
    3416993
  • Title
    Supervised learning on large redundant training sets
  • Author
    Møller, Martin
  • Author_Institution
    Dept. of Comput. Sci., Aarhus Univ., Denmark
  • fYear
    1992
  • fDate
    31 Aug-2 Sep 1992
  • Firstpage
    79
  • Lastpage
    89
  • Abstract
    A novel algorithm combining the good properties of offline and online algorithms is introduced. The efficiency of supervised learning algorithms on small-scale problems does not necessarily scale up to large-scale problems. The redundancy of large training sets is reflected as redundant gradient vectors in the network, and accumulating these gradient vectors implies redundant computation. To avoid this redundant computation, a learning algorithm has to be able to update weights independently of the size of the training set. The stochastic learning algorithm proposed, the stochastic scaled conjugate gradient (SSCG) algorithm, has this property. Experimentally, it is shown that SSCG converges faster than the online backpropagation algorithm on the NETtalk problem.
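    The abstract's core argument is that accumulating one gradient per example over a redundant training set wastes computation, whereas a stochastic scheme updates at a cost independent of the training-set size. The toy sketch below illustrates only that contrast; it is an assumption-laden stand-in, using plain stochastic gradient descent on a linear least-squares model rather than the paper's SSCG machinery, and every name (grad, X, y), the learning rate, and the batch size are illustrative choices, not taken from the paper.

    ```python
    # Toy contrast between offline (full-batch) and stochastic updates on a
    # redundant training set. NOT the paper's SSCG algorithm: a linear
    # least-squares model stands in for a neural network, and plain SGD
    # stands in for scaled conjugate gradient.
    import numpy as np

    rng = np.random.default_rng(0)

    # Redundant training set: 1000 examples that are near-copies of only
    # 5 distinct patterns, so most per-example gradients carry little new
    # information.
    base = rng.normal(size=(5, 10))
    X = np.repeat(base, 200, axis=0) + 0.01 * rng.normal(size=(1000, 10))
    true_w = rng.normal(size=10)
    y = X @ true_w

    def grad(w, Xb, yb):
        """Gradient of mean squared error for the linear model."""
        return 2.0 * Xb.T @ (Xb @ w - yb) / len(yb)

    # Offline: one weight update per full pass, after accumulating the
    # gradient over all 1000 (largely redundant) examples.
    w = np.zeros(10)
    for _ in range(50):
        w -= 0.05 * grad(w, X, y)
    print("batch loss:", np.mean((X @ w - y) ** 2))

    # Stochastic: one weight update per small mini-batch, so the cost of an
    # update does not grow with the training-set size and redundant examples
    # need not all be visited before the weights move.
    w = np.zeros(10)
    for _ in range(50):
        idx = rng.choice(len(X), size=20, replace=False)
        w -= 0.05 * grad(w, X[idx], y[idx])
    print("stochastic loss:", np.mean((X @ w - y) ** 2))
    ```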
  • Keywords
    convergence; feedforward neural nets; learning (artificial intelligence); redundancy; NETtalk problem; redundant gradient vectors; redundant training sets; stochastic scaled conjugate gradient algorithm; supervised learning algorithms; Backpropagation algorithms; Code standards; Computational efficiency; Computer science; Feedforward neural networks; Large-scale systems; Neural networks; Redundancy; Stochastic processes; Supervised learning
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    Neural Networks for Signal Processing II, Proceedings of the 1992 IEEE-SP Workshop
  • Conference_Location
    Helsingør
  • Print_ISBN
    0-7803-0557-4
  • Type
    conf
  • DOI
    10.1109/NNSP.1992.253705
  • Filename
    253705