• DocumentCode
    1242382
  • Title
    Generalization and PAC learning: some new results for the class of generalized single-layer networks
  • Author
    Holden, Sean B.; Rayner, Peter J. W.
  • Author_Institution
    Dept. of Eng., Cambridge Univ., UK
  • Volume
    6
  • Issue
    2
  • fYear
    1995
  • fDate
    3/1/1995
  • Firstpage
    368
  • Lastpage
    380
  • Abstract
    The ability of connectionist networks to generalize is often cited as one of their most important properties. We analyze the generalization ability of the class of generalized single-layer networks (GSLNs), which includes Volterra networks, radial basis function networks, regularization networks, and the modified Kanerva model, using techniques based on the theory of probably approximately correct (PAC) learning that have previously been used to analyze the generalization ability of feedforward networks of linear threshold elements (LTEs). An introduction to the relevant computational learning theory is included. We derive necessary and sufficient conditions on the number of training examples required by a GSLN to guarantee a particular generalization performance. We compare our results to those given previously for feedforward networks of LTEs and show that, on the basis of the currently available bounds, the number of training examples sufficient for GSLNs is typically considerably smaller than for feedforward networks of LTEs with the same number of weights. We show that the use of self-structuring techniques for GSLNs may reduce the number of training examples sufficient to guarantee good generalization performance, and we provide an explanation for the fact that GSLNs can require a relatively large number of weights.
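    As a worked illustration of the kind of sample-complexity condition the abstract describes (not the paper's own bound): a GSLN computes a linear threshold applied to a fixed set of basis functions, so with W adjustable weights its VC dimension is at most W, and a classical distribution-free PAC bound such as that of Blumer et al. (1989) then yields a sufficient number of training examples. The Python sketch below is a minimal example under these assumptions; the function name sufficient_examples and the specific bound form are illustrative and are not taken from the paper.

        from math import ceil, log2

        def sufficient_examples(num_weights: int, epsilon: float, delta: float) -> int:
            """Sufficient training-set size from one standard form of the
            VC-based PAC bound (Blumer et al., 1989):
                m >= max( (4/eps) * log2(2/delta),
                          (8d/eps) * log2(13/eps) ).
            For a GSLN (a linear threshold over fixed basis functions) the VC
            dimension d is at most the number of weights W, so d = W is used
            here. Illustrative only; not the bound derived in this paper."""
            d = num_weights  # VC-dimension upper bound for a GSLN with W weights
            m = max((4.0 / epsilon) * log2(2.0 / delta),
                    (8.0 * d / epsilon) * log2(13.0 / epsilon))
            return ceil(m)

        # Example: a GSLN with 50 weights, accuracy eps = 0.1, confidence delta = 0.05
        print(sufficient_examples(50, 0.1, 0.05))  # roughly 2.8e4 examples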
  • Keywords
    feedforward neural nets; generalisation (artificial intelligence); learning by example; threshold elements; PAC learning; Volterra networks; computational learning theory; connectionist networks; feedforward networks; generalization ability; generalization performance; generalized single-layer networks; linear threshold elements; modified Kanerva model; probably approximately correct learning; radial basis function networks; regularization networks; self-structuring techniques; training examples; weights; Convergence; Frequency; Performance analysis; Radial basis function networks; Reliability theory; Signal processing; Sufficient conditions
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Neural Networks
  • Publisher
    IEEE
  • ISSN
    1045-9227
  • Type
    jour

  • DOI
    10.1109/72.363472
  • Filename
    363472