• DocumentCode
    274186
  • Title
    A new learning paradigm for neural networks
  • Author
    Lucas, S.M.; Damper, R.I.
  • Author_Institution
    Dept. of Electron. & Comput. Sci., Southampton Univ., UK
  • fYear
    1989
  • fDate
    16-18 Oct 1989
  • Firstpage
    346
  • Lastpage
    350
  • Abstract
    Introduces a new way of inferring the structure of a temporal neural network from a set of training data. The approach is to learn a grammar which describes and generalises the input patterns, and then to map this grammar onto a connectionist architecture, thus allowing the network topology to be specialised to the training data. The resulting network has as many levels as are necessary, and arbitrary connections between levels. The resulting grammars are called strictly hierarchical and map straightforwardly onto a connectionist architecture using a relatively small number of neurons. The authors have performed experiments on the recognition of hand-written isolated digits, using the simplest possible (supervised) grammatical inference algorithm to generalise a nonstochastic context-free grammar from the training data.
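    The abstract's central idea, mapping a strictly hierarchical grammar onto a connectionist architecture with one unit per production, can be sketched as below. The grammar, the example strings, and the `expand`/`neuron` helpers are illustrative assumptions for a toy recogniser, not details taken from the paper itself.

    ```python
    # Hypothetical sketch: each nonterminal of a strictly hierarchical,
    # nonstochastic grammar has exactly one production, whose right-hand
    # side uses only symbols from strictly lower levels. Each production
    # then corresponds to one "neuron" that fires when its receptive
    # field matches the production's expansion.
    grammar = {
        "A": ("a", "b"),   # level 1: built from terminals
        "B": ("b", "a"),
        "S": ("A", "B"),   # level 2: built from level-1 nonterminals
    }

    def expand(symbol):
        """Expand a symbol down to its terminal string."""
        if symbol not in grammar:
            return symbol
        return "".join(expand(s) for s in grammar[symbol])

    def neuron(symbol, window):
        """The 'neuron' for one production: fires (returns 1) when its
        input window matches the production's terminal expansion."""
        return 1 if window == expand(symbol) else 0

    # The start-symbol neuron recognises the whole pattern.
    print(neuron("S", "abba"))   # -> 1  (S -> A B -> ab + ba)
    print(neuron("S", "abab"))   # -> 0
    ```

    Because every production draws only on lower levels, the induced network has one level per grammar level and connections only from lower layers upward, which is why the mapping needs relatively few neurons.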
  • Keywords
    character recognition; context-free grammars; inference mechanisms; learning systems; network topology; neural nets; parallel architectures; connectionist architecture; inference; learning paradigm; neural networks; nonstochastic context-free grammar; pattern recognition; training data
  • fLanguage
    English
  • Publisher
    IET
  • Conference_Titel
    First IEE International Conference on Artificial Neural Networks, 1989 (Conf. Publ. No. 313)
  • Conference_Location
    London
  • Type
    conf
  • Filename
    51990