  • DocumentCode
    330301
  • Title
    Neural networks: life after training
  • Author
    Salerno, John J.

  • Author_Institution
    Air Force Res. Lab., Rome, NY, USA
  • Volume
    2
  • fYear
    1998
  • fDate
    11-14 Oct 1998
  • Firstpage
    1680
  • Abstract
    Much work has been done on using neural networks to model an existing problem, but little has addressed what happens after training has been completed and the model must continue to learn new information. How well does the model work on information it has not seen before? How does it adapt to new information? In this paper we address these issues, beginning with a neural model trained to parse simple natural-language phrases and an assessment of how well it generalizes. Based on these results, we then investigate two techniques that attempt to allow the model to “grow”, i.e. to learn information it has never seen before.
  • Keywords
    generalisation (artificial intelligence); learning (artificial intelligence); neural nets; generalisation; information learning; natural language; neural networks; update policy; Animals; Backpropagation; Concatenated codes; Data preprocessing; Feeds; Humans; Instruments; Laboratories; Natural languages; Neural networks
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    1998 IEEE International Conference on Systems, Man, and Cybernetics
  • Conference_Location
    San Diego, CA
  • ISSN
    1062-922X
  • Print_ISBN
    0-7803-4778-1
  • Type
    conf
  • DOI
    10.1109/ICSMC.1998.728135
  • Filename
    728135
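
The abstract describes evaluating a trained model on unseen data and then updating it with new information without losing what it already knows. The paper's own models and data are not reproduced here; the sketch below is a generic, illustrative analogue: a single logistic unit trained on a toy 2-D task, tested on previously unseen examples, and then updated with an assumed rehearsal-style policy (retraining on the new data mixed with a sample of the old) to limit forgetting. All function names and parameters are this sketch's own, not the paper's.

```python
import math
import random

random.seed(0)

def predict(w, x):
    """Logistic unit: sigmoid(w0 + w1*x0 + w2*x1), with z clamped for safety."""
    z = w[0] + w[1] * x[0] + w[2] * x[1]
    z = max(-30.0, min(30.0, z))
    return 1.0 / (1.0 + math.exp(-z))

def train(w, data, epochs=200, lr=0.5):
    """Plain stochastic gradient descent on log-loss."""
    for _ in range(epochs):
        for x, y in data:
            g = predict(w, x) - y          # d(log-loss)/dz
            w[0] -= lr * g
            w[1] -= lr * g * x[0]
            w[2] -= lr * g * x[1]
    return w

def accuracy(w, data):
    return sum((predict(w, x) > 0.5) == bool(y) for x, y in data) / len(data)

# Original training set: label is 1 when x0 > 0, points near the origin.
old = [((random.uniform(-1, 1), random.uniform(-1, 1)),) for _ in range(200)]
old = [(x, int(x[0] > 0)) for (x,) in old]
w = train([0.0, 0.0, 0.0], old)

# Previously unseen examples from shifted regions, same underlying rule.
new = [((random.uniform(1, 3), random.uniform(-1, 1)), 1) for _ in range(50)] + \
      [((random.uniform(-3, -1), random.uniform(-1, 1)), 0) for _ in range(50)]

acc_before = accuracy(w, new)              # generalisation to unseen data

# Assumed update policy: retrain on the new data plus a rehearsal sample
# of the old data, so the old task is revisited while the model "grows".
rehearsal = random.sample(old, 50)
w = train(w, new + rehearsal, epochs=100)

acc_new = accuracy(w, new)
acc_old = accuracy(w, old)                 # check how much was forgotten
print(f"unseen data, before update: {acc_before:.2f}")
print(f"unseen data, after update:  {acc_new:.2f}")
print(f"original data, retained:    {acc_old:.2f}")
```

Rehearsal is only one possible update policy; the two "growth" techniques investigated in the paper itself may differ.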