• DocumentCode
    3565877
  • Title
    What can memorization learning do?
  • Author
    Hirabayashi, Akira; Ogawa, Hidemitsu
  • Author_Institution
    Dept. of Comput. Sci., Tokyo Inst. of Technol., Japan
  • Volume
    1
  • fYear
    1999
  • Firstpage
    659
  • Abstract
    Memorization learning (ML) is a method for supervised learning that reduces only the training errors; in principle, it does not guarantee good generalization capability. This observation raises two problems: 1) to clarify why good generalization capability can be obtained by ML; and 2) to clarify to what extent memorization learning can be used. Ogawa (1995) introduced the concept of 'admissibility' and gave a clear answer to the first problem. In this paper, we solve the second problem for the case of noiseless training examples. It is shown theoretically that ML can provide the same generalization capability as any learning method in 'the family of projection learning' when proper training examples are chosen.
  • Keywords
    generalisation (artificial intelligence); inverse problems; learning (artificial intelligence); neural nets; admissibility; generalization; inverse problem; memorization learning; projection learning; supervised learning; Computer errors; Computer science; Function approximation; Information science; Inverse problems; Learning systems; Pediatrics; Probability distribution; Sufficient conditions; Supervised learning;
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Neural Networks, 1999. IJCNN '99. International Joint Conference on
  • ISSN
    1098-7576
  • Print_ISBN
    0-7803-5529-6
  • Type
    conf
  • DOI
    10.1109/IJCNN.1999.831578
  • Filename
    831578
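
The abstract's central claim can be illustrated with a minimal sketch: memorization learning fits noiseless training examples exactly, and when the examples are "properly chosen" (here, enough distinct sample points to determine the target in a finite-dimensional hypothesis space), the memorized solution generalizes everywhere. The polynomial basis and sample points below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def design_matrix(x, degree=3):
    """Feature map phi(x) = [1, x, x^2, ..., x^degree]."""
    return np.vander(x, degree + 1, increasing=True)

def memorize(x_train, y_train, degree=3):
    """Memorization learning: choose coefficients that drive the
    training error to zero (exact when the system is consistent)."""
    A = design_matrix(x_train, degree)
    w, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    return w

# Assume the true function lies in the hypothesis space
# (a degree-3 polynomial with these illustrative coefficients).
true_w = np.array([1.0, -2.0, 0.5, 0.25])

# "Proper" noiseless training examples: four distinct points,
# enough to determine the four coefficients uniquely.
x_train = np.array([-1.0, 0.0, 1.0, 2.0])
y_train = design_matrix(x_train) @ true_w

w = memorize(x_train, y_train)

# With well-chosen noiseless examples, memorization recovers the
# function on unseen inputs, not just on the training set.
x_test = np.linspace(-2.0, 3.0, 50)
err = np.max(np.abs(design_matrix(x_test) @ (w - true_w)))
```

With noisy labels or poorly chosen (e.g. clustered or too few) sample points, the same procedure still zeroes the training error but no longer matches the target off the training set, which is the gap between the two problems the abstract poses.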