Title :
What can memorization learning do?
Author :
Hirabayashi, Akira ; Ogawa, Hidemitsu
Author_Institution :
Dept. of Comput. Sci., Tokyo Inst. of Technol., Japan
Abstract :
Memorization learning (ML) is a supervised learning method that reduces only the training errors. In principle, however, it does not guarantee good generalization capability. This observation raises two problems: 1) to clarify why good generalization capability is obtainable by ML; and 2) to clarify to what extent memorization learning can be used. Ogawa (1995) introduced the concept of 'admissibility' and provided a clear answer to the first problem. In this paper, we solve the second problem for the case of noiseless training examples. It is theoretically shown that ML can provide the same generalization capability as any learning method in the family of projection learning when proper training examples are chosen.
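The abstract's claim can be illustrated with a toy sketch (not the paper's formalism; the basis functions, sample points, and model below are illustrative assumptions). Memorization learning here means choosing weights that drive the training error to zero; with noiseless examples at properly chosen sample points, the resulting model also reproduces the true function, i.e., it matches what a projection-learning method would achieve:

```python
import numpy as np

# Illustrative model space: span of three polynomial basis functions.
# (Hypothetical setup, not taken from the paper.)
def phi(x):
    return np.stack([np.ones_like(x), x, x ** 2], axis=-1)

w_true = np.array([1.0, -2.0, 0.5])  # true function lies in the model space

# Noiseless training examples at "proper" sample points: enough distinct
# points that the design matrix phi(x_train) has full column rank.
x_train = np.array([-1.0, 0.0, 1.0, 2.0])
y_train = phi(x_train) @ w_true

# Memorization learning: minimize training error only.
# lstsq returns the minimum-norm least-squares solution.
w_ml, *_ = np.linalg.lstsq(phi(x_train), y_train, rcond=None)

train_err = np.max(np.abs(phi(x_train) @ w_ml - y_train))

x_test = np.linspace(-3.0, 3.0, 50)
gen_err = np.max(np.abs(phi(x_test) @ w_ml - phi(x_test) @ w_true))

print(train_err, gen_err)  # both near zero: memorization here also generalizes
```

If the sample points were chosen badly (e.g., fewer distinct points than basis functions), zero training error would no longer pin down the weights and generalization could fail, which is the gap the paper's "proper training examples" condition addresses.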
Keywords :
generalisation (artificial intelligence); inverse problems; learning (artificial intelligence); neural nets; admissibility; generalization; inverse problem; memorization learning; projection learning; supervised learning; Computer errors; Computer science; Function approximation; Information science; Inverse problems; Learning systems; Pediatrics; Probability distribution; Sufficient conditions; Supervised learning;
Conference_Titel :
IJCNN '99. International Joint Conference on Neural Networks, 1999
Print_ISBN :
0-7803-5529-6
DOI :
10.1109/IJCNN.1999.831578