Title :
Difficulty in learning vs. network size
Author :
Chakraborty, Goutam ; Noguchi, Shoichi
Author_Institution :
Univ. of Aizu, Fukushima, Japan
Abstract :
When training a neural network to represent the distribution of a sample set, two aspects are generally considered: the network's representation capability, i.e., its ability to describe the complex distribution of the sample set, and its ability to generalize so that novel samples are mapped correctly. The general conclusion is that the smallest network capable of representing the sample distribution is the best choice as far as generalization is concerned. Here we introduce the term difficulty in learning. We show that for the smallest network the difficulty in learning is very high, especially when the sample distribution is complex. A slightly larger network is more suitable, particularly when noise is low, as far as ease of learning is concerned.
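Illustration :
A minimal sketch of the kind of comparison the abstract describes, not taken from the paper itself: it assumes an XOR-style sample set (the classic distribution a too-small network cannot represent), a small sigmoid multilayer perceptron trained by plain gradient descent, and a hypothetical train_mlp helper. Over repeated random initializations, the minimal 2-hidden-unit network tends to converge least reliably, while slightly larger networks learn more easily, which is the qualitative effect the abstract calls difficulty in learning.

import numpy as np

rng = np.random.default_rng(0)

# XOR-like sample set: the smallest MLP that can represent it has 2 hidden units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(n_hidden, lr=0.5, max_epochs=20000, tol=0.01):
    """Train a 2-n_hidden-1 sigmoid MLP by full-batch gradient descent;
    return the epoch at which mean squared error drops below tol, or None."""
    W1 = rng.normal(0, 1, (2, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 1, (n_hidden, 1))
    b2 = np.zeros(1)
    for epoch in range(1, max_epochs + 1):
        h = sigmoid(X @ W1 + b1)      # hidden-layer activations
        out = sigmoid(h @ W2 + b2)    # network output
        err = out - y
        if np.mean(err ** 2) < tol:
            return epoch
        # Backpropagation through the two sigmoid layers.
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)
    return None  # stuck (e.g., in a poor local minimum) within the epoch budget

# Compare the minimal network against slightly larger ones over 10 runs each.
for n_hidden in (2, 4, 8):
    epochs = [train_mlp(n_hidden) for _ in range(10)]
    solved = [e for e in epochs if e is not None]
    median = int(np.median(solved)) if solved else "n/a"
    print(f"{n_hidden} hidden units: {len(solved)}/10 runs converged, "
          f"median epochs {median}")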
Keywords :
learning (artificial intelligence); neural nets; complex sample distribution; learning difficulty; neural network size; noise; Annealing; Genetic algorithms; Multilayer perceptrons; Neural networks; Transfer functions;
Conference_Title :
Neural Networks, 1997. International Conference on
Print_ISBN :
0-7803-4122-8
DOI :
10.1109/ICNN.1997.614211