DocumentCode :
288322
Title :
Randomness in generalization ability: a source to improve it?
Author :
Sarkar, Dilip
Author_Institution :
Dept. of Math. & Comput. Sci., Univ. of Miami, Coral Gables, FL, USA
Volume :
1
fYear :
1994
fDate :
27 Jun-2 Jul 1994
Firstpage :
131
Abstract :
The generalization ability of artificial neural networks is of great interest for both theoretical understanding and practical use. This paper reports our observations about randomness in the generalization ability of feedforward artificial neural networks (FFANNs). A novel method for measuring generalization ability is defined; this definition can be used to identify the degree of randomness in the generalization ability of learning systems. If an FFANN architecture shows randomness in generalization ability for a given problem, then multiple networks can be used to improve it. We have developed a model, called the voting model, for predicting the generalization ability of multiple networks. It is shown that if the correct-classification probability of a single network is greater than one half, then the generalization ability increases as the number of networks is increased.
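The abstract's claim about multiple networks can be illustrated with a short sketch. Assuming the networks vote independently and each is correct with probability p (an idealization; the paper's own voting model is not reproduced here), the probability that a majority of n networks is correct follows a binomial tail sum, which grows with n whenever p > 1/2:

```python
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent voters,
    each correct with probability p, selects the correct class.
    n is assumed odd so that no ties occur."""
    assert n % 2 == 1, "use an odd number of voters to avoid ties"
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With p = 0.7, the ensemble beats a single network and keeps
# improving as more networks are added.
single = majority_vote_accuracy(0.7, 1)
three = majority_vote_accuracy(0.7, 3)
five = majority_vote_accuracy(0.7, 5)
```

This is the classical Condorcet-style argument; the independence assumption is the idealized case, and correlated network errors would weaken the improvement.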
Keywords :
feedforward neural nets; generalisation (artificial intelligence); learning (artificial intelligence); learning systems; majority logic; random processes; classification probability; feedforward neural networks; generalization ability; learning systems; majority XOR; multiple networks; randomness; voting model; Artificial neural networks; Computer science; Information processing; Intelligent networks; Learning systems; Mathematics; Neurons; Predictive models; System testing; Voting;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
1994 IEEE International Conference on Neural Networks (IEEE World Congress on Computational Intelligence)
Conference_Location :
Orlando, FL
Print_ISBN :
0-7803-1901-X
Type :
conf
DOI :
10.1109/ICNN.1994.374151
Filename :
374151