Title :
Randomness in generalization ability: a source to improve it?
Author_Institution :
Dept. of Math. & Comput. Sci., Miami Univ., Coral Gables, FL, USA
Date :
27 Jun-2 Jul 1994
Abstract :
The question of the generalization ability of artificial neural networks is of great interest for both theoretical understanding and practical use. This paper reports our observations about randomness in the generalization ability of feedforward artificial neural networks (FFANNs). A novel method for measuring generalization ability is defined. This definition can be used to identify the degree of randomness in the generalization ability of learning systems. If an FFANN architecture shows randomness in generalization ability for a given problem, then multiple networks can be used to improve it. We have developed a model, called the voting model, for predicting the generalization ability of multiple networks. It has been shown that if the correct-classification probability of a single network is greater than one half, then the generalization ability improves as the number of networks increases.
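The closing claim of the abstract follows the standard majority-voting argument: if each network is correct with probability p > 1/2 and errors are independent, the probability that a majority of n networks is correct grows with n. The snippet below is a minimal sketch of that calculation (not code from the paper; the independence assumption and the two-class, coin-flip tie-breaking rule are illustrative assumptions).

```python
# Sketch: majority-vote accuracy of n independent classifiers on a
# two-class problem, each correct with probability p. Ties (even n)
# are assumed to be broken by a fair coin flip.
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent voters is correct."""
    acc = 0.0
    for k in range(n + 1):
        prob_k = comb(n, k) * p**k * (1 - p)**(n - k)
        if 2 * k > n:        # strict majority correct
            acc += prob_k
        elif 2 * k == n:     # tie: decided by coin flip
            acc += 0.5 * prob_k
    return acc

if __name__ == "__main__":
    # With p = 0.6 > 1/2, the voted accuracy rises toward 1 as n grows.
    for n in (1, 3, 5, 11, 21):
        print(f"n = {n:2d}: accuracy = {majority_vote_accuracy(0.6, n):.4f}")
```

For p below one half the same formula shows the opposite trend, which is why the condition p > 1/2 is essential to the result stated in the abstract.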
Keywords :
feedforward neural nets; generalisation (artificial intelligence); learning (artificial intelligence); learning systems; majority logic; random processes; classification probability; feedforward neural networks; generalization ability; learning systems; majority XOR; multiple networks; randomness; voting model; Artificial neural networks; Computer science; Information processing; Intelligent networks; Learning systems; Mathematics; Neurons; Predictive models; System testing; Voting;
Conference_Title :
1994 IEEE International Conference on Neural Networks (IEEE World Congress on Computational Intelligence)
Conference_Location :
Orlando, FL
Print_ISBN :
0-7803-1901-X
DOI :
10.1109/ICNN.1994.374151