DocumentCode :
3129436
Title :
Statistical properties of artificial neural networks
Author :
Barron, Andrew R.
Author_Institution :
Dept. of Electr. & Comput. Eng., Illinois Univ., Champaign, IL, USA
fYear :
1989
fDate :
13-15 Dec 1989
Firstpage :
280
Abstract :
Convergence properties of empirically estimated neural networks are examined. In this theory, an appropriately sized feedforward network is automatically determined from the data. The networks studied include two- and three-layer networks with an increasing number of simple sigmoidal nodes, multiple-layer polynomial networks, and networks with certain fixed structures but increasing complexity in each unit. Each of these classes of networks is dense in the space of continuous functions on compact subsets of d-dimensional Euclidean space, with respect to the topology of uniform convergence. It is shown how, with the use of an appropriate complexity regularization criterion, the statistical risk of network estimators converges to zero as the sample size increases. Bounds on the rate of convergence are given in terms of an index of the approximation capability of the class of networks.
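The complexity-regularization idea described in the abstract can be illustrated with a minimal sketch: fit single-hidden-layer sigmoidal networks of increasing size and select the size minimizing empirical risk plus a penalty that grows with the parameter count and shrinks with the sample size. The random-feature fitting routine, the penalty form `params * log(n) / n`, and the constant `lam` below are illustrative assumptions, not the estimator or bounds of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy samples of a smooth target on [-1, 1]^d.
d, n = 2, 200
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.1 * rng.standard_normal(n)

def fit_sigmoidal_net(X, y, k, rng):
    """Fit a one-hidden-layer sigmoidal network with k units.
    Hidden weights are drawn at random and only the output layer is
    solved by least squares -- a simplification for illustration only."""
    W = rng.standard_normal((X.shape[1], k))
    b = rng.standard_normal(k)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoidal hidden features
    A = np.column_stack([H, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predict(Xnew):
        Hn = 1.0 / (1.0 + np.exp(-(Xnew @ W + b)))
        return np.column_stack([Hn, np.ones(len(Xnew))]) @ coef

    return predict

def complexity_penalty(k, d, n, lam=1.0):
    """Penalty proportional to (parameter count) * log(n) / n --
    an assumed shape for a complexity-regularization term."""
    params = k * (d + 2) + 1
    return lam * params * np.log(n) / n

# Select the network size minimizing the penalized empirical risk.
best = None
for k in [1, 2, 4, 8, 16, 32]:
    predict = fit_sigmoidal_net(X, y, k, rng)
    emp_risk = np.mean((y - predict(X)) ** 2)
    score = emp_risk + complexity_penalty(k, d, n)
    if best is None or score < best[0]:
        best = (score, k)

print(f"selected network size k = {best[1]}, penalized criterion = {best[0]:.4f}")
```

As the sample size n grows, the penalty for any fixed size shrinks, so larger networks become admissible and the selected estimator's risk can decrease, which is the qualitative behavior the abstract's convergence result formalizes.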
Keywords :
convergence; neural nets; polynomials; statistics; complexity regularization criterion; feedforward network; multiple-layer polynomial networks; neural networks; sigmoidal nodes; statistical properties; three-layer networks; two-layer networks; uniform convergence; Artificial neural networks; Convergence; Input variables; Network topology; Neural networks; Neurons; Polynomials; Statistics; Vectors;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
Proceedings of the 28th IEEE Conference on Decision and Control, 1989
Conference_Location :
Tampa, FL
Type :
conf
DOI :
10.1109/CDC.1989.70117
Filename :
70117