DocumentCode :
1553316
Title :
Asymptotic statistical theory of overtraining and cross-validation
Author :
Amari, Shun-Ichi ; Murata, Noboru ; Müller, Klaus-Robert ; Finke, Michael ; Yang, Howard Hua
Author_Institution :
RIKEN, Inst. of Phys. & Chem. Res., Saitama, Japan
Volume :
8
Issue :
5
fYear :
1997
fDate :
9/1/1997
Firstpage :
985
Lastpage :
996
Abstract :
A statistical theory of overtraining is proposed. The analysis treats general realizable stochastic neural networks, trained with the Kullback-Leibler divergence, in the asymptotic case of a large number of training examples. It is shown that the asymptotic gain in the generalization error from early stopping is small, even if we have access to the optimal stopping time. Based on cross-validation stopping, we consider the ratio in which the examples should be divided between the training and cross-validation sets in order to obtain the optimum performance. Although cross-validated early stopping is useless in the asymptotic region, it does decrease the generalization error in the nonasymptotic region. Our large-scale simulations, performed on a CM5, are in good agreement with our analytical findings.
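Illustration (not from the paper): a minimal sketch of cross-validated early stopping on a realizable stochastic model trained by maximum likelihood, i.e., by minimizing the negative log-likelihood, which corresponds to the Kullback-Leibler divergence discussed in the abstract. The data, model, split ratio, and all names below are illustrative assumptions, not values or code from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic realizable teacher: a logistic model with true weights w_true.
n, d = 400, 20
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ w_true)))

# Split examples into training and cross-validation sets. The 80/20 ratio is an
# arbitrary choice here; the paper analyzes how this ratio should be chosen.
split = int(0.8 * n)
X_tr, y_tr = X[:split], y[:split]
X_cv, y_cv = X[split:], y[split:]

def nll(w, X, y):
    # Average negative log-likelihood (the empirical analogue of the KL divergence).
    z = X @ w
    return np.mean(np.logaddexp(0.0, z) - y * z)

def grad(w, X, y):
    # Gradient of the average negative log-likelihood.
    z = X @ w
    return X.T @ (1.0 / (1.0 + np.exp(-z)) - y) / len(y)

w = np.zeros(d)
lr, epochs = 0.5, 2000
best_cv, best_w, best_epoch = np.inf, w.copy(), 0

for t in range(epochs):
    w -= lr * grad(w, X_tr, y_tr)          # gradient step on the training set
    cv_loss = nll(w, X_cv, y_cv)           # monitor the cross-validation loss
    if cv_loss < best_cv:                   # record the cross-validation minimum
        best_cv, best_w, best_epoch = cv_loss, w.copy(), t

print(f"early-stopping epoch {best_epoch}, CV loss {best_cv:.4f}")
print(f"final-epoch CV loss {nll(w, X_cv, y_cv):.4f}")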
Keywords :
error analysis; feedforward neural nets; generalisation (artificial intelligence); learning (artificial intelligence); optimisation; statistical analysis; Kullback-Leibler divergence; asymptotic gain; asymptotic statistical theory; cross-validation; early stopping; generalization error; multilayer neural networks; optimal stopping time; overtraining; stochastic neural networks; Analytical models; Large-scale systems; Neural networks; Performance gain; Physics; Risk management; Stochastic processes; Terrorism;
fLanguage :
English
Journal_Title :
Neural Networks, IEEE Transactions on
Publisher :
IEEE
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/72.623200
Filename :
623200