DocumentCode :
2207882
Title :
Generative approximation of generalization error
Author :
Yamazaki, Keisuke
Author_Institution :
Precision & Intell. Lab., Tokyo Inst. of Technol., Yokohama, Japan
fYear :
2009
fDate :
1-4 Sept. 2009
Firstpage :
1
Lastpage :
6
Abstract :
The generalization ability of a learning model is a key element of machine learning and data mining. Cross-validation is a common technique for evaluating the generalization error and selecting the optimal model. However, cross-validation is computationally expensive for generative models of sequential data, such as hidden Markov models, stochastic context-free grammars (SCFGs), and Bayesian networks. The present paper therefore proposes a fast approximation of the generalization error that considerably reduces the computational cost of cross-validation. Experiments revealed that the proposed method accurately approximated the error and succeeded in a model selection task for SCFGs.
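The abstract refers to cross-validation as the baseline for estimating generalization error; the paper's own approximation is not detailed here. A minimal sketch of the standard k-fold procedure the paper aims to speed up (the `fit` and `loss` functions below are illustrative placeholders, not the paper's method):

```python
# Illustrative k-fold cross-validation for estimating generalization error.
# The paper proposes a faster approximation for generative models (e.g. SCFGs);
# this sketch only shows the costly baseline procedure it aims to replace.

def k_fold_cv_error(data, k, fit, loss):
    """Average held-out loss over k folds (assumes len(data) divisible by k)."""
    n = len(data)
    fold_size = n // k
    fold_errors = []
    for i in range(k):
        # Hold out the i-th fold for testing, train on the rest.
        test = data[i * fold_size:(i + 1) * fold_size]
        train = data[:i * fold_size] + data[(i + 1) * fold_size:]
        model = fit(train)
        fold_errors.append(sum(loss(model, x) for x in test) / len(test))
    return sum(fold_errors) / k

# Toy stand-in for a generative model: fit the sample mean, squared-error loss.
fit = lambda xs: sum(xs) / len(xs)
loss = lambda m, x: (x - m) ** 2
```

For example, `k_fold_cv_error([0.0, 1.0] * 4, 4, fit, loss)` retrains the toy model four times; for a sequential model such as an HMM or SCFG, each retraining is expensive, which is the cost the proposed approximation avoids.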
Keywords :
approximation theory; generalisation (artificial intelligence); learning (artificial intelligence); optimisation; data mining; generalization error; generative approximation; machine learning; optimal model selection; sequential data processing; Bayesian methods; Computational efficiency; Data mining; Data processing; Hidden Markov models; Learning systems; Machine learning; Stochastic processes; Testing
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2009 IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2009)
Conference_Location :
Grenoble
Print_ISBN :
978-1-4244-4947-7
Electronic_ISBN :
978-1-4244-4948-4
Type :
conf
DOI :
10.1109/MLSP.2009.5306243
Filename :
5306243