DocumentCode :
2915075
Title :
Forecaster performance evaluation with cross-validation and variants
Author :
Bergmeir, Christoph ; Benítez, José M.
Author_Institution :
Dept. of Comput. Sci. & Artificial Intell., Univ. of Granada, Granada, Spain
fYear :
2011
fDate :
22-24 Nov. 2011
Firstpage :
849
Lastpage :
854
Abstract :
In time series prediction, there is currently no consensus on a best practice for how predictors should be compared and evaluated. We investigate this issue through an empirical study. First, we discuss the forecast types, error calculation, and error averaging methods in use, and then focus on model selection procedures. We consider ordinary cross-validation techniques and the common time series approach of choosing a test set from the end of the series, as well as less common approaches such as non-dependent cross-validation and blocked cross-validation. The study uses different error measures, various machine learning methods, and synthetic time series data. The results indicate that cross-validation can be a useful tool in time series evaluation as well, and the theoretical problems it raises can be prevented by using it in its blocked form.
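For illustration, the following is a minimal Python sketch of blocked cross-validation as outlined in the abstract: the series is partitioned into contiguous blocks, each block serves once as the test set, and the remaining observations form the training set. The function name blocked_cv_splits and the optional gap parameter (which drops observations adjacent to the test block to reduce dependence between training and test data) are illustrative choices, not taken from the paper.

    import numpy as np

    def blocked_cv_splits(n, k=5, gap=0):
        """Yield (train_idx, test_idx) pairs for blocked cross-validation.

        The n observations are split into k contiguous blocks. Each block
        serves once as the test set; all observations outside the block,
        minus `gap` points on either side of it, form the training set.
        """
        indices = np.arange(n)
        for block in np.array_split(indices, k):
            lo, hi = block[0], block[-1]
            train = indices[(indices < lo - gap) | (indices > hi + gap)]
            yield train, block

    # Example: 5-fold blocked cross-validation on a series of length 100,
    # leaving a gap of 2 observations around each test block.
    for train_idx, test_idx in blocked_cv_splits(100, k=5, gap=2):
        pass  # fit the forecaster on train_idx, evaluate on test_idx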
Keywords :
forecasting theory; learning (artificial intelligence); time series; cross-validation; error averaging methods; error calculation; forecaster performance evaluation; machine learning methods; synthetic time series data; time series prediction; Computational modeling; Data models; Forecasting; Measurement uncertainty; Robustness; Time series analysis; Training; blocked cross-validation; cross-validation; forecaster evaluation; time series
fLanguage :
English
Publisher :
IEEE
Conference_Title :
2011 11th International Conference on Intelligent Systems Design and Applications (ISDA)
Conference_Location :
Córdoba
ISSN :
2164-7143
Print_ISBN :
978-1-4577-1676-8
Type :
conf
DOI :
10.1109/ISDA.2011.6121763
Filename :
6121763