DocumentCode :
3660199
Title :
Bounded learning for neural network ensembles
Author :
Yong Liu;Qiangfu Zhao;Yan Pei
Author_Institution :
School of Computer Science and Engineering, The University of Aizu, Aizu-Wakamatsu, Fukushima 965-8580, Japan
fYear :
2015
Firstpage :
1216
Lastpage :
1221
Abstract :
Two error bounds are introduced into the learning process of balanced ensemble learning: the lower bound of error rate (LBER) and the upper bound of error output (UBEO), both measured on the training set. These two bounds decide whether a training data point should be learned further once balanced ensemble learning has reached a certain stage. While the error rates are still higher than LBER, the whole training set is fed to balanced ensemble learning. After the error rates fall below LBER, only the data points near the learned decision boundary, rather than the whole training set, are learned further. Data points farther from the decision boundary are either already well learned or not learned at all. To cope with the not-yet-learned data far from the learned decision boundary, balanced ensemble learning would have to make such large changes to the learned decision boundary that the ensemble could grow too complex for the applications; these not-yet-learned data are therefore excluded from the training set. Removing well-learned data points far from the decision boundary has little impact on the learned boundary. Experimental results show how LBER and UBEO let balanced ensemble learning avoid overfitting.
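The selection rule described in the abstract can be sketched in a few lines. This is a hypothetical reconstruction, not the authors' code: the function names, the 0/1 target encoding, the 0.5 decision threshold, and the use of |target - output| as the "error output" are all assumptions made for illustration.

```python
import numpy as np

def select_training_points(targets, ensemble_outputs, lber, ubeo):
    """Hypothetical sketch of the LBER/UBEO selection rule.

    targets: array of {0, 1} class labels.
    ensemble_outputs: ensemble outputs in [0, 1] for the same points.
    Returns a boolean mask of the points to keep for further learning.
    """
    preds = (ensemble_outputs >= 0.5).astype(int)
    error_rate = np.mean(preds != targets)
    if error_rate > lber:
        # Error rate still above LBER: learn on the whole training set.
        return np.ones(len(targets), dtype=bool)
    # Error rate has fallen below LBER: keep only points whose error
    # output (here, the gap between target and ensemble output) stays
    # within UBEO, i.e. points near the learned decision boundary.
    # Points with a larger error output are treated as not-yet-learned
    # outliers and excluded, so the ensemble does not distort its
    # boundary (and grow overly complex) trying to fit them.
    error_output = np.abs(targets - ensemble_outputs)
    return error_output <= ubeo
```

Under this sketch, well-learned points far from the boundary (very small error output) are still kept; per the abstract, removing them as well would have little impact on the learned boundary.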
Keywords :
"Training","Error analysis","Training data","Credit cards","Diabetes","Neural networks"
Publisher :
ieee
Conference_Titel :
2015 IEEE International Conference on Information and Automation
Type :
conf
DOI :
10.1109/ICInfA.2015.7279472
Filename :
7279472