Title :
Create weak learners with small neural networks by balanced ensemble learning
Author_Institution :
Sch. of Comput. Sci. & Eng., Univ. of Aizu, Aizu-Wakamatsu, Fukushima, Japan
Abstract :
It has been shown that as the number of weak learners in a majority voting model increases, so does its generalization performance, provided those weak learners are uncorrelated or negatively correlated. Although learning algorithms such as bagging and boosting have been developed to create such weak learners, the learners they produce are in practice not particularly weak in many applications. This paper presents a simple balanced ensemble learning method for producing weak learners. The idea of balanced ensemble learning is to adjust the learning force during training so that training data points near the decision boundaries push the boundaries further away, while training data points far from the decision boundaries drag the boundaries toward themselves. The experimental results suggest that balanced ensemble learning is able to create learners that are both weak and negatively correlated.
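To make the abstract's idea concrete, the sketch below shows one plausible reading of a balanced ensemble of small neural networks: an ensemble of deliberately tiny one-hidden-layer networks trained jointly, where each sample's gradient contribution is scaled by a weighting function of its distance to the current ensemble decision boundary, so near-boundary points keep pushing the boundary while confidently classified points contribute little. The `balance_weights` function, the network sizes, the toy data, and all parameter choices are illustrative assumptions for this sketch, not the paper's exact error function or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)


def make_toy_data(n=200):
    # Two Gaussian blobs; purely illustrative binary classification data.
    x0 = rng.normal(loc=(-1.0, 0.0), scale=0.8, size=(n // 2, 2))
    x1 = rng.normal(loc=(+1.0, 0.0), scale=0.8, size=(n // 2, 2))
    X = np.vstack([x0, x1])
    y = np.hstack([np.zeros(n // 2), np.ones(n // 2)])
    return X, y


class TinyMLP:
    """A deliberately small one-hidden-layer network (a weak learner)."""

    def __init__(self, n_in, n_hidden=2, lr=0.1):
        self.W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.5, size=n_hidden)
        self.b2 = 0.0
        self.lr = lr

    def forward(self, X):
        self.H = np.tanh(X @ self.W1 + self.b1)
        return 1.0 / (1.0 + np.exp(-(self.H @ self.W2 + self.b2)))

    def step(self, X, y, sample_weight):
        """One gradient step on a per-sample-weighted logistic loss."""
        p = self.forward(X)
        g = sample_weight * (p - y) / len(y)   # weighted dL/dz at the output
        self.W2 -= self.lr * self.H.T @ g
        self.b2 -= self.lr * g.sum()
        dH = np.outer(g, self.W2) * (1.0 - self.H ** 2)
        self.W1 -= self.lr * X.T @ dH
        self.b1 -= self.lr * dH.sum(axis=0)


def balance_weights(ensemble_probs, sharpness=4.0):
    """Hypothetical balance function: samples near the current ensemble
    decision boundary (probability close to 0.5) get larger weight and keep
    pushing the boundary, while samples far from the boundary get smaller
    weight and merely anchor it. The exponential form is an assumption."""
    margin = np.abs(ensemble_probs - 0.5)      # 0 at the boundary, 0.5 far away
    return np.exp(-sharpness * margin)


def train_balanced_ensemble(X, y, n_learners=9, epochs=200):
    learners = [TinyMLP(X.shape[1]) for _ in range(n_learners)]
    for _ in range(epochs):
        # Recompute weights from the current ensemble output each epoch.
        ensemble_probs = np.mean([m.forward(X) for m in learners], axis=0)
        w = balance_weights(ensemble_probs)
        for m in learners:
            m.step(X, y, w)
    return learners


def majority_vote(learners, X):
    votes = np.mean([(m.forward(X) > 0.5) for m in learners], axis=0)
    return (votes > 0.5).astype(int)


if __name__ == "__main__":
    X, y = make_toy_data()
    learners = train_balanced_ensemble(X, y)
    acc = np.mean(majority_vote(learners, X) == y)
    print(f"ensemble training accuracy: {acc:.2f}")
```

Because every learner sees the same boundary-dependent weighting derived from the joint ensemble output, the individual networks stay weak while their errors tend to differ, which is the property the abstract attributes to balanced ensemble learning; the specific weighting above is only one way to realize that behaviour.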
Keywords :
decision theory; learning (artificial intelligence); neural nets; bagging algorithms; balanced ensemble learning method; boosting algorithms; decision boundary; learning algorithms; majority voting model; neural networks; training data; training process; weak learners; Bagging; Boosting; Correlation; Error analysis; Training; Training data;
Conference_Titel :
Signal Processing, Communications and Computing (ICSPCC), 2011 IEEE International Conference on
Conference_Location :
Xi'an
Print_ISBN :
978-1-4577-0893-0
DOI :
10.1109/ICSPCC.2011.6061781