DocumentCode :
3446697
Title :
Learning aggregation for combining classifier ensembles
Author :
Wanas, N.M. ; Kamel, Mohamed S.
Author_Institution :
PAMI Lab., Univ. of Waterloo, Ont., Canada
Volume :
4
fYear :
2002
fDate :
18-22 Nov. 2002
Firstpage :
1729
Abstract :
Creating classifier ensembles and combining their outputs to achieve higher accuracy have been of recent interest. It has been noted that, when using such multiple-classifier approaches, the members of the ensemble should be error-independent. The ideal ensemble would be a set of classifiers that show no coincident errors: each classifier generalizes well, and any errors it makes on the test set are not shared with any other classifier. Various approaches for achieving this have been presented. This paper compares two approaches introduced for training multiple classifier systems, based on the feature-based aggregation architecture and the adaptive training algorithm. An empirical evaluation using two data sets shows a reduction in the number of training cycles when applying the algorithm to the overall architecture, while maintaining the same or improved performance. The performance of these approaches is also compared to standard approaches proposed in the literature. The results substantiate the use of adaptive training for both the ensemble and the aggregation architecture.
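The abstract contrasts fixed combination rules with an aggregation stage that is itself trained. As a minimal sketch of that general idea, the Python snippet below uses scikit-learn's stacking, where a meta-classifier learns how to combine the base classifiers' outputs; this illustrates learned aggregation in general, not the authors' specific feature-based aggregation architecture or adaptive training algorithm, and all model choices here are illustrative assumptions.

```python
# Minimal sketch: learned aggregation (stacking) over a classifier ensemble.
# This is NOT the paper's feature-based aggregation architecture or adaptive
# training algorithm; it only illustrates the general concept of training the
# combination stage rather than using a fixed rule such as majority voting.
from sklearn.datasets import load_digits
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Diverse base classifiers: the abstract notes ensemble members should be
# error-independent, so models with different inductive biases are chosen.
base_learners = [
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
]

# The aggregation stage is trained: a logistic regression learns to weight
# the base classifiers' predictions on held-out folds.
ensemble = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
)
ensemble.fit(X_train, y_train)
print("test accuracy:", ensemble.score(X_test, y_test))
```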
Keywords :
learning (artificial intelligence); pattern classification; classifier ensemble combination; learning aggregation; multiple classifier system training; Boosting; Convergence; Network topology; Sampling methods; Testing; Training data; Voting;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Proceedings of the 9th International Conference on Neural Information Processing (ICONIP '02), 2002
Print_ISBN :
981-04-7524-1
Type :
conf
DOI :
10.1109/ICONIP.2002.1198971
Filename :
1198971