Title :
Minimum classification error vs. maximum margin: How should we penalize unseen samples?
Author :
Katagiri, Shigeru ; Watanabe, Hideyuki
Author_Institution :
Fac. of Sci. & Eng., Doshisha Univ., Kyotanabe, Japan
Abstract :
One of the ultimate goals of classifier training is to find the classifier parameters that achieve the minimum classification error probability, a status that should be derived using a classification error count loss. Recently, to pursue this ideal status, Minimum Classification Error (MCE) training has been successfully revised as Large Geometric Margin MCE training and Kernel MCE training. This paper gives an overview of these recent advancements in the MCE training methodology and discusses related issues.
Keywords :
pattern classification; probability; classification error count loss; classifier parameters; classifier training; kernel MCE training; large geometric margin MCE training; maximum margin; minimum classification error probability; unseen samples; Kernel; Loss measurement; Measurement uncertainty; Minimization; Prototypes; Robustness; Training;
Conference_Title :
Cognitive Information Processing (CIP), 2012 3rd International Workshop on
Conference_Location :
Baiona
Print_ISBN :
978-1-4673-1877-8
DOI :
10.1109/CIP.2012.6232891