DocumentCode :
3332794
Title :
Boundary Detection Benchmarking: Beyond F-Measures
Author :
Hou, Xiaodi ; Yuille, A.L. ; Koch, Christof
Author_Institution :
Comput. & Neural Syst., Caltech, Pasadena, CA, USA
fYear :
2013
fDate :
23-28 June 2013
Firstpage :
2123
Lastpage :
2130
Abstract :
For an ill-posed problem like boundary detection, human-labeled datasets play a critical role. Compared with the active research on finding a better boundary detector to refresh the performance record, there is surprisingly little discussion of the boundary detection benchmark itself. The goal of this paper is to identify the potential pitfalls of today's most popular boundary benchmark, BSDS 300. In the paper, we first introduce a psychophysical experiment to show that many of the "weak" boundary labels are unreliable and may contaminate the benchmark. Then we analyze the computation of the F-measure and point out that the current benchmarking protocol encourages an algorithm to bias towards those problematic "weak" boundary labels. With this evidence, we focus on the new problem of detecting strong boundaries as one alternative. Finally, we assess the performance of nine major algorithms under different ways of utilizing the dataset, suggesting new directions for improvement.
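Note: the F-measure the abstract refers to is the standard harmonic mean of boundary precision and recall. A minimal illustrative sketch in Python, assuming binary boundary maps and exact pixel-wise matching (the actual BSDS protocol matches predicted and labeled pixels within a small distance tolerance, so this is a simplification, not the paper's evaluation code):

    import numpy as np

    def boundary_f_measure(predicted, ground_truth):
        # Illustrative only: exact pixel overlap stands in for the
        # distance-tolerant correspondence used by the BSDS benchmark.
        predicted = predicted.astype(bool)
        ground_truth = ground_truth.astype(bool)
        matches = np.logical_and(predicted, ground_truth).sum()
        # Precision: fraction of detected boundary pixels that are labeled.
        precision = matches / max(predicted.sum(), 1)
        # Recall: fraction of labeled boundary pixels that are detected.
        recall = matches / max(ground_truth.sum(), 1)
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)  # F = 2PR / (P + R)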
Keywords :
edge detection; BSDS 300; benchmarking protocol; boundary detection benchmarking; F-measure; strong boundaries; weak boundary labels; Algorithm design and analysis; Benchmark testing; Classification algorithms; Computer vision; Detectors; Image segmentation; Reliability; Boundary detection; benchmarking; dataset bias
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on
Conference_Location :
Portland, OR, USA
ISSN :
1063-6919
Type :
conf
DOI :
10.1109/CVPR.2013.276
Filename :
6619120