Title :
Boundary Detection Benchmarking: Beyond F-Measures
Author :
Hou, Xiaodi ; Yuille, A.L. ; Koch, Christof
Author_Institution :
Comput. & Neural Syst., Caltech, Pasadena, CA, USA
Abstract :
For an ill-posed problem like boundary detection, human-labeled datasets play a critical role. Compared with the active research on finding better boundary detectors to refresh the performance record, there is surprisingly little discussion of the boundary detection benchmark itself. The goal of this paper is to identify potential pitfalls of today's most popular boundary benchmark, BSDS 300. We first present a psychophysical experiment showing that many of the "weak" boundary labels are unreliable and may contaminate the benchmark. We then analyze the computation of the F-measure and point out that the current benchmarking protocol encourages an algorithm to bias towards these problematic "weak" boundary labels. With this evidence, we focus on the new problem of detecting strong boundaries as one alternative. Finally, we assess the performance of 9 major algorithms under different ways of utilizing the dataset, suggesting new directions for improvement.
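The abstract's critique centers on the F-measure used in boundary benchmarks such as BSDS 300. As background, the score is the harmonic mean of precision and recall over matched boundary pixels; the sketch below shows only this standard formula, not the paper's full matching protocol, and the function name is illustrative.

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall, the F-score used in
    boundary-detection benchmarks. Returns 0.0 when both inputs are 0
    to avoid division by zero."""
    if precision + recall == 0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

# Example: a detector with high precision but low recall is penalized
# toward the lower of the two values.
print(f_measure(0.8, 0.5))  # ≈ 0.615
```

Because the harmonic mean rewards any recall gain, a detector can raise its score by also matching the unreliable "weak" labels the paper identifies, which is the bias the benchmarking analysis targets.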
Keywords :
edge detection; BSDS 300; benchmarking protocol; boundary detection benchmarking; f-measure; strong boundaries; weak boundary labels; Algorithm design and analysis; Benchmark testing; Classification algorithms; Computer vision; Detectors; Image segmentation; Reliability; Boundary detection; benchmarking; dataset bias;
Conference_Titel :
Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on
Conference_Location :
Portland, OR
DOI :
10.1109/CVPR.2013.276