DocumentCode
39163
Title
Relative Saliency Model over Multiple Images with an Application to Yarn Surface Evaluation
Author
Zhen Liang; Bingang Xu; Zheru Chi; David Dagan Feng
Author_Institution
Hong Kong Polytechnic University, Hong Kong, China
Volume
44
Issue
8
fYear
2014
fDate
Aug. 2014
Firstpage
1249
Lastpage
1258
Abstract
Saliency models have been widely developed and demonstrated to benefit applications in computer vision and image understanding. In most existing models, saliency is evaluated within an individual image: the saliency value of an item (object/region/pixel) represents its conspicuity compared with the remaining items in the same image. We call this absolute saliency, which is not comparable across images. However, for some visual inspection tasks, saliency should be determined in the context of multiple images. For example, in yarn surface evaluation, the saliency of a yarn image should be measured with respect to a set of graded standard images. We call this relative saliency, which is comparable across images. In this paper, we study a visual attention model for the comparison of multiple images and propose a relative saliency model based on a combination of bottom-up and top-down mechanisms, enabling relative saliency evaluation in cases where the contents of other images are involved. To fully characterize the differences among multiple images, a structural feature extraction strategy is proposed in which two levels of features (high-level, low-level) and three types of features (global, local-local, local-global) are extracted. Mapping functions between features and saliency values are constructed, and their outputs reflect relative saliency with respect to multi-image contents rather than a single image. The performance of the proposed relative saliency model is demonstrated on a yarn surface evaluation task. Furthermore, an eye-tracking technique is employed to verify the proposed concept of relative saliency over multiple images.
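The core idea of relative saliency, scoring an image against a reference set of graded standards rather than within the image itself, can be sketched as follows. This is a minimal illustrative example, not the paper's actual method: the feature vectors, the Euclidean distance metric, and the normalization are assumptions standing in for the paper's structural features (global, local-local, local-global) and learned mapping functions.

```python
# Hypothetical sketch of relative saliency: a test image is scored
# against a set of graded standard images, so scores are comparable
# across images (unlike within-image, "absolute" saliency).

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def relative_saliency(test_features, graded_standards):
    """Score test_features against each graded standard image.

    graded_standards: dict mapping a grade label to the feature vector
    of that grade's standard image (an assumed representation).
    Returns the best-matching grade and per-grade similarity scores.
    """
    distances = {grade: euclidean(test_features, feats)
                 for grade, feats in graded_standards.items()}
    # Normalize distances into similarities: being closer to a standard
    # yields a higher score toward that grade, comparable across images.
    total = sum(distances.values()) or 1.0
    scores = {g: 1.0 - d / total for g, d in distances.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Toy example: three graded yarn standards with 2-D feature vectors.
standards = {"A": [0.1, 0.2], "B": [0.5, 0.5], "C": [0.9, 0.8]}
grade, scores = relative_saliency([0.45, 0.55], standards)  # grade -> "B"
```

In the paper itself, the mapping from features to saliency is learned rather than a fixed distance, but the structure is the same: the output depends on the whole reference set, not on a single image's internal contrast.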
Keywords
automatic optical inspection; computer vision; feature extraction; gaze tracking; yarn; bottom-up mechanism; eye tracking technique; image understanding; mapping function; multi-image content; relative saliency evaluation; relative saliency model; structural feature extraction strategy; top-down mechanism; visual attention model; visual inspection; yarn surface evaluation; Computational modeling; Feature extraction; Inspection; Integrated circuits; Standards; Visualization; Yarn; comparison of multiple images; relative saliency map; visual attention
fLanguage
English
Journal_Title
IEEE Transactions on Cybernetics
Publisher
IEEE
ISSN
2168-2267
Type
jour
DOI
10.1109/TCYB.2013.2281618
Filename
6826551
Link To Document