Title :
Context-Consistent Stereo Matching
Author :
Fan, Shufei ; Ferrie, Frank P.
Author_Institution :
McGill Univ., Montreal, QC, Canada
Date :
Sept. 27, 2009 - Oct. 4, 2009
Abstract :
Although our two eyes view the world from different perspectives, our brain effortlessly associates items seen by one eye with those seen by the other, giving rise to binocular depth sensation. We would like computers to perform this matching as well as humans do, so that an intelligent system can perceive 3-D from binocular camera inputs. Current methods still struggle when the two cameras are widely separated, and even more so when the images are of poor quality or contain repetitive patterns, because features can no longer be distinguished by examining only the local patches they occupy. Here we propose to improve feature matching by further involving global image information. We introduce a topological graph, the Salient Feature Graph (SFG), to describe the intrinsic structure of a scene based on its image. We then use the SFG to compare semi-local structures extracted from different perspectives. This semi-local comparison enables our new algorithm, Context-Consistent Assignment (CCA), to establish feature correspondences by dynamically combining local appearance with global structure. We ran our algorithm and conventional methods on images of 3-D urban scenes and counted the number of correct matches each produced. Our approach consistently outperformed competitors on difficult inputs such as low-resolution and noisy images.
Keywords :
image matching; stereo image processing; binocular depth sensation; context consistent assignment; context consistent stereo matching; feature matching; global image information; intelligent system; salient feature graph; topological graph; Cameras; Computer vision; Conferences; Context awareness; Detectors; Eyes; Humans; Intelligent systems; Layout; Stereo vision;
Conference_Title :
2009 IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops)
Conference_Location :
Kyoto
Print_ISBN :
978-1-4244-4442-7
Electronic_ISBN :
978-1-4244-4441-0
DOI :
10.1109/ICCVW.2009.5457487