DocumentCode :
3748551
Title :
A Deep Visual Correspondence Embedding Model for Stereo Matching Costs
Author :
Zhuoyuan Chen;Xun Sun;Liang Wang;Yinan Yu;Chang Huang
Year :
2015
Firstpage :
972
Lastpage :
980
Abstract :
This paper presents a data-driven matching cost for stereo matching. A novel deep visual correspondence embedding model is trained via a Convolutional Neural Network on a large set of stereo images with ground-truth disparities. This deep embedding model leverages appearance data to learn visual similarity relationships between corresponding image patches, and explicitly maps intensity values into an embedding feature space in which pixel dissimilarities are measured. Experimental results on the KITTI and Middlebury data sets demonstrate the effectiveness of our model. First, we show that the new measure of pixel dissimilarity outperforms traditional matching costs. Furthermore, when integrated with a global stereo framework, our method ranks in the top three among all two-frame algorithms on the KITTI benchmark. Finally, cross-validation results show that our model makes correct predictions for unseen data outside of its labeled training set.
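The abstract describes computing a stereo matching cost as a dissimilarity between image patches mapped into a learned embedding space. The following is a minimal sketch of that idea, not the paper's actual CNN: the hypothetical `embed` function stands in for the learned network (here a random linear map with L2 normalization), and the cost of a candidate disparity is the cosine dissimilarity of the two embedded patches. All names and parameters (`embed`, `matching_cost`, patch radius `k`) are illustrative assumptions.

```python
import numpy as np

def embed(patch, W):
    # Hypothetical stand-in for the learned CNN embedding:
    # a single linear map followed by L2 normalization.
    v = W @ patch.ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def matching_cost(left, right, x, y, d, W, k=4):
    # Cost of matching pixel (x, y) in the left image to pixel
    # (x - d, y) in the right image, computed as the cosine
    # dissimilarity of the two patches in the embedding space.
    pl = left[y - k:y + k + 1, x - k:x + k + 1]
    pr = right[y - k:y + k + 1, x - d - k:x - d + k + 1]
    return 1.0 - float(embed(pl, W) @ embed(pr, W))

# Synthetic example: the right image is the left image shifted
# horizontally by a true disparity of 3 pixels.
rng = np.random.default_rng(0)
left = rng.random((32, 64))
right = np.roll(left, -3, axis=1)
W = rng.standard_normal((16, 9 * 9))  # embedding dimension 16, 9x9 patches

# Evaluate the cost over a range of candidate disparities and
# pick the minimum, as a local winner-take-all stereo step would.
costs = [matching_cost(left, right, x=20, y=16, d=d, W=W) for d in range(8)]
best = int(np.argmin(costs))  # recovers the true disparity, 3
```

In the paper, the learned embedding replaces this random linear map, and the resulting costs feed a global stereo framework rather than a per-pixel argmin; the sketch only shows where an embedding-based cost slots into the pipeline.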
Keywords :
"Feature extraction","Computational modeling","Visualization","Data models","Training","Neural networks","Machine learning"
Publisher :
ieee
Conference_Titel :
Computer Vision (ICCV), 2015 IEEE International Conference on
Electronic_ISSN :
2380-7504
Type :
conf
DOI :
10.1109/ICCV.2015.117
Filename :
7410474