DocumentCode :
3672258
Title :
Sketch-based 3D shape retrieval using Convolutional Neural Networks
Author :
Fang Wang; Le Kang; Yi Li
Author_Institution :
NICTA and ANU, Australia
fYear :
2015
fDate :
6/1/2015
Firstpage :
1875
Lastpage :
1883
Abstract :
Retrieving 3D models from 2D human sketches has received considerable attention in graphics, image retrieval, and computer vision. Almost all state-of-the-art approaches compute a large number of “best views” for each 3D model, hoping that the query sketch matches one of these 2D projections using predefined features. We argue that this two-stage approach (view selection, then matching) is pragmatic but also problematic, because the “best views” are subjective and ambiguous, which makes the matching inputs obscure. This imprecision in turn makes it difficult to choose features manually. Instead of relying on the elusive notion of “best views” and on hand-crafted features, we propose a minimalist view definition and learn features for both sketches and views. Specifically, we drastically reduce the number of views to only two predefined directions for the whole dataset. We then learn two Siamese Convolutional Neural Networks (CNNs), one for the views and one for the sketches, with a loss function defined on within-domain as well as cross-domain similarities. Experiments on three benchmark datasets demonstrate that our method significantly outperforms state-of-the-art approaches on all conventional metrics.
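The abstract describes two Siamese CNN branches (one per domain) trained with a loss that combines within-domain and cross-domain similarity terms. The following is a minimal sketch of that setup, not the authors' implementation: the layer sizes, the contrastive form of the loss, the margin, and the pairing of items a and b are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above) of a two-branch Siamese setup with
# within-domain and cross-domain contrastive terms, in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchCNN(nn.Module):
    """One CNN branch; the method trains two such branches,
    one for rendered views and one for sketches."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(64 * 4 * 4, embed_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def contrastive(a, b, same, margin=1.0):
    """Pull matched pairs together, push mismatched pairs beyond a margin."""
    d = F.pairwise_distance(a, b)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

view_net, sketch_net = BranchCNN(), BranchCNN()

def total_loss(view_a, view_b, sketch_a, sketch_b, same_ab):
    """Loss for a pair of (view, sketch) items a and b, where same_ab is 1.0
    when a and b depict the same shape category and 0.0 otherwise.
    Within-domain terms compare embeddings from the same branch; the
    cross-domain term compares a sketch embedding to a view embedding."""
    va, vb = view_net(view_a), view_net(view_b)
    sa, sb = sketch_net(sketch_a), sketch_net(sketch_b)
    within = contrastive(va, vb, same_ab) + contrastive(sa, sb, same_ab)
    cross = contrastive(sa, vb, same_ab)
    return within + cross
```

At retrieval time, a query sketch would be embedded with the sketch branch and compared against precomputed view embeddings by distance; the two-view rendering scheme and training details are described in the paper itself.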
Keywords :
"Three-dimensional displays","Shape","Solid modeling","Measurement","Computational modeling","Training","Rendering (computer graphics)"
Publisher :
ieee
Conference_Titel :
Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on
ISSN :
1063-6919
Type :
conf
DOI :
10.1109/CVPR.2015.7298797
Filename :
7298797