Abstract:
We present a new algorithm, 3DNN (3D Nearest-Neighbor), which matches an image with 3D data independently of the viewpoint from which the image was captured. By leveraging rich annotations associated with each image, our algorithm can automatically produce precise and detailed 3D models of a scene from a single image. Moreover, we can transfer information across images to accurately label and segment objects in a scene. The key benefit of 3DNN over a traditional 2D nearest-neighbor approach is that, by generalizing across viewpoints, we remove the need for training examples captured from all possible viewpoints. Thus, we achieve comparable results using orders of magnitude less data, and recognize objects from never-before-seen viewpoints. In this work, we describe the 3DNN algorithm and rigorously evaluate its performance on the tasks of geometry estimation and object detection/segmentation. By decoupling the viewpoint and the geometry of an image, we develop a scene matching approach that is fully viewpoint invariant, yielding state-of-the-art performance on challenging data.
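The core idea, matching a query image against 3D scene models rendered from candidate viewpoints and keeping the nearest neighbor, can be sketched as follows. This is an illustrative outline only, not the authors' implementation: the `render` function, the feature representation, and the viewpoint sampling are all hypothetical placeholders.

```python
import math

def nearest_3d_scene(query_feat, scenes, viewpoints, render):
    """Find the (scene, viewpoint) pair whose rendering best matches the query.

    query_feat : feature vector extracted from the query image
    scenes     : candidate 3D scene models (any representation)
    viewpoints : candidate camera poses to render each scene from
    render     : hypothetical function (scene, viewpoint) -> feature vector
                 of the same length as query_feat

    Because every scene is rendered from many viewpoints, the match is
    independent of the viewpoint from which the query was captured.
    """
    best, best_dist = None, float("inf")
    for scene in scenes:
        for vp in viewpoints:
            # Euclidean distance between the query features and the
            # features of this scene rendered from this viewpoint.
            d = math.dist(query_feat, render(scene, vp))
            if d < best_dist:
                best, best_dist = (scene, vp), d
    return best, best_dist
```

Once the best-matching 3D scene is found, its geometry and object annotations can be transferred back to the query image, which is what enables the geometry-estimation and segmentation applications described above.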
Keywords:
3DNN; 3D nearest neighbor; scene understanding; scene matching; geometry estimation; object detection; object segmentation; image matching; image segmentation; image edge detection; 3D models; viewpoint-invariant 3D geometry matching; solid modeling; cameras; computer vision; machine learning