DocumentCode :
2551824
Title :
Visual localization in fused image and laser range data
Author :
Carlevaris-Bianco, Nicholas ; Mohan, Anush ; McBride, James R. ; Eustice, Ryan M.
Author_Institution :
Dept. of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, USA
fYear :
2011
fDate :
25-30 Sept. 2011
Firstpage :
4378
Lastpage :
4385
Abstract :
This paper reports on a method for tracking a camera system within an a priori known map constructed from co-registered 3D light detection and ranging (LIDAR) and omnidirectional image data. Our method pre-processes the raw 3D LIDAR and camera data to produce a sparse map that can scale to city-size environments. From the original LIDAR and camera data we extract visual features and identify those that are most robust to varying viewpoint. This allows us to include only the visual features that are most useful for localization in the map. Additionally, we quantize the visual features using a vocabulary tree to further reduce the map's file size. We then use vision-based localization to track the vehicle's motion through the map. We present results on urban data collected with Ford Motor Company's autonomous vehicle testbed. In our experiments the map is built using urban data from winter 2009, and localization is performed using data collected in fall 2010 and winter 2011. This demonstrates our algorithm's robustness to temporal changes in the environment.
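The vocabulary-tree quantization mentioned in the abstract can be illustrated with a short sketch. The Python code below is a hypothetical, minimal illustration and not the authors' implementation: the VocabularyTree class, its branching and depth parameters, and the random stand-in descriptors are all assumptions; the paper's actual pipeline quantizes descriptors extracted from the Ford dataset imagery.

# A minimal sketch (an assumption, not the paper's code) of vocabulary-tree
# quantization: a tree is built by hierarchical k-means over training
# descriptors, and each new descriptor is compressed to a small integer
# "visual word" ID by descending the tree. Storing word IDs instead of full
# 128-D descriptor vectors is what shrinks the map's file size.
import numpy as np
from scipy.cluster.vq import kmeans2

class VocabularyTree:
    def __init__(self, branching=10, depth=3):
        self.k = branching          # children per node (assumed value)
        self.depth = depth          # number of tree levels (assumed value)
        self.centroids = {}         # node id -> (k, dim) array of child centroids

    def fit(self, descriptors, node=0, level=0):
        # Recursively split the descriptor set with k-means.
        if level == self.depth or len(descriptors) < self.k:
            return
        centers, labels = kmeans2(descriptors.astype(np.float64), self.k,
                                  minit='++')
        self.centroids[node] = centers
        for i in range(self.k):
            # Children of node n are numbered n*k+1 .. n*k+k.
            self.fit(descriptors[labels == i], node * self.k + i + 1, level + 1)

    def quantize(self, descriptor):
        # Descend the tree, following the nearest centroid at each level.
        node = 0
        while node in self.centroids:
            dists = np.linalg.norm(self.centroids[node] - descriptor, axis=1)
            node = node * self.k + int(np.argmin(dists)) + 1
        return node  # the leaf id serves as the visual-word index

# Usage: random vectors stand in for the SIFT-like descriptors the paper
# extracts from the omnidirectional imagery.
train = np.random.rand(5000, 128)
tree = VocabularyTree()
tree.fit(train)
word_id = tree.quantize(np.random.rand(128))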
Keywords :
Azimuth; Cameras; Feature extraction; Instruments; Three dimensional displays; Visualization; Vocabulary
fLanguage :
English
Publisher :
IEEE
Conference_Title :
2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Conference_Location :
San Francisco, CA
ISSN :
2153-0858
Print_ISBN :
978-1-61284-454-1
Type :
conf
DOI :
10.1109/IROS.2011.6094944
Filename :
6094944