DocumentCode
767433
Title
Vision-based localization algorithm based on landmark matching, triangulation, reconstruction, and comparison
Author
Yuen, David C K ; MacDonald, Bruce A.
Author_Institution
Dept. of Electr. & Comput. Eng., Univ. of Auckland, New Zealand
Volume
21
Issue
2
fYear
2005
fDate
4/1/2005 12:00:00 AM
Firstpage
217
Lastpage
226
Abstract
Many generic position-estimation algorithms are vulnerable to ambiguity introduced by nonunique landmarks. Also, the available high-dimensional image data is not fully used when these techniques are extended to vision-based localization. This paper presents the landmark matching, triangulation, reconstruction, and comparison (LTRC) global localization algorithm, which is reasonably immune to ambiguous landmark matches. It extracts natural landmarks for the (rough) matching stage before generating the list of possible position estimates through triangulation. Reconstruction and comparison then rank the possible estimates. The LTRC algorithm has been implemented in an interpreted language on a robot equipped with a panoramic vision system. Empirical data show a marked improvement in accuracy compared with the established random sample consensus method. LTRC is also robust against inaccurate map data.
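Sketch
For a concrete picture of the pipeline the abstract describes (match landmarks, triangulate candidate positions, then rank candidates by reconstructing the expected observations and comparing them with the measurements), the following is a minimal Python sketch. The data structures, the known-heading assumption, and the angular-error scoring rule are illustrative assumptions for clarity, not the authors' implementation.

    # Illustrative LTRC-style pipeline: triangulate candidates from pairs of
    # (possibly ambiguous) landmark matches, then rank them by how well the
    # reconstructed bearings agree with all observations.
    import itertools
    import math

    def ang_diff(a, b):
        """Absolute angular difference, wrapped to [0, pi]."""
        return abs(math.atan2(math.sin(a - b), math.cos(a - b)))

    def triangulate(p1, b1, p2, b2, heading):
        """Intersect rays cast from map landmarks p1, p2 back toward the
        robot, given a known world-frame heading and robot-frame bearings
        b1, b2 (radians). Returns (x, y) or None for near-parallel rays."""
        a1, a2 = heading + b1 + math.pi, heading + b2 + math.pi
        d1 = (math.cos(a1), math.sin(a1))
        d2 = (math.cos(a2), math.sin(a2))
        det = -d1[0] * d2[1] + d2[0] * d1[1]
        if abs(det) < 1e-9:
            return None
        rx, ry = p2[0] - p1[0], p2[1] - p1[1]
        t1 = (-rx * d2[1] + d2[0] * ry) / det
        return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

    def expected_bearing(pos, landmark, heading):
        """Robot-frame bearing at which `landmark` should appear from `pos`."""
        return math.atan2(landmark[1] - pos[1], landmark[0] - pos[0]) - heading

    def ltrc_estimate(observations, matches, heading):
        """observations: {obs_id: bearing}; matches: {obs_id: [map positions]}
        (one entry per ambiguous match). Returns the best-scoring candidate."""
        best, best_err = None, float("inf")
        # Triangulation stage: every pairing of ambiguous matches yields a
        # candidate position.
        for (i1, b1), (i2, b2) in itertools.combinations(observations.items(), 2):
            for p1, p2 in itertools.product(matches[i1], matches[i2]):
                cand = triangulate(p1, b1, p2, b2, heading)
                if cand is None:
                    continue
                # Reconstruction and comparison stage: score the candidate
                # against all observations, taking the best ambiguous match
                # for each one.
                err = sum(
                    min(ang_diff(expected_bearing(cand, p, heading), b)
                        for p in matches[i])
                    for i, b in observations.items())
                if err < best_err:
                    best, best_err = cand, err
        return best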
Keywords
image sensors; mobile robots; position control; robot vision; generic position-estimation algorithms; high-dimensional image data; landmark matching; panoramic vision system; vision-based localization algorithm; Data mining; Image reconstruction; Image sensors; Insects; Machine vision; Mobile robots; Navigation; Robot localization; Robot sensing systems; Robot vision systems; Landmark matching, triangulation, reconstruction, and comparison (LTRC); natural landmark; panoramic image; random sample consensus (RANSAC); triangulation; vision-based localization;
fLanguage
English
Journal_Title
Robotics, IEEE Transactions on
Publisher
ieee
ISSN
1552-3098
Type
jour
DOI
10.1109/TRO.2004.835452
Filename
1416973
Link To Document