Title :
Robust feature matching for robot visual learning
Author :
Gao, Ce ; Song, Yixu ; Jia, Peifa
Author_Institution :
Department of Computer Science & Technology, Tsinghua University, Beijing, 100084, China
Abstract :
Affine-invariant feature matching plays an important role in many robot vision applications, such as visual navigation, object detection, visual tracking, and visual SLAM. Early approaches used invariant keypoints to detect the affine transformation, but their accuracy was very low. More recently, the SIFT method has been introduced into the robot vision field, greatly improving accuracy, yet it is too time-consuming to meet the requirements of real-time robot vision applications. In this paper, we propose a novel learning-based feature matching approach to address this problem. First, a fast algorithm is used to extract keypoints. Then, our method identifies keypoints that belong to different objects or to the background using color and texture representations, and clusters them into corresponding groups. Finally, a two-stage multilayer ferns classifier is trained to recognize the local patches and estimate the viewpoint. We test our approach on public datasets and apply it in a visual SLAM application. The results demonstrate that our method provides robust and powerful matching, and it performs remarkably well even on difficult matching cases. Furthermore, because there is no need to compute descriptors for the image, our method is very fast at run-time.
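The abstract's core mechanism is a ferns-style classifier that recognizes keypoint patches with binary pixel tests instead of descriptors. The sketch below illustrates only the generic random-ferns idea (semi-naive Bayes over random intensity comparisons) on grayscale patches; the paper's two-stage multilayer structure, the color/texture keypoint grouping, and all class, parameter, and method names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class RandomFerns:
    """Minimal random-ferns patch classifier (semi-naive Bayes).

    Illustrative sketch only: it does not reproduce the paper's
    two-stage multilayer classifier or its color/texture clustering.
    """

    def __init__(self, n_classes, n_ferns=30, fern_size=10,
                 patch_size=32, seed=None):
        rng = np.random.default_rng(seed)
        self.n_classes = n_classes
        self.n_ferns = n_ferns
        self.fern_size = fern_size
        # Each fern is a fixed set of random pixel-pair intensity tests.
        self.tests = rng.integers(
            0, patch_size, size=(n_ferns, fern_size, 2, 2))
        # Per-fern, per-leaf class counts (uniform Dirichlet prior of 1).
        self.counts = np.ones((n_ferns, 2 ** fern_size, n_classes))

    def _leaf_indices(self, patch):
        """Evaluate the binary tests and return one leaf index per fern."""
        a = patch[self.tests[..., 0, 0], self.tests[..., 0, 1]]
        b = patch[self.tests[..., 1, 0], self.tests[..., 1, 1]]
        bits = (a < b).astype(np.int64)        # (n_ferns, fern_size)
        weights = 1 << np.arange(self.fern_size)
        return bits @ weights                  # (n_ferns,)

    def train(self, patch, class_id):
        """Accumulate counts for one (possibly warped) training patch."""
        leaves = self._leaf_indices(patch)
        self.counts[np.arange(self.n_ferns), leaves, class_id] += 1

    def classify(self, patch):
        """Return the most probable keypoint class for a test patch."""
        leaves = self._leaf_indices(patch)
        probs = self.counts[np.arange(self.n_ferns), leaves]
        probs /= probs.sum(axis=1, keepdims=True)
        log_post = np.log(probs).sum(axis=0)   # semi-naive Bayes fusion
        return int(np.argmax(log_post))
```

At run-time such a classifier needs only a handful of pixel comparisons per fern, which is why ferns-based matching avoids the descriptor computation that makes SIFT too slow for real-time robot vision; training would typically feed each keypoint many patches warped under random affine transforms.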
Keywords :
Accuracy; Detectors; Feature extraction; Image color analysis; Robots; Training; Visualization; Computer Vision for Robotics; Local affine invariant feature; Visual Learning; Visual Navigation; affine transformation detection; learning-based;
Conference_Titel :
Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on
Conference_Location :
San Francisco, CA
Print_ISBN :
978-1-61284-454-1
DOI :
10.1109/IROS.2011.6095137