DocumentCode
1205877
Title
A visual navigation system for autonomous land vehicles
Author
Waxman, Allen M. ; LeMoigne, Jacqueline J. ; Davis, Larry S. ; Srinivasan, Babu ; Kushner, Todd R. ; Liang, Eli ; Siddalingaiah, Tharakesh
Author_Institution
Boston University, Cummington St., Boston, MA, USA
Volume
3
Issue
2
fYear
1987
fDate
1 April 1987
Firstpage
124
Lastpage
141
Abstract
A modular system architecture has been developed to support visual navigation by an autonomous land vehicle. The system consists of vision modules performing image processing, three-dimensional shape recovery, and geometric reasoning, as well as modules for planning, navigating, and piloting. The system runs in two distinct modes, bootstrap and feedforward. The bootstrap mode requires analysis of entire images to find and model the objects of interest in the scene (e.g., roads). In the feedforward mode (while the vehicle is moving), attention is focused on small parts of the visual field as determined by prior views of the scene, to continue to track and model the objects of interest. General navigational tasks are decomposed into three categories, all of which contribute to planning a vehicle path. They are called long-, intermediate-, and short-range navigation, reflecting the scale to which they apply. The system has been implemented as a set of concurrent communicating modules and used to drive a camera (carried by a robot arm) over a scale model road network on a terrain board. A large subset of the system has been reimplemented on a VICOM image processor and has driven the DARPA Autonomous Land Vehicle (ALV) at Martin Marietta's test site in Denver, CO.
Keywords
Land vehicles; Robot vision systems; Robots, locomotion; Image analysis; Image processing; Layout; Machine vision; Navigation; Path planning; Process planning; Roads; Shape
fLanguage
English
Journal_Title
IEEE Journal of Robotics and Automation
Publisher
IEEE
ISSN
0882-4967
Type
jour
DOI
10.1109/JRA.1987.1087089
Filename
1087089
Link To Document