DocumentCode :
921418
Title :
Rapid, safe, and incremental learning of navigation strategies
Author :
Millán, J. del R.
Author_Institution :
Joint Research Centre, Commission of the European Communities, Ispra
Volume :
26
Issue :
3
fYear :
1996
fDate :
6/1/1996
Firstpage :
408
Lastpage :
420
Abstract :
In this paper, we propose a reinforcement connectionist learning architecture that allows an autonomous robot to acquire efficient navigation strategies in a few trials. Besides rapid learning, the architecture has three further appealing features. First, the robot improves its performance incrementally as it interacts with an initially unknown environment, and it ends up learning to avoid collisions even in those situations in which its sensors cannot detect the obstacles. This is a definite advantage over nonlearning reactive robots. Second, since it learns from basic reflexes, the robot is operational from the very beginning and the learning process is safe. Third, the robot exhibits high tolerance to noisy sensory data and good generalization abilities. All these features make this learning robot's architecture very well suited to real-world applications. We report experimental results obtained with a real mobile robot in an indoor environment that demonstrate the appropriateness of our approach to real autonomous robot control.
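The abstract describes a controller that starts from basic reflexes and refines them through reinforcement feedback as the robot interacts with its environment. As a purely illustrative sketch (not the paper's actual architecture), the following Python fragment shows one way a reflex-initialized policy could be adjusted by a scalar reward; the sensor model, reward shape, and learning rate are assumptions made for the example.

```python
# Illustrative sketch only: a reflex-initialized policy refined by scalar
# reinforcement. All details (sensor model, reward, learning rate) are
# assumptions, not taken from the paper.
import random

ACTIONS = [-1, 0, 1]        # turn left, go straight, turn right
# Sensors report clearance in [0, 1]: [left, front, right], 1.0 = clear.

# Reflex weights: prefer the direction with the most clearance.
# weights[action][sensor] scores each action from the sensor vector.
weights = {
    -1: [1.0, 0.0, 0.0],    # turning left is attractive when the left is clear
     0: [0.0, 1.0, 0.0],    # going straight is attractive when the front is clear
     1: [0.0, 0.0, 1.0],    # turning right is attractive when the right is clear
}

def score(action, sensors):
    return sum(w * s for w, s in zip(weights[action], sensors))

def choose_action(sensors, epsilon=0.1):
    # Epsilon-greedy exploration around the (initially reflex-like) policy.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: score(a, sensors))

def update(action, sensors, reward, lr=0.05):
    # Reward-weighted correlation update: strengthen (or weaken) the link
    # between the current sensor pattern and the chosen action.
    weights[action] = [w + lr * reward * s
                       for w, s in zip(weights[action], sensors)]

def read_sensors():
    # Toy stand-in for real range sensors.
    return [random.random() for _ in range(3)]

def reward_for(action, sensors):
    # Positive reward for steering toward open space, negative near obstacles.
    return sensors[action + 1] - 0.5

for trial in range(1000):
    s = read_sensors()
    a = choose_action(s)
    r = reward_for(a, s)
    update(a, s, r)
```

Because the initial weights already encode a sensible obstacle-avoidance reflex, the controller behaves reasonably from the first trial, and the reward-driven updates only refine it; this mirrors, at a toy scale, why learning from reflexes keeps the process safe.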
Keywords :
learning (artificial intelligence); mobile robots; autonomous robot; autonomous robot control; avoid collisions; incremental learning; learning of navigation strategies; mobile robot; navigation strategies; reinforcement connectionist learning; Automatic control; Control systems; Indoor environments; Informatics; Mobile robots; Navigation; Robot control; Robot sensing systems; Robotics and automation; Working environment noise;
fLanguage :
English
Journal_Title :
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Publisher :
IEEE
ISSN :
1083-4419
Type :
jour
DOI :
10.1109/3477.499792
Filename :
499792