DocumentCode :
3324524
Title :
Vision-based reinforcement learning for robot navigation
Author :
Zhu, Weiyu ; Levinson, Stephen
Author_Institution :
Dept. of Electr. & Comput. Eng., Illinois Univ., Urbana, IL, USA
Volume :
2
fYear :
2001
fDate :
2001
Firstpage :
1025
Abstract :
We present a novel vision-based learning approach for autonomous robot navigation. A hybrid state-mapping model, which combines the merits of both static and dynamic state-assignment strategies, is proposed to solve the problem of state organization in navigation-learning tasks. Specifically, the continuous feature space, which can be very large in general, is first statically mapped to a small conceptual state space for learning. Ambiguities among aliased states, i.e., cases where the same conceptual state is accidentally mapped to several physical states that require different action policies, are then efficiently eliminated during learning by a recursive state-splitting process. The proposed method has been applied to navigation learning by a simulated robot with very encouraging results.
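As a rough illustration of the idea summarized above (not code from the paper), the sketch below bins a continuous feature into a small static set of conceptual states and then recursively splits any state whose learning signal stays inconsistent, a simple symptom of aliasing. The toy 1-D corridor task, the HybridStateMapper name, and the TD-error-based split test are all illustrative assumptions rather than details taken from the authors' method.

```python
# Minimal sketch of static conceptual-state mapping plus recursive state
# splitting, under assumptions stated above; not the authors' implementation.
import random
from collections import defaultdict


class HybridStateMapper:
    """Coarse static binning of a 1-D feature; bins with persistently large
    |TD error| (a crude aliasing signal) are recursively halved."""

    def __init__(self, low, high, n_bins=4, split_threshold=0.3):
        width = (high - low) / n_bins
        # Each leaf interval is one conceptual state.
        self.leaves = [(low + i * width, low + (i + 1) * width)
                       for i in range(n_bins)]
        self.split_threshold = split_threshold
        self.td_errors = defaultdict(list)

    def state(self, x):
        for i, (lo, hi) in enumerate(self.leaves):
            if lo <= x < hi:
                return i
        return len(self.leaves) - 1          # x at or beyond the upper edge

    def record(self, state, td_error):
        self.td_errors[state].append(abs(td_error))

    def maybe_split(self, min_samples=50):
        """Split leaves whose mean |TD error| stays high: one conceptual
        state is probably covering physically different states."""
        new_leaves, split_any = [], False
        for i, (lo, hi) in enumerate(self.leaves):
            errs = self.td_errors[i]
            if len(errs) >= min_samples and sum(errs) / len(errs) > self.split_threshold:
                mid = (lo + hi) / 2
                new_leaves += [(lo, mid), (mid, hi)]
                split_any = True
            else:
                new_leaves.append((lo, hi))
        if split_any:
            self.leaves = new_leaves
            self.td_errors.clear()            # old statistics no longer apply
        return split_any


def run_demo(episodes=400):
    """Tabular Q-learning on a toy 1-D corridor whose goal is the right end."""
    mapper = HybridStateMapper(0.0, 1.0, n_bins=2)
    Q = defaultdict(lambda: [0.0, 0.0])       # actions: 0 = left, 1 = right
    alpha, gamma, eps, step = 0.2, 0.95, 0.1, 0.1

    for _ in range(episodes):
        x = random.uniform(0.0, 1.0)
        for _ in range(30):
            s = mapper.state(x)
            a = random.randrange(2) if random.random() < eps else \
                max((0, 1), key=lambda k: Q[s][k])
            x2 = min(1.0, max(0.0, x + (step if a == 1 else -step)))
            done = x2 > 0.9
            r = 1.0 if done else -0.01
            target = r + (0.0 if done else gamma * max(Q[mapper.state(x2)]))
            td = target - Q[s][a]
            Q[s][a] += alpha * td
            mapper.record(s, td)
            x = x2
            if done:
                break
        if mapper.maybe_split():
            Q.clear()                         # state ids changed; relearn values

    print(f"final number of conceptual states: {len(mapper.leaves)}")


if __name__ == "__main__":
    run_demo()
```

In this sketch the Q-table is simply reset after a split; a more careful treatment would remap or inherit values from the parent state, but the point is only to show the map-then-split structure the abstract describes.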
Keywords :
computerised navigation; learning (artificial intelligence); mobile robots; robot vision; state-space methods; stereo image processing; autonomous robot; conceptual state space; navigation; reinforcement learning; robot vision; state-mapping model; stereo vision; Computer vision; Delay; Feedback; Learning; Navigation; Orbital robotics; Performance evaluation; Robot control; Robot vision systems; State-space methods;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Neural Networks, 2001. Proceedings. IJCNN '01. International Joint Conference on
Conference_Location :
Washington, DC
ISSN :
1098-7576
Print_ISBN :
0-7803-7044-9
Type :
conf
DOI :
10.1109/IJCNN.2001.939501
Filename :
939501