DocumentCode :
3614011
Title :
Reinforcement learning in partially observable mobile robot domains using unsupervised event extraction
Author :
B. Bakker;F. Linaker;J. Schmidhuber
Author_Institution :
IDSIA, Manno-Lugano, Switzerland
Volume :
1
fYear :
2002
Firstpage :
938
Abstract :
This paper describes how learning tasks in partially observable mobile robot domains can be solved by combining reinforcement learning with an unsupervised learning "event extraction" mechanism, called ARAVQ. ARAVQ transforms the robot's continuous, noisy, high-dimensional sensory input stream into a compact sequence of high-level events. The resulting hierarchical control system uses an LSTM recurrent neural network as the reinforcement learning component, which learns high-level actions in response to the history of high-level events. The high-level actions select low-level behaviors which take care of the real-time motor control. Illustrative experiments based on the Khepera mobile robot simulator are presented.
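The abstract outlines a two-level architecture: ARAVQ compresses the raw sensor stream into a discrete event sequence, and an LSTM-based reinforcement learner maps event histories to high-level actions that trigger low-level behaviors. Below is a minimal sketch of an ARAVQ-style event extractor; the class name, the window/stability/novelty parameters, and the exact allocation rule are illustrative assumptions, not the paper's formulation.

```python
# Sketch of an ARAVQ-style event extractor (illustrative only; parameter
# values and the allocation criterion are assumptions, not the paper's).
import numpy as np

class ARAVQSketch:
    def __init__(self, window=10, stability=0.1, novelty=0.2):
        self.window = window        # length of the moving-average buffer
        self.stability = stability  # max spread for the recent inputs to count as "stable"
        self.novelty = novelty      # extra distance needed to allocate a new model vector
        self.buffer = []            # recent raw sensor vectors
        self.models = []            # allocated model (prototype) vectors

    def step(self, x):
        """Feed one sensor vector; return the index of the nearest model vector
        (the current high-level 'event'), or None before any vector is allocated."""
        x = np.asarray(x, dtype=float)
        self.buffer.append(x)
        if len(self.buffer) > self.window:
            self.buffer.pop(0)
        if len(self.buffer) < self.window:
            return self._winner(x)

        avg = np.mean(self.buffer, axis=0)
        # How well the buffer average summarizes the recent inputs.
        spread = float(np.mean([np.linalg.norm(b - avg) for b in self.buffer]))
        # Distance from the buffer average to the closest existing model vector.
        dist = min((np.linalg.norm(avg - m) for m in self.models), default=np.inf)

        # Allocate a new model vector when the recent input is stable yet novel.
        if spread <= self.stability and dist > spread + self.novelty:
            self.models.append(avg.copy())
        return self._winner(x)

    def _winner(self, x):
        if not self.models:
            return None
        return int(np.argmin([np.linalg.norm(x - m) for m in self.models]))
```

Feeding such a quantizer a noisy stream that alternates between a few distinct sensor configurations would yield a compact index sequence (e.g. 0, 0, ..., 1, 1, ...), which a recurrent reinforcement learner can then consume as its event history.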
Keywords :
"Mobile robots","Robot sensing systems","Unsupervised learning","Observability","Learning systems","Computer science","Mobile computing","Control systems","Neural networks","History"
Publisher :
ieee
Conference_Title :
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2002
Print_ISBN :
0-7803-7398-7
Type :
conf
DOI :
10.1109/IRDS.2002.1041511
Filename :
1041511