DocumentCode :
1889082
Title :
A study of reinforcement learning with knowledge sharing for distributed autonomous system
Author :
Ito, Kazuyuki ; Gofuku, Akio ; Imoto, Yoshiaki ; Takeshita, Mitsuo
Author_Institution :
Dept. Syst. Eng., Okayama Univ., Japan
Volume :
3
fYear :
2003
fDate :
16-20 July 2003
Firstpage :
1120
Abstract :
Reinforcement learning is an effective control approach for autonomous robots because it does not require a priori knowledge: behaviors that complete given tasks are obtained automatically by repeated trial and error. However, a large number of trials is required to realize complex tasks, so the tasks that can be learned on a real robot are restricted to simple ones. Considering these points, various methods that reduce the learning cost of reinforcement learning have been proposed. Methods that rely on a priori knowledge lose the autonomy that is the most important feature of reinforcement learning when applied to robots. In Dyna-Q, a simple and effective reinforcement learning architecture that integrates online planning, a model of the environment is learned from real experience, and using this model for learning decreases the learning time. In this architecture autonomy is preserved; however, the model depends on the task, so the acquired knowledge of the environment cannot be reused for other tasks. In the real world, human beings can learn various behaviors to complete complex tasks without a priori knowledge of those tasks. We can rehearse a task in our imagination without moving our body, and after such mental training, by trying it in the real environment, we save learning time. This means that we possess a model of the environment and utilize it to learn. We consider that the key ability that makes the learning process faster is the construction of an environment model and its utilization. In this paper, we propose a method to obtain an environment model that is independent of the task, and by utilizing the model we decrease the learning time. We consider distributed autonomous agents and show that the environment model is constructed quickly by sharing the experience of each agent, even when each agent has its own independent task. To demonstrate the effectiveness of the proposed method, we apply it to Q-learning and carry out simulations of a puddle world. As a result, effective behaviors are obtained quickly.
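The abstract describes a Dyna-Q-style scheme in which a task-independent transition model is learned from experience pooled across distributed agents, while each agent runs its own Q-learning for its own task. The following Python sketch is only an illustration of that idea under simplifying assumptions (a small deterministic grid instead of the continuous puddle world; the Agent class, grid size, and reward functions are hypothetical and not taken from the paper):

```python
import random
from collections import defaultdict

# Minimal sketch, not the authors' implementation: several agents explore the
# same environment, each with its own goal (task), while sharing one
# task-independent transition model (state, action) -> next state.
# Each agent does Q-learning on real experience plus Dyna-style planning
# updates replayed from the shared model with its own reward function.

SIZE = 5
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """True environment dynamics: deterministic moves clipped to the grid."""
    x, y = state
    dx, dy = action
    return (min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1))

class Agent:
    def __init__(self, goal, shared_model, alpha=0.5, gamma=0.95, epsilon=0.1):
        self.goal = goal                    # task-specific goal state
        self.model = shared_model           # task-independent model shared by all agents
        self.q = defaultdict(float)         # Q(s, a) for this agent's own task
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def reward(self, state):
        return 1.0 if state == self.goal else 0.0

    def policy(self, state):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def q_update(self, s, a, r, s2):
        best_next = max(self.q[(s2, b)] for b in ACTIONS)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

    def real_step(self, state):
        a = self.policy(state)
        s2 = step(state, a)
        self.model[(state, a)] = s2         # reward-free experience, shared by all agents
        self.q_update(state, a, self.reward(s2), s2)
        return s2

    def plan(self, n=20):
        """Dyna-style planning: replay transitions from the shared model,
        re-labelling them with this agent's own reward function."""
        if not self.model:
            return
        for _ in range(n):
            (s, a), s2 = random.choice(list(self.model.items()))
            self.q_update(s, a, self.reward(s2), s2)

shared_model = {}                           # experience pooled across agents
agents = [Agent(goal=(4, 4), shared_model=shared_model),
          Agent(goal=(0, 4), shared_model=shared_model)]

for episode in range(50):
    for agent in agents:
        state = (0, 0)
        for _ in range(30):
            state = agent.real_step(state)
            agent.plan()
            if state == agent.goal:
                break
```

Because the shared model stores only state transitions and no rewards, each agent can reuse the other agents' experience for planning even though their goals differ, which is the task-independence the abstract emphasizes.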
Keywords :
knowledge based systems; learning (artificial intelligence); planning (artificial intelligence); robots; Dyna-Q; Q-learning; autonomous robots; complex task; distributed autonomous system; knowledge sharing; learning process; online planning; priori knowledge; puddle world; reinforcement learning; Automatic control; Control systems; Costs; Humans; Knowledge engineering; Learning; Robot control; Robotics and automation; Systems engineering and theory;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Computational Intelligence in Robotics and Automation, 2003. Proceedings. 2003 IEEE International Symposium on
Print_ISBN :
0-7803-7866-0
Type :
conf
DOI :
10.1109/CIRA.2003.1222154
Filename :
1222154