DocumentCode
1634331
Title
Q-learning using fuzzified states and weighted actions and its application to omni-directional mobile robot control
Author
Lee, Dong-Hyun ; Park, In-Won ; Kim, Jong-Hwan
Author_Institution
Dept. of Electr. Eng., KAIST, Daejeon, South Korea
fYear
2009
Firstpage
102
Lastpage
107
Abstract
The conventional Q-learning algorithm is described by a finite number of discretized states and discretized actions. When the system is represented in a continuous domain, this may cause an abrupt transition of action as the state rapidly changes. To avoid this abrupt transition, the learning system requires finely tuned states. However, as the number of states grows, the learning time increases significantly and the system becomes computationally expensive. To solve this problem, this paper proposes a novel Q-learning algorithm that uses fuzzified states and weighted actions to update its state-action values. By applying the concept of fuzzy sets to the states of Q-learning and using weighted actions, the agent responds efficiently to rapid changes of the state. The proposed algorithm is applied to an omni-directional mobile robot, and the results demonstrate the effectiveness of the proposed approach.
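The abstract describes the mechanism only at a high level. The following is a minimal Python sketch of one common fuzzy Q-learning formulation consistent with that description: states are fuzzified with membership functions, the executed action is a membership-weighted blend of each fired state's selected action, and the TD update is distributed to fired states in proportion to their membership degrees. The class name FuzzyQLearner, the triangular membership functions, and the specific weighting scheme are illustrative assumptions, not the paper's published equations.

import numpy as np

class FuzzyQLearner:
    """Sketch of Q-learning over fuzzified states with weighted actions."""

    def __init__(self, state_centers, actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.centers = np.asarray(state_centers, dtype=float)  # 1-D state centers
        self.actions = np.asarray(actions, dtype=float)        # discrete action set
        self.Q = np.zeros((len(self.centers), len(self.actions)))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def memberships(self, s):
        # Triangular membership functions centered on each discretized state
        # (an assumed fuzzification; the paper may use a different shape).
        width = self.centers[1] - self.centers[0]
        mu = np.maximum(0.0, 1.0 - np.abs(s - self.centers) / width)
        return mu / max(mu.sum(), 1e-12)  # normalized firing degrees

    def act(self, s):
        mu = self.memberships(s)
        # Epsilon-greedy selection per fired state, then a membership-weighted
        # blend, so the continuous action varies smoothly with the state.
        picks = [np.random.randint(len(self.actions)) if np.random.rand() < self.eps
                 else int(np.argmax(self.Q[i])) for i in range(len(self.centers))]
        a = float(np.dot(mu, self.actions[picks]))
        return a, mu, picks

    def update(self, mu, picks, reward, s_next):
        mu_next = self.memberships(s_next)
        # Value of the next fuzzified state: membership-weighted max over actions.
        v_next = float(np.dot(mu_next, self.Q.max(axis=1)))
        for i, a_idx in enumerate(picks):
            td = reward + self.gamma * v_next - self.Q[i, a_idx]
            self.Q[i, a_idx] += self.alpha * mu[i] * td  # scale by firing degree

A hypothetical usage, e.g. for one control dimension of a mobile robot:

agent = FuzzyQLearner(state_centers=np.linspace(-1, 1, 9),
                      actions=np.linspace(-0.5, 0.5, 5))
a, mu, picks = agent.act(0.13)
agent.update(mu, picks, reward=1.0, s_next=0.17)

Because every fired state contributes to both the blended action and the update, the policy avoids the abrupt action transitions that discretized Q-learning exhibits at state-cell boundaries.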
Keywords
fuzzy set theory; learning (artificial intelligence); mobile robots; Q-learning algorithm; discretized actions; discretized states; fine-tuned states; fuzzified states; omnidirectional mobile robot control; state-action value; weighted actions; Control systems; Fuzzy control; Fuzzy logic; Fuzzy sets; Fuzzy systems; Learning systems; Mobile robots; Optimal control; Power system modeling; Robot control
fLanguage
English
Publisher
ieee
Conference_Titel
2009 IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA)
Conference_Location
Daejeon, South Korea
Print_ISBN
978-1-4244-4808-1
Electronic_ISBN
978-1-4244-4809-8
Type
conf
DOI
10.1109/CIRA.2009.5423227
Filename
5423227
Link To Document