Title :
Implementation of fuzzy Q-learning for a soccer agent
Author :
Nakashima, Tomoharu ; Udo, Masayo ; Ishibuchi, Hisao
Author_Institution :
Dept. of Ind. Eng., Osaka Prefecture Univ., Japan
Abstract :
In this paper, we propose a reinforcement learning method called fuzzy Q-learning, in which an agent determines its action based on the inference result of a fuzzy rule-based system. We apply the proposed method to a soccer agent that learns to intercept a passed ball, i.e., to catch up with a ball passed by another agent. In the proposed method, the state space is represented by internal information maintained by the learning agent, such as the relative position and relative velocity of the ball with respect to the agent. We divide the state space into several fuzzy subspaces, defining each subspace by specifying a fuzzy partition of each axis of the state space. A reward is given to the learning agent when the distance between the ball and the agent decreases or when the agent catches up with the ball. The learning agent is expected to acquire an efficient positioning skill through trial and error.
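Code_Sketch :
A minimal sketch of the learning scheme described in the abstract, assuming triangular fuzzy partitions over a normalised state space (relative position and relative velocity of the ball), a small set of discrete candidate actions, and illustrative values for the learning rate, discount factor, and exploration rate; none of these settings, names, or dimensions are taken from the paper. Each fuzzy rule holds one q-value per action, the global Q-value is the firing-strength-weighted sum of the rule q-values, and the temporal-difference error is distributed back to the rules in proportion to their firing strengths.

# Illustrative sketch of fuzzy Q-learning for the ball-interception task.
# State variables, fuzzy partitions, and parameters are assumptions,
# not the authors' exact settings.
import numpy as np

N_SETS = 5          # fuzzy sets per state axis (assumed)
N_ACTIONS = 8       # candidate actions, e.g. movement directions (assumed)
STATE_DIM = 4       # relative position (x, y) and velocity (vx, vy) of the ball
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Triangular membership functions evenly spaced over a normalised axis [0, 1].
centers = np.linspace(0.0, 1.0, N_SETS)
width = centers[1] - centers[0]

def memberships(x):
    """Membership degree of scalar x in each triangular fuzzy set."""
    return np.clip(1.0 - np.abs(x - centers) / width, 0.0, 1.0)

def rule_weights(state):
    """Normalised firing strength of every fuzzy rule (one rule per cell of
    the fuzzy grid over the state axes), using product conjunction."""
    per_axis = [memberships(x) for x in state]
    grid = per_axis[0]
    for m in per_axis[1:]:
        grid = np.outer(grid, m).ravel()   # product over axes
    s = grid.sum()
    return grid / s if s > 0 else grid

# One q-value per (rule, action) pair.
q = np.zeros((N_SETS ** STATE_DIM, N_ACTIONS))

def select_action(weights):
    """Epsilon-greedy choice over the fuzzy-inferred Q-values."""
    if np.random.rand() < EPS:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(weights @ q))

def update(weights, action, reward, next_weights):
    """Distribute the TD error over the rules by their firing strengths."""
    target = reward + GAMMA * np.max(next_weights @ q)
    td_error = target - float(weights @ q[:, action])
    q[:, action] += ALPHA * td_error * weights

A training loop would call rule_weights and select_action on the current state, apply the chosen action in the soccer simulator, compute the reward as described in the abstract (positive when the ball-agent distance shrinks or the ball is caught), and then call update with the rule weights of the next state.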
Keywords :
fuzzy set theory; inference mechanisms; learning (artificial intelligence); multi-agent systems; pattern classification; software agents; sport; state-space methods; continuous action space; expected long-term reward; fuzzy Q-learning; fuzzy partition; fuzzy rule-based system; fuzzy subspaces; inference; learning agent; passed ball intercept; pattern classification; reinforcement learning; relative position; relative velocity; soccer agent; state space; trial-and-error; Autonomous agents; Fuzzy control; Fuzzy systems; Genetic algorithms; Industrial engineering; Knowledge based systems; Learning; Pattern classification; State-space methods; Tiles;
Conference_Title :
The 12th IEEE International Conference on Fuzzy Systems (FUZZ '03), 2003
Print_ISBN :
0-7803-7810-5
DOI :
10.1109/FUZZ.2003.1209420