DocumentCode :
2493746
Title :
Reinforcement learning via kernel temporal difference
Author :
Bae, Jihye ; Chhatbar, Pratik ; Francis, Joseph T. ; Sanchez, Justin C. ; Principe, Jose C.
Author_Institution :
Dept. of Electr. & Comput. Eng., Univ. of Florida, Gainesville, FL, USA
fYear :
2011
fDate :
Aug. 30 - Sept. 3, 2011
Firstpage :
5662
Lastpage :
5665
Abstract :
This paper introduces a kernel adaptive filter implemented with stochastic gradient on temporal differences, kernel Temporal Difference (TD)(λ), to estimate the state-action value function in reinforcement learning. The case λ=0 is studied in this paper. Experimental results show the method's applicability to learning motor state decoding during a center-out reaching task performed by a monkey. The results are compared with those of a time delay neural network (TDNN) trained with backpropagation of the temporal difference error. The experiments show that kernel TD(0) converges faster and reaches a better solution than the neural network.
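Note: the abstract describes a value estimator built as a kernel expansion whose coefficients are updated by a stochastic-gradient (KLMS-style) step on the TD error. The following is a minimal sketch of that idea for the λ=0 case, not the authors' implementation; the class name, Gaussian kernel choice, and parameters (eta, gamma, sigma) are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel between two feature vectors (assumed kernel choice)."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))

class KernelTD0:
    """Sketch of a kernel TD(0) value estimator: V(x) = sum_i alpha_i k(x_i, x)."""

    def __init__(self, eta=0.1, gamma=0.9, sigma=1.0):
        self.eta = eta        # step size of the stochastic-gradient update
        self.gamma = gamma    # discount factor
        self.sigma = sigma    # kernel bandwidth
        self.centers = []     # stored states serving as kernel centers
        self.alphas = []      # expansion coefficients

    def value(self, x):
        """Evaluate the current kernel expansion at state x."""
        return sum(a * gaussian_kernel(c, x, self.sigma)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, x_t, reward, x_next, terminal=False):
        """TD(0) step: add a center at x_t weighted by eta times the TD error."""
        v_next = 0.0 if terminal else self.value(x_next)
        delta = reward + self.gamma * v_next - self.value(x_t)
        self.centers.append(np.asarray(x_t, dtype=float))
        self.alphas.append(self.eta * delta)
        return delta
```

In use, each observed transition (x_t, reward, x_next) is passed to update(), which grows the expansion by one center; practical implementations typically bound the dictionary size, a detail omitted from this sketch.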
Keywords :
adaptive filters; delays; gradient methods; learning (artificial intelligence); neural nets; neurophysiology; stochastic processes; backpropagation; kernel adaptive filter; kernel temporal difference; motor state; reinforcement learning; state-action value function; stochastic gradient; temporal difference error; time delay neural network; Decoding; Kernel; Learning; Least squares approximation; Machine learning algorithms; Prosthetics; Vectors; Algorithms; Artificial Intelligence; Biomimetics; Brain; Electroencephalography; Humans; Pattern Recognition, Automated; Reinforcement (Psychology); User-Computer Interface;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
Conference_Location :
Boston, MA
ISSN :
1557-170X
Print_ISBN :
978-1-4244-4121-1
Electronic_ISBN :
1557-170X
Type :
conf
DOI :
10.1109/IEMBS.2011.6091370
Filename :
6091370