Title :
Beyond simple rule extraction: the extraction of planning knowledge from reinforcement learners
Author_Institution :
Dept. of CECS, Missouri Univ., Columbia, MO, USA
Abstract :
This paper discusses learning in hybrid models that goes beyond simple rule extraction from backpropagation networks. Although simple rule extraction has received considerable research attention, further development of hybrid learning models that include both symbolic and subsymbolic knowledge and that learn autonomously requires studying the autonomous learning of both subsymbolic and symbolic knowledge in integrated architectures. This paper describes knowledge extraction from neural reinforcement learning. It includes two approaches to extracting planning knowledge: the extraction of explicit, symbolic rules from neural reinforcement learning, and the extraction of complete plans. This work points toward a general framework for achieving the subsymbolic-to-symbolic transition in an integrated autonomous learning architecture.
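To make the two approaches named in the abstract concrete, below is a minimal illustrative sketch in Python: (1) turning a learned Q-function into explicit condition-action rules, and (2) chaining the resulting greedy actions into a complete plan. All names, the toy Q-table, the transition model, and the confidence margin are assumptions for illustration only, not the paper's actual algorithms.

```python
def extract_rules(q_table, margin=0.5):
    """Emit an 'IF state THEN action' rule only when the greedy action
    dominates every alternative by at least `margin` (assumed threshold)."""
    rules = {}
    for state, action_values in q_table.items():
        best = max(action_values, key=action_values.get)
        others = [v for a, v in action_values.items() if a != best]
        if not others or action_values[best] - max(others) >= margin:
            rules[state] = best
    return rules

def extract_plan(rules, transition, start, goal, max_steps=20):
    """Roll the extracted rules forward through an assumed deterministic
    transition model to obtain a complete plan (a sequence of actions)."""
    plan, state = [], start
    for _ in range(max_steps):
        if state == goal:
            return plan
        if state not in rules:
            break                      # no confident rule for this state
        action = rules[state]
        plan.append(action)
        state = transition(state, action)
    return None                        # no complete plan found

# Toy problem: three states, two actions, hand-filled Q-values.
q = {
    "start":          {"forward": 0.9, "turn": 0.1},
    "obstacle_ahead": {"forward": -0.8, "turn": 0.6},
    "near_goal":      {"forward": 1.0, "turn": 0.2},
}
step = {("start", "forward"): "obstacle_ahead",
        ("obstacle_ahead", "turn"): "near_goal",
        ("near_goal", "forward"): "goal"}

rules = extract_rules(q)
print(rules)  # {'start': 'forward', 'obstacle_ahead': 'turn', 'near_goal': 'forward'}
print(extract_plan(rules, lambda s, a: step[(s, a)], "start", "goal"))
# ['forward', 'turn', 'forward']
```

The margin-based filter reflects the idea that symbolic rules should capture reliably good decisions rather than noise; the greedy rollout shows how such rules can be composed into a plan once a transition model is available.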
Keywords :
backpropagation; knowledge acquisition; neural nets; planning (artificial intelligence); symbol manipulation; neural networks; planning knowledge extraction; reinforcement learning; rule extraction; subsymbolic knowledge; symbolic knowledge; Boltzmann distribution; collaborative work; learning; state estimation; stochastic processes; usability
Conference_Titel :
Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000)
Conference_Location :
Como, Italy
Print_ISBN :
0-7695-0619-4
DOI :
10.1109/IJCNN.2000.857882