Title :
Nonlinear inverse reinforcement learning with mutual information and Gaussian process
Author :
Li, De C.; Yu Quing He; Feng Fu
Author_Institution :
Shenyang Inst. of Autom., Shenyang, China
Abstract :
In this paper, a mutual information (MI) and extreme learning machine (ELM) based inverse reinforcement learning (IRL) algorithm, termed MEIRL, is proposed to construct a nonlinear reward function. The basic idea of MEIRL is that, similar to GPIRL, the reward function is learned using a Gaussian process and the importance of each feature is obtained via automatic relevance determination (ARD). Mutual information is then employed to evaluate the impact of each feature on the reward function; based on these scores, an extreme learning machine is introduced, together with an adaptive model construction procedure, to choose an optimal subset of features, which also enhances the performance of the original GPIRL algorithm. Furthermore, to demonstrate the effectiveness of MEIRL, a highway-driving simulation is constructed. The simulation results show that MEIRL is comparable with state-of-the-art IRL algorithms in terms of generalization capability, but is more efficient when the number of features is large.
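The abstract's feature-scoring step can be illustrated with a minimal sketch: estimate the mutual information between each candidate feature and the learned reward values, then rank features by that score. This is only a generic histogram-based MI estimator for illustration, not the paper's actual implementation; the function names (`mutual_information`, `rank_features`) and the binning choice are assumptions.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram-based MI estimate (in nats) between a feature x and rewards y.
    Assumed estimator for illustration; the paper does not specify one."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                      # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)            # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)            # marginal p(y)
    nz = pxy > 0                                   # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def rank_features(X, r, bins=8):
    """Score each feature column of X by its MI with the reward vector r,
    and return feature indices sorted from most to least relevant."""
    scores = np.array([mutual_information(X[:, j], r, bins) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1], scores
```

In the paper's pipeline, the top-ranked subset of features would then be fed to the ELM-based adaptive model construction step instead of using all features.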
Keywords :
Gaussian processes; learning (artificial intelligence); ARD; ELM; GPIRL algorithm; Gaussian process; MEIRL; adaptive model construction procedure; automatic relevance determination; extreme learning machine; generalization capability; highway driving; inverse reinforcement learning algorithm; mutual information; nonlinear inverse reinforcement learning; nonlinear reward function; Adaptation models; Algorithm design and analysis; Approximation error; Bayes methods; Gaussian processes; Learning (artificial intelligence); Mutual information
Conference_Titel :
2014 IEEE International Conference on Robotics and Biomimetics (ROBIO)
DOI :
10.1109/ROBIO.2014.7090537