DocumentCode :
671527
Title :
Mixture kernel least mean square
Author :
Pokharel, R. ; Seth, Sachin ; Principe, Jose C.
Author_Institution :
Dept. of Electr. & Comput. Eng., Univ. of Florida, Gainesville, FL, USA
fYear :
2013
fDate :
4-9 Aug. 2013
Firstpage :
1
Lastpage :
7
Abstract :
Instead of using a single kernel, several approaches that use multiple kernels have recently been proposed in the kernel learning literature, one of which is multiple kernel learning (MKL). In this paper, we propose an alternative to MKL for selecting an appropriate kernel from a pool of predefined kernels, for a family of online kernel filters called kernel adaptive filters (KAF). The need for an alternative arises because, in a sequential learning method where the hypothesis is updated at every incoming sample, MKL would provide a new kernel at each step, and thus a new hypothesis in the new reproducing kernel Hilbert space (RKHS) associated with that kernel. This does not fit well in the KAF framework, since learning a hypothesis in a fixed RKHS is the core of KAF algorithms. Hence, we introduce an adaptive learning method, based on a competitive mixture of models, to address the kernel selection problem for KAF. We propose the mixture kernel least mean square (MxKLMS) adaptive filtering algorithm, in which kernel least mean square (KLMS) filters learned with different kernels act in parallel at each input instance and are competitively combined so that the filter with the best kernel acts as an expert for each input regime. The competition among these experts is created by a performance-based gating that chooses the appropriate expert locally. Therefore, the individual filter parameters as well as the weights for combining these filters are learned simultaneously in an online fashion. The results obtained suggest that the model not only selects the best kernel but also significantly improves prediction accuracy.
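To make the competitive-mixture idea in the abstract concrete, the following is a minimal Python sketch, not the authors' implementation: the class names, Gaussian kernels, step sizes, and the specific performance-score/softmax gating rule are illustrative assumptions about one way such a mixture of KLMS experts could be realized online.

import numpy as np

class KLMS:
    """Kernel least mean square filter with a Gaussian kernel (one 'expert')."""
    def __init__(self, kernel_width, step_size=0.2):
        self.sigma = kernel_width      # assumed Gaussian kernel width
        self.eta = step_size           # assumed KLMS step size
        self.centers = []              # stored input samples
        self.alphas = []               # corresponding expansion coefficients

    def _kernel(self, x, c):
        d = x - c
        return np.exp(-np.dot(d, d) / (2.0 * self.sigma ** 2))

    def predict(self, x):
        return sum(a * self._kernel(x, c) for a, c in zip(self.alphas, self.centers))

    def update(self, x, error):
        # Standard KLMS update: store the input as a new center, weighted by eta * error
        self.centers.append(np.asarray(x, dtype=float))
        self.alphas.append(self.eta * error)

class MxKLMS:
    """Sketch of a competitive mixture of KLMS experts with performance-based gating."""
    def __init__(self, kernel_widths, step_size=0.2, gate_rate=0.1):
        self.experts = [KLMS(s, step_size) for s in kernel_widths]
        self.scores = np.zeros(len(self.experts))  # running negative squared errors
        self.gate_rate = gate_rate                  # assumed forgetting factor for the gate

    def _gate(self):
        # Softmax over running performance scores: better-performing experts get larger weights
        w = np.exp(self.scores - self.scores.max())
        return w / w.sum()

    def step(self, x, d):
        preds = np.array([e.predict(x) for e in self.experts])
        gate = self._gate()
        y = float(gate @ preds)                 # competitively combined output
        for i, e in enumerate(self.experts):
            err_i = d - preds[i]
            e.update(x, gate[i] * err_i)        # gated KLMS update for each expert
            # Track each expert's local performance for the gating competition
            self.scores[i] = (1 - self.gate_rate) * self.scores[i] - self.gate_rate * err_i ** 2
        return y, d - y

In this sketch both the per-expert filter coefficients and the gating weights are updated at every incoming sample, mirroring the simultaneous online learning described in the abstract.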
Keywords :
Hilbert spaces; adaptive filters; learning (artificial intelligence); least mean squares methods; KAF algorithms; KAF framework; KLMS filters; MKL; adaptive filtering algorithm; adaptive learning method; fixed RKHS; kernel adaptive filters; kernel selection problem; mixture kernel least mean square filters; multiple kernel learning literature; online kernel filters; prediction accuracy; reproducing kernel Hilbert space; sequential learning method; Cost function; Kernel; Least squares approximations; Logic gates; Prediction algorithms; Time series analysis; Vectors;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
The 2013 International Joint Conference on Neural Networks (IJCNN)
Conference_Location :
Dallas, TX
ISSN :
2161-4393
Print_ISBN :
978-1-4673-6128-6
Type :
conf
DOI :
10.1109/IJCNN.2013.6706867
Filename :
6706867