Title :
A fully pipelined kernel normalised least mean squares processor for accelerated parameter optimisation
Author :
Nicholas J. Fraser; Duncan J. M. Moss; JunKyu Lee; Stephen Tridgell; Craig T. Jin; Philip H. W. Leong
Author_Institution :
School of Electrical and Information Engineering, Building J03, The University of Sydney, NSW 2006, Australia
Abstract :
Kernel adaptive filters (KAFs) are online machine learning algorithms that are amenable to highly efficient streaming implementations. They require only a single pass through the data during training and can act as universal approximators, i.e., they can approximate any continuous function with arbitrary accuracy. KAFs belong to the family of kernel methods, which apply an implicit nonlinear mapping of input data to a high-dimensional feature space, permitting learning algorithms to be expressed entirely in terms of inner products. This approach avoids explicit projection into the feature space, enabling computational efficiency. In this paper, we propose the first fully pipelined floating-point implementation of the kernel normalised least mean squares (KNLMS) algorithm for regression. Independent training tasks, necessary for parameter optimisation, fill the L cycles of pipeline latency, ensuring the pipeline does not stall. Together with other optimisations that reduce resource utilisation and latency, our core achieves 160 GFLOPS on a Virtex 7 XC7VX485T FPGA, and the PCI-based system implementation is 70× faster than an optimised software implementation on a desktop processor.
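The single-pass, inner-product-only update described in the abstract can be sketched in software. The following Python is a minimal illustration of a KNLMS update with a coherence-based sparsification of the dictionary; the Gaussian kernel, function names, and hyperparameter values are illustrative assumptions, not details of the paper's FPGA design:

```python
import numpy as np

def gaussian_kernel(x, centres, gamma=1.0):
    # k(x, c) = exp(-gamma * ||x - c||^2), evaluated against all dictionary centres
    return np.exp(-gamma * np.sum((x - centres) ** 2, axis=-1))

def knlms_train(X, d, eta=0.2, eps=1e-2, mu0=0.9, gamma=1.0):
    """Single-pass KNLMS training (sketch).

    eta  : step size; eps : regulariser; mu0 : coherence threshold.
    Returns the dictionary, the weight vector alpha, and the a priori errors.
    """
    dictionary = [X[0]]          # dictionary of kernel centres
    alpha = np.zeros(1)          # weights, one per centre
    errors = []
    for x, y in zip(X, d):
        k = gaussian_kernel(x, np.array(dictionary), gamma)
        e = y - alpha @ k        # a priori prediction error
        errors.append(e)
        # coherence criterion: admit x only if no centre represents it well
        if np.max(k) <= mu0:
            dictionary.append(x)
            alpha = np.append(alpha, 0.0)
            k = np.append(k, 1.0)  # k(x, x) = 1 for the Gaussian kernel
        # normalised LMS update on the kernelised input vector
        alpha += (eta / (eps + k @ k)) * e * k
    return np.array(dictionary), alpha, np.array(errors)
```

Because each update only needs kernel evaluations and dot products against the current dictionary, every step maps to a fixed-depth datapath; this is the property the pipelined hardware implementation exploits.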
Keywords :
Kernel; Dictionaries; Adders; Optimization; Training; Accuracy; Computer architecture
Conference_Titel :
2015 25th International Conference on Field Programmable Logic and Applications (FPL)
DOI :
10.1109/FPL.2015.7293952