Author_Institution :
Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology, Beijing 100081
Abstract :
Scale space generation is a fundamental step in almost all feature extraction algorithms. It is a critical preprocessing stage for most image and video analysis applications that rely on the invariance or covariance of local features, such as SIFT-based recognition, matching, and tracking. However, enabling real-time local feature extraction remains challenging because scale space generation carries a large computational cost. This paper proposes an optimal FPGA design for accelerating scale space generation. First, to derive a mathematical model of scale space generation that best suits the FPGA architecture, we discard the conventional template-convolution Gaussian filtering scheme and adopt a recursive, IIR-filter-based Gaussian blurring algorithm. Then, a retiming-based approach, which finds the minimum achievable clock period for any given IIR filter, is used to finalize the overall design. For 1024×768 video, the proposed design generates scale spaces at nearly 400 fps, fast enough to support most real-time applications such as object recognition, object matching, and 3D reconstruction.
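For illustration, one widely used recursive (IIR) approximation of Gaussian blurring is the Young–van Vliet filter. The abstract does not specify the paper's exact filter coefficients, filter order, or fixed-point formulation, so the sketch below is only a software reference model built on the published Young–van Vliet design as a stand-in; the function name, boundary handling, and scratch-buffer layout are illustrative assumptions, not the paper's implementation.

```c
/*
 * Sketch: recursive (IIR) Gaussian blur of one image row, Young & van Vliet style.
 * NOT the paper's filter -- coefficients below are the standard published ones,
 * used here only to illustrate the constant-cost-per-pixel recursive scheme.
 * Valid for sigma >= 0.5; boundary samples are simply replicated.
 */
#include <math.h>
#include <stddef.h>

void recursive_gaussian_row(const float *x, float *w, float *y,
                            size_t n, float sigma)
{
    /* Coefficient design from the Young-van Vliet recursive Gaussian. */
    float q  = (sigma >= 2.5f) ? 0.98711f * sigma - 0.96330f
                               : 3.97156f - 4.14554f * sqrtf(1.0f - 0.26891f * sigma);
    float q2 = q * q, q3 = q2 * q;
    float b0 = 1.57825f + 2.44413f * q + 1.42810f * q2 + 0.422205f * q3;
    float b1 = 2.44413f * q + 2.85619f * q2 + 1.26661f * q3;
    float b2 = -(1.42810f * q2 + 1.26661f * q3);
    float b3 = 0.422205f * q3;
    float B  = 1.0f - (b1 + b2 + b3) / b0;

    /* Causal (left-to-right) pass into scratch buffer w[]. */
    for (size_t i = 0; i < n; ++i) {
        float w1 = (i >= 1) ? w[i - 1] : x[0];
        float w2 = (i >= 2) ? w[i - 2] : x[0];
        float w3 = (i >= 3) ? w[i - 3] : x[0];
        w[i] = B * x[i] + (b1 * w1 + b2 * w2 + b3 * w3) / b0;
    }

    /* Anti-causal (right-to-left) pass completes the symmetric Gaussian. */
    for (size_t k = n; k-- > 0; ) {
        float y1 = (k + 1 < n) ? y[k + 1] : w[n - 1];
        float y2 = (k + 2 < n) ? y[k + 2] : w[n - 1];
        float y3 = (k + 3 < n) ? y[k + 3] : w[n - 1];
        y[k] = B * w[k] + (b1 * y1 + b2 * y2 + b3 * y3) / b0;
    }
}
```

The property such a recursive formulation exploits is that the arithmetic cost per pixel is fixed and independent of σ, unlike template convolution, whose kernel width grows with σ. That bounded, streaming-friendly datapath is what makes an IIR filter a natural fit for FPGA pipelining and for clock-period minimization by retiming.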