Title :
Multi-modal feature fusion for action recognition in RGB-D sequences
Author :
Shahroudy, Amir ; Gang Wang ; Tian-Tsong Ng
Author_Institution :
Sch. of Electr. & Electron. Eng., Nanyang Technol. Univ., Singapore, Singapore
Abstract :
Microsoft Kinect's output is a multi-modal signal that simultaneously provides RGB video, depth sequences, and skeleton information. Various action recognition techniques have focused on a single modality of this signal, building their classifiers over features extracted from one channel. For better recognition performance, it is desirable to fuse this multi-modal information into an integrated set of discriminative features. Most current fusion methods merge heterogeneous features in a holistic manner and ignore the complementary properties of these modalities at finer levels. In this paper, we propose a new hierarchical bag-of-words feature fusion technique based on multi-view structured sparsity learning, which fuses atomic features from RGB and skeleton data for the task of action recognition.
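A minimal sketch of the structured-sparsity idea behind such fusion: treat each modality (RGB, skeleton) as a group of feature dimensions and apply a group-lasso (ℓ2,1) penalty, so that an uninformative modality's weights are shrunk toward zero as a whole. The function names and the ISTA solver below are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def group_soft_threshold(w, groups, t):
    """Proximal operator of the l2,1 norm: shrink each modality group by t."""
    out = w.copy()
    for idx in groups:
        norm = np.linalg.norm(w[idx])
        scale = max(0.0, 1.0 - t / norm) if norm > 0 else 0.0
        out[idx] = scale * w[idx]
    return out

def fit_group_sparse(X, y, groups, lam=1.0, n_iter=500):
    """ISTA for  0.5*||Xw - y||^2 + lam * sum_g ||w_g||_2.

    X      : (n, d) stacked per-modality features (e.g. RGB || skeleton).
    groups : list of index arrays, one per modality.
    """
    lr = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        w = group_soft_threshold(w - lr * grad, groups, lr * lam)
    return w

# Toy illustration: only the first "modality" (columns 0-4) carries signal,
# so the group penalty should zero out the second group almost entirely.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
w_true = np.zeros(10)
w_true[:5] = [1.0, 2.0, 3.0, -1.0, 0.5]
y = X @ w_true
groups = [np.arange(5), np.arange(5, 10)]
w = fit_group_sparse(X, y, groups, lam=20.0)
```

Because the penalty acts on whole groups rather than individual coordinates, it selects or discards modalities jointly, which is the key difference from a plain lasso applied to the concatenated feature vector.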
Keywords :
image fusion; image recognition; image sequences; video signal processing; Microsoft Kinect; RGB videos; RGB-D sequences; action recognition techniques; depth sequences; feature extraction; fuse atomic features; hierarchical bag-of-words feature fusion technique; multimodal feature fusion; multimodal information fusion; multiview structured sparsity learning; skeleton information; Dictionaries; Feature extraction; Fuses; Joints; Vectors; Videos; Action Recognition; Feature Fusion; Kinect; Structured Sparsity;
Conference_Titel :
Communications, Control and Signal Processing (ISCCSP), 2014 6th International Symposium on
Conference_Location :
Athens
DOI :
10.1109/ISCCSP.2014.6877819