• DocumentCode
    1783716
  • Title
    Multi-modal feature fusion for action recognition in RGB-D sequences
  • Author
    Shahroudy, Amir; Wang, Gang; Ng, Tian-Tsong
  • Author_Institution
    Sch. of Electr. & Electron. Eng., Nanyang Technol. Univ., Singapore, Singapore
  • fYear
    2014
  • fDate
    21-23 May 2014
  • Firstpage
    1
  • Lastpage
    4
  • Abstract
    Microsoft Kinect's output is a multi-modal signal which provides RGB video, depth sequences, and skeleton information simultaneously. Various action recognition techniques have focused on single modalities of the signal and built their classifiers over features extracted from one of these channels. For better recognition performance, it is desirable to fuse this multi-modal information into an integrated set of discriminative features. Most current fusion methods merge heterogeneous features in a holistic manner and ignore the complementary properties of these modalities at finer levels. In this paper, we propose a new hierarchical bag-of-words feature fusion technique based on multi-view structured sparsity learning to fuse atomic features from RGB and skeleton data for the task of action recognition.
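    The abstract names multi-view structured sparsity learning without detailing it. As a hedged illustration only (not the paper's actual method), a common building block of structured sparsity is the group-lasso proximal operator, which shrinks each modality's coefficient block as a unit so an uninformative modality can be dropped entirely. The variable names and the two-group RGB/skeleton split below are assumptions for the sketch:

    ```python
    import numpy as np

    def group_soft_threshold(w, groups, lam):
        """Block soft-thresholding: proximal operator of the group-lasso
        penalty lam * sum_g ||w_g||_2. Each group (modality block) is
        either shrunk toward zero together or zeroed out entirely."""
        out = np.zeros_like(w)
        for idx in groups:
            norm = np.linalg.norm(w[idx])
            if norm > lam:
                out[idx] = (1.0 - lam / norm) * w[idx]
        return out

    # Toy fused feature vector: first 4 dims stand in for RGB features,
    # last 3 for skeleton features (a deliberately weak modality here).
    rng = np.random.default_rng(0)
    w = np.concatenate([rng.normal(size=4), 0.01 * rng.normal(size=3)])
    groups = [np.arange(0, 4), np.arange(4, 7)]
    w_sparse = group_soft_threshold(w, groups, lam=0.5)
    ```

    With this penalty, the weak skeleton block falls below the threshold and is zeroed as a whole, while the RGB block survives with its norm shrunk, which is the modality-level selection behavior structured sparsity is used for in fusion settings.
    
    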
  • Keywords
    image fusion; image recognition; image sequences; video signal processing; Microsoft Kinect; RGB videos; RGB-D sequences; action recognition techniques; depth sequences; feature extraction; fuse atomic features; hierarchical bag-of-words feature fusion technique; multimodal feature fusion; multimodal information fusion; multiview structured sparsity learning; skeleton information; Dictionaries; Feature extraction; Fuses; Joints; Vectors; Videos; Action Recognition; Feature Fusion; Kinect; Structured Sparsity;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2014 6th International Symposium on Communications, Control and Signal Processing (ISCCSP)
  • Conference_Location
    Athens, Greece
  • Type
    conf
  • DOI
    10.1109/ISCCSP.2014.6877819
  • Filename
    6877819