• DocumentCode
    249259
  • Title
    An augmented representation of activity in video using semantic-context information
  • Author
    Khoualed, Samir ; Chateau, Thierry ; Castellani, Umberto ; Samir, Chafik
  • Author_Institution
    ISIT, Univ. of Clermont, Clermont, France
  • fYear
    2014
  • fDate
    27-30 Oct. 2014
  • Firstpage
    4171
  • Lastpage
    4175
  • Abstract
    Learning and recognizing activities in video is an important yet difficult task in computer vision. In this paper, we propose a new method that combines local and global context information to extract a bag-of-words-like representation for each space-time point. Each space-time point is described by a bag of visual words that encodes its relationships with the remaining space-time points in the video, defining the space-time context. Experiments on the KTH action-recognition benchmark show that our approach achieves accuracy competitive with the state of the art.
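    The abstract's central construction — describing each space-time point by a bag of visual words built from the labels of the remaining points in the video — can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not the authors' implementation: the codebook, the nearest-neighbour word assignment, the leave-one-out histogram, and the L1 normalization are all assumptions for the sake of the example.

    ```python
    import numpy as np

    def assign_words(descriptors, codebook):
        """Label each local descriptor with its nearest codebook entry (visual word)."""
        d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        return d2.argmin(axis=1)

    def context_histograms(descriptors, codebook):
        """For each space-time point, build a bag-of-words histogram over the
        visual-word labels of all *other* points in the video (its context)."""
        K = codebook.shape[0]
        words = assign_words(descriptors, codebook)
        global_hist = np.bincount(words, minlength=K).astype(float)
        ctx = np.empty((len(words), K))
        for i, w in enumerate(words):
            h = global_hist.copy()
            h[w] -= 1.0                       # exclude the point itself
            ctx[i] = h / max(h.sum(), 1.0)    # L1-normalize the context histogram
        return ctx
    ```

    In a pipeline following the abstract, such per-point context histograms would be pooled over the video and fed to an SVM classifier; how the paper weights local versus global context is not captured by this sketch.
    
    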
  • Keywords
    augmented reality; computer vision; video signal processing; augmented representation; bag-of-words-like representation; global context information; local context information; semantic-context information; space-time context; Accuracy; Context; Shape; Support vector machines; Trajectory; Visualization; Vocabulary; Action recognition; SVM classification; semantic shape context; space-time interest point
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2014 IEEE International Conference on Image Processing (ICIP)
  • Conference_Location
    Paris
  • Type
    conf
  • DOI
    10.1109/ICIP.2014.7025847
  • Filename
    7025847