Title :
An augmented representation of activity in video using semantic-context information
Author :
Khoualed, Samir ; Chateau, Thierry ; Castellani, Umberto ; Samir, Chafik
Author_Institution :
ISIT, Univ. of Clermont, Clermont, France
Abstract :
Learning and recognizing activity in videos is an important yet challenging task in computer vision. In this paper, we propose a new method that combines local and global context information to extract a bag-of-words-like representation for each space-time interest point. Each space-time point is described by a bag of visual words encoding its relationships with the remaining space-time points in the video, which defines its space-time context. Experiments on the KTH action recognition benchmark show that our approach achieves accuracy competitive with the state of the art.
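The abstract describes each space-time interest point by a histogram over the visual words of all remaining points in the video. The sketch below is a minimal, illustrative reading of that idea, not the authors' implementation: the function name, the Gaussian space-time proximity weighting, and the vocabulary size are assumptions introduced here for clarity.

```python
import numpy as np

def context_descriptors(points, word_labels, n_words, sigma=1.0):
    """For each space-time point, build a bag-of-words histogram over the
    visual-word labels of the *other* points, weighted by space-time
    proximity (the weighting scheme is an assumption, not from the paper).

    points      : (N, 3) array of (x, y, t) interest-point coordinates
    word_labels : (N,) array of visual-word indices in [0, n_words)
    n_words     : size of the visual vocabulary
    sigma       : bandwidth of the assumed Gaussian proximity weighting
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    descriptors = np.zeros((n, n_words))
    for i in range(n):
        # space-time distances from point i to every point in the video
        d = np.linalg.norm(points - points[i], axis=1)
        w = np.exp(-(d ** 2) / (2.0 * sigma ** 2))
        w[i] = 0.0                       # exclude the point itself
        # accumulate weights into the bins of the other points' words
        np.add.at(descriptors[i], word_labels, w)
        s = descriptors[i].sum()
        if s > 0:
            descriptors[i] /= s          # normalize to a histogram
    return descriptors

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(size=(50, 3))        # toy (x, y, t) interest points
    labels = rng.integers(0, 10, size=50)  # toy visual-word assignments
    D = context_descriptors(pts, labels, n_words=10)
    print(D.shape)                         # (50, 10): one context descriptor per point
```

Per the keywords, such per-point context descriptors would then feed an SVM classifier; that stage is not sketched here.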
Keywords :
augmented reality; computer vision; video signal processing; augmented representation; bag-of-words-like representation; global context information; local context information; semantic-context information; space-time context; Accuracy; Context; Shape; Support vector machines; Trajectory; Visualization; Vocabulary; Action recognition; SVM classification; semantic shape context; space-time interest point
Conference_Title :
2014 IEEE International Conference on Image Processing (ICIP)
Conference_Location :
Paris, France
DOI :
10.1109/ICIP.2014.7025847