DocumentCode :
2714638
Title :
A combined pose, object, and feature model for action understanding
Author :
Packer, Ben ; Saenko, Kate ; Koller, Daphne
fYear :
2012
fDate :
16-21 June 2012
Firstpage :
1378
Lastpage :
1385
Abstract :
Understanding natural human activity involves not only identifying the action being performed, but also locating the semantic elements of the scene and describing the person's interaction with them. We present a system that recognizes complex, fine-grained human actions involving the manipulation of objects in realistic action sequences. Our method takes advantage of recent advances in sensors and pose trackers to learn an action model that draws on successful discriminative techniques while explicitly modeling both pose trajectories and object manipulations. By combining these elements in a single model, we are able to simultaneously recognize actions and track the location and manipulation of objects. To showcase this ability, we introduce a novel Cooking Action Dataset that contains video, depth readings, and pose tracks from a Kinect sensor. We show that our model outperforms existing state-of-the-art techniques on this dataset as well as on the VISINT dataset, which contains only video sequences.
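Illustration: the abstract describes a single model that jointly scores pose trajectories and object manipulations to recognize an action. The sketch below is a minimal, hypothetical Python illustration of that idea, not the authors' implementation: per-frame pose and object features are concatenated and scored linearly for each candidate action, and the highest-scoring label is returned. The feature dimensions, action labels, and weights are placeholder assumptions.

import numpy as np

# Hypothetical feature sizes and action labels -- placeholders, not from the paper.
POSE_DIM, OBJ_DIM = 32, 16
ACTIONS = ["pour", "stir", "chop"]

rng = np.random.default_rng(0)
# Stand-in "learned" weights: one joint pose+object weight vector per action.
weights = {a: rng.normal(size=POSE_DIM + OBJ_DIM) for a in ACTIONS}

def sequence_score(pose_feats, obj_feats, w):
    """Linear score of a sequence: sum over frames of w . [pose; object] features."""
    joint = np.concatenate([pose_feats, obj_feats], axis=1)  # shape (T, POSE_DIM + OBJ_DIM)
    return float((joint @ w).sum())

def recognize(pose_feats, obj_feats):
    """Return the action label whose combined pose/object score is highest."""
    return max(ACTIONS, key=lambda a: sequence_score(pose_feats, obj_feats, weights[a]))

# Toy usage on a random 50-frame sequence.
T = 50
pose_feats = rng.normal(size=(T, POSE_DIM))
obj_feats = rng.normal(size=(T, OBJ_DIM))
print(recognize(pose_feats, obj_feats))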
Keywords :
gesture recognition; object tracking; pose estimation; Kinect sensor; action understanding; cooking action dataset; depth readings; feature model; fine-grained human actions; natural human activity; object manipulations; object model; pose model; pose trackers; pose tracks; pose trajectories; realistic action sequences; sensors; video; Dynamics; Feature extraction; Humans; Sensors; Training; Trajectory; Visualization;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on
Conference_Location :
Providence, RI
ISSN :
1063-6919
Print_ISBN :
978-1-4673-1226-4
Type :
conf
DOI :
10.1109/CVPR.2012.6247824
Filename :
6247824