DocumentCode
178583
Title
Grassmannian Representation of Motion Depth for 3D Human Gesture and Action Recognition
Author
Slama, R.; Wannous, H.; Daoudi, M.
Author_Institution
LIFL, France
fYear
2014
fDate
24-28 Aug. 2014
Firstpage
3499
Lastpage
3504
Abstract
Recently developed commodity depth sensors open up new possibilities for rich descriptors that capture the geometric features of the observed scene. Here, we propose an original approach to represent geometric features extracted from the depth motion space, capturing both the geometric appearance and the dynamics of the human body simultaneously. In this approach, sequence features are modeled temporally as subspaces lying on the Grassmann manifold. Classification is carried out by computing probability density functions on the tangent space of each class, taking advantage of the geometric structure of the Grassmann manifold. The experimental evaluation is performed on three existing datasets covering various challenges: MSR-Action 3D, UT-Kinect and MSR-Gesture3D. Results show that our approach outperforms state-of-the-art methods, reaching accuracies of 98.21% on MSR-Gesture3D and 95.25% on UT-Kinect, and achieves a competitive 86.21% on MSR-Action 3D.
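The pipeline outlined in the abstract (each sequence mapped to a subspace, i.e. a point on the Grassmann manifold, then classified via densities on per-class tangent spaces) can be sketched roughly as below. This is not the authors' code: the per-frame depth features are assumed to be already extracted as d-dimensional vectors, the subspace dimension k and the per-class base points are illustrative choices, and a regularised Gaussian stands in for whatever density model the paper actually fits on the tangent space.

```python
# Minimal sketch, under the assumptions stated above; not the authors' implementation.
import numpy as np

def sequence_to_subspace(frames, k=10):
    """Map a (T, d) array of per-frame depth features to an orthonormal (d, k)
    basis, i.e. a point on the Grassmann manifold G(d, k)."""
    U, _, _ = np.linalg.svd(np.asarray(frames, dtype=float).T, full_matrices=False)
    return U[:, :k]

def grassmann_log(X, Y):
    """Log map of Y at base point X on G(d, k); returns a (d, k) tangent vector."""
    A = X.T @ Y                         # (k, k), assumed invertible
    M = (Y - X @ A) @ np.linalg.inv(A)  # horizontal component of Y relative to X
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.arctan(s)) @ Vt

def fit_class_density(subspaces):
    """Fit a Gaussian to one class in the tangent space at a base point.
    Here the first training subspace serves as base (a Karcher mean would be
    a more principled choice); assumes at least two samples per class."""
    base = subspaces[0]
    vecs = np.stack([grassmann_log(base, Y).ravel() for Y in subspaces])
    mu = vecs.mean(axis=0)
    cov = np.cov(vecs, rowvar=False) + 1e-3 * np.eye(vecs.shape[1])  # regularised
    return base, mu, cov

def log_likelihood(x, mu, cov):
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet)

def classify(test_frames, class_models, k=10):
    """class_models: dict label -> (base, mu, cov). Returns the most likely label."""
    Y = sequence_to_subspace(test_frames, k)
    scores = {lab: log_likelihood(grassmann_log(base, Y).ravel(), mu, cov)
              for lab, (base, mu, cov) in class_models.items()}
    return max(scores, key=scores.get)
```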
Keywords
feature extraction; geometry; gesture recognition; image classification; image representation; image sequences; probability; 3D human gesture recognition; Grassmannian representation; MSR-Gesture3D; MSR-action 3D; UT-kinect; action recognition; classification task; depth motion space; geometrical feature extraction; probability density functions; sequence features; Accuracy; Computational modeling; Feature extraction; Joints; Manifolds; Three-dimensional displays; Vectors;
fLanguage
English
Publisher
ieee
Conference_Titel
2014 22nd International Conference on Pattern Recognition (ICPR)
Conference_Location
Stockholm
ISSN
1051-4651
Type
conf
DOI
10.1109/ICPR.2014.602
Filename
6977314
Link To Document