• DocumentCode
    3707199
  • Title
    UTD-MHAD: A multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor
  • Author
    Chen Chen; Roozbeh Jafari; Nasser Kehtarnavaz
  • Author_Institution
    Department of Electrical Engineering, University of Texas at Dallas, USA
  • fYear
    2015
  • Firstpage
    168
  • Lastpage
    172
  • Abstract
    Human action recognition has a wide range of applications, including biometrics, surveillance, and human-computer interaction. The use of multimodal sensors for human action recognition is steadily increasing; however, few publicly available datasets capture depth camera and inertial sensor data at the same time. This paper describes a freely available dataset, named UTD-MHAD, which consists of four temporally synchronized data modalities: RGB videos, depth videos, skeleton positions, and inertial signals, captured by a Kinect camera and a wearable inertial sensor for a comprehensive set of 27 human actions. Experimental results are provided to show how this dataset can be used to study fusion approaches that combine depth camera data and inertial sensor data. This public-domain dataset benefits the multimodality research on human action recognition being conducted by various research groups.
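    The abstract's "temporally synchronized data modalities" implies aligning streams that run at different rates. Below is a minimal sketch of one common alignment technique, linear interpolation of inertial channels onto video frame timestamps. The sampling rates (a wearable inertial sensor near 50 Hz, a Kinect camera near 30 fps), the channel layout, and the function name are illustrative assumptions, not details taken from this record.

    ```python
    import numpy as np

    def align_inertial_to_frames(inertial, inertial_hz, n_frames, frame_hz):
        """Linearly interpolate each inertial channel onto video frame times.

        inertial: (samples, channels) array, e.g. 3-axis accelerometer
        plus 3-axis gyroscope readings. Rates are assumed, not from the paper.
        """
        t_inertial = np.arange(inertial.shape[0]) / inertial_hz
        t_frames = np.arange(n_frames) / frame_hz
        # Interpolate every channel independently onto the frame timestamps.
        return np.stack(
            [np.interp(t_frames, t_inertial, inertial[:, c])
             for c in range(inertial.shape[1])],
            axis=1,
        )

    # Example: 2 s of 6-channel inertial data at 50 Hz aligned to 30 fps frames.
    inertial = np.random.default_rng(0).normal(size=(100, 6))
    aligned = align_inertial_to_frames(inertial, 50.0, 60, 30.0)
    print(aligned.shape)  # (60, 6)
    ```

    After alignment, each video frame has a matching inertial feature vector, which is the usual starting point for the kind of depth-plus-inertial fusion experiments the abstract describes.
    
    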
  • Keywords
    "Cameras","Videos","Biomedical monitoring","Skeleton","Accelerometers","Wrist","Thigh"
  • Publisher
    IEEE
  • Conference_Titel
    2015 IEEE International Conference on Image Processing (ICIP)
  • Type
    conf
  • DOI
    10.1109/ICIP.2015.7350781
  • Filename
    7350781