• DocumentCode
    3673971
  • Title
    Mining discriminative states of hands and objects to recognize egocentric actions with a wearable RGBD camera
  • Author
    Shaohua Wan; J. K. Aggarwal
  • Author_Institution
    Dept. of Electrical and Computer Engineering, The University of Texas at Austin, United States
  • fYear
    2015
  • fDate
    6/1/2015 12:00:00 AM
  • Firstpage
    36
  • Lastpage
    43
  • Abstract
    Recognizing egocentric actions is of increasing interest to the computer vision community. Conceptually, an egocentric action is largely identifiable by the states of hands and objects. For example, “drinking soda” is essentially composed of two sequential states: one first “takes up the soda can”, then “drinks from the soda can”. While existing algorithms commonly use manually defined states to train action classifiers, we present a novel model that automatically mines discriminative states for recognizing egocentric actions. To mine discriminative states, we propose a novel kernel function and formulate a Multiple Kernel Learning based framework to learn adaptive weights for different states. Experiments on three benchmark datasets, i.e., RGBD-Ego, ADL, and GTEA, clearly show that our recognition algorithm outperforms state-of-the-art algorithms.
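    The Multiple Kernel Learning idea described in the abstract can be illustrated with a minimal sketch: one base kernel per hypothesized state (e.g., hand configuration, manipulated object) is combined with learned weights and fed to a kernel SVM. The RBF kernels, the coarse grid search over weights, and all feature shapes below are illustrative assumptions, not the authors' formulation or code.

```python
# Illustrative MKL-style sketch (assumptions, not the paper's method):
# combine per-state base kernels with weights chosen by cross-validation.
import numpy as np
from itertools import product
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def rbf_kernel(X, Y, gamma=1.0):
    """Standard RBF kernel between row-wise feature matrices."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(kernels, weights):
    """Weighted sum of base kernels (one per hypothesized 'state')."""
    return sum(w * K for w, K in zip(weights, kernels))

def learn_weights(kernels, y, grid=np.linspace(0, 1, 5)):
    """Pick kernel weights on a coarse simplex grid by cross-validation;
    a simple stand-in for a proper MKL optimization."""
    best, best_score = None, -np.inf
    for w in product(grid, repeat=len(kernels)):
        if not np.isclose(sum(w), 1.0):
            continue
        K = combined_kernel(kernels, w)
        score = cross_val_score(SVC(kernel="precomputed"), K, y, cv=3).mean()
        if score > best_score:
            best, best_score = w, score
    return np.array(best), best_score

# Toy usage: random features stand in for per-state descriptors
# (e.g., hand pose and object appearance); shapes are assumptions.
rng = np.random.default_rng(0)
X_state1, X_state2 = rng.normal(size=(60, 16)), rng.normal(size=(60, 8))
y = rng.integers(0, 2, size=60)
kernels = [rbf_kernel(X_state1, X_state1), rbf_kernel(X_state2, X_state2)]
weights, score = learn_weights(kernels, y)
clf = SVC(kernel="precomputed").fit(combined_kernel(kernels, weights), y)
```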
  • Keywords
    "Skin","Kernel","Feature extraction","Object segmentation","Videos","Histograms","Cameras"
  • Publisher
    ieee
  • Conference_Titel
    2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  • Electronic_ISBN
    2160-7516
  • Type
    conf
  • DOI
    10.1109/CVPRW.2015.7301346
  • Filename
    7301346