• DocumentCode
    138686
  • Title
    Automatic segmentation and recognition of human activities from observation based on semantic reasoning

  • Author
    Ramirez-Amaro, Karinne; Beetz, Michael; Cheng, Gordon

  • Author_Institution
    Institute for Cognitive Systems, Technical University of Munich, Munich, Germany
  • fYear
    2014
  • fDate
    14-18 Sept. 2014
  • Firstpage
    5043
  • Lastpage
    5048
  • Abstract
    Automatically segmenting and recognizing human activities from observation typically requires very complex and sophisticated perception algorithms. Such systems are unlikely to run on-line on a physical system, such as a robot, because of the pre-processing steps that these vision systems usually demand. In this work, we present and demonstrate that, with an appropriate semantic representation of the activity and without such complex perception systems, it is possible to infer human activities from videos. First, we present a method to extract semantic rules based on three simple hand motions: move, not move, and tool use. Additionally, information about object properties, either ObjectActedOn or ObjectInHand, is used; these properties encapsulate the current context. These data are used to train a decision tree, which yields the semantic rules employed by a reasoning engine. In other words, we extract low-level information from videos and reason about the intended (high-level) human behaviors. The advantage of this abstract representation is that it yields more generic models of human behavior, even when the information is obtained from different scenarios. The results show that our system correctly segments and recognizes human behaviors with an accuracy of 85%. Another important aspect of our system is its scalability and adaptability toward new activities, which can be learned on demand. Our system has been fully implemented on a humanoid robot, the iCub, to experimentally validate its performance and robustness during on-line execution.
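  • Note
    The abstract describes a pipeline in which low-level features (hand motion: move, not move, tool use; object properties: ObjectActedOn, ObjectInHand) train a decision tree whose branches serve as the semantic rules used by a reasoning engine. A minimal, hypothetical Python sketch of that decision-tree step follows, assuming scikit-learn; the feature values, activity labels, and encoding are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch: map abstracted low-level features to high-level
    # activity labels with a decision tree, then print the learned branches,
    # which play the role of the paper's "semantic rules".
    from sklearn.preprocessing import OrdinalEncoder
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Each sample: [hand_motion, ObjectActedOn, ObjectInHand]
    # (example data; activities on the right are assumed for illustration)
    X_raw = [
        ["not_move", "none",  "none"],   # idle
        ["move",     "bread", "none"],   # reach
        ["move",     "none",  "bread"],  # take
        ["tool_use", "bread", "knife"],  # cut
        ["not_move", "none",  "bread"],  # hold
    ]
    y = ["idle", "reach", "take", "cut", "hold"]

    enc = OrdinalEncoder()                     # encode categorical features
    X = enc.fit_transform(X_raw)

    tree = DecisionTreeClassifier().fit(X, y)  # learn the rules

    # Inspect the branches, e.g. "if motion == tool_use -> cut".
    print(export_text(tree, feature_names=["motion", "acted_on", "in_hand"]))

    # On-line inference for a newly observed frame:
    obs = enc.transform([["move", "bread", "none"]])
    print(tree.predict(obs))                   # -> ['reach']

    Because such rules operate on abstracted features rather than raw pixels, a learned tree of this kind can transfer across scenarios, which matches the scalability argument made in the abstract.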
  • Keywords
    decision trees; humanoid robots; image representation; image segmentation; inference mechanisms; robot vision; ObjectActedOn properties; ObjectInHand properties; automatic human activity recognition; automatic human activity segmentation; complex perception systems; decision tree; hand motions; human behaviors; humanoid robot; iCub; intended human behaviors; physical system; reasoning engine; semantic activity representation; semantic reasoning based observation; semantic rule extraction; sophisticated perception algorithm; Accuracy; Cognition; Motion segmentation; Semantics; Training; Videos
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014)
  • Conference_Location
    Chicago, IL
  • Type
    conf
  • DOI
    10.1109/IROS.2014.6943279
  • Filename
    6943279