• DocumentCode
    3672319
  • Title
    Learning a non-linear knowledge transfer model for cross-view action recognition
  • Author
    Hossein Rahmani; Ajmal Mian
  • Author_Institution
    Computer Science and Software Engineering, The University of Western Australia, Australia
  • fYear
    2015
  • fDate
    6/1/2015
  • Firstpage
    2458
  • Lastpage
    2466
  • Abstract
    This paper concerns action recognition from unseen and unknown views. We propose unsupervised learning of a non-linear model that transfers knowledge from multiple views to a canonical view. The proposed Non-linear Knowledge Transfer Model (NKTM) is a deep network, with weight decay and sparsity constraints, which finds a shared high-level virtual path from videos captured from different unknown viewpoints to the same canonical view. The strength of our technique is that we learn a single NKTM for all actions and all camera viewing directions. Thus, NKTM does not require action labels during learning and knowledge of the camera viewpoints during training or testing. NKTM is learned once only from dense trajectories of synthetic points fitted to mocap data and then applied to real video data. Trajectories are coded with a general codebook learned from the same mocap data. NKTM is scalable to new action classes and training data as it does not require re-learning. Experiments on the IXMAS and N-UCLA datasets show that NKTM outperforms existing state-of-the-art methods for cross-view action recognition.
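    The abstract describes NKTM as a deep network, trained with weight decay and a sparsity constraint, that maps codebook-encoded trajectory descriptors from an unknown viewpoint to a shared canonical view. The following is a minimal sketch of that kind of objective; the layer count, feature dimensions, tanh non-linearity, KL-based sparsity penalty, and all hyperparameter values are assumptions for illustration and are not taken from the paper.

    ```python
    import numpy as np

    # Assumed dimensions for bag-of-words trajectory descriptors; the
    # paper's actual codebook size and layer widths are not given here.
    rng = np.random.default_rng(0)
    d_in, d_hidden, d_out = 2000, 1000, 2000

    # Two-layer non-linear mapping from an arbitrary-view descriptor x to a
    # canonical-view descriptor y: a sketch of the knowledge-transfer idea,
    # not the authors' exact architecture.
    W1 = rng.normal(0.0, 0.01, (d_hidden, d_in))
    b1 = np.zeros(d_hidden)
    W2 = rng.normal(0.0, 0.01, (d_out, d_hidden))
    b2 = np.zeros(d_out)

    def forward(x):
        """Shared hidden ('virtual path') layer followed by a linear output."""
        h = np.tanh(W1 @ x + b1)
        return W2 @ h + b2, h

    def loss(x, y, lam=1e-4, beta=1e-3, rho=0.05):
        """Reconstruction error + weight decay + KL sparsity on hidden units."""
        y_hat, h = forward(x)
        recon = 0.5 * np.sum((y_hat - y) ** 2)
        decay = 0.5 * lam * (np.sum(W1 ** 2) + np.sum(W2 ** 2))
        # Map tanh activations from (-1, 1) into (0, 1) for the KL term.
        rho_hat = np.clip((h + 1.0) / 2.0, 1e-6, 1 - 1e-6).mean()
        sparsity = beta * (rho * np.log(rho / rho_hat)
                           + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
        return recon + decay + sparsity

    x = rng.random(d_in)   # descriptor from an unknown viewpoint (synthetic)
    y = rng.random(d_out)  # corresponding canonical-view descriptor (synthetic)
    total = loss(x, y)
    ```

    Because the same weights serve every action class and viewpoint, no action labels or camera parameters appear anywhere in this objective, consistent with the unsupervised, view-agnostic training the abstract describes.
    
    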
  • Keywords
    "Computational modeling","Training"
  • Publisher
    IEEE
  • Conference_Titel
    2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • ISSN
    1063-6919
  • Type
    conf
  • DOI
    10.1109/CVPR.2015.7298860
  • Filename
    7298860