  • DocumentCode
    3672250
  • Title
    A discriminative CNN video representation for event detection
  • Author
    Zhongwen Xu; Yi Yang; Alexander G. Hauptmann
  • Author_Institution
    QCIS, University of Technology, Sydney, Australia
  • fYear
    2015
  • fDate
    6/1/2015
  • Firstpage
    1798
  • Lastpage
    1807
  • Abstract
    In this paper, we propose a discriminative video representation for event detection over a large-scale video dataset when only limited hardware resources are available. The focus of this paper is to effectively leverage deep Convolutional Neural Networks (CNNs) to advance event detection, where only frame-level static descriptors can be extracted by the existing CNN toolkits. This paper makes two contributions to the inference of CNN video representation. First, while average pooling and max pooling have long been the standard approaches to aggregating frame-level static features, we show that performance can be significantly improved by taking advantage of an appropriate encoding method. Second, we propose using a set of latent concept descriptors as the frame descriptor, which enriches visual information while keeping it computationally affordable. The integration of the two contributions results in a new state-of-the-art performance in event detection over the largest video datasets. Compared to improved Dense Trajectories, which has been recognized as the best video representation for event detection, our new representation improves the Mean Average Precision (mAP) from 27.6% to 36.8% on the TRECVID MEDTest 14 dataset and from 34.0% to 44.6% on the TRECVID MEDTest 13 dataset.
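    The abstract contrasts average/max pooling of frame-level CNN descriptors with richer encoding methods. A minimal NumPy sketch of these aggregation schemes, with a toy VLAD-style residual encoding standing in for the "appropriate encoding method"; all shapes, cluster counts, and data here are illustrative, not taken from the paper:

    ```python
    import numpy as np

    # Toy data: 30 frames, each with an 8-dim frame-level CNN descriptor.
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(30, 8))

    # Baseline aggregations mentioned in the abstract:
    avg_pooled = frames.mean(axis=0)  # average pooling -> (8,)
    max_pooled = frames.max(axis=0)   # max pooling     -> (8,)

    # A minimal VLAD-style encoding with k hypothetical cluster centers:
    # assign each frame to its nearest center, accumulate the residuals,
    # and concatenate, yielding a richer k*8-dim video-level vector.
    k = 4
    centers = rng.normal(size=(k, 8))
    dists = ((frames[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    assign = np.argmin(dists, axis=1)          # nearest center per frame
    vlad = np.zeros((k, 8))
    for i, c in enumerate(assign):
        vlad[c] += frames[i] - centers[c]      # accumulate residuals
    vlad = vlad.flatten()
    vlad /= np.linalg.norm(vlad) + 1e-12       # L2 normalization
    ```

    The pooled vectors keep the frame descriptor's dimensionality, while the encoding expands it by the number of centers, which is one way such encodings can carry more information than a single mean or max.
    
    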
  • Keywords
    "Event detection","Encoding","Feature extraction","Trajectory","Standards","Training","Visualization"
  • Publisher
    IEEE
  • Conference_Titel
    2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • Electronic_ISSN
    1063-6919
  • Type
    conf
  • DOI
    10.1109/CVPR.2015.7298789
  • Filename
    7298789