DocumentCode
3673955
Title
Exploring Fisher vector and deep networks for action spotting
Author
Zhe Wang; Limin Wang; Wenbin Du; Yu Qiao
Author_Institution
Shenzhen Key Lab of Computer Vision & Pattern Recognition
fYear
2015
fDate
6/1/2015
Firstpage
10
Lastpage
14
Abstract
This paper describes our method and results for track 2 of the ChaLearn Looking at People (LAP) challenge 2015. Our approach utilizes Fisher vectors and iDT features for action spotting, and we improve its performance in two respects: (i) we incorporate interaction labels into the training process; (ii) by visualizing our results on the validation set, we find that our previous method [10] is weak at detecting action class 2, and we improve it by introducing multiple thresholds. Moreover, we exploit deep neural networks to extract both appearance and motion representations for this task. However, our current deep network fails to yield better performance than our Fisher vector based approach and needs further exploration. For this reason, we submit the results obtained with our Fisher vector approach, which achieves a Jaccard index of 0.5385 and ranks 1st in track 2.
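The abstract reports performance as a Jaccard index over temporal action segments. Below is a minimal sketch of how such an overlap score can be computed for a single predicted interval against a ground-truth interval; the interval representation and any aggregation over classes and videos are assumptions here, and the official ChaLearn LAP 2015 evaluation protocol may differ in detail.

```python
def jaccard_index(pred, gt):
    """Jaccard index (intersection over union) of two frame intervals.

    Each interval is a (start, end) tuple with start < end.
    """
    intersection = max(0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - intersection
    return intersection / union if union > 0 else 0.0


# Example: a predicted action segment partially overlapping the ground truth.
print(jaccard_index((10, 50), (20, 60)))  # 0.6
```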
Keywords
"Videos","Neural networks","Feature extraction","Training","Yttrium","Convolutional codes","Optical computing"
Publisher
ieee
Conference_Titel
2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Electronic_ISSN
2160-7516
Type
conf
DOI
10.1109/CVPRW.2015.7301330
Filename
7301330
Link To Document