DocumentCode :
3672439
Title :
Video co-summarization: Video summarization by visual co-occurrence
Author :
Wen-Sheng Chu; Yale Song; Alejandro Jaimes
Author_Institution :
Robotics Institute, Carnegie Mellon University, USA
fYear :
2015
fDate :
6/1/2015
Firstpage :
3584
Lastpage :
3592
Abstract :
We present video co-summarization, a novel perspective on video summarization that exploits visual co-occurrence across multiple videos. Motivated by the observation that important visual concepts tend to appear repeatedly across videos of the same topic, we propose to summarize a video by finding the shots that co-occur most frequently across videos collected using a topic keyword. The main technical challenge is the sparsity of co-occurring patterns among the hundreds to possibly thousands of irrelevant shots in the videos being considered. To address this challenge, we develop a Maximal Biclique Finding (MBF) algorithm that is optimized to find sparsely co-occurring patterns, discarding patterns that co-occur less frequently even if they are dominant in a single video. Our algorithm is parallelizable with closed-form updates and can therefore easily scale up to handle a large number of videos simultaneously. We demonstrate the effectiveness of our approach on motion capture and self-compiled YouTube datasets. Our results suggest that summaries generated by visual co-occurrence match human-generated summaries more closely than those produced by several popular unsupervised techniques.
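The MBF idea described in the abstract can be pictured as alternating, thresholded updates of binary shot-selection vectors over a bipartite shot-similarity matrix between two videos. The sketch below is a minimal illustration of that idea only, not the authors' exact formulation: the similarity matrix `B`, the `threshold` knob, and the function name `cooccurring_shots` are assumptions introduced here for illustration.

```python
# Illustrative sketch (assumed setup, not the paper's exact MBF algorithm):
# alternately update binary selection vectors u (shots of video A) and
# v (shots of video B) over a bipartite similarity matrix B, keeping only
# shots whose average similarity to the other side's selected shots
# exceeds a threshold, which encourages a sparse co-occurring block.
import numpy as np

def cooccurring_shots(B, threshold=0.5, n_iters=20, seed=0):
    """Return index arrays of shots in each video that co-occur strongly.

    B         : (m, n) nonnegative shot-to-shot similarity matrix.
    threshold : score a shot must exceed to stay selected (assumed knob).
    """
    rng = np.random.default_rng(seed)
    m, n = B.shape
    u = rng.random(m) > 0.5          # random initial selection for video A
    v = rng.random(n) > 0.5          # random initial selection for video B
    for _ in range(n_iters):
        # Thresholded update: a shot stays selected if its average
        # similarity to the currently selected shots of the other video
        # clears the threshold.
        u_new = (B @ v) / max(v.sum(), 1) > threshold
        v_new = (B.T @ u_new) / max(u_new.sum(), 1) > threshold
        if np.array_equal(u_new, u) and np.array_equal(v_new, v):
            break                     # selections stopped changing
        u, v = u_new, v_new
    return np.flatnonzero(u), np.flatnonzero(v)

if __name__ == "__main__":
    B = np.random.rand(8, 10)        # toy similarity matrix between two videos
    idx_a, idx_b = cooccurring_shots(B, threshold=0.55)
    print("video A shots:", idx_a, "video B shots:", idx_b)
```

Applied pairwise across all videos retrieved for a topic keyword, scores of this kind could be aggregated per shot to rank shots for the summary; the pairwise updates are independent, which is what makes the approach easy to parallelize.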
Keywords :
"Visualization","Feature extraction","Image color analysis","Bipartite graph","YouTube","Clustering algorithms","Standards"
Publisher :
ieee
Conference_Titel :
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
ISSN :
1063-6919
Type :
conf
DOI :
10.1109/CVPR.2015.7298981
Filename :
7298981