Title :
Space-Time Video Montage
Author :
Hong-Wen Kang ; Yasuyuki Matsushita ; Xiaoou Tang ; Xue-Quan Chen
Author_Institution :
University of Science and Technology of China, Hefei, China
Abstract :
Conventional video summarization methods focus predominantly on summarizing videos along the time axis, such as building a movie trailer: the resulting trailer tends to retain much empty space in the background of the video frames while discarding much informative video content due to the size limit. In this paper we propose a novel space-time video summarization method which we call space-time video montage. The method simultaneously analyzes both the spatial and temporal information distribution in a video sequence and extracts the visually informative space-time portions of the input videos. The informative video portions are represented as volumetric layers. The layers are then packed together in a small output video volume such that the total amount of visual information in the video volume is maximized. To achieve the packing, we develop a new algorithm based upon the first-fit and graph cut optimization techniques. Since our method is able to cut off spatially and temporally less informative portions, it is able to generate much more compact yet highly informative output videos. The effectiveness of our method is validated by extensive experiments over a wide variety of videos.
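The abstract outlines a three-stage pipeline: estimate a spatio-temporal information (saliency) distribution, extract the informative portions as volumetric layers, and pack the layers into a smaller output volume using first-fit placement and graph cut optimization. The Python sketch below only illustrates that flow under simplifying assumptions and is not the authors' implementation: saliency is approximated by temporal and spatial gradients, layers are contiguous high-saliency temporal segments, and the graph-cut seam computation is replaced by a per-voxel max-saliency blend. All function names, thresholds, and the out_len parameter are illustrative.

import numpy as np

def saliency_volume(video):
    """Per-voxel informativeness proxy: temporal change plus spatial gradient magnitude."""
    gray = video.mean(axis=-1) if video.ndim == 4 else video       # (T, H, W)
    temporal = np.abs(np.diff(gray, axis=0, prepend=gray[:1]))     # frame-to-frame change
    gy, gx = np.gradient(gray, axis=(1, 2))                        # spatial structure
    return temporal + 0.5 * np.hypot(gx, gy)

def extract_layers(sal):
    """Return (start, end) frame ranges whose mean saliency exceeds the video-wide mean."""
    score = sal.reshape(sal.shape[0], -1).mean(axis=1)
    thresh, layers, start = score.mean(), [], None
    for t, s in enumerate(score):
        if s >= thresh and start is None:
            start = t
        elif s < thresh and start is not None:
            layers.append((start, t)); start = None
    if start is not None:
        layers.append((start, sal.shape[0]))
    return layers

def first_fit_pack(video, sal, layers, out_len):
    """Greedy first-fit packing of layers into a shorter output volume; overlaps are
    resolved per voxel by keeping the more salient source (stand-in for graph cut)."""
    T, H, W = sal.shape
    out = np.zeros((out_len,) + video.shape[1:], dtype=video.dtype)
    out_sal = np.zeros((out_len, H, W))
    for a, b in sorted(layers, key=lambda ab: ab[1] - ab[0], reverse=True):  # longest first
        L = min(b - a, out_len)
        seg, seg_sal = video[a:a + L], sal[a:a + L]
        for off in range(out_len - L + 1):
            # first offset where the layer is, on average, more informative than
            # whatever already occupies that slot
            if seg_sal.mean() > out_sal[off:off + L].mean():
                keep = seg_sal > out_sal[off:off + L]              # per-voxel winner mask
                out[off:off + L][keep] = seg[keep]
                out_sal[off:off + L][keep] = seg_sal[keep]
                break                                              # layer placed
        # layers that improve no slot are dropped, respecting the output size budget
    return out

# Usage on a synthetic clip: 120 input frames are packed into a 40-frame montage.
rng = np.random.default_rng(0)
clip = rng.random((120, 48, 64, 3))
sal = saliency_volume(clip)
montage = first_fit_pack(clip, sal, extract_layers(sal), out_len=40)
print(montage.shape)    # (40, 48, 64, 3)

In the paper the packing is driven by first-fit placement together with graph cut optimization over layer boundaries; the max-saliency blend above merely stands in for that seam computation.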
Keywords :
Asia; Computer vision; Data mining; Data security; Motion pictures; Pattern recognition; Space technology; Streaming media; Video sequences; Videoconference;
Conference_Titel :
Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on
Conference_Location :
New York, NY, USA
Print_ISBN :
0-7695-2597-0
DOI :
10.1109/CVPR.2006.284