DocumentCode
2496930
Title
An approach to generating two-level video abstraction
Author
Cheng, Wen-Gang ; Xu, De
Author_Institution
Dept. of Comput. Sci. & Technol., Northern Jiaotong Univ., Beijing, China
Volume
5
fYear
2003
fDate
2-5 Nov. 2003
Firstpage
2896
Abstract
Video abstraction is a short summary of the content of a longer video document. Most existing video abstraction methods work at the shot level, which is not sufficient for meaningful browsing and is sometimes too fine-grained for users. In this paper, we propose a novel approach that generates video abstraction at two levels: the shot level and the scene level. We put forward a method for extracting key frames from shots according to each shot's content variation. An updated time-adaptive algorithm then groups the shots into scenes, and representative frames are extracted within each scene by constructing a minimum spanning tree. Key frames and representative frames represent the content of shots and scenes, respectively, and their organized sequences form the two-level video abstraction. Experiments on real-world movies show that this method provides users with a better video summary at different levels.
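The abstract outlines a three-step pipeline: key-frame extraction driven by content variation within each shot, time-adaptive grouping of shots into scenes, and minimum-spanning-tree-based selection of representative frames per scene. The Python sketch below illustrates one possible reading of those steps; the histogram-style feature vectors, the distance metric, the thresholds, the time-decay term, and the MST-degree selection rule are all illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch of a two-level video abstraction pipeline (shot level and
# scene level).  Frames and shots are summarised here as feature vectors
# (e.g. colour histograms); all thresholds are placeholder assumptions.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree


def key_frames(shot_feats, var_threshold=0.3):
    """Shot level: keep a frame as a key frame when it differs enough
    (in feature space) from the previously kept key frame."""
    keys = [0]
    for i in range(1, len(shot_feats)):
        if np.linalg.norm(shot_feats[i] - shot_feats[keys[-1]]) > var_threshold:
            keys.append(i)
    return keys


def group_shots_into_scenes(shot_feats, shot_times, base_threshold=0.5, decay=0.01):
    """Scene level: time-adaptive grouping.  Visually similar shots join the
    current scene; the similarity threshold tightens as the temporal gap
    grows (the linear decay term is an assumption, not the paper's rule)."""
    scenes = [[0]]
    for i in range(1, len(shot_feats)):
        gap = shot_times[i] - shot_times[i - 1]
        threshold = max(base_threshold - decay * gap, 0.0)  # farther apart -> stricter
        dist = np.linalg.norm(shot_feats[i] - shot_feats[scenes[-1][-1]])
        if dist < threshold:
            scenes[-1].append(i)
        else:
            scenes.append([i])
    return scenes


def representative_frame(frame_feats):
    """Build a minimum spanning tree over the frames of one scene and return
    the index of its most connected node, used here as a simple centrality
    heuristic for the representative frame (an assumption)."""
    n = len(frame_feats)
    if n == 1:
        return 0
    feats = np.asarray(frame_feats)
    dists = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
    mst = minimum_spanning_tree(dists).toarray()
    mst = mst + mst.T                      # symmetrise the MST edge weights
    degrees = (mst > 0).sum(axis=1)        # count MST edges incident to each frame
    return int(np.argmax(degrees))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: one shot of 20 drifting frames, plus 6 shots with start times.
    frames = np.cumsum(rng.normal(scale=0.05, size=(20, 16)), axis=0)
    print("key frames of a sample shot:", key_frames(frames))

    shot_feats = rng.random((6, 16))
    shot_times = np.array([0.0, 4.0, 9.0, 40.0, 44.0, 50.0])
    for scene in group_shots_into_scenes(shot_feats, shot_times):
        rep = representative_frame(shot_feats[scene])
        print("scene", scene, "-> representative shot", scene[rep])
```

The ordered key frames from each shot form the fine-grained abstraction, while the representative frames (one per scene) form the coarse-grained one, matching the two levels described in the abstract.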
Keywords
image representation; video signal processing; key frames; longer video document; minimum spanning tree; scene level; shot level; time adaptive algorithm; two level video abstraction; Computer science; Data mining; Digital TV; Focusing; Indexing; Layout; Motion pictures; Road transportation; Sampling methods; Videoconference;
fLanguage
English
Publisher
ieee
Conference_Titel
Machine Learning and Cybernetics, 2003 International Conference on
Print_ISBN
0-7803-8131-9
Type
conf
DOI
10.1109/ICMLC.2003.1260057
Filename
1260057