DocumentCode :
2705333
Title :
Video Summarization based on Film Grammar
Author :
Yoshitaka, Atsuo ; Deguchi, Yoshiki
Author_Institution :
Graduate Sch. of Eng., Hiroshima Univ.
fYear :
2005
fDate :
Oct. 30 2005-Nov. 2 2005
Firstpage :
1
Lastpage :
4
Abstract :
Searching time-intrinsic content, such as movies and dramas, is often time consuming, since viewing a few screenshots is not sufficient to grasp the whole story. Summarizing such content is one way to reduce the cost of browsing and to grasp the content quickly. Most video summarization methods proposed so far extract 'conspicuous' shots from an audio/visual point of view and combine them to create a summary. These methods disregard the contextual dependency between shots or scenes. We propose a method of summary generation based on 'film grammar' that preserves the dependency between shots and scenes. Experimental results show that the proposed method provides viewers with summaries that are easier to comprehend in context.
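The contrast drawn in the abstract, between picking individually conspicuous shots and keeping contextual dependency, can be illustrated with a small sketch. The following Python snippet is not the authors' film-grammar method; the shot scores, scene labels, and function names are hypothetical. It only contrasts a generic top-k saliency baseline with a variant that keeps whole scenes so temporal context is preserved.

# Illustrative sketch only: a generic shot-selection summarizer,
# not the film-grammar method of the paper. All data is synthetic.
from dataclasses import dataclass

@dataclass
class Shot:
    index: int        # temporal position of the shot in the video
    scene: int        # scene the shot belongs to (hypothetical label)
    saliency: float   # audio/visual "conspicuousness" score (synthetic)

def top_k_summary(shots, k):
    """Baseline: pick the k most conspicuous shots, ignoring context."""
    chosen = sorted(shots, key=lambda s: s.saliency, reverse=True)[:k]
    return sorted(chosen, key=lambda s: s.index)  # restore temporal order

def scene_aware_summary(shots, k):
    """Context-keeping variant: rank scenes by their best shot and keep
    whole scenes (in temporal order) until the shot budget k is used."""
    scenes = {}
    for s in shots:
        scenes.setdefault(s.scene, []).append(s)
    ranked = sorted(scenes.values(),
                    key=lambda g: max(x.saliency for x in g), reverse=True)
    picked, budget = [], k
    for group in ranked:
        if len(group) <= budget:
            picked.extend(group)
            budget -= len(group)
    return sorted(picked, key=lambda s: s.index)

if __name__ == "__main__":
    shots = [Shot(i, i // 3, score) for i, score in
             enumerate([0.2, 0.9, 0.4, 0.1, 0.8, 0.3, 0.7, 0.6, 0.5])]
    print([s.index for s in top_k_summary(shots, 4)])        # isolated highlights
    print([s.index for s in scene_aware_summary(shots, 4)])  # whole scenes kept

The point of the second function is only that selection at the scene level keeps shots that depend on each other together, which is the kind of dependency the paper's film-grammar approach is designed to preserve.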
Keywords :
audio-visual systems; context-free grammars; feature extraction; video signal processing; audio-visual system; film grammar; shots extraction; time-intrinsic content; video summarization; Atmosphere; Cameras; Character generation; Costs; Gunshot detection systems; High-speed networks; Layout; Motion pictures; Network servers; Time factors;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Multimedia Signal Processing, 2005 IEEE 7th Workshop on
Conference_Location :
Shanghai
Print_ISBN :
0-7803-9288-4
Electronic_ISBN :
0-7803-9289-2
Type :
conf
DOI :
10.1109/MMSP.2005.248620
Filename :
4014041