DocumentCode :
896717
Title :
Bottom-up spatiotemporal visual attention model for video analysis
Author :
Rapantzikos, K. ; Tsapatsoulis, N. ; Avrithis, Y. ; Kollias, S.
Author_Institution :
Sch. of Electr. & Comput. Eng., Nat. Tech. Univ. of Athens, Zografou
Volume :
1
Issue :
2
fYear :
2007
fDate :
6/1/2007 12:00:00 AM
Firstpage :
237
Lastpage :
248
Abstract :
The human visual system (HVS) has the ability to fixate quickly on the most informative (salient) regions of a scene, thereby reducing the inherent visual uncertainty. Computational visual attention (VA) schemes have been proposed to account for this important characteristic of the HVS. A video analysis framework based on a spatiotemporal VA model is presented. A novel scheme is proposed for generating saliency in video sequences that takes into account both the spatial extent and the dynamic evolution of regions. To achieve this goal, a common, image-oriented computational model of saliency-based visual attention is extended to handle spatiotemporal analysis of video in a volumetric framework. The main claim is that attention acts as an efficient preprocessing step to obtain a compact representation of the visual content in the form of salient events/objects. The model has been implemented, and qualitative as well as quantitative examples illustrating its performance are shown.
Keywords :
image sequences; video signal processing; bottom-up spatiotemporal visual attention model; image-oriented computational model; dynamic evolution of regions; saliency-based visual attention; video analysis; video sequences; visual content; visual uncertainty
fLanguage :
English
Journal_Title :
Image Processing, IET
Publisher :
iet
ISSN :
1751-9659
Type :
jour
DOI :
10.1049/iet-ipr:20060040
Filename :
4225407