DocumentCode :
3770313
Title :
Fixation prediction through multimodal analysis
Author :
Xiongkuo Min;Guangtao Zhai;Chunjia Hu;Ke Gu
Author_Institution :
Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, Shanghai, China
fYear :
2015
Firstpage :
1
Lastpage :
4
Abstract :
In this paper, we propose to predict human fixations by incorporating both audio and visual cues. Traditional visual attention models generally make the most of a stimulus's visual features while discarding all audio information. But in the real world, we humans not only direct our gaze according to visual saliency but may also be attracted by salient audio. Psychological experiments show that audio can influence visual attention, and subjects tend to be attracted to sound sources. Therefore, we propose to fuse audio and visual information to predict fixations. In our framework, we first localize the moving-sounding objects through multimodal analysis and generate an audio attention map, in which a greater value denotes a higher probability that a position is the sound source. Then we compute the spatial and temporal attention maps using only the visual modality. Finally, the audio, spatial, and temporal attention maps are fused, generating our final audio-visual saliency map. We gather a set of videos and collect eye-tracking data under audio-visual test conditions. Experimental results show that better performance is achieved when both audio and visual cues are considered.
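A minimal sketch of the fusion step described in the abstract, assuming the audio, spatial, and temporal attention maps have already been computed as 2-D arrays of the same shape. The linear combination, equal weights, and min-max normalization here are illustrative assumptions, not the paper's actual fusion rule or parameters.

import numpy as np

def fuse_saliency(audio_map, spatial_map, temporal_map,
                  weights=(1/3, 1/3, 1/3)):
    """Fuse audio, spatial, and temporal attention maps into one
    audio-visual saliency map (illustrative weighted-sum fusion)."""
    fused = np.zeros_like(audio_map, dtype=float)
    for w, m in zip(weights, (audio_map, spatial_map, temporal_map)):
        m = m.astype(float)
        # Normalize each map to [0, 1] so no modality dominates
        # purely because of its value range.
        rng = m.max() - m.min()
        if rng > 0:
            m = (m - m.min()) / rng
        fused += w * m
    return fused

Given three H-by-W maps A, S, and T for a frame, fuse_saliency(A, S, T) returns an H-by-W audio-visual saliency map; per-frame weights could instead be tuned or learned, which is a design choice the paper's evaluation would inform.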
Keywords :
"Visualization","Videos","Correlation","Feature extraction","Predictive models","Psychology","Computational modeling"
Publisher :
IEEE
Conference_Title :
Visual Communications and Image Processing (VCIP), 2015
Type :
conf
DOI :
10.1109/VCIP.2015.7457921
Filename :
7457921