DocumentCode :
902963
Title :
Extracting moods from pictures and sounds: towards truly personalized TV
Author :
Hanjalic, Alan
Volume :
23
Issue :
2
fYear :
2006
fDate :
March 1, 2006
Firstpage :
90
Lastpage :
100
Abstract :
This paper considers how we feel about the content we see or hear. As opposed to cognitive content information, which comprises facts about genre, temporal content structures, and spatiotemporal content elements, we are interested in obtaining information about the feelings, emotions, and moods evoked by a speech, audio, or video clip. We refer to the latter as the affective content, and to terms such as happy or exciting as the affective labels of an audiovisual signal. In the first part of the paper, we explore the possibilities for representing and modeling the affective content of an audiovisual signal so as to effectively bridge the affective gap. Without losing generality, we refer to this signal simply as video, which we see as an image sequence with an accompanying soundtrack. We then show the high potential of affective video content analysis for enhancing the content recommendation functionalities of future PVRs and VOD systems. We conclude the paper by outlining some interesting research challenges in the field.
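As a purely illustrative aside (the feature names, weights, and thresholds below are hypothetical placeholders and do not come from the paper), the idea of assigning an affective label such as "exciting" to a clip can be sketched in Python as a mapping from low-level audiovisual measurements to a coarse arousal estimate:

    # Illustrative sketch only: features, weights, and thresholds are invented
    # for demonstration and are not the author's method.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ClipFeatures:
        """Hypothetical low-level descriptors for one video clip, each in [0, 1]."""
        motion_activity: float
        shot_cut_rate: float
        audio_energy: float

    def arousal_score(f: ClipFeatures) -> float:
        """Combine the descriptors into a single arousal estimate in [0, 1].
        Equal weighting is an arbitrary illustrative choice."""
        return (f.motion_activity + f.shot_cut_rate + f.audio_energy) / 3.0

    def affective_label(f: ClipFeatures) -> str:
        """Map the arousal estimate to a coarse affective label."""
        score = arousal_score(f)
        if score > 0.66:
            return "exciting"
        if score > 0.33:
            return "moderate"
        return "calm"

    if __name__ == "__main__":
        clips: List[ClipFeatures] = [
            ClipFeatures(motion_activity=0.9, shot_cut_rate=0.8, audio_energy=0.7),
            ClipFeatures(motion_activity=0.1, shot_cut_rate=0.2, audio_energy=0.15),
        ]
        for i, clip in enumerate(clips):
            print(f"clip {i}: arousal={arousal_score(clip):.2f}, label={affective_label(clip)}")

In a recommendation setting of the kind the abstract mentions, such labels could then be matched against a user's mood preferences; the actual representation and modeling choices are the subject of the paper itself.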
Keywords :
audio signal processing; content-based retrieval; feature extraction; music; video retrieval; video signal processing; audiovisual signal content; cognitive content information; content recommendation functionalities; genre contents; image sequence; mood extraction; personalized TV; spatiotemporal content elements; temporal content structures; video content analysis; Algorithm design and analysis; Data mining; Image retrieval; Information analysis; Layout; Mood; Motion pictures; Signal analysis; Signal processing algorithms; TV broadcasting;
fLanguage :
English
Journal_Title :
Signal Processing Magazine, IEEE
Publisher :
IEEE
ISSN :
1053-5888
Type :
jour
DOI :
10.1109/MSP.2006.1621452
Filename :
1621452