  • DocumentCode
    3607057
  • Title
    Deep Multimodal Learning for Affective Analysis and Retrieval

  • Author
    Lei Pang; Shiai Zhu; Chong-Wah Ngo

  • Author_Institution
    Dept. of Comput. Sci., City Univ. of Hong Kong, Kowloon, China
  • Volume
    17
  • Issue
    11
  • fYear
    2015
  • Firstpage
    2008
  • Lastpage
    2020
  • Abstract
    Social media has become a convenient platform for voicing opinions by posting messages, ranging from tweeting a short text to uploading a media file, or any combination of the two. Understanding the emotions perceived in this user-generated content (UGC) could shed light on emerging applications such as advertising and media analytics. Existing research on affective computing is mostly dedicated to a single medium, either text captions or visual content. Few attempts have been made at combined analysis of multiple media, even though emotion can be viewed as an expression of multimodal experience. In this paper, we explore the learning of highly non-linear relationships that exist among low-level features across different modalities for emotion prediction. Using the deep Boltzmann machine (DBM), a joint density model over the space of multimodal inputs, including visual, auditory, and textual modalities, is developed. The model is trained directly on UGC data without any labeling effort. While the model learns a joint representation over multimodal inputs, training samples with missing modalities can also be leveraged. More importantly, the joint representation enables emotion-oriented cross-modal retrieval, for example, retrieval of videos using the text query “crazy cat”. The model does not restrict the types of input and output, and hence, in principle, emotion prediction and retrieval on any combination of media are feasible. Extensive experiments on web videos and images show that the learnt joint representation can be very compact and complementary to hand-crafted features, leading to performance improvements in both emotion classification and cross-modal retrieval.
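    The sketch below is only an illustration of the general idea of learning a joint representation over concatenated modality features; it is not the authors' multimodal DBM. A single binary RBM trained with one-step contrastive divergence (CD-1) stands in for the deeper model, and the feature dimensions, sample counts, and binarized inputs are all hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class JointRBM:
        """Binary RBM over concatenated modality features (illustrative stand-in for a multimodal DBM)."""
        def __init__(self, n_visible, n_hidden, lr=0.01):
            self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
            self.b = np.zeros(n_visible)   # visible bias
            self.c = np.zeros(n_hidden)    # hidden bias
            self.lr = lr

        def hidden_probs(self, v):
            return sigmoid(v @ self.W + self.c)

        def visible_probs(self, h):
            return sigmoid(h @ self.W.T + self.b)

        def cd1_step(self, v0):
            # One step of contrastive divergence (CD-1) on a batch of visible vectors.
            ph0 = self.hidden_probs(v0)
            h0 = (rng.random(ph0.shape) < ph0).astype(float)   # sample hidden units
            v1 = self.visible_probs(h0)                        # reconstruct visibles
            ph1 = self.hidden_probs(v1)
            self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
            self.b += self.lr * (v0 - v1).mean(axis=0)
            self.c += self.lr * (ph0 - ph1).mean(axis=0)

    # Hypothetical pre-extracted, binarized features for each modality.
    visual = (rng.random((256, 64)) > 0.5).astype(float)
    audio  = (rng.random((256, 32)) > 0.5).astype(float)
    text   = (rng.random((256, 48)) > 0.5).astype(float)

    rbm = JointRBM(n_visible=64 + 32 + 48, n_hidden=128)
    batch = np.hstack([visual, audio, text])
    for _ in range(50):
        rbm.cd1_step(batch)

    # Joint hidden activations can then feed emotion classification or cross-modal retrieval.
    joint = rbm.hidden_probs(batch)
    ```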
  • Keywords
    Boltzmann machines; image retrieval; learning (artificial intelligence); social networking (online); DBM; UGC; affective analysis; affective retrieval; cross-modal retrieval; deep Boltzmann machine; deep multimodal learning; emotion classification; emotion prediction; joint density model; media file; posting messages; social media; text captions; user-generated content; visual content; voicing opinions; Feature extraction; Joints; Media; Semantics; Training; Videos; Visualization; Cross-modal retrieval; deep Boltzmann machine; emotion analysis; multimodal learning
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Multimedia
  • Publisher
    IEEE
  • ISSN
    1520-9210
  • Type
    jour
  • DOI
    10.1109/TMM.2015.2482228
  • Filename
    7277066