• DocumentCode
    179227
  • Title
    Evaluation of HMM-based visual laughter synthesis
  • Author
    Cakmak, Huseyin; Urbain, Jerome; Tilmanne, Joelle; Dutoit, Thierry
  • Author_Institution
    TCTS Lab., Univ. of Mons, Mons, Belgium
  • fYear
    2014
  • fDate
    4-9 May 2014
  • Firstpage
    4578
  • Lastpage
    4582
  • Abstract
    In this paper we apply speaker-dependent training of Hidden Markov Models (HMMs) separately to audio and visual laughter synthesis. The two modalities are synthesized with a forced-duration approach and then combined to render audio-visual laughter on a 3D avatar. The paper focuses on the visual synthesis of laughter and its perceptual evaluation when combined with synthesized audio laughter. HMM-based audio and visual synthesis has previously been applied successfully to speech, and its extrapolation to audio laughter synthesis has already been demonstrated. This paper shows that the extrapolation to visual laughter synthesis is possible as well.
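    As a rough illustration of the forced-duration idea mentioned in the abstract (not the authors' actual pipeline, which would use HTS-style parameter generation with dynamic features), the Python sketch below fixes the number of frames spent in each HMM state and emits that state's Gaussian mean for the fixed duration; all dimensions, values, and names are hypothetical.

    import numpy as np

    # Hypothetical per-state output distributions for one laughter unit:
    # each state has a mean vector and a diagonal variance over visual
    # features (e.g. facial animation parameters).
    state_means = np.array([[0.0, 0.1], [0.8, 0.5], [0.2, 0.0]])  # 3 states x 2 features
    state_vars = np.array([[0.01, 0.02], [0.05, 0.03], [0.01, 0.01]])

    # Forced durations: number of frames spent in each state, fixed externally
    # (e.g. copied from the audio synthesis so both modalities stay aligned).
    forced_durations = [5, 12, 7]

    def synthesize_forced(means, variances, durations):
        """Generate a frame-by-frame feature trajectory with fixed state durations."""
        frames = []
        for mean, var, dur in zip(means, variances, durations):
            # Repeat the state's mean for its forced duration; a proper
            # parameter-generation algorithm would also use the variances
            # and delta features to smooth the trajectory.
            frames.append(np.tile(mean, (dur, 1)))
        return np.vstack(frames)

    trajectory = synthesize_forced(state_means, state_vars, forced_durations)
    print(trajectory.shape)  # (24, 2): one visual feature vector per frame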
  • Keywords
    audio-visual systems; avatars; extrapolation; hidden Markov models; speech synthesis; 3D avatar; HMM based visual laughter synthesis evaluation; audio-visual laughter synthesis; extrapolation; hidden Markov model; speaker-dependent training; Databases; Face; Hidden Markov models; Pipelines; Speech; Videos; Visualization; Audio; HMM; laughter; synthesis; visual;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on
  • Conference_Location
    Florence
  • Type
    conf
  • DOI
    10.1109/ICASSP.2014.6854469
  • Filename
    6854469