Title :
GMM-based synchronization rules for HMM-based audio-visual laughter synthesis
Author :
Hüseyin Çakmak;Kévin El Haddad;Thierry Dutoit
Author_Institution :
UMONS, Place du Parc 20, 7000 Mons
Abstract :
In this paper we propose synchronization rules between acoustic and visual laughter synthesis systems. Previous works have addressed acoustic and visual laughter synthesis separately, each following an HMM-based approach. The need for synchronization rules arises from the constraint that, for laughter, HMM-based synthesis cannot be performed with a unified system in which common transcriptions are used, as has been shown to be the case for audio-visual speech synthesis. Acoustic and visual models are therefore trained independently, without any synchronization constraints. In this work, we propose rules derived from the analysis of audio and visual laughter transcriptions in order to generate a visual laughter transcription corresponding to given audio laughter data.
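To make the idea of transcription-derived synchronization rules concrete, the following is a minimal sketch, not the paper's actual method: it assumes hypothetical paired segment durations extracted from audio and visual laughter transcriptions, fits a single bivariate Gaussian to them (a one-component special case of a GMM), and predicts a visual segment duration from an audio one via the conditional mean. All data and function names are illustrative.

```python
# Illustrative sketch only: paired (audio, visual) laughter segment
# durations in seconds -- invented numbers, not data from the paper.
pairs = [(0.30, 0.45), (0.52, 0.70), (0.41, 0.60), (0.75, 0.95), (0.60, 0.80)]

def fit_gaussian_2d(data):
    """Fit a single bivariate Gaussian (means and covariances) to paired data."""
    n = len(data)
    mx = sum(a for a, _ in data) / n          # mean audio duration
    my = sum(v for _, v in data) / n          # mean visual duration
    cxx = sum((a - mx) ** 2 for a, _ in data) / n
    cyy = sum((v - my) ** 2 for _, v in data) / n
    cxy = sum((a - mx) * (v - my) for a, v in data) / n
    return mx, my, cxx, cyy, cxy

def predict_visual_duration(audio_dur, params):
    """Conditional mean E[visual | audio] of the fitted bivariate Gaussian."""
    mx, my, cxx, cyy, cxy = params
    return my + (cxy / cxx) * (audio_dur - mx)

params = fit_gaussian_2d(pairs)
# Map an observed audio segment duration to a predicted visual one.
predicted = predict_visual_duration(0.50, params)
```

A full GMM with several components (e.g. one per laughter phone class) would follow the same pattern, selecting the component responsible for the audio observation before taking the conditional mean.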
Keywords :
"Visualization","Hidden Markov models","Synchronization","Acoustics","Feature extraction","Databases","Face"
Conference_Titel :
Affective Computing and Intelligent Interaction (ACII), 2015 International Conference on
Electronic_ISBN :
2156-8111
DOI :
10.1109/ACII.2015.7344606