DocumentCode :
730367
Title :
Synchronization rules for HMM-based audio-visual laughter synthesis
Author :
Cakmak, Huseyin ; Urbain, Jerome ; Dutoit, Thierry
Author_Institution :
TCTS Lab., Univ. of Mons, Mons, Belgium
fYear :
2015
fDate :
19-24 April 2015
Firstpage :
2304
Lastpage :
2308
Abstract :
In this paper we propose synchronization rules between acoustic and visual laughter synthesis systems. This work follows up on our previous studies on acoustic laughter synthesis and visual laughter synthesis. The need for synchronization rules arises from the constraint that, for laughter, HMM-based synthesis cannot be performed with a unified system in which common transcriptions are shared between modalities; acoustic and visual models are therefore trained independently, without any synchronization constraints. In this work, we propose simple rules, derived from the analysis of audio and visual laughter transcriptions, to generate visual laughter transcriptions starting from acoustic transcriptions. A perceptual Mean Opinion Score (MOS) test is conducted to evaluate the method.
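Illustrative Sketch :
As a rough illustration of the rule-based mapping described in the abstract, the minimal Python sketch below converts an acoustic laughter transcription (label, start, end) into a visual transcription by relabeling each segment while reusing the acoustic segment boundaries, so that the independently trained acoustic and visual HMM streams stay time-aligned. The segment class names and the mapping table are hypothetical assumptions chosen for illustration only; they are not the actual rules proposed in the paper.

# Hypothetical sketch, not the paper's actual rules: derive a visual laughter
# transcription from an acoustic one by rule-based label mapping, keeping the
# acoustic segment boundaries so both HMM streams remain synchronized.
from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    label: str     # transcription label
    start: float   # start time in seconds
    end: float     # end time in seconds

# Illustrative mapping from assumed acoustic laughter classes to assumed
# visual classes; these names are placeholders, not taken from the paper.
ACOUSTIC_TO_VISUAL = {
    "silence": "neutral",
    "inhalation": "open_mouth",
    "voiced_vowel": "laugh_pulse",
    "fricative": "smile",
}

def acoustic_to_visual(acoustic: List[Segment]) -> List[Segment]:
    """Map each acoustic segment to a visual one with the same boundaries."""
    visual: List[Segment] = []
    for seg in acoustic:
        v_label = ACOUSTIC_TO_VISUAL.get(seg.label, "neutral")
        # Merge consecutive segments that map to the same visual label.
        if visual and visual[-1].label == v_label:
            visual[-1].end = seg.end
        else:
            visual.append(Segment(v_label, seg.start, seg.end))
    return visual

if __name__ == "__main__":
    acoustic = [
        Segment("silence", 0.0, 0.3),
        Segment("voiced_vowel", 0.3, 0.5),
        Segment("fricative", 0.5, 0.7),
        Segment("voiced_vowel", 0.7, 0.9),
        Segment("inhalation", 0.9, 1.3),
    ]
    for seg in acoustic_to_visual(acoustic):
        print(f"{seg.label}: {seg.start:.2f}-{seg.end:.2f}")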
Keywords :
audio-visual systems; speech synthesis; synchronisation; HMM-based synthesis; acoustic laughter synthesis systems; acoustic transcriptions; audio-visual laughter synthesis systems; perceptive mean opinion score test; synchronization rules; visual laughter transcriptions; Acoustics; Databases; Hidden Markov models; Speech; Synchronization; Trajectory; Visualization; Audio-visual; HMM; laughter; synchronization; synthesis;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on
Conference_Location :
South Brisbane, QLD
Type :
conf
DOI :
10.1109/ICASSP.2015.7178382
Filename :
7178382