DocumentCode :
3651884
Title :
A real-time speech driven talking avatar based on deep neural network
Author :
Kai Zhao; Zhiyong Wu; Lianhong Cai
Author_Institution :
Tsinghua-CUHK Joint Res. Center for Media Sci., Tsinghua Univ., Shenzhen, China
fYear :
2013
Firstpage :
1
Lastpage :
4
Abstract :
This paper describes our initial work in developing a real-time speech-driven talking avatar system based on a deep neural network. The input of the system is acoustic speech and the output is the articulatory movements (synchronized with the input speech) of a 3-dimensional avatar. The mapping from the input acoustic features to the output articulatory features is achieved by means of a deep neural network (DNN). Experiments on the well-known acoustic-articulatory English speech corpus MNGU0 demonstrate that the proposed DNN-based audio-visual mapping method achieves good performance.
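The abstract describes a frame-level regression from acoustic features to articulatory features. The following is a minimal NumPy sketch of that idea, not the authors' exact model: the network size, feature dimensions (40-dim acoustic frames with ±2 frames of context in, 12-dim articulatory trajectories out), and context-stacking scheme are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    # Small random weights; biases start at zero (untrained sketch).
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

class AcousticToArticulatoryDNN:
    """Illustrative feed-forward DNN mapping acoustic frames (with
    temporal context) to articulatory feature frames."""
    def __init__(self, in_dim=40 * 5, hidden=256, out_dim=12):
        self.w1, self.b1 = init_layer(in_dim, hidden)
        self.w2, self.b2 = init_layer(hidden, hidden)
        self.w3, self.b3 = init_layer(hidden, out_dim)

    def forward(self, x):
        # Two tanh hidden layers; linear output, since this is regression.
        h1 = np.tanh(x @ self.w1 + self.b1)
        h2 = np.tanh(h1 @ self.w2 + self.b2)
        return h2 @ self.w3 + self.b3

def stack_context(frames, left=2, right=2):
    # Concatenate each frame with its neighbours so the DNN sees
    # short-term temporal context; edges are padded by repetition.
    padded = np.vstack([frames[:1]] * left + [frames] + [frames[-1:]] * right)
    n = len(frames)
    return np.hstack([padded[i:i + n] for i in range(left + right + 1)])

# Fake utterance: 100 frames of 40-dim acoustic features.
acoustic = rng.normal(size=(100, 40))
model = AcousticToArticulatoryDNN()
articulatory = model.forward(stack_context(acoustic))
print(articulatory.shape)  # one 12-dim articulatory frame per input frame
```

In a real system the network would be trained on paired acoustic-articulatory data (e.g. the MNGU0 corpus mentioned above), and the predicted articulatory trajectories would drive the avatar's face model in real time.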
Keywords :
"Speech","Acoustics","Avatars","Artificial neural networks","Training","Hidden Markov models"
Publisher :
ieee
Conference_Titel :
Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2013 Asia-Pacific
Type :
conf
DOI :
10.1109/APSIPA.2013.6694335
Filename :
6694335
Link To Document :
Back to document