DocumentCode :
3661096
Title :
Direct conversion from facial myoelectric signals to speech using Deep Neural Networks
Author :
Lorenz Diener;Matthias Janke;Tanja Schultz
Author_Institution :
Cognitive Systems Lab, Karlsruhe Institute of Technology (KIT), Germany
fYear :
2015
fDate :
7/1/2015 12:00:00 AM
Firstpage :
1
Lastpage :
7
Abstract :
This paper presents our first results on using Deep Neural Networks for surface electromyographic (EMG) speech synthesis. The proposed approach enables a direct mapping from EMG signals, captured from the articulatory muscle movements, to the acoustic speech signal. Features are extracted from multiple EMG channels and fed into a feed-forward neural network that maps them to the target acoustic speech output. We show that this approach is capable of generating speech output from the input EMG signal, and compare the results to a prior mapping technique based on Gaussian mixture models. The comparison is conducted via objective Mel-Cepstral distortion scores and subjective listening tests; it shows that the proposed Deep Neural Network approach yields substantial improvements on both evaluation criteria.
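The abstract describes two concrete ingredients: a feed-forward network mapping stacked EMG features to acoustic (Mel-cepstral) frames, and the Mel-Cepstral distortion (MCD) used as the objective score. A minimal sketch of both follows; the dimensions (input size, hidden width, 25 Mel-cepstral coefficients) and the two-hidden-layer topology are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper): stacked EMG
# features from several channels and context frames -> 450 inputs;
# 25 Mel-cepstral coefficients as the acoustic target frame.
n_in, n_hidden, n_out = 450, 512, 25

def init_layer(n_i, n_o):
    """He-initialized weight matrix and zero bias for one layer."""
    return rng.normal(0.0, np.sqrt(2.0 / n_i), (n_i, n_o)), np.zeros(n_o)

W1, b1 = init_layer(n_in, n_hidden)
W2, b2 = init_layer(n_hidden, n_hidden)
W3, b3 = init_layer(n_hidden, n_out)

def forward(x):
    """Feed-forward mapping: stacked EMG feature vector -> MCEP frame."""
    h = np.maximum(0.0, x @ W1 + b1)   # ReLU hidden layer 1
    h = np.maximum(0.0, h @ W2 + b2)   # ReLU hidden layer 2
    return h @ W3 + b3                 # linear output layer

def mel_cepstral_distortion(c_ref, c_syn):
    """Standard MCD in dB between two MCEP frames (0th coeff. excluded)."""
    diff = c_ref[1:] - c_syn[1:]
    return (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diff ** 2))

x = rng.normal(size=n_in)              # one frame of stacked EMG features
mcep = forward(x)
print(mcep.shape)                      # one 25-dimensional MCEP frame
```

In practice the network would be trained on parallel EMG/audio recordings and the predicted MCEP sequence passed through a vocoder; this sketch only shows the shape of the mapping and the evaluation metric.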
Keywords :
Electromyography
Publisher :
ieee
Conference_Titel :
2015 International Joint Conference on Neural Networks (IJCNN)
Electronic_ISBN :
2161-4407
Type :
conf
DOI :
10.1109/IJCNN.2015.7280404
Filename :
7280404