DocumentCode :
730762
Title :
Voice conversion using deep Bidirectional Long Short-Term Memory based Recurrent Neural Networks
Author :
Lifa Sun ; Shiyin Kang ; Kun Li ; Helen Meng
Author_Institution :
Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Hong Kong, China
fYear :
2015
fDate :
19-24 April 2015
Firstpage :
4869
Lastpage :
4873
Abstract :
This paper investigates the use of Deep Bidirectional Long Short-Term Memory based Recurrent Neural Networks (DBLSTM-RNNs) for voice conversion. Frame-based methods using conventional Deep Neural Networks (DNNs) do not directly model temporal correlations across speech frames, which limits the quality of the converted speech. To improve the naturalness and continuity of the converted speech, we propose a sequence-based conversion method using DBLSTM-RNNs to model not only the frame-wise relationship between the source and target voices, but also the long-range context dependencies in the acoustic trajectory. Experiments show that DBLSTM-RNNs outperform DNNs, achieving a Mean Opinion Score of 3.2 versus 2.3. Moreover, DBLSTM-RNNs without dynamic features outperform DNNs with dynamic features.
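The sequence-based mapping described in the abstract can be sketched as a stacked bidirectional LSTM that converts a source speaker's spectral feature sequence into the target speaker's. The snippet below is a minimal, hypothetical illustration in PyTorch, not the authors' implementation: the DBLSTMConverter class name, the 40-dimensional spectral features, and the layer sizes are assumptions chosen for clarity.

```python
# Hypothetical sketch of a DBLSTM-based voice conversion network.
# Assumptions (not from the paper): PyTorch, 40-dim spectral features,
# 3 bidirectional LSTM layers of 256 units each.
import torch
import torch.nn as nn

class DBLSTMConverter(nn.Module):
    """Maps a source speaker's spectral feature sequence to the target
    speaker's, frame by frame, while the bidirectional recurrent layers
    capture long-range context in both time directions."""
    def __init__(self, feat_dim=40, hidden_dim=256, num_layers=3):
        super().__init__()
        # Stacked bidirectional LSTM: each layer sees the whole utterance
        # forwards and backwards, unlike a frame-wise DNN.
        self.blstm = nn.LSTM(
            input_size=feat_dim,
            hidden_size=hidden_dim,
            num_layers=num_layers,
            batch_first=True,
            bidirectional=True,
        )
        # Project the concatenated forward/backward hidden states back
        # to the target spectral feature dimension.
        self.proj = nn.Linear(2 * hidden_dim, feat_dim)

    def forward(self, src_feats):
        # src_feats: (batch, frames, feat_dim) source spectral features
        hidden, _ = self.blstm(src_feats)
        return self.proj(hidden)

if __name__ == "__main__":
    model = DBLSTMConverter()
    utterance = torch.randn(1, 200, 40)   # one 200-frame utterance
    converted = model(utterance)
    print(converted.shape)                # torch.Size([1, 200, 40])
```

Because the whole utterance is processed as a sequence, the network can smooth the acoustic trajectory across frames, which is the property the paper credits for the improved naturalness over frame-based DNN conversion.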
Keywords :
recurrent neural nets; speech processing; DBLSTM-RNNs; DNNs; acoustic trajectory; deep bidirectional long short-term memory based recurrent neural networks; frame-based methods; long-range context dependency; mean opinion scores; sequence-based conversion method; speech frames; temporal correlations; voice conversion; Acoustics; Context; Logic gates; Speech; Training; bidirectional long short-term memory; dynamic features; recurrent neural networks
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Conference_Location :
South Brisbane, QLD, Australia
Type :
conf
DOI :
10.1109/ICASSP.2015.7178896
Filename :
7178896