DocumentCode :
2969047
Title :
The use of vector quantization in neural speech synthesis
Author :
Cawley, G.C. ; Noakes, P.D.
Author_Institution :
Neural & VLSI Syst. Lab., Essex Univ., Colchester, UK
Volume :
3
fYear :
1993
fDate :
25-29 Oct. 1993
Firstpage :
2227
Abstract :
Our previous work has indicated that multilayer perceptrons trained using the backpropagation algorithm have great difficulty in learning continuous mappings with sufficient accuracy for speech synthesis. The use of vector quantization allows networks to be trained to select a sequence of entries from a codebook of speech parameter vectors. For the network to generalise meaningfully, some correlation must exist between codebook vectors and the indices by which they are recalled (otherwise the network will be attempting to learn an essentially random mapping). This paper describes the use of the Hamming learning vector quantizer (H-LVQ), which is used to generate a codebook of speech vectors in which such a correlation exists.
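The codebook idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's H-LVQ training procedure; it only shows plain nearest-neighbour vector quantization and the Hamming distance between index bit patterns that H-LVQ aims to correlate with vector similarity. All names and the toy codebook below are illustrative assumptions.

```python
import numpy as np

def quantize(vectors, codebook):
    """Map each input vector to the index of its nearest codebook entry
    (squared Euclidean distance, ties broken toward the lower index)."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def hamming(i, j):
    """Hamming distance between the bit patterns of two codebook indices.
    H-LVQ arranges the codebook so that vectors that are close in parameter
    space tend to receive indices that are close in this sense."""
    return bin(i ^ j).count("1")

# Toy codebook of four 2-D "speech parameter" vectors (illustrative only).
codebook = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [1.1, 1.0]])
indices = quantize(np.array([[0.02, 0.02], [1.08, 1.0]]), codebook)
print(list(indices))          # nearest codebook entries for each input
print(hamming(0, 1))          # adjacent indices differ in one bit
```

With a codebook ordered this way, a network asked to output index bit patterns faces a smoother mapping than it would with randomly assigned indices, which is the motivation stated in the abstract.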
Keywords :
Hamming codes; backpropagation; multilayer perceptrons; speech coding; speech synthesis; vector quantisation; Hamming learning vector quantizer; codebook; neural speech synthesis; speech parameter vectors; bit rate; linear predictive coding; neurons; speech analysis; systems engineering and theory; very large scale integration;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
Proceedings of 1993 International Joint Conference on Neural Networks (IJCNN '93-Nagoya)
Print_ISBN :
0-7803-1421-2
Type :
conf
DOI :
10.1109/IJCNN.1993.714169
Filename :
714169