DocumentCode :
1937208
Title :
What type of inputs will we need for expressive speech synthesis?
Author :
Campbell, Nick
Author_Institution :
ATR Human Inf. Sci. Labs., Kyoto, Japan
fYear :
2002
fDate :
11-13 Sept. 2002
Firstpage :
95
Lastpage :
98
Abstract :
Speech synthesis is not necessarily synonymous with text-to-speech. This paper describes an implementation of a talking machine that produces multilingual conversational utterances from a combination of speaker, language, speaking-style, and content information, using icon-based input. The paper addresses the problems of specifying the text content of a conversational utterance from a combination of conceptual icons, in conjunction with language and speaker information. It concludes that, in order to specify the speech content (text details and speaking style) adequately, further selection options for speaker commitment will be required.
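Illustration (not from the paper): a minimal sketch of how the icon-based input described in the abstract might be represented as a data structure, assuming hypothetical field names for speaker, language, speaking style, selected content icons, and speaker commitment.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class UtteranceSpec:
        # Hypothetical record combining the selection dimensions the abstract
        # mentions: who speaks, in which language, in what style, about what,
        # and with what degree of speaker commitment.
        speaker: str                   # voice identifier, e.g. "female_01"
        language: str                  # e.g. "ja", "en"
        speaking_style: str            # e.g. "casual", "formal"
        content_icons: List[str] = field(default_factory=list)  # conceptual icons chosen by the user
        commitment: str = "neutral"    # speaker-commitment option, e.g. "assertive", "tentative"

    # Example: a casual Japanese greeting selected via two conceptual icons.
    spec = UtteranceSpec(
        speaker="female_01",
        language="ja",
        speaking_style="casual",
        content_icons=["greeting", "morning"],
        commitment="friendly",
    )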
Keywords :
graphical user interfaces; speech synthesis; speech-based user interfaces; content information; expressive speech synthesis; icon-based input; language information; multilingual conversational utterances; speaker information; speaking-style information; talking machine; Electrostatic precipitators; Human voice; IEEE news; Information science; Laboratories; Natural languages; Personnel; Speech processing; Speech synthesis; Synthesizers;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Proceedings of the 2002 IEEE Workshop on Speech Synthesis
Print_ISBN :
0-7803-7395-2
Type :
conf
DOI :
10.1109/WSS.2002.1224381
Filename :
1224381