Title :
Vocal imitation using physical vocal tract model
Author :
Kanda, Hisashi ; Ogata, Tetsuya ; Komatani, Kazunori ; Okuno, Hiroshi G.
Author_Institution :
Kyoto Univ., Kyoto
Date :
Oct. 29 2007-Nov. 2 2007
Abstract :
A vocal imitation system was developed using a computational model that supports the motor theory of speech perception. A critical problem in vocal imitation is how to generate the speech sounds produced by adults, whose vocal tracts have physical properties (i.e., articulatory motions) differing from those of infants' vocal tracts. To solve this problem, a model based on the motor theory of speech perception was constructed. This model suggests that infants simulate speech generation by estimating their own articulatory motions in order to interpret the speech sounds of adults. Applying this model enables the vocal imitation system to estimate articulatory motions for speech sounds that it has not actually generated. The system was implemented using a Recurrent Neural Network with Parametric Bias (RNNPB) and a physical vocal tract model, the Maeda model. Experimental results demonstrated that the system was sufficiently robust to individual differences in speech sounds and could imitate vowel sounds it had not previously experienced.
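The abstract describes an RNNPB whose parametric-bias (PB) vector is, after training, optimized against an observed sound sequence so that articulatory motions can be estimated for sounds the system never produced itself. The following is a minimal sketch of that recognition scheme, assuming a PyTorch recurrent network that maps acoustic features to Maeda-style articulatory parameters; all dimensions, names, and the exact input/output pairing are illustrative assumptions, not the authors' implementation.

```python
# Minimal RNNPB-style sketch (assumed, not the paper's code): a small
# parametric-bias (PB) vector is concatenated to the input at every time
# step; after training, the weights are frozen and only the PB is optimized
# to fit an observed sequence ("recognition" of an unheard sound).
import torch
import torch.nn as nn


class RNNPB(nn.Module):
    def __init__(self, in_dim, pb_dim, hidden_dim, out_dim):
        super().__init__()
        self.rnn = nn.RNN(in_dim + pb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, out_dim)

    def forward(self, x, pb):
        # x:  (batch, time, in_dim)  e.g. acoustic features such as formants
        # pb: (batch, pb_dim)        parametric bias, held constant over time
        pb_seq = pb.unsqueeze(1).expand(-1, x.size(1), -1)
        h, _ = self.rnn(torch.cat([x, pb_seq], dim=-1))
        return self.out(h)           # e.g. predicted articulatory parameters


def recognize_pb(model, x, target, pb_dim, steps=200, lr=0.1):
    """Freeze the trained weights and search only for the PB vector that
    best explains an observed sequence (the RNNPB recognition phase)."""
    pb = torch.zeros(x.size(0), pb_dim, requires_grad=True)
    opt = torch.optim.Adam([pb], lr=lr)
    for p in model.parameters():
        p.requires_grad_(False)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x, pb), target)
        loss.backward()
        opt.step()
    return pb.detach()


# Usage sketch with random stand-in data (dimensions are assumptions only).
model = RNNPB(in_dim=3, pb_dim=2, hidden_dim=32, out_dim=7)
x = torch.randn(1, 50, 3)        # 50 frames of 3 acoustic features
target = torch.randn(1, 50, 7)   # 7 articulatory parameters per frame
pb = recognize_pb(model, x, target, pb_dim=2)
print(pb)
```

In the paper's framework, the recovered PB (and the associated articulatory trajectory) would then drive a physical vocal tract model such as the Maeda model to reproduce the sound; that synthesis step is outside this sketch.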
Keywords :
humanoid robots; motion estimation; recurrent neural nets; speech processing; Maeda model; articulatory motion estimation; motor theory; physical vocal tract model; recurrent neural network with parametric bias; speech perception; speech sound generation; vocal imitation; vowel sound; Computational modeling; Humans; Intelligent robots; Learning systems; Motion estimation; Pediatrics; Recurrent neural networks; Speech processing;
Conference_Title :
Intelligent Robots and Systems, 2007. IROS 2007. IEEE/RSJ International Conference on
Conference_Location :
San Diego, CA
Print_ISBN :
978-1-4244-0912-9
Electronic_ISBN :
978-1-4244-0912-9
DOI :
10.1109/IROS.2007.4399137