Title :
Speech articulatory analysis through time delay neural networks
Author :
Lavagetto, Fabio
Author_Institution :
DIST, Genoa Univ., Italy
Abstract :
The approach described is based on time delay neural networks for estimating articulatory parameters from acoustic speech, and on image vector quantization for the visual synthesis. Once the system has been trained on a reference speaker, visual cues are associated in real time with each 20 ms frame of incoming speech. Preliminary results are reported from ongoing experiments with both normal-hearing people and deaf persons, aimed at estimating some of the many perceptual thresholds involved in the complex task of speech reading from synthetic images. This experimental phase is carried out in cooperation with FIADDA, the Italian association of the families of hearing-impaired children, and is based on a flexible simulation environment.
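The record gives no implementation details, so the following is only a minimal sketch of the kind of model the abstract names: a time delay neural network that maps a short window of acoustic frames (one per 20 ms) to a vector of articulatory/visual parameters for each frame. All layer sizes, feature dimensions, and temporal contexts below are assumptions for illustration, not the paper's architecture.

```python
# Illustrative TDNN sketch (assumed parameters, not the paper's model).
# A TDNN is a 1-D convolution over time: each output frame depends on a
# small window of neighbouring acoustic frames, giving the "time delay" effect.
import torch
import torch.nn as nn

class TDNN(nn.Module):
    def __init__(self, n_acoustic=12, n_articulatory=8):
        super().__init__()
        self.net = nn.Sequential(
            # first layer sees a context of 5 consecutive 20 ms frames
            nn.Conv1d(n_acoustic, 32, kernel_size=5, padding=2),
            nn.Tanh(),
            # second layer widens the temporal context by 3 more frames
            nn.Conv1d(32, 32, kernel_size=3, padding=1),
            nn.Tanh(),
            # map each frame to a vector of articulatory (visual) parameters
            nn.Conv1d(32, n_articulatory, kernel_size=1),
        )

    def forward(self, x):
        # x: (batch, n_acoustic, n_frames) -> (batch, n_articulatory, n_frames)
        return self.net(x)

if __name__ == "__main__":
    model = TDNN()
    speech = torch.randn(1, 12, 100)   # 100 frames = 2 s of speech at 20 ms/frame
    params = model(speech)             # one articulatory estimate per 20 ms frame
    print(params.shape)                # torch.Size([1, 8, 100])
```

In the system described, each per-frame output vector would then be used to select a mouth image from a vector-quantized codebook built for the reference speaker, producing the synthetic visual cues in real time.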
Keywords :
delays; image coding; neural nets; real-time systems; speech processing; speech recognition; speech synthesis; vector quantisation; FIADDA; acoustic speech; articulatory estimation; deaf persons; flexible simulation environment; hearing impaired children; image vector quantization; incoming speech; normal hearing people; perceptual thresholds; real time; reference speaker; speech articulatory analysis; synthetic images; time delay neural networks; visual cues; visual synthesis; Auditory system; Delay effects; Delay estimation; Loudspeakers; Network synthesis; Neural networks; Real time systems; Speech analysis; Speech synthesis; Vector quantization;
Conference_Titel :
Proceedings of the Second New Zealand International Two-Stream Conference on Artificial Neural Networks and Expert Systems, 1995
Conference_Location :
Dunedin
Print_ISBN :
0-8186-7174-2
DOI :
10.1109/ANNES.1995.499495