Title :
Speech visualization research based on combined feature and neural network
Author :
Wang, Jian ; Han, Zhiyan ; Lun, Shuxian
Author_Institution :
Coll. of Inf. Sci. & Eng., Bohai Univ., Jinzhou, China
Abstract :
Given the superior visual identification ability and visual memory for color of deaf-mute people, a new speech visualization method with strong classification and localization ability is proposed. It creates readable patterns by integrating different speech features into a single picture. First, a series of preprocessing steps is applied to the speech signal. Second, features are extracted: three formant features are mapped to principal color information, and intonation features are mapped to pattern information via neural network 1; all features then serve as the inputs of neural network 2. Finally, the outputs of neural network 2 are mapped to position information. The visualized speech was evaluated in a preliminary test and contrasted with the spectrogram; the results show that the visualization approach is effective for assisting deaf-mute learning and is highly robust.
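The abstract's pipeline (formants to color, intonation to pattern via one network, all features to position via a second network) can be sketched as follows. The paper does not specify network architectures, feature dimensions, or trained weights, so everything below is an illustrative assumption: two small untrained one-hidden-layer networks with arbitrary dimensions, and a simple linear scaling of formant frequencies into RGB.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    # one-hidden-layer network with tanh activations (architecture assumed)
    h = np.tanh(x @ w1 + b1)
    return np.tanh(h @ w2 + b2)

# Stage 1: three formant frequencies (Hz) -> principal RGB color.
# The scaling ranges are illustrative, not from the paper.
formants = np.array([700.0, 1220.0, 2600.0])  # e.g. F1-F3 of a vowel
color = np.clip(formants / np.array([1000.0, 3000.0, 4000.0]), 0.0, 1.0)

# Stage 2 (neural network 1): intonation features -> pattern information.
# Feature values and weight shapes are placeholders; weights are untrained.
intonation = np.array([0.3, -0.1, 0.8])
w1, b1 = rng.standard_normal((3, 8)), np.zeros(8)
w2, b2 = rng.standard_normal((8, 4)), np.zeros(4)
pattern = mlp(intonation, w1, b1, w2, b2)

# Stage 3 (neural network 2): all features -> (x, y) position in the picture
features = np.concatenate([color, pattern])
w3, b3 = rng.standard_normal((7, 8)), np.zeros(8)
w4, b4 = rng.standard_normal((8, 2)), np.zeros(2)
position = (mlp(features, w3, b3, w4, b4) + 1.0) / 2.0  # rescale to [0, 1]

print("color:", color)
print("pattern:", pattern)
print("position:", position)
```

With trained weights, each analysis frame would contribute one colored, patterned mark at the predicted position, and the marks together would form the readable picture the abstract describes.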
Keywords :
data visualisation; feature extraction; handicapped aids; neural nets; speech processing; deaf-mute learning; formant features mapped principal color information; intonation features mapped pattern information; neural network; readable patterns; spectrogram; speech signal preprocessing series; speech visualization research; visual identification ability; visual memory ability; Artificial neural networks; Auditory system; Feature extraction; Spectrogram; Speech; Training; Visualization; combined feature; neural network; speech signal; speech visualization;
Conference_Titel :
Image and Signal Processing (CISP), 2010 3rd International Congress on
Conference_Location :
Yantai
Print_ISBN :
978-1-4244-6513-2
DOI :
10.1109/CISP.2010.5646690