Title :
Robot recognizes three simultaneous speech by active audition
Author :
Nakadai, Kazuhiro ; Okuno, Hiroshi G. ; Kitano, Hiroaki
Author_Institution :
Kitano Symbiotic Syst. Project, Japan Sci. & Tech. Corp., Tokyo, Japan
Abstract :
Robots should listen to and recognize speech with their own ears, in noisy environments and in the presence of simultaneous speakers, to communicate smoothly with people in the real world. This paper presents recognition of three simultaneous speech streams based on active audition, which integrates audition with motion. Our robot audition system consists of three modules: a real-time human tracking system, an active direction-pass filter (ADPF), and a speech recognition system using multiple acoustic models. The real-time human tracking system achieves robust and accurate sound source localization and tracking through audio-visual integration. Localization experiments show that the resolution toward the front of the robot is much higher than toward the periphery. We call this phenomenon the "auditory fovea" because it is analogous to the visual fovea (the high-resolution center of the human eye). Active motions, such as turning to face the sound source, improve localization by making the best use of the auditory fovea. The ADPF achieves accurate and fast sound separation using a pair of microphones: it separates sounds originating from the direction specified by the real-time human tracking system. Because separation performance depends on localization accuracy, extraction of sound from the front is more accurate than extraction from the periphery. This means the pass range of the ADPF should be narrower toward the front than toward the periphery; such active pass-range control improves sound separation. Each separated speech stream is then recognized by the speech recognition system, which integrates the outputs of multiple acoustic models and reports the result with the maximum likelihood. Active motions such as facing a sound source also improve speech recognition, because they improve sound extraction and make it easier to integrate results per speaker using face IDs obtained by face recognition.
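The direction-pass idea behind the ADPF can be sketched as follows: for each spectral bin, estimate the arrival direction from the interaural phase difference (IPD) between the two microphones, and keep only the bins whose implied direction falls within a pass range around the target direction. The microphone spacing, sampling rate, and pass-range width below are illustrative assumptions, not the paper's values, and the paper's actual filter (which uses HRTF-derived IPD/IID and epipolar geometry) is more sophisticated than this minimal sketch.

```python
import numpy as np

def direction_pass_filter(left, right, target_deg, pass_deg=20.0,
                          fs=16000, mic_dist=0.18, n_fft=512):
    """Minimal direction-pass sketch: keep spectral bins whose interaural
    phase difference (IPD) matches the target direction within pass_deg.

    left, right: same-length mono signals from the two microphones.
    target_deg: source azimuth (0 = front); mic_dist in meters (assumed)."""
    c = 343.0                    # speed of sound, m/s
    hop = n_fft // 2
    window = np.hanning(n_fft)
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    # Pass range expressed as an interaural delay interval around the target.
    tau_lo = mic_dist * np.sin(np.radians(target_deg - pass_deg)) / c
    tau_hi = mic_dist * np.sin(np.radians(target_deg + pass_deg)) / c
    lo, hi = min(tau_lo, tau_hi), max(tau_lo, tau_hi)
    out = np.zeros(len(left))
    n_frames = 1 + (len(left) - n_fft) // hop
    for i in range(n_frames):
        s = i * hop
        L = np.fft.rfft(window * left[s:s + n_fft])
        R = np.fft.rfft(window * right[s:s + n_fft])
        ipd = np.angle(L * np.conj(R))       # per-bin phase difference
        with np.errstate(divide="ignore", invalid="ignore"):
            delay = np.where(freqs > 0, ipd / (2 * np.pi * freqs), 0.0)
        mask = (delay >= lo) & (delay <= hi)  # bins from the pass range only
        out[s:s + n_fft] += np.fft.irfft(L * mask, n_fft) * window
    return out
```

Narrowing `pass_deg` for frontal targets and widening it toward the periphery is the active pass-range control the abstract describes: the front benefits from the auditory fovea's finer localization, so a tighter filter can be used there without losing the target.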
The robot audition system, improved by active audition, is implemented on an upper-torso humanoid. The system attains localization, separation, and recognition of three simultaneous speech streams, and the results demonstrate the effectiveness of active audition.
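The integration step described above can be sketched as a per-speaker maximum-likelihood selection: each acoustic model scores its hypothesis for each separated stream, streams are grouped by the face ID from face recognition, and the highest-likelihood hypothesis wins per speaker. The dictionary keys and field names here are hypothetical, not the paper's API.

```python
def integrate_by_speaker(results):
    """Pick, for each speaker, the recognition result with maximum likelihood.

    results: list of dicts with (assumed) keys
      'face_id' - speaker identity from face recognition,
      'model'   - which acoustic model produced the hypothesis,
      'hyp'     - recognized word string,
      'loglik'  - log-likelihood of the hypothesis.
    Returns {face_id: (best hypothesis, model that produced it)}."""
    best = {}
    for r in results:
        fid = r["face_id"]
        if fid not in best or r["loglik"] > best[fid]["loglik"]:
            best[fid] = r
    return {fid: (r["hyp"], r["model"]) for fid, r in best.items()}
```

Grouping by face ID is what makes the integration "easier" when the robot faces a speaker, as the abstract notes: a frontal view yields a reliable face ID, so competing hypotheses from different acoustic models can be attributed to the same speaker before the maximum-likelihood choice is made.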
Keywords :
audio-visual systems; maximum likelihood detection; real-time systems; robots; source separation; speech processing; speech recognition; tracking; active audition; active direction-pass filter; audio-visual integration; auditory fovea; face recognition; maximum likelihood; multiple acoustic models; pass range control; real-time human tracking system; robot audition system; robot interaction; simultaneous speech recognition; sound extraction; sound separation; sound source localization; upper-torso humanoid; Acoustic noise; Active filters; Ear; Humans; Microphones; Real time systems; Robots; Robustness; Speech recognition; Working environment noise;
Conference_Titel :
Proceedings of the 2003 IEEE International Conference on Robotics and Automation (ICRA '03)
Print_ISBN :
0-7803-7736-2
DOI :
10.1109/ROBOT.2003.1241628