DocumentCode :
2529353
Title :
Detecting robot-directed speech by situated understanding in object manipulation tasks
Author :
Zuo, Xiang ; Iwahashi, Naoto ; Taguchi, Ryo ; Funakoshi, Kotaro ; Nakano, Mikio ; Matsuda, Shigeki ; Sugiura, Komei ; Oka, Natsuki
Author_Institution :
Adv. Telecommun. Res. Labs., Kyoto Inst. of Technol., Kyoto, Japan
fYear :
2010
fDate :
13-15 Sept. 2010
Firstpage :
608
Lastpage :
613
Abstract :
In this paper, we propose a novel method for a robot to detect robot-directed speech, that is, to distinguish speech addressed to the robot from speech addressed to other people or to the speakers themselves. The originality of this work is the introduction of a multimodal semantic confidence (MSC) measure, which is used for domain classification of input speech based on whether the speech can be interpreted as a feasible action under the current physical situation in an object manipulation task. This measure is calculated by integrating speech, object, and motion confidences with weightings optimized by logistic regression. We then combine this measure with gaze tracking and conduct experiments under conditions of natural human-robot interaction. Experimental results show that the proposed method achieves average recall and precision rates of 94% and 96%, respectively, for robot-directed speech detection.
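The abstract describes combining speech, object, and motion confidences through weights optimized by logistic regression, then thresholding the result to decide whether an utterance is robot-directed. A minimal sketch of that fusion step is shown below; the weight and bias values, function names, and threshold are illustrative assumptions, not values from the paper.

```python
import math

# Hypothetical weights and bias; in the paper these are optimized by
# logistic regression on labeled robot-directed / other-directed speech.
WEIGHTS = (1.8, 1.2, 1.5)  # speech, object, motion
BIAS = -2.0

def msc(conf_speech, conf_object, conf_motion):
    """Multimodal semantic confidence: a logistic function applied to a
    weighted combination of per-modality confidence scores in [0, 1]."""
    z = (BIAS
         + WEIGHTS[0] * conf_speech
         + WEIGHTS[1] * conf_object
         + WEIGHTS[2] * conf_motion)
    return 1.0 / (1.0 + math.exp(-z))  # probability-like score in (0, 1)

def is_robot_directed(conf_speech, conf_object, conf_motion, threshold=0.5):
    """Classify an utterance as robot-directed when the MSC score
    exceeds a decision threshold (0.5 here, as an assumption)."""
    return msc(conf_speech, conf_object, conf_motion) >= threshold
```

With these illustrative weights, an utterance whose interpretation is consistent across all three modalities (high speech, object, and motion confidence) scores well above the threshold, while an utterance that cannot be grounded as a feasible action in the current scene scores low and is rejected as not robot-directed.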
Keywords :
mobile robots; speaker recognition; user interfaces; domain classification; gaze tracking; logistic regression; multimodal semantic confidence; natural human-robot interaction; object manipulation tasks; robot-directed speech; speech detection; Current measurement; Motion measurement; Noise measurement; Robots; Speech; Speech recognition; Trajectory;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
RO-MAN, 2010 IEEE
Conference_Location :
Viareggio
ISSN :
1944-9445
Print_ISBN :
978-1-4244-7991-7
Type :
conf
DOI :
10.1109/ROMAN.2010.5598729
Filename :
5598729