DocumentCode :
383138
Title :
Mutual assistance between speech and vision for human-robot interface
Author :
Yoshizaki, Mitsutoshi ; Nakamura, Akio ; Kuno, Yoshinori
Author_Institution :
Saitama Univ., Japan
Volume :
2
fYear :
2002
fDate :
2002
Firstpage :
1308
Abstract :
This paper presents a user interface for a service robot that can fetch objects requested by the user. A speech-based interface is appropriate for this application, but speech alone is not sufficient. The system also needs a vision-based interface to recognize gestures. Moreover, it needs vision capabilities to obtain real-world information about the objects mentioned in the user's speech. For example, the robot must visually locate a target object ordered by speech in order to carry out the task. This can be considered vision-assisted speech. However, vision sometimes fails to detect objects, and there are objects for which vision cannot be expected to work well. In these cases, the robot reports its current status to the user so that he/she can give the robot advice by speech. This can be considered speech-assisted vision, mediated by the user. This paper describes how this mutual assistance between speech and vision works and demonstrates promising results through experiments.
Keywords :
gesture recognition; interactive systems; man-machine systems; mobile robots; robot vision; speech recognition; gesture recognition; human-robot interaction; localization; robot vision; service robot; speech recognition; speech vision interface; user interface; Books; Humans; Object detection; Robot vision systems; Robustness; Senior citizens; Service robots; Speech recognition; Speech synthesis; User interfaces;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Intelligent Robots and Systems, 2002. IEEE/RSJ International Conference on
Print_ISBN :
0-7803-7398-7
Type :
conf
DOI :
10.1109/IRDS.2002.1043935
Filename :
1043935