DocumentCode :
1629754
Title :
A non-visual user interface using speech recognition and synthesis
Author :
Mitamura, K.; Asakawa, Chieko
Author_Institution :
Res. Lab., IBM Japan Ltd., Tokyo, Japan
Volume :
3
fYear :
1999
fDate :
1999
Firstpage :
1083
Abstract :
Non-visual environments are becoming important in supporting human activities through man-machine systems. Here, a non-visual environment means a situation in which a screen and mouse are not available, for example because the user is occupied with another task involving eye-hand coordination or is visually disabled. In this paper, we propose the “Speech Pointer”, a user interface for non-visual environments that uses speech recognition and synthesis, whose aim is to enable direct access to, or pointing at, textual information. We prototyped the Speech Pointer for browsing the Web. We also prototyped another non-visual user interface on top of an existing visual application in order to find an efficient method for extending non-visual applications. The purpose of these activities is to increase the scope of human activities in non-visual environments.
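The abstract leaves the mechanism at a high level. As a rough illustration only, the Python sketch below shows one way a speech-driven pointer could map a recognized phrase directly onto a position in a page's text and read back the surrounding context. The Recognizer and Synthesizer interfaces, the speech_point function, and all parameter names are assumptions made for illustration; they are not the paper's implementation or any real speech engine's API.

# Minimal sketch (not the authors' implementation) of the "speech pointer" idea:
# a spoken phrase is matched directly against the text of a document, and the
# surrounding context is read back through a speech synthesizer. Recognizer and
# Synthesizer are hypothetical placeholders for real speech engines.

from typing import Protocol


class Recognizer(Protocol):
    def listen(self) -> str:
        """Return the phrase the user just spoke (hypothetical engine)."""
        ...


class Synthesizer(Protocol):
    def speak(self, text: str) -> None:
        """Read the given text aloud (hypothetical engine)."""
        ...


def speech_point(document: str, recognizer: Recognizer,
                 synthesizer: Synthesizer, context_chars: int = 80) -> int:
    """Jump to the first occurrence of a spoken phrase in the document text,
    read back the surrounding context, and return the match position
    (or -1 if the phrase was not found)."""
    phrase = recognizer.listen().strip().lower()
    position = document.lower().find(phrase)
    if position == -1:
        synthesizer.speak(f"'{phrase}' was not found on this page.")
        return -1
    start = max(0, position - context_chars)
    end = min(len(document), position + len(phrase) + context_chars)
    synthesizer.speak(document[start:end])
    return position

In a real system the document text would come from the rendered Web page and the position would be used to move the reading cursor, but those details are beyond what the abstract specifies.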
Keywords :
speech recognition; speech synthesis; user interfaces; Web browsing; eye-hand coordination; human activities; non-visual user interface; visually disabled; Humans; Information resources; Keyboards; Mice; Prototypes; Speech recognition; Speech synthesis; User interfaces;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
1999 IEEE International Conference on Systems, Man, and Cybernetics (IEEE SMC '99) Conference Proceedings
Conference_Location :
Tokyo
ISSN :
1062-922X
Print_ISBN :
0-7803-5731-0
Type :
conf
DOI :
10.1109/ICSMC.1999.823379
Filename :
823379