Title :
Integration of eye-gaze, voice and manual response in multimodal user interface
Author_Institution :
Nat. Key Lab. of Human Factors, Hangzhou Univ., China
Abstract :
This paper reports the utility of eye-gaze, voice and manual response in the design of a multimodal user interface. A device- and application-independent model (VisualMan) of selection and manipulation was developed and validated in a 3D cube manipulation task. The multimodal inputs are integrated in a prototype interface based on the priority of modalities and the interaction context. The implications of the model for virtual reality interfaces are discussed, and a simple virtual environment using the present multimodal user interface model is proposed.
Keywords :
multimedia computing; speech recognition; user interfaces; virtual reality; voice equipment; 3D cube manipulation task; VisualMan; application-independent model; eye-gaze response; manual response; multimodal user interface; virtual reality interface; voice response; context; graphical user interfaces; human factors; laboratories; prototypes; psychology; virtual environment; visualization
Conference_Titel :
1995 IEEE International Conference on Systems, Man and Cybernetics: Intelligent Systems for the 21st Century
Conference_Location :
Vancouver, BC
Print_ISBN :
0-7803-2559-1
DOI :
10.1109/ICSMC.1995.538404