DocumentCode :
2541324
Title :
Current trends in multimodal input recognition
Author :
Salem, Ben ; Yates, Rodric ; Saatchi, Reza
Author_Institution :
NVRCAD, Plymouth Univ., UK
fYear :
1998
fDate :
28 Oct. 1998
Firstpage :
3/1
Lastpage :
3/6
Abstract :
In order to optimise the effectiveness of a personal virtual reality (VR) system, it is necessary to have a natural and efficient way of interacting with it. This can be achieved by incorporating a direct multimodal user interface. To communicate with such an interface, the user can employ speech input, hand gestures, facial expressions or body movements. The VR system therefore needs an appropriate pattern recognition module to deal with the interpretation, classification and recognition of the different human actions, which are then translated into inputs. We review the pattern recognition techniques currently available in the areas of speech recognition, facial gesture recognition, body tracking and hand gesture recognition (e.g. artificial neural networks, statistical tools and template matching). The implementation of these techniques in a multimodal user interface was investigated, and the effectiveness of such a user interface was compared with that of the now widely used Window Icon Mouse Pointer (WIMP) interface. This was done with the intent of developing a personal VR system user interface.
Keywords :
virtual reality; WIMP; artificial neural networks; body movements; body tracking; facial expressions; facial gesture recognition; hand gesture recognition; hand gestures; multimodal input recognition; multimodal user interface; pattern recognition; personal virtual reality; speech input; speech recognition; template matching;
fLanguage :
English
Publisher :
IET
Conference_Title :
Virtual Reality Personal Mobile and Practical Applications (Ref. No. 1998/454), IEE Colloquium on
Conference_Location :
London
Type :
conf
DOI :
10.1049/ic:19980750
Filename :
744428