DocumentCode :
2329866
Title :
Multimodal interactive spaces: MagicTV and magicMAP
Author :
Worsley, Marcelo ; Johnston, Michael
fYear :
2010
fDate :
12-15 Dec. 2010
Firstpage :
161
Lastpage :
162
Abstract :
Through the growing popularity of voice-enabled search, multimodal applications are finally starting to get into the hands of consumers. However, these applications are principally for mobile platforms and generally involve highly-moded interaction, where the user has to click or hold a button in order to speak. Significant technical challenges remain in bringing multimodal interaction to other environments, such as smart living rooms and classrooms, where users' speech and gestures are directed toward large displays or interactive kiosks and the microphone and other sensors are "always on". In this demonstration, we present a framework combining low-cost hardware and open-source software that lowers the barrier to entry for exploration of multimodal interaction in smart environments. Specifically, we will demonstrate the combination of infrared tracking, face detection, and open-microphone speech recognition for media search (magicTV) and map navigation (magicMap).
Keywords :
speech recognition; user interfaces; face detection; infrared tracking; magicMAP; magicTV; map navigation; media search; mobile platforms; multimodal interaction; multimodal interactive spaces; open microphone speech recognition; smart environments; voice enabled search; gesture recognition; multimodal integration; open microphone
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Spoken Language Technology Workshop (SLT), 2010 IEEE
Conference_Location :
Berkeley, CA
Print_ISBN :
978-1-4244-7904-7
Electronic_ISBN :
978-1-4244-7902-3
Type :
conf
DOI :
10.1109/SLT.2010.5700841
Filename :
5700841