DocumentCode :
1598325
Title :
Towards multi-modal context recognition for hearing instruments
Author :
Tessendorf, Bernd ; Bulling, Andreas ; Roggen, Daniel ; Stiefmeier, Thomas ; Tröster, Gerhard ; Feilner, Manuela ; Derleth, Peter
Author_Institution :
Wearable Comput. Lab., ETH Zurich, Zurich, Switzerland
fYear :
2010
Firstpage :
1
Lastpage :
2
Abstract :
Current hearing instruments (HIs) rely only on auditory scene analysis to adapt to the user's situation. As a result, these systems are limited in the number and type of situations they can detect. We investigate how context information derived from eye and head movements can be used to resolve such situations. We focus on two example problems that are challenging for current HIs: distinguishing concentrated work from interaction, and detecting whether a person is walking alone or walking while having a conversation. We collect a dataset from eleven participants (6 male, 5 female, aged 24-59) covering typical office activities. Using person-independent training and isolated recognition, we achieve an average precision of 71.7% (recall: 70.1%) for recognising concentrated work and 57.2% precision (recall: 81.3%) for detecting walking while conversing.
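Note on the evaluation protocol: person-independent training means the classifier is trained on all participants except one and tested on the held-out person, repeated for every participant (leave-one-person-out), so results reflect generalisation to unseen users. Below is a minimal sketch of that protocol computing averaged precision and recall; the feature matrix, labels, and the RandomForest classifier are illustrative assumptions, not the authors' actual pipeline.

# Hypothetical sketch of leave-one-person-out evaluation for a binary
# context classifier (e.g. 1 = concentrated work, 0 = interaction).
# Classifier choice and names are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import LeaveOneGroupOut

def evaluate_person_independent(X, y, participant_ids):
    """X: per-sample feature vectors (e.g. eye/head movement features),
    y: binary context labels, participant_ids: one id per sample,
    used to hold out whole persons from training."""
    precisions, recalls = [], []
    for train_idx, test_idx in LeaveOneGroupOut().split(
            X, y, groups=participant_ids):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        precisions.append(precision_score(y[test_idx], pred, zero_division=0))
        recalls.append(recall_score(y[test_idx], pred, zero_division=0))
    # Average over held-out participants, as in the reported figures.
    return np.mean(precisions), np.mean(recalls)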
Keywords :
biological techniques; gesture recognition; hearing aids; image motion analysis; auditory scene analysis; concentrated work recognition; current hearing instrument; eye movement; head movement; hearing instrument; multimodal context recognition; person independent training; Auditory system; Context; Electrodes; Instruments; Legged locomotion; Magnetic heads; Training
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2010 International Symposium on Wearable Computers (ISWC)
Conference_Location :
Seoul, South Korea
ISSN :
1550-4816
Print_ISBN :
978-1-4244-9046-2
Electronic_ISBN :
1550-4816
Type :
conf
DOI :
10.1109/ISWC.2010.5665855
Filename :
5665855