DocumentCode :
3517988
Title :
Multi-modal activity and dominance detection in smart meeting rooms
Author :
Hörnler, Benedikt ; Rigoll, Gerhard
Author_Institution :
Inst. for Human-Machine-Commun., Tech. Univ. München, Munich
fYear :
2009
fDate :
19-24 April 2009
Firstpage :
1777
Lastpage :
1780
Abstract :
In this paper, a new approach to activity and dominance modeling in meetings is presented. For this purpose, low-level acoustic and visual features are extracted from audio and video capture devices. Hidden Markov Models (HMMs) are used to segment and classify the activity level of each participant. Additionally, more semantic features are applied in a two-layer HMM approach. The experiments show that the acoustic feature is the most important one: the early fusion of acoustic and global-motion features performs nearly as well as the acoustic feature alone, while all other early-fusion approaches are outperformed by the acoustic feature. Moreover, the two-layer model does not reach the performance of the acoustic feature.
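To illustrate the kind of pipeline the abstract describes (per-participant low-level features, early fusion by frame-wise concatenation, and HMM-based segmentation of activity levels), the following Python sketch uses synthetic acoustic and global-motion feature streams and hmmlearn's GaussianHMM as a stand-in for the paper's HMMs. The feature dimensions, the three activity levels, and the unsupervised fitting are assumptions for the sketch, not the authors' setup.

# Minimal sketch: early fusion of acoustic and global-motion features,
# then HMM-based segmentation of activity levels for one participant.
# Assumptions: synthetic features, hmmlearn's GaussianHMM as a stand-in
# for the paper's HMMs, three hidden states read as activity levels.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

# Hypothetical per-frame low-level features for one participant:
# 13-dim acoustic features and 2-dim global-motion features.
n_frames = 500
acoustic = rng.normal(size=(n_frames, 13))
global_motion = rng.normal(size=(n_frames, 2))

# Early fusion: concatenate the two modalities frame by frame.
fused = np.hstack([acoustic, global_motion])

# One Gaussian HMM whose hidden states are interpreted as
# activity levels (e.g. low / medium / high).
hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
hmm.fit(fused)

# Viterbi decoding yields a per-frame activity-level segmentation.
activity_levels = hmm.predict(fused)
print(activity_levels[:20])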
Keywords :
feature extraction; hidden Markov models; learning (artificial intelligence); man-machine systems; dominance detection; human-machine interaction; machine learning; multi-modal activity; smart meeting rooms; Acoustic devices; Acoustic signal detection; Cameras; Face detection; Feature extraction; Hidden Markov models; Machine learning; Man machine systems; Microphone arrays; Videoconference; Activity Detection; Human-Machine Interaction; Machine Learning; Meeting Analysis; Multi-modal Low Level Features;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2009
Conference_Location :
Taipei
ISSN :
1520-6149
Print_ISBN :
978-1-4244-2353-8
Electronic_ISBN :
1520-6149
Type :
conf
DOI :
10.1109/ICASSP.2009.4959949
Filename :
4959949