DocumentCode
177646
Title
Head, Eye, and Hand Patterns for Driver Activity Recognition
Author
Ohn-Bar, E.; Martin, S.; Tawari, A.; Trivedi, M.
Author_Institution
Univ. of California, San Diego, La Jolla, CA, USA
fYear
2014
fDate
24-28 Aug. 2014
Firstpage
660
Lastpage
665
Abstract
In this paper, a multiview, multimodal vision framework is proposed to characterize driver activity based on head, eye, and hand cues. Leveraging the three cue types allows for a richer description of the driver's state and for improved activity-detection performance. First, regions of interest are extracted from two videos, one observing the driver's hands and one the driver's head. Next, hand-location hypotheses are generated and integrated with a head-pose and facial-landmark module to classify driver activity into three states: wheel-region interaction with two hands on the wheel, gear-region activity, or instrument-cluster-region activity. The method is evaluated on a video dataset captured in on-road settings.
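The abstract's fusion step (hand-location hypotheses combined with head/eye cues to pick one of three activity states) can be sketched as a simple rule-based decision over per-frame cues. This is an illustrative sketch only: the field names, thresholds, and tie-breaking logic below are assumptions, not the authors' implementation (which, per the keywords, uses learned classifiers such as SVMs).

```python
# Hedged sketch of three-state driver-activity labeling from fused cues.
# All structures and thresholds here are illustrative assumptions.
from dataclasses import dataclass

WHEEL, GEAR, CLUSTER = "wheel", "gear", "instrument_cluster"

@dataclass
class FrameCues:
    hands_on_wheel: int      # hand hypotheses in the wheel region (0-2)
    hand_in_gear: bool       # hand hypothesis in the gear-shift region
    hand_in_cluster: bool    # hand hypothesis in the instrument-cluster region
    head_yaw_deg: float      # head-pose yaw from the face-view camera

def classify_activity(cues: FrameCues) -> str:
    """Return one of the three activity states from fused hand/head cues."""
    # Two hands on the wheel dominate the other hypotheses.
    if cues.hands_on_wheel == 2:
        return WHEEL
    # If both off-wheel regions have a hand hypothesis, let the head-pose
    # cue break the tie (illustrative yaw threshold).
    if cues.hand_in_gear and cues.hand_in_cluster:
        return GEAR if cues.head_yaw_deg < 0 else CLUSTER
    if cues.hand_in_gear:
        return GEAR
    if cues.hand_in_cluster:
        return CLUSTER
    return WHEEL  # default: wheel-region interaction

print(classify_activity(FrameCues(2, False, False, 0.0)))   # wheel
print(classify_activity(FrameCues(1, True, False, -15.0)))  # gear
```

In the paper itself the cues come from two camera views and feed a learned classifier; the rule cascade above only illustrates how the three region hypotheses and the head-pose signal could interact.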
Keywords
computer vision; face recognition; pattern clustering; pose estimation; traffic engineering computing; activity detection performance improvement; driver activity recognition; eye patterns; facial landmark module; gear region activity; hand location hypotheses; hand patterns; head patterns; head pose; instrument cluster region activity; multimodal vision framework; multiview vision framework; on-road settings; regions of interest; video dataset; wheel region interaction; Gears; Head; Instruments; Magnetic heads; Support vector machines; Vehicles; Wheels;
fLanguage
English
Publisher
ieee
Conference_Titel
2014 22nd International Conference on Pattern Recognition (ICPR)
Conference_Location
Stockholm
ISSN
1051-4651
Type
conf
DOI
10.1109/ICPR.2014.124
Filename
6976834
Link To Document