DocumentCode :
2915183
Title :
Learning to recognize objects in egocentric activities
Author :
Fathi, Alireza ; Ren, Xiaofeng ; Rehg, James M.
fYear :
2011
fDate :
20-25 June 2011
Firstpage :
3281
Lastpage :
3288
Abstract :
This paper addresses the problem of learning object models from egocentric video of household activities, using extremely weak supervision. For each activity sequence, we know only the names of the objects present within it, and have no other knowledge regarding the appearance or location of objects. The key to our approach is a robust, unsupervised bottom-up segmentation method, which exploits the structure of the egocentric domain to partition each frame into hand, object, and background categories. By using Multiple Instance Learning to match object instances across sequences, we discover and localize object occurrences. Object representations are refined through transduction, and object-level classifiers are trained. We demonstrate encouraging results in detecting novel object instances using models produced by weakly supervised learning.
Keywords :
image classification; learning (artificial intelligence); object detection; object recognition; video signal processing; activity sequence; extremely weak supervision; household activities egocentric video; learning object models; multiple instance learning; object instance detection; object occurrence; object recognition learning; object representation; object-level classifiers; unsupervised bottom up segmentation method; weakly-supervised learning; Cameras; Computational modeling; Histograms; Image color analysis; Image segmentation; Motion segmentation; Training data;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on
Conference_Location :
Providence, RI
ISSN :
1063-6919
Print_ISBN :
978-1-4577-0394-2
Type :
conf
DOI :
10.1109/CVPR.2011.5995444
Filename :
5995444