DocumentCode :
2916636
Title :
Learning hierarchical poselets for human parsing
Author :
Wang, Yang ; Tran, Duan ; Liao, Zicheng
Author_Institution :
Dept. of Comput. Sci., Univ. of Illinois at Urbana-Champaign, Urbana, IL, USA
fYear :
2011
fDate :
20-25 June 2011
Firstpage :
1705
Lastpage :
1712
Abstract :
We consider the problem of human parsing with part-based models. Most previous work on part-based models considers only rigid parts (e.g., torso, head, half limbs) guided by human anatomy. We argue that this representation of parts is not necessarily appropriate for human parsing. In this paper, we introduce hierarchical poselets, a new representation for human parsing. Hierarchical poselets can be rigid parts, but they can also be parts that cover large portions of the human body (e.g., torso + left arm). In the extreme case, they can be the whole body. We develop a structured model that organizes poselets hierarchically and learn the model parameters in a max-margin framework. We demonstrate the superior performance of our proposed approach on two datasets with aggressive pose variations.
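Note: as an illustration of the max-margin framework mentioned above, the following is a minimal sketch of a generic structured max-margin (structural SVM) objective. The notation (joint feature map \Phi, weight vector w, task loss \Delta) is assumed here for illustration and is not taken from the paper itself; the paper's exact formulation over the poselet hierarchy may differ.

\min_{w,\; \xi \ge 0} \quad \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{N} \xi_i
\qquad \text{s.t.} \qquad
w^\top \Phi(x_i, y_i) - w^\top \Phi(x_i, y) \;\ge\; \Delta(y_i, y) - \xi_i
\quad \forall i,\; \forall y \ne y_i

Here $x_i$ denotes a training image, $y_i$ its annotated pose configuration over the part hierarchy, $\Phi(x, y)$ a joint feature map scoring how well configuration $y$ fits image $x$, and $\Delta(y_i, y)$ a loss measuring how far a candidate configuration is from the ground truth.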
Keywords :
biology computing; physiological models; hierarchical poselets; human anatomy; human parsing; max-margin framework; part-based model; rigid parts; structured model; Biological system modeling; Head; Humans; Joints; Legged locomotion; Torso; Training;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Conference_Location :
Providence, RI, USA
ISSN :
1063-6919
Print_ISBN :
978-1-4577-0394-2
Type :
conf
DOI :
10.1109/CVPR.2011.5995519
Filename :
5995519