DocumentCode
716170
Title
Large Margin Coupled Feature Learning for cross-modal face recognition
Author
Yi Jin ; Jiwen Lu ; Qiuqi Ruan
Author_Institution
Beijing Jiaotong Univ., Beijing, China
fYear
2015
fDate
19-22 May 2015
Firstpage
286
Lastpage
292
Abstract
This paper presents a Large Margin Coupled Feature Learning (LMCFL) method for cross-modal face recognition, which recognizes persons from facial images captured under different modalities. Most previous cross-modal face recognition methods rely on hand-crafted feature descriptors for face representation, which require strong prior knowledge to engineer and cannot exploit data-adaptive characteristics during feature extraction. In this work, we propose a new LMCFL method that learns coupled face representations at the image pixel level by jointly exploiting the discriminative information of face images within each modality and the correlation information of face images across modalities. LMCFL thus maximizes the margin between positive and negative face pairs in each modality while maximizing the correlation of face images from different modalities, so that discriminative face features are learned automatically in a data-driven way. LMCFL is validated on two different cross-modal face recognition applications, and the experimental results demonstrate the effectiveness of the proposed approach.
Keywords
correlation methods; face recognition; image representation; learning (artificial intelligence); LMCFL method; coupled face representation; cross-modal face recognition; face image correlation; image pixel level; large margin coupled feature learning; person recognition; Databases; Face; Face recognition; Feature extraction; Measurement; Optimization; Probes
fLanguage
English
Publisher
IEEE
Conference_Titel
Biometrics (ICB), 2015 International Conference on
Conference_Location
Phuket, Thailand
Type
conf
DOI
10.1109/ICB.2015.7139097
Filename
7139097
Link To Document