DocumentCode :
3672366
Title :
Deeply learned face representations are sparse, selective, and robust
Author :
Yi Sun;Xiaogang Wang;Xiaoou Tang
Author_Institution :
Department of Information Engineering, The Chinese University of Hong Kong, China
fYear :
2015
fDate :
6/1/2015
Firstpage :
2892
Lastpage :
2900
Abstract :
This paper designs a high-performance deep convolutional network (DeepID2+) for face recognition. It is learned with the identification-verification supervisory signal. By increasing the dimension of the hidden representations and adding supervision to early convolutional layers, DeepID2+ achieves new state-of-the-art results on the LFW and YouTube Faces benchmarks. Through empirical studies, we discovered three properties of its deep neural activations that are critical for the high performance: sparsity, selectiveness, and robustness. (1) Neural activations are observed to be moderately sparse. Moderate sparsity maximizes the discriminative power of the deep net as well as the distance between images. Surprisingly, DeepID2+ can still achieve high recognition accuracy even after the neural responses are binarized. (2) Its neurons in higher layers are highly selective to identities and identity-related attributes. We can identify different subsets of neurons that are either constantly excited or constantly inhibited when different identities or attributes are present. Although DeepID2+ is not taught to distinguish attributes during training, it implicitly learns such high-level concepts. (3) It is much more robust to occlusions, even though occlusion patterns are not included in the training set.
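The binarization result mentioned in the abstract can be illustrated with a short sketch: threshold the deep activations at zero and compare two faces by how many neurons agree in their on/off state. The feature vectors, dimensionality, noise model, and random extractor below are hypothetical stand-ins, not the paper's DeepID2+ pipeline; they only show the kind of binary-code comparison and sparsity measurement the abstract refers to.

```python
# Illustrative sketch (not the paper's code): binarize deep activations and
# compare faces via agreement of the resulting binary codes.
import numpy as np

def binarize(features: np.ndarray) -> np.ndarray:
    """Binarize ReLU-style activations: 1 if the neuron fires, else 0."""
    return (features > 0).astype(np.uint8)

def binary_agreement(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of neurons whose on/off state agrees between two faces."""
    return float(np.mean(a == b))

def sparsity(features: np.ndarray) -> float:
    """Fraction of active (non-zero) neurons; 'moderate' sparsity sits mid-range."""
    return float(np.mean(features > 0))

# Hypothetical 512-dimensional activations for two images of the same person.
rng = np.random.default_rng(0)
feat_a = np.maximum(rng.normal(size=512), 0.0)            # ReLU-like activations
feat_b = np.maximum(feat_a + rng.normal(scale=0.3, size=512), 0.0)  # noisy second view

print(f"sparsity of view A: {sparsity(feat_a):.3f}")
score = binary_agreement(binarize(feat_a), binarize(feat_b))
print(f"binary agreement:   {score:.3f}")  # threshold this score to verify identity
```

In the paper's setting the binary codes would come from DeepID2+ activations on aligned face crops rather than random vectors, and the agreement score would be compared against a verification threshold.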
Keywords :
"Face","Neurons","Accuracy","Training","Face recognition","Robustness","Convolution"
Publisher :
ieee
Conference_Title :
Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on
Electronic_ISSN :
1063-6919
Type :
conf
DOI :
10.1109/CVPR.2015.7298907
Filename :
7298907