Title :
Locally Aligned Feature Transforms across Views
Author :
Wei Li; Xiaogang Wang
Author_Institution :
Electron. Eng. Dept., Chinese Univ. of Hong Kong, Shatin, Hong Kong
Abstract :
In this paper, we propose a new approach for matching images observed in different camera views with complex cross-view transforms, and we apply it to person re-identification. It jointly partitions the image spaces of the two camera views into different configurations according to the similarity of cross-view transforms. The visual features of an image pair from different views are first locally aligned by projection into a common feature space, and are then matched with softly assigned, locally optimized metrics. The features optimal for recognizing identities differ from those optimal for clustering cross-view transforms; the two are learned jointly with a sparsity-inducing norm and information-theoretic regularization. The approach also generalizes to settings where test images come from new camera views that differ from those in the training set. Extensive experiments are conducted on public datasets and on our own dataset. Comparisons with state-of-the-art metric learning and person re-identification methods show the superior performance of our approach.
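Illustrative Sketch :
The matching pipeline described in the abstract (soft assignment of an image pair to local cross-view transform configurations, projection of each view's features into a common space, and scoring with locally optimized metrics) can be sketched as follows. This is a minimal illustration under assumed dimensions, with randomly initialized placeholder parameters (W_a, W_b, M, V) standing in for the authors' learned model; the joint learning with the sparsity-inducing norm and information-theoretic regularization is not reproduced here.

# Minimal sketch (not the authors' code) of the matching step described in the
# abstract. All parameters below (K, D, d, the W/M/V matrices) are illustrative
# placeholders, not the learned model.
import numpy as np

rng = np.random.default_rng(0)

D = 128   # raw feature dimension per view (assumed)
d = 32    # common (aligned) feature dimension (assumed)
K = 4     # number of local configurations / transform clusters (assumed)

# Placeholder "learned" parameters: per-cluster projections for each view
# and per-cluster PSD metrics in the common space.
W_a = rng.standard_normal((K, d, D)) * 0.1   # view-A projections
W_b = rng.standard_normal((K, d, D)) * 0.1   # view-B projections
L = rng.standard_normal((K, d, d)) * 0.1
M = np.einsum('kij,kil->kjl', L, L)          # M_k = L_k^T L_k

# Placeholder gating parameters for soft assignment of a pair to clusters.
V = rng.standard_normal((K, 2 * D)) * 0.05


def soft_assignment(x_a, x_b):
    """Softmax gating over the K local configurations for an image pair."""
    logits = V @ np.concatenate([x_a, x_b])
    logits -= logits.max()
    w = np.exp(logits)
    return w / w.sum()


def pair_distance(x_a, x_b):
    """Soft-weighted distance after locally aligned projection."""
    w = soft_assignment(x_a, x_b)
    dist = 0.0
    for k in range(K):
        ya = W_a[k] @ x_a          # project view-A feature to common space
        yb = W_b[k] @ x_b          # project view-B feature to common space
        diff = ya - yb
        dist += w[k] * diff @ M[k] @ diff
    return dist


# Usage: score a probe image against a small gallery and rank candidates.
probe = rng.standard_normal(D)
gallery = rng.standard_normal((10, D))
scores = [pair_distance(probe, g) for g in gallery]
print("best match index:", int(np.argmin(scores)))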
Keywords :
feature extraction; image matching; information theory; learning (artificial intelligence); common feature space; complex cross-view transforms; cross-view transform clustering; image matching; image spaces; information theoretical regularization; locally aligned feature transforms; metric learning; person re-identification methods; public datasets; softly assigned metrics; sparsity-inducing norm; training set; two camera views; visual features; Cameras; Learning systems; Measurement; Training; Transforms; Vectors; Visualization;
Conference_Title :
2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Conference_Location :
Portland, OR
DOI :
10.1109/CVPR.2013.461