DocumentCode :
671409
Title :
Unsupervised multimodal feature learning for semantic image segmentation
Author :
Deli Pei ; Huaping Liu ; Yulong Liu ; Fuchun Sun
Author_Institution :
Dept. of Comput. Sci. & Technol., Tsinghua Univ., Beijing, China
fYear :
2013
fDate :
4-9 Aug. 2013
Firstpage :
1
Lastpage :
6
Abstract :
In this paper, we address the semantic segmentation problem using single-layer networks. The network is used for unsupervised feature learning from the available RGB and depth images. A significant contribution of the proposed approach is that the dictionary is selected from the existing samples using L2,1 optimization. Such a dictionary captures more meaningful representative samples and exploits the intrinsic correlation between features from different modalities. The experimental results on the public NYU dataset show that this strategy dramatically improves classification performance compared with existing dictionary learning approaches. In addition, we perform experimental verification on practical robot platforms and show promising results.
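The abstract describes selecting dictionary atoms from the existing samples via L2,1 optimization. A common way to realize this idea (not necessarily the authors' exact formulation) is self-representation with a row-sparsity penalty, min_Z 0.5*||X - XZ||_F^2 + lam*||Z||_{2,1}, keeping the samples whose rows of Z have the largest norms. The sketch below is a hypothetical Python illustration of that generic technique; the function name `select_dictionary`, the regularization weight `lam`, and the iteration count are assumptions, not taken from the paper.

```python
# Hypothetical sketch: L2,1-based dictionary (exemplar) selection via
# self-representation, solved with proximal gradient descent.
import numpy as np

def select_dictionary(X, lam=0.5, n_atoms=64, n_iter=300):
    """X: (d, n) matrix whose columns are multimodal feature vectors.
    Returns indices of the selected representative samples."""
    d, n = X.shape
    Z = np.zeros((n, n))
    # Lipschitz constant of the gradient of the smooth term is ||X||_2^2.
    L = np.linalg.norm(X, 2) ** 2 + 1e-12
    step = 1.0 / L
    G = X.T @ X                      # precompute Gram matrix
    for _ in range(n_iter):
        grad = G @ Z - G             # gradient of 0.5*||X - XZ||_F^2 w.r.t. Z
        Z = Z - step * grad
        # Row-wise group soft-thresholding (prox of lam*||.||_{2,1}).
        row_norms = np.linalg.norm(Z, axis=1, keepdims=True)
        shrink = np.maximum(0.0, 1.0 - step * lam / (row_norms + 1e-12))
        Z = shrink * Z
    # Samples with the largest row norms act as the dictionary atoms.
    scores = np.linalg.norm(Z, axis=1)
    return np.argsort(scores)[::-1][:n_atoms]

# Usage example: 500 synthetic RGB-D feature vectors of dimension 128.
X = np.random.randn(128, 500)
atoms = select_dictionary(X, lam=0.5, n_atoms=32)
dictionary = X[:, atoms]             # (128, 32) matrix of selected samples
```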
Keywords :
image classification; image segmentation; optimisation; unsupervised learning; L2,1 optimization; RGB image; classification performance; depth image; dictionary learning; intrinsic correlation; public NYU dataset; robot platforms; semantic image segmentation; single-layer networks; unsupervised multimodal feature learning; Cameras; Correlation; Dictionaries; Image segmentation; Semantics; Training; Vectors;
fLanguage :
English
Publisher :
ieee
Conference_Title :
The 2013 International Joint Conference on Neural Networks (IJCNN)
Conference_Location :
Dallas, TX
ISSN :
2161-4393
Print_ISBN :
978-1-4673-6128-6
Type :
conf
DOI :
10.1109/IJCNN.2013.6706748
Filename :
6706748