DocumentCode
3708060
Title
Learning deep features for image emotion classification
Author
Ming Chen;Lu Zhang;Jan P. Allebach
Author_Institution
School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47906, USA
fYear
2015
Firstpage
4491
Lastpage
4495
Abstract
Images can both express and affect people's emotions. It is intriguing and important to understand what emotions are conveyed by the visual content of images and how they are implied. Inspired by the recent success of deep convolutional neural networks (CNNs) in visual recognition, we explore two simple yet effective deep learning-based methods for image emotion analysis. The first method uses off-the-shelf CNN features directly for classification. For the second method, we first fine-tune a CNN pre-trained on a large dataset, i.e., ImageNet, on our target dataset. We then extract features with the fine-tuned CNN at different locations and at multiple levels to capture both global and local information. The features at different locations are aggregated using the Fisher Vector for each level and concatenated to form a compact representation. Our experimental results show that both deep learning-based methods outperform traditional methods based on generic image descriptors and hand-crafted features.
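As a rough illustration of the first method described in the abstract (off-the-shelf CNN features fed to a classifier), the sketch below extracts fixed ImageNet-pre-trained features and trains a linear SVM on them. The backbone choice (AlexNet fc7), preprocessing, SVM settings, and file names are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: off-the-shelf CNN features + linear SVM for emotion labels.
# Assumptions (not from the paper): AlexNet fc7 features, standard ImageNet
# preprocessing, LinearSVC classifier, hypothetical image paths and labels.
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image
from sklearn.svm import LinearSVC

# Pre-trained CNN used as a fixed feature extractor.
backbone = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
backbone.eval()
# Drop the final 1000-way layer so the forward pass returns 4096-d fc7 features.
backbone.classifier = torch.nn.Sequential(*list(backbone.classifier.children())[:-1])

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_feature(image_path):
    """Return a 4096-d off-the-shelf CNN feature vector for one image."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        feat = backbone(preprocess(img).unsqueeze(0))
    return feat.squeeze(0).numpy()

# Hypothetical training data: (image path, emotion label) pairs.
train_items = [("happy_001.jpg", "amusement"), ("sad_001.jpg", "sadness")]
X = [extract_feature(path) for path, _ in train_items]
y = [label for _, label in train_items]

# Linear SVM on top of the fixed CNN features.
clf = LinearSVC(C=1.0)
clf.fit(X, y)
print(clf.predict([extract_feature("test_001.jpg")]))
```

The second method in the abstract would additionally fine-tune the backbone on the emotion dataset and replace the single global feature with Fisher Vector encodings of features pooled from multiple locations and layers before classification.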
Keywords
"Feature extraction","Training","Visualization","Image recognition","Support vector machines","Neural networks","Machine learning"
Publisher
ieee
Conference_Titel
2015 IEEE International Conference on Image Processing (ICIP)
Type
conf
DOI
10.1109/ICIP.2015.7351656
Filename
7351656