Title :
Be natural: A saliency-guided deep framework for image quality
Author :
Weilong Hou ; Xinbo Gao
Author_Institution :
Sch. of Electron. Eng., Xidian Univ., Xi'an, China
Abstract :
Visual attention and human language are natural means of observing and describing the world in daily life. For image quality assessment (IQA), psychological evidence shows that humans prefer qualitative descriptions of image quality to numerical ones. However, qualitative evaluations of image quality must be converted into numerical scores before state-of-the-art learning-based methods can be trained. Why can't IQA models learn from qualitative descriptions directly? To this end, a unified deep framework is proposed in this paper. The model learns the relationship between image features and qualitative labels without converting the labels into numerical scores. With the aid of saliency-guided features, the learned deep model classifies images into five grades corresponding to five explicit linguistic variables, and a new quality pooling then transforms the qualitative labels into scores for comparison and general use. The framework not only reduces the randomness of numerical scores by learning qualitatively, but is also robust against noise and small training datasets. Experiments are conducted on popular databases to verify the effectiveness and robustness of the proposed IQA method.
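The quality pooling described above can be sketched as a probability-weighted average: the classifier outputs a distribution over the five linguistic grades, and each grade is anchored to a representative numerical score. This is a minimal illustration, not the paper's exact pooling scheme; the grade anchors below are assumptions for the example.

```python
import numpy as np

# Hypothetical anchors mapping the five linguistic grades
# ("bad", "poor", "fair", "good", "excellent") onto a 0-100
# quality scale. The paper's actual anchor values may differ.
GRADE_SCORES = np.array([10.0, 30.0, 50.0, 70.0, 90.0])

def quality_pooling(grade_probs):
    """Pool a classifier's grade probabilities into one scalar score
    by taking the probability-weighted average of the grade anchors."""
    probs = np.asarray(grade_probs, dtype=float)
    probs = probs / probs.sum()          # normalize to a distribution
    return float(probs @ GRADE_SCORES)   # expected quality score

# Example: an image judged mostly "good", with some "excellent".
print(quality_pooling([0.0, 0.0, 0.1, 0.6, 0.3]))  # ~74.0
```

Pooling on the full grade distribution, rather than the single most likely grade, lets small shifts in classifier confidence produce smooth changes in the final score.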
Keywords :
image classification; IQA method; explicit linguistic variables; human language; image quality assessment; psychological evidence; saliency-guided deep framework; visual attention; Databases; Feature extraction; Image quality; Measurement; Robustness; Training; Visualization; blind image quality assessment; deep learning; natural scene statistics; visual attention
Conference_Titel :
2014 IEEE International Conference on Multimedia and Expo (ICME)
Conference_Location :
Chengdu
DOI :
10.1109/ICME.2014.6890168