Title : 
Fusion of visual attention cues by machine learning

Author : 
Lee, Wen-Fu ; Huang, Tai-Hsiang ; Yeh, Su-Ling ; Chen, Homer H.

Author_Institution : 
Graduate Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan

Abstract : 
A new computational scheme for visual attention modeling is proposed. It adopts both low-level and high-level features to predict visual attention from a video signal and fuses the features by machine learning. We show that such a scheme is more robust than schemes that rely on features of a single level alone. Unlike conventional techniques, our scheme avoids the perceptual mismatch between the estimated saliency and the actual human fixation. We also show that selecting representative training samples according to the fixation distribution improves the efficacy of regressive training. Experimental results demonstrate the advantages of the proposed scheme.
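
As a rough illustration of the cue-fusion idea summarized above, the sketch below stacks per-pixel low-level and high-level cue maps and fits a regressor against an eye-tracker fixation-density map, drawing training pixels in proportion to the fixation distribution. The specific cues, the fixation-weighted sampling scheme, and the use of a plain linear regressor are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch (assumed setup, not the paper's exact pipeline): fuse
# low-level and high-level cue maps into a fixation-density predictor by
# regression, with training pixels drawn according to the fixation
# distribution measured by an eye tracker.
import numpy as np
from sklearn.linear_model import LinearRegression


def train_fusion_regressor(low_level_maps, high_level_maps, fixation_density,
                           n_samples=5000, seed=0):
    """Fit a regressor mapping stacked cue maps to fixation density.

    low_level_maps, high_level_maps : lists of HxW arrays, e.g. intensity,
        color, and motion contrast versus a face-detection map (assumed cues).
    fixation_density : HxW non-negative map built from eye-tracker data.
    """
    maps = list(low_level_maps) + list(high_level_maps)
    X = np.stack([m.ravel() for m in maps], axis=1)   # (H*W, n_cues)
    y = fixation_density.ravel()

    # Sample training pixels in proportion to the fixation distribution so
    # that frequently fixated regions are well represented during training.
    rng = np.random.default_rng(seed)
    p = y / y.sum() if y.sum() > 0 else np.full(y.size, 1.0 / y.size)
    idx = rng.choice(y.size, size=min(n_samples, y.size), p=p)

    return LinearRegression().fit(X[idx], y[idx])


def predict_saliency(model, low_level_maps, high_level_maps):
    """Apply the trained fusion model to produce a per-pixel saliency map."""
    maps = list(low_level_maps) + list(high_level_maps)
    X = np.stack([m.ravel() for m in maps], axis=1)
    saliency = model.predict(X).reshape(maps[0].shape)
    return np.clip(saliency, 0.0, None)               # keep map non-negative
```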

Keywords : 
feature extraction; image fusion; iris recognition; learning (artificial intelligence); regression analysis; video signal processing; actual human fixation; high-level features; low-level features; machine learning; regressive training; saliency estimation; visual attention cue fusion; visual attention modeling; Estimation; Face; Feature extraction; Humans; Testing; Training; Visualization; Visual attention; eye tracker; fixation distribution; human visual system; regression; saliency map;

Conference_Title : 
2011 18th IEEE International Conference on Image Processing (ICIP)

Conference_Location : 
Brussels, Belgium

Print_ISBN : 
978-1-4577-1304-0

Electronic_ISSN : 
1522-4880

DOI : 
10.1109/ICIP.2011.6116377