Title :
An Object-Based Visual Attention Model for Robotic Applications
Author :
Yu, Yuanlong ; Mann, George K I ; Gosine, Raymond G.
Author_Institution :
Fac. of Eng. & Appl. Sci., Memorial Univ. of Newfoundland, St. John's, NL, Canada
Abstract :
By extending the integrated competition hypothesis, this paper presents an object-based visual attention model that selects one object of interest using low-dimensional features, so that visual perception starts with a fast attentional selection procedure. The proposed attention model comprises seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between the top-down and bottom-up pathways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically while the object is attended. A dual-coding object representation, consisting of local and global codings, is proposed: intensity, color, and orientation features build the local coding, and a contour feature constitutes the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. Mediation between automatic bottom-up competition and conscious top-down biasing then yields a location-based saliency map. By combining location-based saliency within each proto-object, the proto-object-based saliency is evaluated. The most salient proto-object is selected for attention and is finally passed to the perceptual completion processing module to yield a complete object region. The model has been applied to distinct robotic tasks: detection of task-specific stationary objects and of moving objects. Experimental results under different conditions validate the model.
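The step from a location-based saliency map to proto-object-based saliency can be illustrated with a minimal sketch. This is not the authors' code: the pooling rule (mean saliency per proto-object), the function name, and the background label convention are all assumptions made for illustration.

```python
import numpy as np

def proto_object_saliency(saliency_map, labels):
    """Pool location-based saliency within each labeled proto-object
    (here by averaging, an assumed pooling rule) and return the
    most salient proto-object label plus all per-object scores."""
    scores = {}
    for lab in np.unique(labels):
        if lab == 0:  # label 0 taken as background, by assumption
            continue
        mask = labels == lab
        scores[int(lab)] = float(saliency_map[mask].mean())
    winner = max(scores, key=scores.get)
    return winner, scores

# Toy example: two proto-objects segmented on a 4x4 visual field.
sal = np.array([[0.1, 0.1, 0.8, 0.9],
                [0.1, 0.1, 0.8, 0.9],
                [0.2, 0.2, 0.1, 0.1],
                [0.2, 0.2, 0.1, 0.1]])
lbl = np.array([[1, 1, 2, 2],
                [1, 1, 2, 2],
                [1, 1, 0, 0],
                [1, 1, 0, 0]])
winner, scores = proto_object_saliency(sal, lbl)
# Proto-object 2 (mean saliency 0.85) wins over proto-object 1 (0.15),
# so it would be selected for attention and passed on to
# perceptual completion processing.
```

In the model described above, the winning proto-object region would then be handed to the perceptual completion module; the pooling operator used here is only one plausible way to combine per-pixel saliency within a proto-object.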
Keywords :
image segmentation; robot vision; visual perception; Gestalt rules; LTM; attending phase; bottom-up competition; dual-coding object representation; integrated competition hypothesis; learning phase; location-based saliency map; long-term memory; low-dimensional features; object representation; object-based visual attention model; perceptual completion processing module; preattentive processing; robotic applications; top-down biasing; Machine vision; Mediation; Mobile robots; Object detection; Psychology; Robot sensing systems; Robot vision systems; Visual perception; Integrated competition (IC) hypothesis; mobile robotics; object-based visual attention; Algorithms; Artificial Intelligence; Attention; Biomimetics; Computer Simulation; Models, Theoretical; Pattern Recognition, Automated; Robotics; Visual Perception;
Journal_Title :
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
DOI :
10.1109/TSMCB.2009.2038895