Title :
Learning Saccadic Gaze Control via Motion Prediction
Author :
Forssén, Per-Erik
Author_Institution :
Univ. of British Columbia, Canada
Abstract :
This paper describes a system that autonomously learns to perform saccadic gaze control on a stereo pan-tilt unit. Instead of learning a direct map from image positions to a centering action, the system first learns a forward model that predicts how image features move in the visual field as the gaze is shifted. Gaze control can then be performed by searching for the action that best centers a feature in both the left and the right image. Because each saccade yields many matched features, this formulation provides many training examples per action, and learning converges much faster. Learning uses image features obtained with the scale-invariant feature transform (SIFT), detected and matched before and after each saccade, and thus requires no special environment during the training stage. We demonstrate that our system stabilises after only 300 saccades, more than 100 times fewer than the best current approaches require.
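The abstract describes two components: a forward model, trained on SIFT matches observed before and after each saccade, and a controller that searches for the action whose predicted motion centers a feature. The sketch below is a simplified, hypothetical Python illustration of that idea, assuming a linear forward model from a (pan, tilt) action to per-feature image displacement; the paper's actual model and search procedure may differ, and all names here are illustrative.

```python
# Minimal sketch (not the paper's exact formulation): fit a linear map from
# pan/tilt actions to image displacements of matched features, then invert
# that map with least squares to pick the action that centers a feature.
import numpy as np

class ForwardModel:
    """Predicts feature displacement d (pixels) from a gaze action a = (pan, tilt)."""

    def __init__(self):
        self.A = np.zeros((2, 2))   # displacement ~= A @ action
        self._X, self._Y = [], []   # training actions and observed displacements

    def add_samples(self, action, displacements):
        # One saccade yields one action but many matched SIFT features,
        # so a single saccade contributes many training examples at once.
        for d in displacements:
            self._X.append(action)
            self._Y.append(d)
        X, Y = np.asarray(self._X), np.asarray(self._Y)
        # Least-squares fit of the forward model: Y ~= X @ A.T
        self.A = np.linalg.lstsq(X, Y, rcond=None)[0].T

    def predict(self, action):
        return self.A @ np.asarray(action)

    def saccade_to(self, feature_pos, image_center):
        # Solve for the action whose predicted motion moves the
        # feature onto the image center (a closed-form stand-in for
        # searching over candidate actions).
        target = np.asarray(image_center) - np.asarray(feature_pos)
        action, *_ = np.linalg.lstsq(self.A, target, rcond=None)
        return action

# Illustrative usage: after one saccade of (0.1, 0.0) rad, three matched
# features were each displaced by roughly (-40, 0) pixels.
model = ForwardModel()
model.add_samples((0.1, 0.0), [(-41.0, 0.5), (-39.5, -0.2), (-40.2, 0.1)])
print(model.saccade_to(feature_pos=(520, 240), image_center=(320, 240)))
```

A stereo version would fit one such model per camera and score candidate actions by the combined predicted centering error in the left and right images.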
Keywords :
image motion analysis; learning (artificial intelligence); man-machine systems; motion control; robots; image positions; motion prediction; saccadic gaze control; scale invariant feature transform; stereo pan-tilt unit; Cameras; Computer vision; Control systems; Error correction; Humans; Layout; Light sources; Motion control; Robot kinematics; Stereo vision;
Conference_Titel :
Fourth Canadian Conference on Computer and Robot Vision (CRV '07), 2007
Conference_Location :
Montreal, QC, Canada
Print_ISBN :
0-7695-2786-8
DOI :
10.1109/CRV.2007.42