Title :
Online object tracking using sparse prototypes by learning visual prior
Author :
Divya, S. ; Latha, K.
Abstract :
Object tracking is becoming a key ingredient in the analysis of video imagery. For efficient and robust object tracking, the visual prior of generic real-world images is transferred to the tracking task. The real-world images are learned offline in an over-complete dictionary; the VOC2010 and CalTech101 data sets, which contain a large variety of objects, are used for learning the visual prior. For online visual tracking, the learned visual prior is transferred to object representation via l1/l2 sparse coding and multi-scale max pooling. With this object representation, the tracking task is formulated within a Bayesian inference framework using sparse prototypes. To reduce tracking drift, we present a model-update method that takes occlusion and motion blur into account rather than simply including all image observations.
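The two representation steps named in the abstract, l1-regularized sparse coding over a learned dictionary followed by max pooling of the local codes, can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the ISTA solver, the random dictionary, and the function names (`sparse_code`, `max_pool`) are assumptions for the example.

```python
import numpy as np

def soft_threshold(x, t):
    # Element-wise soft-thresholding, the proximal operator of the l1 penalty.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(D, y, lam=0.05, n_iter=500):
    # ISTA: minimize 0.5*||y - D a||^2 + lam*||a||_1 over the code a,
    # where D is an over-complete dictionary (more atoms than dimensions).
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)         # gradient of the quadratic term
        a = soft_threshold(a - grad / L, lam / L)
    return a

def max_pool(codes):
    # Pool a stack of local sparse codes (one row per local patch) by
    # keeping each atom's strongest absolute response across patches.
    return np.max(np.abs(codes), axis=0)
```

In the paper's setting the pooling is applied at multiple spatial scales and the pooled responses are concatenated; the single-level `max_pool` above shows only the basic operation.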
Keywords :
belief networks; image motion analysis; image representation; inference mechanisms; learning (artificial intelligence); object tracking; video coding; Bayesian inference framework; CalTech101 data set; VOC2010 data set; generic real world image; image observation; learning visual prior; motion blur; multiscale max pooling; object representation; occlusion; online object tracking; over-complete dictionary; sparse coding; sparse prototype; tracking drift; tracking task; video imagery; visual tracking; Dictionaries; Encoding; Object tracking; Robustness; Target tracking; Visualization; l1/l2 sparse coding; Learning prior; Object Representation; Object Tracking and Sparse Prototype;
Conference_Titel :
Communications and Signal Processing (ICCSP), 2013 International Conference on
Conference_Location :
Melmaruvathur
Print_ISBN :
978-1-4673-4865-2
DOI :
10.1109/iccsp.2013.6577124