Title :
Generative hierarchical models for image analysis
Author_Institution :
Div. of Appl. Math., Brown Univ., Providence, RI, USA
Abstract :
A probabilistic grammar for the grouping and labeling of parts and objects, taken together with pose and part-dependent appearance models, constitutes a generative scene model and a Bayesian framework for image analysis. To the extent that the generative model generates features, as opposed to pixel intensities, the "inverse" or "posterior distribution" on interpretations given images is based on incomplete information; feature vectors are generally insufficient to recover the original intensities. I will argue for fully generative scene models, meaning models that in principle generate actual digital pictures. I will outline an approach to the construction of fully generative models through an extension of context-sensitive grammars and a re-formulation of the popular template models for image fragments. Mostly I will focus on the problem of constructing pixel-level appearance models. I will propose an approach based on image-fragment templates, as introduced by Ullman and others; however, rather than using the correlation between a template and a given image patch as an extracted feature, the templates themselves will serve as pixel-level appearance models within the generative framework.
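A minimal sketch of the Bayesian framework the abstract invokes (the notation is mine, not the paper's: \omega denotes a hierarchical interpretation, that is, a parse of the grammar together with poses; I the observed image; f an extracted feature map):

    P(\omega \mid I) \;\propto\; P(I \mid \omega)\, P(\omega),

where P(\omega) is given by the probabilistic grammar with its pose models and P(I \mid \omega) by the appearance models. A fully generative model places P(I \mid \omega) directly on the pixel intensities, whereas a feature-based model instead works with

    P(\omega \mid f(I)) \;\propto\; P(f(I) \mid \omega)\, P(\omega),

and because f is generally many-to-one, the original intensities cannot be recovered from f(I); this is the sense in which the posterior is based on incomplete information.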
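To make the contrast at the end of the abstract concrete, the following is an illustrative sketch only (not the paper's construction; the Gaussian form, function names, and parameters are assumptions). It compares a correlation feature extracted from an image-fragment template with a pixel-level appearance model in which the template parameterizes a likelihood on the patch intensities themselves:

    import numpy as np

    def correlation_feature(template: np.ndarray, patch: np.ndarray) -> float:
        """Normalized cross-correlation between a template and an image patch.

        Using only this scalar as a feature discards information: many
        different patches map to the same correlation value, so the original
        intensities cannot be recovered from it.
        """
        t = (template - template.mean()) / (template.std() + 1e-8)
        p = (patch - patch.mean()) / (patch.std() + 1e-8)
        return float((t * p).mean())

    def template_log_likelihood(template: np.ndarray, patch: np.ndarray,
                                sigma: float = 0.1) -> float:
        """Log-likelihood of the observed patch under a pixel-level model.

        Here the template acts as the mean of an i.i.d. Gaussian on the
        intensities, so the model assigns a probability to the actual digital
        picture rather than to a derived feature.
        """
        diff = patch - template
        n = diff.size
        return float(-0.5 * np.sum(diff ** 2) / sigma ** 2
                     - n * np.log(sigma * np.sqrt(2.0 * np.pi)))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        template = rng.random((8, 8))
        patch = template + 0.05 * rng.standard_normal((8, 8))
        print("correlation feature:", correlation_feature(template, patch))
        print("pixel-level log-likelihood:", template_log_likelihood(template, patch))

In the first case the template yields a feature used downstream; in the second it defines part of P(I | \omega), so the same template participates in a fully generative scene model.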
Keywords :
Bayes methods; context-sensitive grammars; feature extraction; probability; Bayesian framework; digital pictures; generative hierarchical model; generative scene model; image analysis; image patch; image-fragment templates; part-dependent appearance model; posterior distribution; probabilistic grammar; template model reformulation; Context modeling; Data models; Eyes; Image analysis; Image color analysis; Layout; Mathematical model; Mathematics; Pixel; Testing;
Conference_Title :
2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops 2009)
Conference_Location :
Miami, FL
Print_ISBN :
978-1-4244-3994-2
DOI :
10.1109/CVPRW.2009.5204335