Abstract:
It is widely conjectured that the excellent ROC performance of biological vision systems is due in large part to the exploitation of context at each of many levels in a part/whole hierarchy. We propose a mathematical framework (a "composition machine") for constructing probabilistic hierarchical image models, designed to accommodate arbitrary contextual relationships, and we build a demonstration system for reading Massachusetts license plates in an image set collected at Logan Airport. The demonstration system detects and correctly reads more than 98% of the plates, with a negligible rate of false detection. Unlike a formal grammar, the architecture of a composition machine does not exclude the sharing of sub-parts among multiple entities, and does not limit interpretations to single trees (e.g., a scene can have multiple license plates, or no plates at all). In this sense, the architecture is more like a general Bayesian network than a formal grammar. On the other hand, unlike a Bayesian network, the distribution is non-Markovian, and therefore more like a probabilistic context-sensitive grammar. The conceptualization and construction of a composition machine are facilitated by its formulation as the result of a series of non-Markovian perturbations of a "Markov backbone."
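
As a rough illustrative sketch of the "Markov backbone" idea (the symbols P_0, \epsilon_k, and Z below are illustrative and are not taken from the paper's notation), the distribution of a composition machine can be pictured as a Markovian, context-free prior over interpretations \omega, reweighted by non-Markovian, context-sensitive perturbation factors:

    P(\omega) = \frac{1}{Z}\, P_0(\omega) \prod_{k} \epsilon_k(\omega)

Here P_0(\omega) would play the role of the Markov backbone (analogous to a probabilistic context-free grammar over parse structures), each \epsilon_k(\omega) a perturbation coupling parts across the hierarchy (for example, relating attributes of sibling characters within a plate), and Z a normalizing constant.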