Author/Authors :
Resnik, Philip
Abstract :
A new, information-theoretic model of selectional constraints is proposed. The strategy adopted here is a minimalist one: how far can one get making as few assumptions as possible? In keeping with that strategy, the proposed model consists of only two components: first, a fairly generic taxonomic representation of concepts, and, second, a probabilistic formalization of selectional constraints defined in terms of that taxonomy, computed on the basis of simple, observable frequencies of co-occurrence between predicates and their arguments. Unlike traditional selection restrictions, the information-theoretic approach avoids empirical problems associated with definitional theories of word meaning, accommodates the observation that semantic anomaly often appears to be a matter of degree, and provides an account of how selectional constraints can be learned. A computational implementation of the model “learns” selectional constraints from collections of naturally occurring text; the predictions of the implemented model are evaluated against judgments elicited from adult subjects, and used to explore the way that arguments are syntactically realized for a class of English verbs. The paper concludes with a discussion of the role of selectional constraints in the acquisition of verb meaning.
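The probabilistic formalization the abstract describes can be sketched concretely. A standard information-theoretic treatment in the spirit of this model measures a predicate's selectional preference strength as the relative entropy (KL divergence) between the prior distribution over argument classes and the distribution conditioned on the predicate, estimated from predicate–argument co-occurrence frequencies. The toy counts, class names, and helper functions below are illustrative assumptions for exposition, not data or code from the paper:

```python
from collections import defaultdict
from math import log

# Illustrative co-occurrence counts (hypothetical, not from the paper):
# (predicate, argument-class) -> observed frequency.
counts = {
    ("drink", "beverage"): 40,
    ("drink", "person"): 5,
    ("see", "beverage"): 10,
    ("see", "person"): 30,
    ("see", "artifact"): 20,
}

def distributions(counts):
    """Estimate the prior P(c) and the conditionals P(c | p) from raw counts."""
    total = sum(counts.values())
    prior_n = defaultdict(float)
    per_pred = defaultdict(lambda: defaultdict(float))
    pred_tot = defaultdict(float)
    for (p, c), n in counts.items():
        prior_n[c] += n
        per_pred[p][c] += n
        pred_tot[p] += n
    prior = {c: n / total for c, n in prior_n.items()}
    cond = {p: {c: n / pred_tot[p] for c, n in cs.items()}
            for p, cs in per_pred.items()}
    return prior, cond

def preference_strength(pred, prior, cond):
    """Selectional preference strength: KL( P(c | p) || P(c) )."""
    return sum(q * log(q / prior[c]) for c, q in cond[pred].items())

def association(pred, cls, prior, cond):
    """Selectional association: the class's share of the preference strength."""
    q = cond[pred].get(cls, 0.0)
    if q == 0.0:
        return 0.0
    return q * log(q / prior[cls]) / preference_strength(pred, prior, cond)

prior, cond = distributions(counts)
print(preference_strength("drink", prior, cond))
print(association("drink", "beverage", prior, cond))
```

On these toy counts the semantically restrictive predicate ("drink") receives a higher preference strength than the permissive one ("see"), matching the abstract's point that anomaly is a matter of degree rather than a binary restriction.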