DocumentCode
1423906
Title
Symbolic connectionism in natural language disambiguation
Author
Chan, Samuel W K ; Franklin, James
Author_Institution
Dept. of Comput. Sci. & Language Inf. Sci. Res. Centre, City Polytech. of Hong Kong, Hong Kong
Volume
9
Issue
5
fYear
1998
fDate
9/1/1998
Firstpage
739
Lastpage
755
Abstract
Natural language understanding involves the simultaneous consideration of a large number of different sources of information. Traditional methods of language analysis have focused on developing powerful formalisms to represent syntactic or semantic structures, along with rules for transforming language into these formalisms; however, they make use of only small subsets of knowledge. This article describes how to use the whole range of information through a neurosymbolic architecture that hybridizes a symbolic network with subsymbol vectors generated from a connectionist network. Besides initializing the symbolic network with prior knowledge, the subsymbol vectors are used to enhance the system's capability in disambiguation and to provide flexibility in sentence understanding. The model captures a diversity of information, including word associations, syntactic restrictions, case-role expectations, semantic rules, and context. It attains highly interactive processing by representing knowledge in an associative network on which the actual semantic inferences are performed. An integrated use of previously analyzed sentences is another important feature of our model. The model dynamically selects one hypothesis among multiple competing hypotheses. This notion is supported by three simulations, which show that the degree of disambiguation depends on both the linguistic rules and the semantic-associative information available to support the inference processes in natural language understanding. Unlike many similar systems, our hybrid system tackles language disambiguation by drawing on linguistic clues from disparate sources and by incorporating context effects into the sentence analysis, making it potentially more powerful than systems relying on a single processing paradigm.
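The abstract's central idea, combining symbolic constraints with subsymbol vectors to choose among competing hypotheses, can be illustrated with a minimal sketch. The scoring scheme below (a weighted sum of a rule-satisfaction score and cosine similarity between distributed vectors) and all names in it are assumptions introduced for illustration only, not the scoring mechanism of the paper's actual architecture.

```python
# Minimal sketch (not the paper's model): combine symbolic rule evidence with
# subsymbolic (distributed-vector) evidence to select one hypothesis among many.
import numpy as np

def cosine(u, v):
    """Similarity between two subsymbol (distributed) vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def select_hypothesis(hypotheses, context_vec, rule_weight=0.5):
    """Pick the hypothesis with the best combined symbolic + subsymbolic score.

    Each hypothesis is (label, rule_score, subsymbol_vector):
      rule_score       -- degree to which symbolic constraints (syntax, case roles) are met, in [0, 1]
      subsymbol_vector -- distributed representation assumed to come from a connectionist network
    """
    scored = []
    for label, rule_score, vec in hypotheses:
        assoc = cosine(vec, context_vec)                      # semantic-associative evidence
        score = rule_weight * rule_score + (1 - rule_weight) * assoc
        scored.append((score, label))
    return max(scored)[1]                                     # highest combined score wins

# Toy disambiguation of "bank" against a financial context vector.
rng = np.random.default_rng(0)
ctx = rng.normal(size=8)
hypotheses = [
    ("bank/finance", 0.9, ctx + 0.1 * rng.normal(size=8)),   # vector close to the context
    ("bank/river",   0.7, rng.normal(size=8)),               # unrelated vector
]
print(select_hypothesis(hypotheses, ctx))                     # typically "bank/finance"
```

The weighted combination is only one plausible way to let rule-based and associative evidence interact; the paper itself performs the integration over an associative network rather than by a single linear score.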
Keywords
inference mechanisms; natural languages; neural nets; semantic networks; associative network; case-role expectations; connectionist network; context; natural language disambiguation; neurosymbolic architecture; semantic inferences; semantic rules; subsymbol vectors; symbolic connectionism; symbolic network; syntactic restrictions; word associations; Animals; Bayesian methods; Context modeling; Humans; Hybrid power systems; Information resources; Intelligent networks; Natural languages; Neural networks; Power system modeling
fLanguage
English
Journal_Title
IEEE Transactions on Neural Networks
Publisher
IEEE
ISSN
1045-9227
Type
jour
DOI
10.1109/72.712149
Filename
712149