Title :
An algebraic approach to learning in syntactic neural networks
Author_Institution :
Dept. of Electron. Syst. Eng., Essex Univ., Colchester, UK
Abstract :
The algebraic learning paradigm is described in relation to syntactic neural networks. In algebraic learning, each free parameter of the net is given a unique variable name, and the net output for each training sentence is then expressed as a sum of products of these variables. Each expression is equated to true if the sentence is a positive sample and to false if it is a negative sample. A constraint satisfaction procedure is then used to find an assignment to the variables such that all the equations are satisfied. Such an assignment must yield a network that parses all the positive samples and none of the negative samples, and hence a correct grammar. Unfortunately, the algorithm grows exponentially in time and space with respect to string length. A number of ways of countering this growth are explored, using the inference of a tiny subset of context-free English as an example.
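The constraint-satisfaction step described above can be sketched in miniature. The following is an illustrative toy instance, not taken from the paper: the free parameters are modelled as Boolean variables, each training sentence contributes a sum-of-products (OR-of-ANDs) expression equated to true or false, and a brute-force search looks for a satisfying assignment. The specific equations and variable count are invented for illustration; the exhaustive search mirrors the exponential growth the abstract warns about.

```python
from itertools import product

# Each free parameter of the net is a Boolean variable, indexed 0..n-1.
# A sum-of-products expression is a list of product terms; each term is a
# tuple of variable indices that must all be 1 for the term to fire.
# (These equations are a made-up toy instance, not from the paper.)
equations = [
    ([(0, 1), (2,)], True),   # positive sample: some term must fire
    ([(0, 2)], False),        # negative sample: no term may fire
    ([(1, 2)], True),         # positive sample
]
n_vars = 3

def output(terms, assignment):
    """Evaluate a sum-of-products expression under a 0/1 assignment."""
    return any(all(assignment[i] for i in term) for term in terms)

def solve(equations, n_vars):
    """Brute-force constraint satisfaction: try every 0/1 assignment.

    This is exponential in n_vars, which is the growth problem the
    abstract discusses ways of countering.
    """
    for assignment in product([0, 1], repeat=n_vars):
        if all(output(terms, assignment) == target
               for terms, target in equations):
            return assignment
    return None  # no grammar consistent with the samples

solution = solve(equations, n_vars)  # → (0, 1, 1) for this toy instance
```

A returned assignment fixes the network's parameters so that it accepts every positive sample and rejects every negative one; in the paper's setting this corresponds to a grammar consistent with the training data.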
Keywords :
constraint handling; grammars; inference mechanisms; learning (artificial intelligence); neural nets; algebraic approach; constraint satisfaction procedure; free parameter; grammar; inference; learning; sum of products; syntactic neural networks; unique variable name; Crops; Equations; Inference algorithms; Intelligent networks; Natural languages; Neural networks; Speech recognition; Stochastic processes; Systems engineering and theory; Testing;
Conference_Title :
International Joint Conference on Neural Networks (IJCNN), 1992
Conference_Location :
Baltimore, MD
Print_ISBN :
0-7803-0559-0
DOI :
10.1109/IJCNN.1992.287076