DocumentCode :
2698131
Title :
Linear discriminants, logic functions, backpropagation, and improved convergence
Author :
Yang, Hedong ; Guest, Clark C.
fYear :
1990
fDate :
17-21 June 1990
Firstpage :
287
Abstract :
A modified learning algorithm for BP (backpropagation) neural networks is presented, based on the interpretation of a neuron in an (N+1)-dimensional space, RN+1, and on an analysis of how a multilayered network performs a classification task through the collective use of the B (boundary) neurons in the first layer. The roles of B neurons and L (logic) neurons are discussed. A B neuron represents a linear boundary in the input space RN. An L neuron in the second layer defines in RN a convex piecewise linear boundary formed by a set of line segments corresponding to the connected B neurons. An L neuron in the third layer defines in RN a more complicated boundary that can have both convex and concave parts; each of its subboundaries corresponds to an L neuron in the second layer. The nonlinear function does not change the structure of a boundary but smooths its angular vertices. It is shown that for two-class problems with convex boundaries, the nonlinear function in an L neuron can be replaced with a step function and the weights of the L neuron can be fixed. Computer simulation shows that this algorithm learns more quickly than the ordinary BP algorithm, even in cases where the ordinary BP algorithm fails.
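The geometric picture in the abstract can be illustrated with a minimal sketch (not the authors' code; all weights and the triangular region are hypothetical choices for illustration): three first-layer B neurons each define a linear boundary in R2, and a second-layer L neuron with fixed weights and a hard step function, in place of a sigmoid, fires only when all B neurons fire, carving out a convex region.

```python
def step(x):
    """Hard threshold used in place of the sigmoid in the L neuron."""
    return 1.0 if x > 0 else 0.0

# Three B (boundary) neurons as (weights, bias) pairs; their half-planes
# intersect in a triangle: x >= -1, y >= -1, x + y <= 1.
B_NEURONS = [((1.0, 0.0), 1.0),
             ((0.0, 1.0), 1.0),
             ((-1.0, -1.0), 1.0)]

def convex_region(x, y):
    """Return 1.0 if (x, y) lies inside the convex region, else 0.0."""
    # First layer: each B neuron tests one linear boundary.
    h = [step(w[0] * x + w[1] * y + b) for w, b in B_NEURONS]
    # Second layer: fixed-weight L neuron computes the logical AND of the
    # B-neuron outputs, so the decision region is the convex intersection.
    n = len(B_NEURONS)
    return step(sum(h) - (n - 0.5))

print(convex_region(0.0, 0.0))  # inside the triangle  -> 1.0
print(convex_region(2.0, 2.0))  # outside the triangle -> 0.0
```

Because the L neuron's weights are fixed and its nonlinearity is a step, only the B-neuron weights would need training, which is the source of the faster convergence claimed in the abstract.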
Keywords :
learning systems; neural nets; backpropagation; computer simulation; convergence; convex boundaries; convex piecewise linear boundary; learning algorithm; line segments; linear discriminants; logic functions; neural networks;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
1990 IJCNN International Joint Conference on Neural Networks
Conference_Location :
San Diego, CA, USA
Type :
conf
DOI :
10.1109/IJCNN.1990.137858
Filename :
5726816