Abstract:
Presents an extension of the counter-propagation network aimed at improving the classification process during the learning phase. The basic idea is to prevent an input vector whose desired output differs significantly from the desired outputs of other, similar input vectors from disturbing the classification already obtained, by forcing such an input into a separate category. To achieve this, the author introduces an additional neuron which evaluates the quality of the network output by computing the quadratic error between the desired and the produced output vector. If the quadratic error is above a predefined threshold, the already existing weights are not changed at all; instead, a hitherto unused neuron in the hidden layer is selected and its input-to-hidden weight vector is made equal to the input vector.
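The following Python/NumPy sketch illustrates one possible reading of the scheme described above; it is not the paper's implementation. The class and parameter names (ExtendedCPN, threshold, alpha, beta), the winner-take-all forward pass, and the choice to also copy the desired output into the newly committed neuron's hidden-to-output weights are assumptions added for illustration.

```python
import numpy as np

class ExtendedCPN:
    """Sketch of a counter-propagation network with error-gated commitment
    of unused hidden (Kohonen) neurons, as described in the abstract.
    Details beyond the abstract are assumptions."""

    def __init__(self, n_in, n_hidden, n_out, threshold=0.1, alpha=0.1, beta=0.1):
        self.W_in = np.zeros((n_hidden, n_in))    # input-to-hidden weight vectors
        self.W_out = np.zeros((n_out, n_hidden))  # hidden-to-output weights
        self.used = np.zeros(n_hidden, dtype=bool)  # which hidden neurons are committed
        self.threshold = threshold  # predefined quadratic-error threshold
        self.alpha = alpha          # learning rate, input-to-hidden layer (assumed)
        self.beta = beta            # learning rate, hidden-to-output layer (assumed)

    def _forward(self, x):
        """Winner-take-all among committed hidden neurons; returns (winner, output)."""
        if not self.used.any():
            return None, np.zeros(self.W_out.shape[0])
        dists = np.linalg.norm(self.W_in - x, axis=1)
        dists[~self.used] = np.inf
        winner = int(np.argmin(dists))
        return winner, self.W_out[:, winner]

    def train_step(self, x, d):
        winner, y = self._forward(x)
        # Additional "quality" neuron: quadratic error between desired and produced output
        error = float(np.sum((d - y) ** 2))

        if winner is None or error > self.threshold:
            # Leave all existing weights unchanged; commit a hitherto unused hidden neuron
            free = np.flatnonzero(~self.used)
            if free.size == 0:
                return error  # no free neuron left (case not covered by the abstract)
            j = free[0]
            self.W_in[j] = x        # input-to-hidden weights made equal to the input vector
            self.W_out[:, j] = d    # assumption: output weights set to the desired output
            self.used[j] = True
        else:
            # Ordinary counter-propagation updates for the winning neuron
            self.W_in[winner] += self.alpha * (x - self.W_in[winner])
            self.W_out[:, winner] += self.beta * (d - self.W_out[:, winner])
        return error
```

In this reading, an outlying input (one whose desired output disagrees strongly with that of similar inputs) cannot drag the winner's weights away from the classification already learned; it is instead stored in its own hidden neuron and forms a separate category.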