Title :
Limiting fault-induced output errors in ANNs
Author :
Clay, Reed D. ; Séquin, Carlo H.
Author_Institution :
Dept. of Electr. Eng. & Comput. Sci., University of California, Berkeley, CA, USA
Abstract :
Summary form only given, as follows. The worst-case output errors produced by the failure of a hidden neuron in layered feedforward artificial neural networks were investigated. Such errors can be much worse than the mere loss of the contribution of a neuron whose output drops to zero: a far larger erroneous signal is produced when the failure saturates the hidden neuron's output at one of the power supply voltages. A method was investigated that limits the fractional error in the output signal of a feedforward net caused by such saturated hidden-unit faults in analog function approximation tasks. The number of hidden units is significantly increased, and the maximal contribution of each unit is limited to a small fraction of the net output signal. To achieve a large localized output signal, several Gaussian hidden units are placed at the same location in the input domain, and the gain of the linear summing output unit is adjusted accordingly. Because every unit's contribution is equal in magnitude, any single failure mode causes only a modest output error.
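The redundancy scheme described above can be sketched in a few lines: each Gaussian hidden unit is replicated some number of times at the same center, with its output weight divided by the replication count so the fault-free net output is unchanged; a saturated fault in any one copy then perturbs the output by at most that copy's small weight. The code below is only an illustration of this idea, not the authors' implementation; the target function, unit widths, and the stuck-at-1 fault model (one supply rail) are assumptions for the example.

```python
import math

def gaussian(x, center, width=0.5):
    """Activation of a Gaussian hidden unit (width is an assumed constant)."""
    return math.exp(-((x - center) / width) ** 2)

def make_net(centers, weights, copies):
    """Replicate each Gaussian unit `copies` times at the same center,
    dividing its output weight by `copies` so the net output is unchanged."""
    units = []
    for c, w in zip(centers, weights):
        units += [(c, w / copies)] * copies
    return units

def net_output(units, x, stuck_unit=None):
    """Linear summing output unit. If `stuck_unit` is given, that hidden
    unit's activation is forced to 1 (saturated at a supply rail)."""
    total = 0.0
    for i, (c, w) in enumerate(units):
        a = 1.0 if i == stuck_unit else gaussian(x, c)
        total += w * a
    return total

# Two Gaussian bumps approximating some target; replicate them 1, 4, 16 times.
centers, weights = [0.0, 1.0], [2.0, -1.0]
for copies in (1, 4, 16):
    units = make_net(centers, weights, copies)
    # Worst-case output error over a grid of inputs and all single-unit faults;
    # it shrinks roughly as 1/copies.
    worst = max(abs(net_output(units, x / 10, s) - net_output(units, x / 10))
                for x in range(-10, 21) for s in range(len(units)))
    print(copies, round(worst, 3))
```

Replication multiplies the hidden-layer size by the redundancy factor, which is the cost the abstract alludes to: the worst-case fault error is traded for additional hidden units.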
Keywords :
errors; neural nets; Gaussian hidden units; analog function approximation tasks; hidden neuron; layered feedforward artificial neural networks; limiting fault-induced output errors; linear summing output unit gain; power supply voltages; worst-case output errors; Backpropagation algorithms; Computer errors; Computer science; Feedforward systems; Function approximation; Gain; Neurons; Power supplies; Robustness; Testing;
Conference_Titel :
IJCNN-91-Seattle: International Joint Conference on Neural Networks, 1991
Conference_Location :
Seattle, WA, USA
Print_ISBN :
0-7803-0164-1
DOI :
10.1109/IJCNN.1991.155612