Abstract:
This paper presents a learning algorithm for training networks whose neurons may have discontinuous or nondifferentiable activation functions. The algorithm has been demonstrated with several different neuron activation functions. Although it shares several features with the error back-propagation algorithm, the derivation presented here is heuristic and does not appeal to the highly mathematical derivation of error back-propagation. As a by-product, the heuristic argument shows the error back-propagation algorithm to be at least reasonable. It could be argued that the derived algorithm succeeds only because of its similarity to error back-propagation. Alternatively, the success of error back-propagation itself, in particular its apparent freedom from the problems normally associated with gradient-descent procedures, may be due to its similarity to the algorithm presented here.
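The abstract does not specify the algorithm itself, so the following minimal sketch only illustrates the general setting it alludes to: adjusting the weights of a unit whose activation is nondifferentiable, without computing any gradient. It uses the classic perceptron rule with a hard-threshold activation; the names (step, train_perceptron) and the error-driven update are illustrative assumptions, not the paper's method.

    # Illustrative only: the classic perceptron rule, a well-known example of
    # training through a nondifferentiable (hard-threshold) activation without
    # computing gradients. This is NOT the paper's algorithm, whose details
    # are not given in the abstract.
    import numpy as np

    def step(z):
        # Hard-threshold activation: nondifferentiable at z = 0.
        return np.where(z >= 0.0, 1.0, 0.0)

    def train_perceptron(X, y, lr=0.1, epochs=50, seed=0):
        # X: (n_samples, n_features) inputs; y: binary targets in {0, 1}.
        rng = np.random.default_rng(seed)
        w = rng.normal(scale=0.01, size=X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for xi, ti in zip(X, y):
                out = step(xi @ w + b)
                # Error-driven update: no derivative of the activation
                # function is ever required.
                err = ti - out
                w += lr * err * xi
                b += lr * err
        return w, b

    if __name__ == "__main__":
        # Linearly separable toy data: the OR function.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([0, 1, 1, 1], dtype=float)
        w, b = train_perceptron(X, y)
        print(step(X @ w + b))  # expected: [0. 1. 1. 1.]

The update w += lr * (t - out) * x never differentiates the step function; avoiding such derivatives is the property that any algorithm for discontinuous activations, including the one this paper derives, must share.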