Title :
Using localizing learning to improve supervised learning algorithms
Author :
Weaver, Scott ; Baird, Leemon ; Polycarpou, Marios
Author_Institution :
Genomatix USA, Cincinnati, OH, USA
Date :
9/1/2001
Abstract :
Slow learning in neural-network function approximators can frequently be attributed to interference, which occurs when learning in one area of the input space causes unlearning in another area. To mitigate the effect of unlearning, this paper develops an algorithm that adjusts the weights of an arbitrary, nonlinearly parameterized network so that the potential for future interference during learning is reduced. This is accomplished by minimizing a biobjective cost function that combines the approximation error with a term that measures interference. An analysis of the algorithm's convergence properties shows that learning with this algorithm reduces future unlearning. The algorithm can be applied during online learning, or it can be used to condition a network so that it is immune to interference during a future learning stage. A simple example demonstrates how interference manifests itself in a network and how reduced interference can lead to more efficient learning. Simulations demonstrate how the extra cost-function term enables this new learning algorithm to speed up training in various situations.
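The abstract does not give the cost function explicitly, but the idea of a biobjective cost can be sketched as follows. Here, purely as an illustration, interference between two inputs is quantified by the overlap of the network output's weight gradients at those inputs, and the combined cost adds a squared-error term and a weighted interference penalty; the network architecture, the interference measure, and the weighting `lam` are all assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of a biobjective cost: approximation error plus an
# interference penalty. All names and constants here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def net(w, x):
    # Tiny nonlinear approximator: a 1-3-1 tanh network, weights packed in w.
    W1, b1, W2, b2 = w[:3], w[3:6], w[6:9], w[9]
    return W2 @ np.tanh(W1 * x + b1) + b2

def out_grad(w, x, eps=1e-5):
    # Finite-difference gradient of the network output w.r.t. the weights.
    g = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w); d[i] = eps
        g[i] = (net(w + d, x) - net(w - d, x)) / (2 * eps)
    return g

def cost(w, x, y, others, lam=0.1):
    # Biobjective cost: squared approximation error at (x, y), plus a term
    # penalizing gradient overlap with other inputs -- one plausible way to
    # measure how learning at x would disturb the output elsewhere.
    err = (net(w, x) - y) ** 2
    gx = out_grad(w, x)
    interference = sum((gx @ out_grad(w, xo)) ** 2 for xo in others)
    return err + lam * interference

# One gradient-descent step on the combined cost (finite differences).
w = rng.normal(scale=0.5, size=10)
x, y, others = 0.3, 0.8, [-0.7, 0.9]
eps, lr = 1e-5, 1e-3
g = np.zeros_like(w)
for i in range(len(w)):
    d = np.zeros_like(w); d[i] = eps
    g[i] = (cost(w + d, x, y, others) - cost(w - d, x, y, others)) / (2 * eps)
w_new = w - lr * g
print(cost(w, x, y, others), cost(w_new, x, y, others))
```

Because the interference penalty shares the same weight vector as the error term, each descent step trades off fitting the current target against keeping gradient directions at different inputs decorrelated, which is the intuition behind conditioning a network against future unlearning.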
Keywords :
feedforward neural nets; function approximation; gradient methods; learning (artificial intelligence); optimisation; feedforward neural-network; function approximation; gradient methods; interference; localizing learning; multiobjective cost function; online learning; supervised learning algorithms; unlearning; Algorithm design and analysis; Approximation error; Convergence; Cost function; Gradient methods; Immune system; Interference; Learning systems; Neural networks; Supervised learning;
Journal_Title :
Neural Networks, IEEE Transactions on