DocumentCode
276589
Title
A diffusion process for global optimization in neural networks
Author
Guillerm, Thierry J. ; Cotter, Neil E.
Author_Institution
Dept. of Electr. Eng., Utah Univ., Salt Lake City, UT, USA
Volume
i
fYear
1991
fDate
8-14 Jul 1991
Firstpage
335
Abstract
The authors modify the usual gradient descent method so that the process in the weight space follows a Gibbs (Boltzmann) distribution and locates the global minima of the average performance measure of a neural network. The goal is to present a method that guarantees a global minimum of the average performance measure in the weight space will be located, given sufficient computational time. Simulated annealing is a mathematical tool that forces a system to behave like a natural annealing process. The method chosen for the global optimization of continuous networks is based on modifying the differential equation associated with local optimization. The global optimization theory is derived for networks whose learning rules are supervised, whose nodes are bounded Lipschitz continuous functions, and whose performance measure is smooth.
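The paper's own equations are not reproduced in this record, but the idea it describes, adding annealed noise to the gradient descent differential equation so the weight process approximates a Gibbs distribution that concentrates on global minima as the temperature cools, can be sketched as a discretized Langevin diffusion. The objective `f`, the logarithmic cooling schedule, and all parameter values below are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def f(w):
    """Illustrative double-well objective: global minimum near w = -1,
    shallower local minimum near w = +1 (stands in for a network's
    average performance measure over the weight space)."""
    return (w**2 - 1.0)**2 + 0.3 * w

def grad_f(w):
    return 4.0 * w * (w**2 - 1.0) + 0.3

def annealed_langevin(w0, steps=20000, lr=0.01, t0=2.0, seed=0):
    """Gradient descent plus Gaussian noise whose variance follows a
    logarithmic cooling schedule. At fixed temperature T the process
    approximates samples from the Gibbs density exp(-f(w)/T); as T -> 0
    that density concentrates on the global minima."""
    rng = np.random.default_rng(seed)
    w = w0
    best_w, best_f = w0, f(w0)
    for t in range(steps):
        temp = t0 / np.log(2.0 + t)  # slow (logarithmic) cooling
        # Euler discretization of the diffusion dw = -grad f dt + sqrt(2T) dB
        w = w - lr * grad_f(w) + np.sqrt(2.0 * lr * temp) * rng.standard_normal()
        if f(w) < best_f:
            best_w, best_f = w, f(w)
    return best_w, best_f
```

Started inside the local basin at `w0 = 1.0`, plain gradient descent would stay there; the noisy process can cross the barrier and typically records its best iterate in the global basin near `w = -1`, illustrating why sufficient computational time (slow cooling) is the price of the global-convergence guarantee.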
Keywords
learning systems; minimisation; neural nets; simulated annealing; Boltzmann distribution; Gibbs distribution; average performance measure; bounded Lipschitz continuous functions; continuous networks; differential equation; diffusion process; global minima; global optimization; gradient descent method; learning rules; local optimization; neural networks; nodes; simulated annealing; weight space; Artificial neural networks; Biological system modeling; Computational modeling; Diffusion processes; Equations; Intelligent networks; Neural networks; Simulated annealing; Size measurement; Stochastic processes;
fLanguage
English
Publisher
ieee
Conference_Titel
IJCNN-91-Seattle International Joint Conference on Neural Networks, 1991
Conference_Location
Seattle, WA
Print_ISBN
0-7803-0164-1
Type
conf
DOI
10.1109/IJCNN.1991.155199
Filename
155199