Title :
Distribution approximation, combinatorial optimization, and Lagrange-Barrier
Author_Institution :
Dept. of Comput. Sci. & Eng., Chinese Univ. of Hong Kong, China
Abstract :
In this paper, typical analog combinatorial optimization approaches, such as the Hopfield net, the Hopfield-Lagrange net, the maximum entropy approach, and the Lagrange-Barrier approach, are systematically examined from the perspective of learning a distribution. The minimization of a combinatorial cost is recast as learning a simple distribution that approximates the Gibbs distribution induced by this cost, such that the two distributions share the same global peak. From this new perspective, a general guideline is obtained for developing analog combinatorial optimization approaches. Moreover, the Lagrange-Barrier iterative procedure proposed in Xu (1994) is further elaborated, with guaranteed convergence to a feasible solution that satisfies the constraints.
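The core idea in the abstract, approximating a Gibbs distribution p(x) ∝ exp(-E(x)/T) by a simple factorized distribution that shares its global peak, can be sketched with a standard mean-field annealing procedure. The quadratic toy cost, the update schedule, and all names below are illustrative assumptions, not the paper's actual formulation.

```python
import math

def mean_field(W, b, T0=2.0, Tmin=0.05, alpha=0.9, iters=50):
    """Approximate the Gibbs distribution of the quadratic cost
    E(x) = 0.5 x^T W x + b^T x over binary x_i in {0,1} by a factorized
    Bernoulli distribution with means m_i, annealing the temperature T.
    (Illustrative sketch; not the paper's Lagrange-Barrier procedure.)"""
    n = len(b)
    m = [0.5] * n  # start from the uniform factorized distribution
    T = T0
    while T > Tmin:
        for _ in range(iters):
            for i in range(n):
                # local field: derivative of the expected cost w.r.t. m_i
                h = b[i] + sum(W[i][j] * m[j] for j in range(n) if j != i)
                # fixed-point update that decreases KL(q || p) at this T
                m[i] = 1.0 / (1.0 + math.exp(h / T))
        T *= alpha  # cool toward the cost's global peak
    return m

# Toy cost whose minimum is x = (1, 0): b favours it, W couples weakly.
W = [[0.0, 0.5], [0.5, 0.0]]
b = [-2.0, 1.0]
m = mean_field(W, b)
x = [round(mi) for mi in m]  # read off a binary solution from the means
```

As the temperature is lowered, the factorized means m_i saturate toward 0 or 1, so rounding them yields a binary configuration at (or near) the cost's global minimum, which is the sense in which the learned distribution and the Gibbs distribution "share the same global peak".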
Keywords :
Hopfield neural nets; approximation theory; iterative methods; learning (artificial intelligence); maximum entropy methods; minimisation; Gibbs distribution; Hopfield-Lagrange net; Lagrange-Barrier iterative methods; analog combinatorial optimization; combinatorial cost; distribution approximation; learning distribution; Artificial neural networks; Computer science; Costs; Councils; Entropy; Equations; Guidelines; Lagrangian functions; Traveling salesman problems;
Conference_Title :
Proceedings of the International Joint Conference on Neural Networks, 2003
Print_ISBN :
0-7803-7898-9
DOI :
10.1109/IJCNN.2003.1223780