Title :
A Recurrent Neural Network for Solving Nonconvex Optimization Problems
Author :
Hu, Xiaolin ; Wang, Jun
Author_Institution :
Chinese Univ. of Hong Kong, Hong Kong
Abstract :
An existing recurrent neural network for convex optimization is extended to solve nonconvex optimization problems. One of the prominent features of this neural network is the one-to-one correspondence between its equilibria and the Karush-Kuhn-Tucker (KKT) points of the nonconvex optimization problem. Conditions are derived under which the neural network (locally) converges to the KKT points. It is desirable that the neural network be stable at minimum solutions and unstable at maximum or saddle solutions. The paper shows that the neural network is most likely unstable at maximum solutions. Moreover, it is found that if the derived conditions are not satisfied at a minimum solution, they can be satisfied by transforming the original problem into an equivalent one via the p-power (or partial p-power) method. As a result, the neural network will locally converge to a minimum solution. Finally, two illustrative examples are provided to demonstrate the performance of the recurrent neural network.
Keywords :
concave programming; recurrent neural nets; Karush-Kuhn-Tucker points; nonconvex optimization problem; recurrent neural network; Annealing; Automation; Convergence; Councils; Equations; Hopfield neural networks; Neural networks; Parallel algorithms; Recurrent neural networks; Stability;
Conference_Title :
Neural Networks, 2006. IJCNN '06. International Joint Conference on
Conference_Location :
Vancouver, BC
Print_ISBN :
0-7803-9490-9
DOI :
10.1109/IJCNN.2006.247077