DocumentCode :
1405439
Title :
A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints
Author :
Liang, Xue-Bin ; Wang, Jun
Author_Institution :
Dept. of Electr. & Comput. Eng., Delaware Univ., Newark, DE, USA
Volume :
11
Issue :
6
fYear :
2000
fDate :
11/1/2000
Firstpage :
1251
Lastpage :
1262
Abstract :
This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special case that can be solved by the recurrent neural network. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function with bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, then the recurrent neural network is complete in the sense that the set of optima of the function with bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and will converge to the set of equilibria of the neural network for any initial point in the bounded feasible region. 3) The recurrent neural network has an attractivity property in the sense that its trajectory will eventually converge to the feasible region for any initial state, even one outside the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost all positive network parameters. Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.
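Illustration (not part of the original record): a minimal sketch of a projection-type neurodynamic model of the kind the abstract describes, assuming dynamics dx/dt = -x + P_[l,u](x - alpha * grad f(x)) integrated by forward Euler; the function names, parameters, and test problem below are illustrative assumptions, not necessarily the paper's exact formulation.

import numpy as np

def project(x, l, u):
    # Project the state onto the box [l, u] (the bound-constraint set).
    return np.minimum(np.maximum(x, l), u)

def simulate(grad_f, x0, l, u, alpha=0.5, dt=0.01, steps=5000):
    # Integrate dx/dt = -x + P_[l,u](x - alpha * grad_f(x)) from x0.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-x + project(x - alpha * grad_f(x), l, u))
    return x

if __name__ == "__main__":
    # Strictly convex quadratic example: f(x) = 0.5 x'Qx + c'x on the unit box.
    Q = np.array([[3.0, 1.0], [1.0, 2.0]])
    c = np.array([-2.0, -5.0])
    grad_f = lambda x: Q @ x + c
    l, u = np.array([0.0, 0.0]), np.array([1.0, 1.0])
    # The initial state may lie outside the feasible box; the trajectory
    # is attracted to it and settles at a constrained minimizer.
    x_star = simulate(grad_f, x0=np.array([2.0, -1.0]), l=l, u=u)
    print("approximate constrained minimizer:", x_star)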
Keywords :
convergence; minimisation; nonlinear programming; quadratic programming; recurrent neural nets; attractivity property; bound constraints; continuous-time recurrent neural network model; continuously differentiable objective function; convex objective function; minimization; nonlinear optimization; objective function optimum; primal quasiconvergent recurrent neural network; quadratic optimization; strictly convex quadratic objective function; Automation; Constraint optimization; Convergence; Councils; Large-scale systems; Neural networks; Optimization methods; Quadratic programming; Recurrent neural networks; Stability;
fLanguage :
English
Journal_Title :
Neural Networks, IEEE Transactions on
Publisher :
IEEE
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/72.883412
Filename :
883412