DocumentCode :
1163474
Title :
A New Recurrent Neural Network for Solving Convex Quadratic Programming Problems With an Application to the k -Winners-Take-All Problem
Author :
Hu, Xiaolin ; Zhang, Bo
Author_Institution :
Dept. of Comput. Sci. & Technol., Tsinghua Univ., Beijing
Volume :
20
Issue :
4
fYear :
2009
fDate :
4/1/2009
Firstpage :
654
Lastpage :
664
Abstract :
In this paper, a new recurrent neural network is proposed for solving convex quadratic programming (QP) problems. Compared with existing neural networks, the proposed one features global convergence under weak conditions, low structural complexity, and no need for matrix inversion. It serves as a competitive alternative in the neural network family for solving linear or quadratic programming problems. In addition, it is found that under a simple variable substitution the proposed network reduces to an existing model for solving minimax problems; in this sense, it can also be viewed as a special case of the minimax neural network. Based on this scheme, a k-winners-take-all (k-WTA) network with O(n) complexity is designed, which is characterized by a simple structure, global convergence, and the capability to handle some ill cases. Numerical simulations are provided to validate the theoretical results. More importantly, the network design method proposed in this paper has great potential to inspire other competitive designs along the same line.
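The abstract only outlines the approach, so the following is a rough illustration of the kind of projection-type recurrent dynamics commonly used for such problems: it solves a small k-WTA instance by casting it as a convex QP with box and sum constraints and integrating a generic primal-dual projection network with forward Euler. This is a minimal sketch, not the specific model proposed in the paper; the QP formulation, the regularization weight eps, and all step sizes are illustrative assumptions.

import numpy as np

def project_box(z, lo, hi):
    # Componentwise projection onto the box [lo, hi].
    return np.minimum(np.maximum(z, lo), hi)

def kwta_projection_network(v, k, eps=0.05, alpha=1.0, dt=0.01, steps=50000):
    # Generic primal-dual projection network (illustrative, not the paper's model)
    # for the QP:  minimize 0.5*eps*||x||^2 - v'x  s.t.  sum(x) = k,  0 <= x <= 1,
    # whose solution drives x_i toward 1 for the k largest inputs v_i and toward 0 otherwise.
    # Dynamics (forward Euler):
    #   dx/dt = -x + P_[0,1](x - alpha*(eps*x - v + y))
    #   dy/dt = alpha*(sum(x) - k)
    n = len(v)
    x = np.full(n, 0.5)   # primal state
    y = 0.0               # dual state for the sum constraint
    for _ in range(steps):
        grad = eps * x - v + y            # gradient of the Lagrangian in x
        x_new = x + dt * (-x + project_box(x - alpha * grad, 0.0, 1.0))
        y_new = y + dt * alpha * (x.sum() - k)
        x, y = x_new, y_new
    return x

v = np.array([0.3, 0.9, 0.1, 0.7, 0.5])
print(np.round(kwta_projection_network(v, k=2), 2))   # approx. [0, 1, 0, 1, 0]

On this toy input the two largest signals (0.9 and 0.7) saturate at 1 and the rest decay to 0, which is the k-WTA behavior the abstract refers to; the state dimension grows linearly with n, matching the O(n) complexity claim in spirit.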
Keywords :
computational complexity; convex programming; linear programming; quadratic programming; recurrent neural networks; convex quadratic programming problems; k-winners-take-all (k-WTA) problem; asymptotic stability; neural networks
fLanguage :
English
Journal_Title :
IEEE Transactions on Neural Networks
Publisher :
IEEE
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/TNN.2008.2011266
Filename :
4785114